Stuart W. Mirsky is the principal author of this blog.
Entries in consciousness (5)


The Science and Philosophy of Brains

. . . Where science gives us a way to understand how the mental mechanisms we rely on in our daily and professional lives are made possible by brain function, philosophy offers a way to understand how our intellect, our grasp of things, works in us. That is, an account of those features of our mental lives (what goes on in us when we think about and understand things) which make up our grasp of the world around us (our subjective placement in the objective world we recognize as our milieu) requires a conceptual inquiry better suited to philosophy than to science. Of course, the two cannot be divorced: science requires conceptual clarity in the formulation of its hypotheses and theories, while philosophy depends on agreement with the best available empirical knowledge if it is to provide viable conceptual accounts.

If science can tell us what brains have to do to generate the elements we experience in our mental lives, and how our brains got that way, philosophy is needed to understand what the mental features caused by our brains' being "that way" actually consist of. That is, we need to know what's going on in us when we conceptualize the world around us in terms of spatial and temporal pictures, plan our actions within that layered context, and evaluate the possibilities accordingly. Only with that sort of account can we know just what it is that the brain's structure and functional behaviors make possible.



The Moral and the Mental

One of the issues that has come into focus for me, while exploring the best way of accounting for (and so of explaining) how moral valuing works, is the importance in all this of a robust picture of the self. That is, the elements we associate with subjectivity, with being a subject, seem critical to any account of moral valuing, not only because valuing itself implies the presence of a subject but because what is of particular interest in the moral game is the value placed on the self, i.e., the acting subject. Thus there is a need to presume the reality of the self in a way that sometimes seems to imply "entity." But, of course, given the insights of many modern philosophers, especially Wittgenstein, we don't want to do that, for selves aren't things, aren't existents that parallel the bodies which have them!

The species of valuing we call "moral" considers the quality of agents' acts, and that quality can only be assessed if the acts in question are seen in their entirety and not piecemeal (which is how acts gain value for us when we are valuing the things they can obtain, achieve or produce for us). To make a moral judgment about an act, we have to go beyond the derivative value accorded the act as a means to an end. We have to consider the act as a whole. So what's involved in seeing an act in its entirety? Well, to the extent that an act consists of certain physical events brought about by an agent and, in a more extended sense, of certain outcomes those events achieve for the agent, it also consists of what the agent intends, i.e., what the agent undertakes the act in order to accomplish. And intentions, whatever else we may want to say of them, are mental phenomena. They happen in the minds of agents, in the thoughts, beliefs, wishes and inclinations which agents have and which underlie, in a generative sense, the acts performed. . . .



On Intention

To avoid granting the reality of "mental existents," Hall, on page 153, speaks of intentions as dimensional (or, as he had written earlier, as having the nature of an aspect of something else). He writes:

I have already suggested an escape from this by confining 'events' to physical happenings, some of which (certain neural ones) have an intentional dimension.

This is a position that Walter, on this list, has sometimes espoused himself. Hall goes on:

We could now add to this that when we loosely speak of a total mental event or state, such as is involved in an emotional experience, what we correctly refer to is a total cerebral event with all its intentional complexity, from which perceptions can be considered as abstractions.

This raises the interesting question of how we are to think about whatever it is that we consider the core feature of what we call "consciousness" or "mind" or "the mental."



Hall, Dennett and the Problems of Reference and Intentionality

I've taken up Walter's suggestion to begin reading Everett Wesley Hall's book online, pending a decision to obtain a hard copy from Amazon. I've found it quite interesting, as Walter suggested, partly because of various synchronicities with earlier, highly energized debates some of us have participated in on other lists. Interestingly, and in light of a longstanding argument on this and other sites, Hall, early in his book, Our Knowledge of Fact and Value, uses "refers" precisely as I have often done, i.e., to pick out what one has in mind rather than what actually is the case.

He writes:

A cognitive verb with a substantival clause as objective complement may be taken, then, to refer to an act whose object is a fact or a 'non-fact,' that is, a fact that does not obtain. (page 19, chapter 2)

Here he uses "refers" precisely as we do in ordinary language, and as I had done when I wrote, to the consternation of some of my interlocutors, things like 'a referent is what I have in mind when I make a referring statement, i.e., it's that to which I am referring by making the statement, gesture, etc., and can be understood based on my description of what I have in mind.' . . .



Can Machines Get It?

This post considers the nature of understanding and meaning in light of John Searle's argument (in the Chinese Room Argument, or CRA) against the possibility of computational cognition.

It's long seemed to me that one of the serious flaws in [John Searle's Chinese Room Argument] . . . is its failure to elucidate [the] mysterious feature he calls semantics (i.e., the meaning of a symbol, word or statement, etc.). After all, if machines like computers can't have semantics, we ought to at least know what it is we think they are unable to possess. It can hardly be enough to suppose that what we find in ourselves, at moments of introspection, when we are aware of understanding a symbol, word or statement, isn't available to computers merely because we can't imagine it . . .
