Monday, November 1, 2010

Eight Questions... (Part 3)

This is a continuation of my answers to Michael Egnor's Eight Questions for the New Atheists.

5) Why do we have subjective experience, and not merely objective existence?

I suspect that Egnor is referring to the "Hard Problem of Consciousness." I think the Hard Problem can be solved fairly easily. Our brains are information-processing units. "Qualia" (a name for the types of internal experience we have) are identical to perceptions, and perception is merely the process of our brains correctly reacting to incoming information. Here's a quote from an essay by Evan Louis Sheehan that illuminates the issue:

"Suppose an alien were to declare that “the hard problem” is determining what it is about those printed slips of paper that gives them their intrinsic value. Indeed, the value of pleasure may be no more difficult to understand than the value of a dollar. Just as a dollar has value only because we believe it has value, I’ll argue that a feeling of pleasure also has value only because we believe it has value. This simple realization allows us to model consciousness as merely a system of beliefs, and beliefs are easy to implement computationally.

"While we cannot imagine how to program a computer to feel pleasure, we can easily program a computer to have a belief system, and we can easily install a belief in pleasure that becomes true when certain circuits are active. We cannot understand how a thermostat could possibly feel cold, but we can easily understand how a computerized thermostat could hold a belief that it feels cold.

"We may easily validate this simple idea by realizing that one cannot feel pain without believing that one feels pain. And neither can one believe one feels pain without actually feeling it. It seems that a feeling of pain and a belief in the feeling of pain are intimately associated, if not identical. Consider also that a hypnotist can sometimes eliminate one’s feeling of pain by simply installing a countervailing belief. And consider that a psychosomatic pain can result from a simple belief in illness. It is even true that a psychosomatic pain can be reduced by a simple belief in the effectiveness of a pill that is in fact just a placebo."

I'd also like to point out that it is not as intuitive as one might think to suggest that a physically functioning human being could somehow lack inner experience.

One author suggests a thought experiment in which we imagine someone who can see/hear/smell (in the sense that their sensory organs respond to light/sound/odor and their brain registers information from those organs) but has no inner experience of what it is like to see/hear/smell:

"You wake up one morning, open your eyes, and what do you notice first? That the sun is streaming in the window, that your alarm clock says 7:30, and that your partner is already getting dressed on the other side of the room — or that, despite registering all this in a moment, you can’t actually see anything? Or try a more radical variant, one that takes us a step back towards full zombiehood. You wake up one morning to find that all of your sensory modalities have blanked out subjectively (though you’re still getting all the information alright). Again, what do you notice first: the sunlight, the sounds of birdsong and of traffic in the distance, the smell of coffee brewing — or the total absence of sensory qualia?"

It's just not plausible to suppose that humans could lack internal experience, even from a materialist point of view.

6) Why is the human mind intentional, in the technical philosophical sense of aboutness, which is the referral to something besides itself? How can mental states be about something?

I'm not sure exactly how to interpret this question, but the best reading I can give it is this: how can you have mental states (like thoughts) about other things (your dog, your car, the weather, etc.)? A thought is basically a reminder, inquiry, observation, etc. that comes into your conscious stream. Thoughts are ultimately information exchanges inside the brain, and they are normally there to affect decision making. Sometimes those exchanges carry information intended to represent facts about one particular thing, and that is what it means for a thought to be 'about' something.

7) Does Moral Law exist in itself, or is it an artifact of nature (natural selection, etc.)?

The 'Moral Law' is an artifact of nature. If there were no sentient beings in the universe, there would be no moral facts in the universe, because morality is a set of abstractions concerning how we should behave if we value other sentient beings.

8) Why is there evil?

The universe is indifferent to human pain and suffering, and so, as chance has it, some events will lead to the suffering and death of sentient beings like us.

14 comments:

Dr. Günter Bechly said...

Concerning the "hard problem": I fear the problem would not have been considered so hard in the history of philosophy of mind if the proposed "solution" were convincing to more people. It is not for nothing that Daniel Dennett's major opus was dubbed "Consciousness Explained AWAY" by many critics. Beliefs need a subjective believer who holds them. It does not make sense to say that a computer holds beliefs. It is not at all plausible to claim that our inner experiences do not exist, because they are all we really know. You claim that precisely the only things we know for sure (our inner feelings) are just illusions of biological robots, while the only indirectly inferred external world is supposed to be the only thing that really exists. This turns things upside down and can only make sense to someone who presupposes materialism and ignores all contrary evidence. Our subjective experiences are in fact the strongest evidence against materialism.

Dr. Günter Bechly said...

The problem of "aboutness" is even trickier. Aboutness is immaterial. Beliefs are immaterial. Arguments are immaterial. In the world of materialism none of these exist; there is only matter and energy. Thus you are not a materialist because of convincing reasons, because reasons are immaterial and have no causal power in materialism. Materialism undermines rationalism and is thus self-refuting in the end. This is the real problem of "aboutness," elaborated by, e.g., Victor Reppert as the Argument from Reason.

AIGBusted said...

Hi Gunter,

For the record, I believe that our inner experiences are real. I also believe that those inner experiences reduce to information processing in the brain.

"Beliefs need a subjective believer who holds those beliefs. It does not make sense to say that a computer holds beliefs."

Well, let's define "belief" in order to make things clearer. I would say that a belief is a proposition that one relies upon in order to make decisions.

Taken this way, you are right that it doesn't make sense to think of a computer holding beliefs. Or at least, the computers we have NOW cannot hold beliefs. Nonetheless, we can imagine future advances in artificial intelligence in which computers begin selecting among available options in order to achieve goals the computer is programmed to have. If AI goes that far (or is capable in principle of going that far), then programmers would want to ensure that the computer had a base of knowledge by which to make informed selections. And the base of knowledge relied upon to make those selections could properly be called "belief".
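The definition above ("a belief is a proposition one relies upon in order to make decisions") can be made concrete. The sketch below is my own construction, not a description of any existing AI system; the agent, goal, and belief names are all invented for illustration:

```python
# Illustrative sketch: a "belief" as a proposition the agent
# consults when choosing among available options in pursuit of
# a programmed goal.

class Agent:
    def __init__(self, goal, beliefs):
        self.goal = goal        # the outcome the agent is programmed to seek
        self.beliefs = beliefs  # propositions relied upon in decision making

    def choose(self, options):
        # Pick the first option whose believed outcome serves the goal.
        for option in options:
            if self.beliefs.get(option) == self.goal:
                return option
        return None

agent = Agent(
    goal="stay charged",
    beliefs={"dock at charger": "stay charged", "wander": "explore"},
)
print(agent.choose(["wander", "dock at charger"]))  # dock at charger
```

On the definition in question, the `beliefs` table genuinely functions as a belief base: the agent relies on those propositions to decide. Whether that *suffices* for belief is the point under dispute in this thread.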

And if belief can exist, so can consciousness. Some philosophers have reasoned that one thing we know for sure, one thing we can never be mistaken about, is when we feel something subjectively. If we believe we feel pain, then we do feel pain. Thus conscious feelings can be reduced to beliefs, and therefore a thing that can believe can feel. If a machine could believe, it could feel. Machines can (in principle) believe (as my thought experiment establishes); therefore a machine can (in principle) feel.

AIGBusted said...

"The problem of 'aboutness' is more tricky as well. Aboutness is immaterial. Beliefs are immaterial. Arguments are immaterial. In the world of materialism none of those exist, but only matter and energy. Thus you are not a materialist because of convincing reasons, because reasons are immaterial and have no causal power in materialism. Materialism undermines rationalism and thus is selfrefuting in the end."

I understand what you're saying completely, but I disagree. Sometimes things that seem immaterial really are material, or refer to material processes/interactions.

The computer I am using is not my own and my time is running out, but here is a way I can put it that might help: a snowflake is not identical to water molecules. Nonetheless, snowflakes are composed purely of water molecules. A snowflake is just a description of many water molecules arranged and spaced in a certain way. Likewise, I believe that all things are physical. Some words and concepts (which describe things we know are real) don't appear to refer to physical things, but if you look closely enough you see that ultimately they do refer to physical things, physical processes, or interactions between physical things.

I hate to end here but we can discuss this more tomorrow.

AIGBusted said...

Hi Gunter,

After thinking about your Argument from Reason for a while, I think I see where it goes wrong: arguments can be represented physically (in a book, in your brain, etc.) even if arguments might be thought of as abstract in some sense. If I come to believe an argument, I must have at least some conception of that argument in my mind, and if the argument is present in my mind, it must be represented somewhere in my physical brain. Thus it is only the physical representation of that argument that affects me.

Dr. Günter Bechly said...

A physical representation of what? In materialism there exists no argument that could be physically represented, only physical neuronal brain states that represent nothing at all. Only those brain states are causally efficacious, not their (non-existent) propositional contents. Again: you do not believe in atheism and materialism BECAUSE of convincing arguments, but only because of efficient causation by some non-propositional neuronal brain states. You can't have your cake and eat it.

AIGBusted said...

"In materialism there exists no argument that could be physically represented,"

Arguments can be represented in a variety of ways: through letters (printed by ink) arranged into words, for example.

"...only the physical neuronal brain states that represent nothing at all."

Neuronal brain states represent nothing? That can't be right. For simplicity, think of a computer, which both of us agree is a material thing: a computer can send signals and can physically represent more abstract things, such as the games played on it and the virtual worlds those games contain.

Dr. Günter Bechly said...

Letters in ink represent nothing without a conscious being reading and understanding them.
Likewise, the electronic states within a computer represent nothing without a conscious being using the software. The representation is immaterial in both of your examples!

Dr. Günter Bechly said...

To explain a bit more: a game in a computer is nothing but an arbitrary pattern of electronic events without any meaning. Only when a human uses the computer and plays a game on it do the electronic events become a representation of something. The same goes for letters. Consider the following: in Chinese, every character more or less represents a word. But instead of attributing sounds to alphabetical letters or complete words to pictorial signs, you could make the convention that an apple represents the content of a certain poem. Everybody who learned this convention would know that a figure of an apple, in a certain context, represents this poem. However, would the apple, without humans, really represent this poem? No, of course not. There are no physical representations or instances of aboutness without intelligent conscious beings as observers.

AIGBusted said...

Hi Gunter,

You're saying that "meaning" exists only when there is something (or someone, rather) to read the dots and lines on a page and be affected by them (emotionally, intellectually, etc.).

I suppose I agree, but I think that that process of translating the dots and lines into intellectual/emotional reaction is, at rock bottom, a completely natural phenomenon.

I think it is conceivable to both of us (or at least to me) that artificial intelligence may one day advance to the point where a robot or computer could analyze scientific hypotheses, philosophical conjectures, and so on, and deduce the implications these conjectures would have for other scientific and philosophical disciplines. In that case we could properly say that the computer understood things.

Dr. Günter Bechly said...

There is no TRANSLATING, because intellectual and emotional reactions do not exist in materialism. I think you are begging the question by presupposing that strong AI is possible (and thus that functionalism is true), which I dispute. In my view, interiority, aboutness, feeling, qualia, etc. are the ultimate defeaters of materialism. And this latter view is not question-begging, because it can be directly verified by introspection by any human being.
Your example of future robots that understand scientific theories is just a more sophisticated version of my apple-poem example (with a more complex apple = the AI computer, and a more complex poem = scientific theories). But even though it is more complex, it still lacks any genuine aboutness.

AIGBusted said...

Hi Gunter,

"intellectual or emotional reactions do not exist in materialism."

They do, but under naturalism (or what you call materialism) they are just reducible to other things.

"I think you are begging the question in presupposing that strong AI is possible (thus functionalism is true), which I dispute."

I didn't mean to beg the question as I assumed that most people would share my intuition about the robot example. Nonetheless, I think a better example could be given, one which I will develop after explaining my thoughts on understanding.

Understanding comes about when a being knows the content and implications of a proposition. For example, if I'm on the phone with a friend explaining the theory of evolution and he responds, "So all of life is just one big family tree! I suppose this also means that deep in the past we had ancestors that belonged to a completely different species!", such comments would let me know that he understood what I was saying.

Calculators can be said to have a very rudimentary understanding of numbers, because if I type in 100, hit the minus sign, and then one, the calculator displays "99".
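The calculator example can be made explicit as a toy causal chain: a few lines of code that map the keystrokes "100 - 1" to the display "99". This is my own minimal sketch, not a claim about how real calculator firmware works:

```python
# A toy calculator: a causal chain (parsing plus arithmetic) that
# turns the keystroke sequence "100 - 1" into the display "99".
# Illustrative only; it handles just "a - b" and "a + b".

def calculator(keystrokes):
    left, op, right = keystrokes.split()
    a, b = int(left), int(right)
    if op == "-":
        return a - b
    if op == "+":
        return a + b
    raise ValueError("unsupported operation")

print(calculator("100 - 1"))  # 99
```

Whether this chain of events counts as "rudimentary understanding" or merely as output we understand is exactly the disagreement in the exchange that follows.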

Dr. Günter Bechly said...

You say "Understanding comes about when a being knows the content and implications of a proposition."

AMEN, I fully agree.

However, a calculator does not UNDERSTAND propositions about numbers and mathematical logic; it is only a causal chain of electronic events that generates an output that WE can understand. No CONTENTS of propositions play any causal role in the calculator. The same holds for any conceivable more complex calculator (computers, robots, Turing machines).

AIGBusted said...

"a calculator does not UNDERSTAND the proposition of numbers and mathematical logic, but only is a causal chain of electronic events that generates an output that WE can understand."

Just because something is a causal chain does not mean that it cannot lead to understanding.

And when you say that "we" can understand the numbers, what do you mean? You mean that both of us know the content and implications that certain numerical statements and equations have. And how do you know that I understand numbers? Presumably because you could ask me questions like: "What is five take away four?" And I would answer "one", indicating that I understood.

But if you can discount the calculator's understanding because it is simply a causal chain of events, you could discount my "understanding" as well, because it too is just a causal chain of events. When you asked what five take away four equaled, that was just vibrations in the air that reached a vibration detector (my ear), which transmitted signals to my brain, which caused my brain to react in such a way as to produce speech, via my vocal cords, resulting in my reply of "one". I suppose you'll think this is begging the question, but it isn't: what goes on in the brain is the result of deterministic (or perhaps highly probabilistic) physics. If it weren't, there would be some evidence that the brain did not follow the laws of physics that everything else does.

Anyway, it's late at night and I should get some sleep.