How Do We Define Consciousness?

If a robot can make its own decisions, for example taking a nap or taking in some food when its battery drops below a certain level, and it makes those decisions based on an enormous database that keeps changing with its experiences, can we say the robot is conscious?

Hanbi Kim

A robot’s deciding to do something might be evidence that it’s conscious. But I don’t know if that gets us any closer to understanding or defining consciousness itself.

This might be because the notion of decision making already assumes a number of contentious concepts in the philosophy of mind.

We could mean at least two things by decision making, on the basis of the examples you’ve provided. The first would be a causal process that involves some kind of information processing - presumably this is what computers do, and in my opinion it doesn’t constitute consciousness. The second thing that we could mean by a robot “deciding” something - say, as in your example, recharging itself when low on energy - is that it intends, desires, or perhaps understands the causal process of recharging. But such understanding, knowledge, intentionality, and agency are all complex notions which don’t really shed any light on consciousness. Consciousness is dependent on these other notions (though philosophers disagree about their importance), and so an example that implicitly assumes them is unlikely to be useful.
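The first sense of “deciding” - a purely causal information-processing rule - can be made concrete with a toy sketch. This is an illustrative, hypothetical example only (the class, thresholds, and update rule are all made up); the point is that nothing in it obviously requires intention or understanding, even though its “database” of past experience changes its future behaviour:

```python
class Robot:
    """A toy decider: acts on a battery threshold and stores experience."""

    def __init__(self, recharge_threshold=20.0):
        self.recharge_threshold = recharge_threshold
        self.experience = []  # the "enormous database", here just a list

    def decide(self, battery_level):
        # A purely causal rule: compare a number to a threshold.
        action = "recharge" if battery_level < self.recharge_threshold else "work"
        self.experience.append((battery_level, action))
        return action

    def learn(self):
        # Experience changes future decisions: if the robot ran very low
        # before recharging, raise the threshold so it recharges earlier.
        low_runs = [b for b, a in self.experience
                    if a == "recharge" and b < 5]
        if low_runs:
            self.recharge_threshold += 1.0


r = Robot()
print(r.decide(50.0))        # -> work
print(r.decide(3.0))         # -> recharge
r.learn()
print(r.recharge_threshold)  # -> 21.0 (experience changed the rule)
```

Whether anything along these lines, however elaborate the database, could ever amount to the second sense of “deciding” is exactly the question at issue.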

What you seem to be suggesting by “an enormous database” that “keeps changing” due to experience is that it might be possible (in practice or in theory) for a robot to exactly mimic human behaviour (and presumably, in certain respects, human thought). Whether we would then be justified in calling this robot (computer, machine, artificial brain, whatever) “conscious” is a really interesting and difficult question.

Ben F

———

Good. Thanks.

There are two Bens, so please add your surname or student ID (to all your posts). Thanks!

Jason


I think that a line needs to be drawn, when defining ‘what is conscious’, between decisions made based on this ‘database’ and something extra. I’m not really explaining myself well, but I think this database you talk of, which is constantly changing due to experience, can indeed be compared to our brain.

However, consciousness is more of an awareness of one’s surroundings. It’s not JUST an ability to smell, taste, touch, or see, but to be able to THINK about, ANALYSE, PONDER, APPRECIATE, and QUESTION what it is you have just smelt, tasted, touched, or seen. It’s sort of this ‘extra’ bit. For example, you would say that a dog or a cat is conscious (like the database you talk of, they can experience, and their brains are constantly changing due to their experiences - they can also decide to sleep when they please). But can they truly experience consciousness? Can they question, appreciate, ponder, or analyse their experiences? Do you see the distinction I’m trying to make?

Therefore, I think the machine in your case would be conscious, but it wouldn’t be able to experience consciousness.

Natalie Graf


Right, Natalie. Although in philosophical terminology, we usually say that something’s only conscious if it’s experiencing consciousness, or at least experiencing something.

Jason


This topic is probably off track, but I couldn’t help myself. Consciousness is such an interesting topic. Sorry if it’s too long (clearly I need a real life!!)

Whether an artificial intelligence (e.g. a robot) can be conscious depends on what is meant by consciousness, on whether it’s based on physical biology or on something else non-physical, and on the limitations of artificial intelligence. Firstly, a quick discussion of types of consciousness. According to the Stanford Encyclopedia of Philosophy, there are several types or levels of consciousness. Two are common among many animal species: the first is the opposite of being unconscious, i.e. not being asleep or in a coma; the second is the ability to perceive, and therefore to respond to, the environment. The third level is known as “access” consciousness, which is used for higher cognitive tasks such as reasoning and communication. Some animals besides humans are thought to possess this.

The fourth is known as phenomenal consciousness (also known as “qualia” I believe?), which relates to the way mental states feel, including things like colour. Phenomenal consciousness is thought to be more likely in mammals and birds according to the Stanford Encyclopedia of Philosophy.

Finally, self consciousness refers to “an organism’s capacity for second-order representation of the organism’s own mental states” or, in other words, self-awareness.

Now a quick discussion on whether consciousness is simply a physical biological process (“materialism”) or something non-physical (whatever that may be). First a couple of definitions. According to Wikipedia, “materialism holds that the only thing that exists is matter or energy; that all things are composed of material and all phenomena (including consciousness) are the result of material interactions” whereas “reductionism is a philosophical position which holds that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents.” The two concepts appear to be closely linked in philosophy of consciousness.

ANU Professor of Philosophy David Chalmers (home page at http://consc.net/online/1.1b) argues in “Consciousness and its place in nature” (http://consc.net/papers/nature.pdf) that things such as perceptual experience, bodily sensation, mental imagery, emotional experience, etc. (i.e. phenomenal consciousness) cannot be accounted for by materialism. Chalmers argues against the idea that phenomenal consciousness is a physical process in the brain. (The arguments are quite complex, so I might not be entirely correct here, but that seems to be his overriding argument.)

Chalmers argues that a being that is physically identical to a conscious being (ie a “philosophical zombie”) could behave like a fully conscious person but “things will be different from the first-person point of view” and “what it is like to be a… zombie will differ from what it is like to be the original being”. However, this is a circular argument. How does he know it will be different for a zombie?

Chalmers accepts, however, that there “seems to be no deep problem in principle” with the idea that a physical system could be conscious in the sense of discriminating stimuli, reporting information, monitoring internal states and controlling behaviour.

Daniel Dennett, a philosopher and cognitive scientist, argues in “Consciousness Explained” (see the Wikipedia discussion at http://en.wikipedia.org/wiki/Consciousness_Explained) that consciousness is the result of physical processes in the brain. Dennett says the properties attributed to qualia by philosophers (i.e. incorrigible, ineffable, private, directly accessible, etc.) are mutually incompatible, so the notion of qualia is incoherent. In fact, he says that we are all “zombies” in the sense referred to above - i.e. there is no good reason to believe that our consciousness is anything more than the result of neural/synaptic interactions and chemical/electrical processes.

Mark Pharoah argues against Chalmers and provides a detailed reductive argument for consciousness in his 2007 paper (see http://homepage.ntlworld.com/m.pharoah/hstsimplified.html for a less technical interpretation). I must admit I have not yet read this in detail. The abstract says: “This paper provides a reductive explanation of phenomenal experience that is coherent with exhaustive stipulated philosophical criteria and theories. Phenomenal experience, in being the contextual identity of human consciousness, has been described as the ‘hard problem of consciousness’, and to some is an insurmountable enigma. Consequently, a reductive explanation solves a mystery of the individual’s experience of ‘consciousness’. This is done here by identifying an evolving dynamic systems hierarchy. Although not a requirement of reduction, the explanation I provide is consistent with our understanding of evolution and, consequently, explains the physical origins and purpose of organisms that possess higher-order thought.”

Pharoah states that his reductive explanation of phenomenal experience “…links uncontroversial physical facts to uncontroversial phenomenal conclusions”. On the other hand, he provides a good overview of the current state of the debate about consciousness at http://mind-phronesis.co.uk/chaos-in-the-philosophy-of-consciousness-debate. “Whilst some leading experts in the field spend their life explaining why consciousness is unexplainable, others spend their life saying it is already explained!”

Penrose and Hameroff proposed that consciousness is based on quantum mechanical activity in microtubules (filamentous protein polymers that form the cytoskeleton of cells, for any chem students out there). Pharoah provides a detailed discussion of QM and consciousness at http://en.wikipedia.org/wiki/Consciousness_Explained. However, Christof Koch and Klaus Hepp argue in a 2006 article in the journal “Nature” (http://www.klab.caltech.edu/news/koch-hepp-06.pdf) that “it is far more likely that the material basis of consciousness can be understood within a purely neurobiological [classical] framework”.

Now, to get back to the possibility of consciousness in machines. If consciousness is the result of purely physical processes, then surely it can be reproduced, at least in principle. Maybe it would be unrealistic to expect artificial intelligence to have self-awareness, but why would it be unlikely to have the lower-order levels of consciousness? This claim makes a lot of sense from the materialist perspective (e.g. what Daniel Dennett is saying about consciousness being an illusion based only on physical processes).

Drew McDermott points out in “Artificial Intelligence and Consciousness”, 2007 (http://cs-www.cs.yale.edu/homes/dvm/papers/conscioushb.pdf), that the brain may operate using a combination of computation and other means. He notes that many argue phenomenal consciousness may require non-computational methods, whereas other forms of consciousness can be achieved computationally, and he then provides detailed arguments to suggest that this is not the case. McDermott concludes as follows: “There are plenty of critics who don’t want to wait to see how well AI succeeds, because they think they have arguments that can shoot down the concept of machine consciousness, or rule out certain forms of it, right now. We examined three… In each case the basic computationalist working hypothesis survived intact: that the embodied brain is an “embedded” computer, and that a reasonably accurate simulation of it would have whatever mental properties it has, including phenomenal consciousness.”

McDermott says real progress on creating conscious programs awaits further progress on enhancing the intelligence of robots. Perhaps quantum computing may enable such progress. Researchers at UNSW believe a demonstrator quantum computer is only 10 years away. Robert Nowotniak (http://robert.nowotniak.com/en/quantum-computing/) suggests that “quantum computing and artificial intelligence mutually inspire and enrich each other”.

David Clarke


Very very good summary. Thank you.

Jason

———

David, I really like your point that if viewed from a materialist perspective, then there’s no problem with attributing consciousness to a robot.

This is because, from a materialist perspective (well, my understanding of it at least), it’s sort of saying this is all there is. You have a brain, you reason, you function, you do things, but that’s only because of these physical and chemical processes. And honestly, that’s most likely the case.

However, a dualist would say that no, there’s something more to it. There’s this something “other” that allows us to do all this “extra” stuff like, drawing on previous points, self-awareness, appreciation, being able to ponder: all this extra stuff is our “mind”.

In sum, from a materialist perspective we have a “brain”, and therefore of course a robot could experience consciousness, because this brain can be replicated.

However, from a dualist perspective we have a “brain” AND a “mind”. We cannot replicate a mind, and since this is the source of what it is to be conscious, no, a robot cannot experience true consciousness.

Materialist/dualist, take your pick, but…

Dualist is so much cooler.

Natalie Graf

———

I tend to think that the physicalist/dualist distinction is unjustified, and possibly meaningless.

There would seem to be two obvious ways to interpret the term “physicalism”, if we understand physicalism to be the denial that consciousness is “non-physical”: 1. consciousness is ultimately reducible to the subject matter of science, and 2. consciousness is reducible to little balls bumping into each other (materialism).

The latter is obviously stupid (and no one ACTUALLY takes physicalism to be this caricature of materialism - the point is that we no longer have something clear, like billiard balls, to contrast with the mind, soul, spirit, etc.), but the first option is completely open and possibly vacuous; who’s to say the “mind” isn’t going to be part of the ultimate subject matter of science? The ultimate subject matter of science surely can’t be worked out in advance, so putting an a priori limit on what science can possibly tell us about the world (i.e. the second interpretation of physicalism) would seem to make sense only given additional arguments or discoveries about the nature of consciousness; hence back to little balls (atoms, neurons, molecules, energy, quantum states) and whatever they’re supposed to contrast with. As I’ve already indicated, this collection of ultimately reducible stuff doesn’t seem to admit of an interesting ontological distinction between mind and matter.

And, even if consciousness turns out to be scientifically inexplicable, does that really imply some kind of division between “mind and body”, soul and matter, etc.? Surely it would only imply an epistemic distinction, not an ontological one.

I knew I was thinking of Chomsky’s (!) comments on the mind-body problem; I’ve just pasted these quotes from a random blog:

“The mind-body problem can be posed sensibly only insofar as we have a definite conception of body. If we have no such definite and fixed conception, we cannot ask whether some phenomena fall beyond its range. The Cartesians offered a fairly definite conception of body in terms of their contact mechanics, which in many respects reflects commonsense understanding. Therefore they could sensibly formulate the mind-body problem… (p. 142)”

“[However] the Cartesian concept of body was refuted by seventeenth-century physics, particularly in the work of Isaac Newton, which laid the foundations for modern science. Newton demonstrated that the motions of the heavenly bodies could not be explained by the principles of Descartes’s contact mechanics, so that the Cartesian concept of body must be abandoned. (p. 143)”

“There is no longer any definite conception of body. Rather, the material world is whatever we discover it to be, with whatever properties it must be assumed to have for the purposes of explanatory theory. Any intelligible theory that offers genuine explanations and that can be assimilated to the core notions of physics becomes part of the theory of the material world, part of our account of body. If we have such a theory in some domain, we seek to assimilate it to the core notions of physics, perhaps modifying these notions as we carry out this enterprise. (p. 144)”

Ben F


Ben, I totally agree.

A lot of people don’t, though. And even the position you label “obviously stupid” is very popular, including among very smart people!

Jason


Yes, as I understand it, a lot of people do actually take the view that consciousness can be fully described in terms of “little balls” (molecules), etc., just as the rest of the body can be. In my comments under “Wanted Dead or Alive…” I made a similar point: perhaps “mind” or “consciousness”, or whatever people call whatever they think it is (as opposed to “body”), is just an intuitive feeling that it exists. As we know, intuitive feelings often turn out to be wrong when investigated by science. It was once believed (and probably still is by some) that a force called “vitalism” was required to enable us to stay alive. We now know this to be wrong, but it was an obvious, intuitive understanding a couple of hundred years ago. Maybe they will laugh at us in another 50 years or so for believing in “consciousness”, whatever we think we mean by that word.

David Clarke


If I’m not conscious, I won’t mind them laughing at me.

Hey, anything you feel like writing on Vitalism will be very relevant for the final part of the course. Great timing!

Jason


I love this - “Louise’s brain”

http://bunny.xeny.net/linked/louisesbrain.gif

Also, “Alex the parrot”

“…She then retrieved a green key and a small green cup from a basket on a shelf. She held up the two items to Alex’s eye.

“What’s same?” she asked.

Without hesitation, Alex’s beak opened: “Co-lor.”

“What’s different?” Pepperberg asked.

“Shape,” Alex said.”

http://www.nytimes.com/1999/10/09/arts/a-thinking-bird-or-just-another-birdbrain.html?showabstract=1

David C.

Definition of Vitalism in Henderson’s Dictionary of Biology - “a belief that phenomena exhibited in living organisms are due to a special force distinct from physical or chemical forces”.

I’ll start a new topic on vitalism.

David C.


Louise’s Brain: Isn’t it wonderful? I didn’t cover it properly because I forgot to set it up before the lecture - I can’t include animations directly in slides. I’ll try to pop it in at the beginning of a lecture.

Alex the parrot: wonderful! My wife is a linguist who knows a lot about non-human animal language. If you want to chat to her, open up a new topic and I’ll point her to it.

Jason

I will when I finish the assignment.

Dave
