Humanities LibreTexts

Consciousness -- Introduction to Philosophy: Philosophy of Mind


    Consciousness

    Tony Cheng


    Introduction

    The term “consciousness” is very often, though not always, interchangeable with the term “awareness,” which is more colloquial to many ears. We say things like “are you aware that …” often. Sometimes we say “have you noticed that … ?” to express similar thoughts, and this indicates a close connection between consciousness (awareness) and attention (noticing), which we will come back to later in this chapter. Ned Block, one of the key figures in this area, provides a useful characterization of what he calls “phenomenal consciousness.” For him, phenomenal consciousness is experience. Experience covers perceptions, e.g., when we see, hear, touch, smell, and taste, we typically have experiences, such as seeing colors and smelling odors. It also covers bodily awareness, e.g., we typically have experiences of our own bodily temperature and positions of limbs. Consciousness is primarily about this experiential aspect of our mental lives.

    Most discussions in philosophy of mind rely on the idea of conscious experience at some level. Descartes reported his conscious experiences in his Meditations on First Philosophy, and these figured centrally in his arguments that he has a mind (Chapter 1). Behaviorism, materialism, functionalism, and property dualism seek to explain our mental lives, so they need to account for consciousness, as it is one of the most important elements of mentality (Chapter 2, Chapter 3, Chapter 4). Qualia and raw feels are one way to understand consciousness; since they have been covered earlier (Chapter 5), we will not discuss them here. Knowledge, belief, and other mental states are sometimes, though not always, conscious, so it is important to understand the difference between (say) conscious and unconscious beliefs. This also applies to concepts and content (Chapter 7). Whether freedom of the will and the self require consciousness is highly debated (Chapter 8). In this way, it can be seen that consciousness has a central place in philosophy of mind.

    Concepts of Consciousness

    There are many concepts of consciousness; in general there are two approaches to them. First, one can survey folk concepts of consciousness: how lay people use the term, and how they use other related terms (e.g., awareness) to refer to similar phenomena. Second, one can search for concepts of consciousness that are useful for explaining or understanding the mind. The former approach is conducted in experimental philosophy, a relatively new branch of philosophy, which uses experimental methods to survey people’s concepts, including the concepts of people from diverse cultural and linguistic backgrounds. In this chapter, we will focus instead on the latter approach, which is traditionally favored by philosophers of mind.

    Philosophers have different ways of picking out various concepts of consciousness, and they tend to disagree strongly with one another. No division is entirely uncontroversial. However, there is one distinction that tends to be the starting point of philosophical discussions about consciousness; even those who disagree with this way of carving up the territory often start from here. It is Block’s (1995) distinction between phenomenal and access consciousness:

    Phenomenal consciousness [P-consciousness] is experience; what makes a state phenomenally conscious is that there is something “it is like” (Nagel 1974) to be in that state. (Block 1995, 228)

    A perceptual state is access-conscious [A-conscious], roughly speaking, if its content—what is represented by the perceptual state—is processed via that information-processing function, that is, if its content gets to the Executive System, whereby it can be used to control reasoning and behavior. (1995, 229)[1]

    Block also discusses a third concept of consciousness, called “monitoring consciousness” (1995, 235).[2] To keep the discussion focused, we will restrict ourselves to the division between phenomenal and access consciousness.

    Block’s main point is to dissociate P-consciousness and A-consciousness: he seeks to make the case that these two kinds of consciousness are different in kind. In order to do so, he first tries to find cases in which P-consciousness exists while A-consciousness is absent:

    [S]uppose you are engaged in intense conversation when suddenly at noon you realize that right outside your window there is—and has been for some time—a deafening pneumatic drill digging up the street. You were aware of the noise all along, but only at noon are you consciously aware of it. That is, you were P-conscious of the noise all along, but at noon you are both P-conscious and A-conscious of it. (1995, 234; original emphasis)

    For A-consciousness without P-consciousness, Block argues that it is hard to find any actual case, but it is “conceptually possible” (1995, 233), meaning that there is no incoherence in the scenario in which A-consciousness exists while P-consciousness is absent. This strategy is effective since what he wants to argue is that these two kinds of consciousness are distinct: for this purpose, there is no need to have actual cases in which one exists while the other is absent, though actual cases do help as they serve as existential proofs.

    Another distinction that needs to be in place comes from David Chalmers (1995), who poses a challenge to researchers of consciousness with his distinction between the “easy problems” and the “hard problems” of consciousness. According to Chalmers,

    [t]he easy problems of consciousness are those that seem directly susceptible to the standard methods of cognitive science, whereby a phenomenon is explained in terms of computational or neural mechanisms. The hard problems are those that seem to resist those methods. (1995, 4)

    Here are some examples of the easy problems he provides:

    The integration of information by a cognitive system;

    The reportability of mental states;

    The ability of a system to access its own internal states;

    The focus of attention (1995).

    Both the easy problems and the hard problems interest philosophers. Block’s discussion of P- and A-consciousness can be seen as primarily in the territory of easy problems, while Chapters 1 to 5 of this book can be seen as more about the hard problems.

    Now, with these two basic distinctions at hand, it is time to see how philosophers and scientists theorize about different kinds of consciousness, especially phenomenal consciousness.

    Theories of Consciousness

    Block seeks to dissociate P- and A-consciousness. He has several argumentative lines; the most relevant one has it that P-consciousness cannot be explained by representational contents. To understand what this amounts to, one needs some basic grip on what representational contents are. Again, examples will help. Two beliefs are different because they have different contents: my belief that it will rain tomorrow and my belief that it will not rain the day after tomorrow are different beliefs because their contents—“it will rain tomorrow” and “it will not rain the day after tomorrow”—are different. These contents are said to represent states of affairs, both actual and imaginary. Contents can be true or false: my belief that it will rain tomorrow can fail to be true simply because tomorrow it will not rain. Representational content itself is a complex topic that cannot be handled in this chapter; it will be the subject matter of Chapter 7. There can be many reasons for believing that P-consciousness cannot be explained by representational contents, one being that an experience and a belief can share the same content but have different phenomenology. This is debatable. Some would argue that experiences do not have representational content.[3]

    Now, content and consciousness are two major topics in philosophy of mind. Another major figure in this area, Daniel Dennett, took them as the title of his first book (1969). They are often studied separately, but some philosophers have attempted to invoke one to explain the other. The most prominent position, exemplified by Fred Dretske (1995), holds that representational content is relatively easier to understand, since it can be explained by naturalistic notions such as information; it is naturalistic in the sense that the natural sciences would find those notions scientifically respectable. This view further holds that consciousness should be understood through representational content, so that it is fully naturalized. This view is representationalism. Its canonical statement is that “all mental facts are representational facts” (Dretske 1995, xiii). This is the “naturalizing the mind” project. Now, although Block is all for the naturalization project, he objects to this specific way of naturalizing consciousness. The basic intuition is that representationalism leaves something crucial out: the what-it-is-like-ness of experience. This is because, for example, beliefs with representational contents can be unconscious. Or again: some hold that experiences do not have content.

    Although Block and others have resisted representationalism, it is still the most prominent view in this area, presumably because, according to many, it offers the most promising line for naturalizing the mind. This matters because one of the main motives of twentieth-century philosophy is to situate the mind in the physical world (more on this in Chapters 1-5). This prominent theory comes in various forms, which the following sections summarize in turn.

    First-Order Representationalism

    This is the view that representational content alone can explain phenomenal consciousness (Dretske 1995; Tye 1995): either the two are identical, or the latter supervenes on the former. Supervenience is another technical concept that appears in many areas of philosophy. Suppose A is the supervenience base, and B is said to supervene on A. In that case, if B changes in any way, it has to be because there is some change in A. But the reverse does not hold: B can stay the same while A changes. This is a specific way of explaining the dependence relation. A concrete example helps. In ethics, it has been argued that ethical facts, e.g., that torturing is wrong, have solid status because they supervene on physical facts. If ethical facts change in any way, it has to be because there are some changes in physical facts; but the reverse does not hold: ethical facts can stay the same while physical facts change. The same move has been invoked to explain aesthetic facts. This notion of supervenience seems to capture what we need for the dependence relation: physical facts are the most fundamental, so if other facts change, it has to be due to changes in physical facts. But different physical facts can sustain the same ethical, aesthetic, and mental facts. This is one powerful thought behind representationalism.
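    The supervenience relation just described can be put schematically. One common formulation (a sketch; the modal details vary across authors) quantifies over possible worlds w1 and w2:

```latex
% B supervenes on A iff A-indiscernibility implies B-indiscernibility:
B \text{ supervenes on } A
\iff
\forall w_1 \forall w_2 \,\bigl[\, A(w_1) = A(w_2) \;\rightarrow\; B(w_1) = B(w_2) \,\bigr]
```

    Read with A as the physical facts and B as the ethical (or mental) facts, this says there can be no B-difference without an A-difference, while the converse implication can fail.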

    Higher-Order Representationalism

    Higher-order theories in general hold that a state is conscious in virtue of being accompanied by other states. How to characterize the relevant sense of “accompany” is of course a difficult and controversial matter (Rosenthal 2005). One crucial motivation for higher-order theories is David Rosenthal’s observation that “mental states are conscious only if one is in some way conscious of them” (2005, 4; my emphasis). He calls this the “transitivity principle.” Notice that “only if” signifies a particular logical relation: “A only if B” means B is necessary for A. So this principle says that one’s consciousness of some mental states is a necessary condition of those mental states being conscious.[4]
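    The “only if” in the transitivity principle can be made explicit in standard logical notation (a simple formalization for illustration, not Rosenthal’s own symbolism):

```latex
% "A only if B" is the material conditional from A to B:
A \text{ only if } B \quad \equiv \quad A \rightarrow B
% So the transitivity principle reads, for a mental state m of subject s:
\mathrm{Conscious}(m) \;\rightarrow\; \mathrm{ConsciousOf}(s, m)
```

    That is, being conscious of m is necessary for m to be a conscious state; the principle by itself does not claim that it is sufficient.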

    Higher-order theories come in many varieties. The basic question concerns the nature of the relevant higher-order states: they are conceived either as perceptions or as thoughts. The former view can be found in Armstrong (1968) and Lycan (1996), and it is also called the “inner sense theory.” The idea is that just like ordinary perceptions (outer sense), the internal consciousness-making states are also perceptual (inner sense). What is crucial here is that perception is a lower-level state compared with thought. One merit of this version is that perception is more primitive than thought, so the theory can more easily accommodate non-linguistic animals, which can perceive but might not be able to think.

    The latter, higher-order thought theories, come in two versions. Rosenthal (2005) holds that phenomenally conscious mental states are the objects of higher-order thoughts. This is actualist. Carruthers (2005) holds that phenomenally conscious mental states are available to higher-order thoughts. This is dispositionalist. The distinction between actual and dispositional is also important in many areas of philosophy. Think about the documents on your laptop. Since you have the relevant passwords, those documents are accessible or available to you, but that does not mean that at any specific moment you are accessing any specific document. Put bluntly, for the actualist, only those mental states that I am actually accessing at a given moment are conscious, while for the dispositionalist, any mental state that I could at some point access is conscious. In general, dispositional accounts are less demanding than actualist accounts, simply because dispositional notions are in general weaker. All these higher-order theories can be classified as versions of representationalism, since both perceptions and thoughts have contents, according to most views.

    Reflexive Representationalism

    This view might be difficult to differentiate from higher-order theories. The basic idea is that phenomenally conscious mental states themselves possess higher-order representational contents that represent those very states (Kriegel 2009). The main merit of this view is that it does not duplicate mental states: the contents are part of the relevant conscious states. For example, my visual experience of seeing the book in front of me has some specific conscious phenomenology, and also the content that there is a book in front of me. This view says that the visual experience has the phenomenology it does due to the content it possesses. This group of ideas comes in so many varieties that we cannot cover them here, but it is worth bearing in mind that it should not be conflated with higher-order theories.

    It is controversial whether the next two groups should be classified as representationalism. This chapter does not take a stand on this further question.

    Cognitive Theories

    This group of ideas invokes cognition to understand consciousness. In a way it is quite similar to standard representationalism, since contents are often attributed to cognitive states such as beliefs. However, the two differ crucially in that cognitive theories typically do not invoke representational content, which is primarily a notion from philosophy. The most famous cognitive theory is proposed by scientist Bernard Baars (1988): according to this view, consciousness emerges from competitions among processors and outputs for a limited working memory capacity that broadcasts information for access in a “global workspace.” One can think of the model by analogy with digital computers. This is quite similar to Dennett’s multiple drafts model, according to which different probes would elicit different answers about the subject’s conscious states (1991); the two are similar in that both invoke cognitive notions to explain consciousness. Cognitive theories tend to be quite naturalistic, though they do not use representational contents to explain consciousness. Block argues against cognitive theories for reasons similar to his reasons against representational views, i.e., that they cannot capture the what-it-is-like-ness of experience.
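    The computer analogy can be made concrete with a toy sketch. The following is purely illustrative, not Baars’s actual model: the class names, the salience scores, and the winner-take-all competition rule are all invented for the example. The point it captures is structural: many processors produce candidate contents, only one wins the limited-capacity workspace, and the winner is broadcast so that every processor gains access to it.

```python
# Toy sketch of a "global workspace" (illustrative only, not Baars's model):
# processors compete for a limited-capacity workspace; the single winning
# content is broadcast so that all processors can access it.

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []  # contents this processor has gained access to

    def receive(self, content):
        self.received.append(content)

def broadcast_cycle(processors, candidates):
    """candidates: (salience, content) pairs produced by the processors.
    Only the most salient content enters the workspace and is broadcast."""
    salience, winner = max(candidates)   # competition for limited capacity
    for p in processors:
        p.receive(winner)                # global availability ~ "access"
    return winner

procs = [Processor("vision"), Processor("audition"), Processor("memory")]
won = broadcast_cycle(procs, [(0.9, "drill noise"), (0.4, "page of text")])
print(won)                # the drill noise wins and is globally broadcast
print(procs[2].received)  # even the "memory" processor now has access to it
```

    On this toy picture, a content's being "conscious" is modeled as nothing more than its winning the competition and being globally available, which is exactly the feature Block complains leaves out the what-it-is-like-ness of experience.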

    Information Integration Theory

    Most researchers agree that information must play some role in a complete theory of consciousness, but exactly what role it plays is controversial. Information Integration Theory, or IIT, proposed by neuroscientist Giulio Tononi (2008), is a view that assigns a very significant role to information. Tononi argues that the relevant kind of information integration is necessary and sufficient for consciousness. According to this view, consciousness is a purely information-theoretic property of cognitive systems, i.e., no other notion is more fundamental in this regard. The details of this theory are quite technical, and it needs to be further developed, as it is quite young. Some have also compared it with panpsychism, the view that consciousness is one of the most fundamental properties of the world (Chalmers 1996). But we need to bear in mind that each theory is distinctive and needs to be understood in its own terms.

    This ends our summary of some major theories of consciousness. It is not supposed to be comprehensive, and each theory discussed above has many more details that need to be taken seriously. This summary, and this chapter as a whole, serve only as a starting point for further exploration.

    Attention and Consciousness

    At the beginning of this chapter, we saw that there are potential connections between attention and consciousness. Typically, chapters on consciousness do not discuss attention. Since 2010 or so, however, philosophical discussions of attention have become more widespread, so it makes sense to discuss it in relation to consciousness, even if only briefly.

    Before the 1990s, “consciousness” was a term that scientists tried to stay away from. It was regarded as unscientific, as there was no respectable way to give it a satisfactory operational definition, i.e., it was difficult to give a definition that was grounded in empirical evidence. Back then, scientists studied attention instead, as it was easier to quantify it, or so it seemed. Empirical studies of attention have been fruitful since the 1960s. Now, although scientists tried hard to avoid talking about consciousness explicitly, it was not possible to ignore it entirely. For example, when psychologist Max Coltheart defines “visible persistence” (1980), it is hard to understand what we should mean by “visible” if it is different from “consciously seen,” though there can indeed be other interpretations, such as “able to be seen.” But one wonders whether that should mean “able to be consciously seen.” Of course there are subtleties here; for example Block (2007) further distinguishes between visible persistence and phenomenal persistence, which makes one wonder how to understand visible persistence exactly. But in any case, before the 1990s or so, attention was intensely studied by scientists, and in a way it served as a surrogate for consciousness, as attention is more scientifically respectable according to many.

    The situation has dramatically changed. Nowadays consciousness studies are pervasive not only in the sciences but also in philosophy. The reasons for this are complicated; it is not simply that consciousness can now be better defined in the sciences. (I shall not touch on this complex history here.) Now the question concerning the relation between attention and consciousness arises: Are they identical? If not, how do they relate to each other? It is hard to maintain that they are identical, as there seem to be clear cases in which a subject, not necessarily a human subject, can focus its attention while being unconscious of the target. Perhaps that subject is a simple organism that is not conscious in the relevant sense, but arguably it has certain basic attentional capacities, i.e., it can deploy its cognitive resources to focus on specific targets. Normally the question is rather whether attention is necessary and/or sufficient for consciousness. Jesse Prinz (2012) argues for this strong view, and sometimes he comes close to the identity view. The view faces two basic challenges. Some have argued that attention is not necessary for consciousness; the phenomenological overflow view held by Block (2007) is one such view. Others have argued that attention is not sufficient for consciousness; Robert Kentridge and colleagues (1999) have argued that blindsight—in which patients are blind in certain parts of their visual fields due to cortical damage—is a case of attention without awareness, since these patients exemplify clear markers of attention while insisting that they are unconscious of the relevant parts of their visual fields. All of this is highly debatable, and there is much room for disagreement (Cheng 2017). Both consciousness and attention remain hotly debated topics, and will continue to be so in the foreseeable future.

    References

    Armstrong, David. 1968. A Materialist Theory of the Mind. London: Routledge.

    Baars, Bernard. 1988. A Cognitive Theory of Consciousness. Cambridge, UK: Cambridge University Press.

    Block, Ned. 1995. “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18: 227-287.

    Block, Ned. 2007. “Consciousness, Accessibility and the Mesh between Psychology and Neuroscience.” Behavioral and Brain Sciences 30: 481-548.

    Carruthers, Peter. 2005. Consciousness: Essays from a Higher-Order Perspective. Oxford: Oxford University Press.

    Chalmers, David. 1995. “Facing up to the Problem of Consciousness.” Journal of Consciousness Studies 2(3): 200-219.

    Chalmers, David. 1996. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford University Press.

    Cheng, Tony. 2017. “Iconic Memory and Attention in the Overflow Debate.” Cogent Psychology 4.

    Coltheart, Max. 1980. “Iconic Memory and Visible Persistence.” Perception and Psychophysics 27(3): 183-228.

    Dennett, Daniel. 1969. Content and Consciousness. Oxon: Routledge & Kegan Paul.

    Dennett, Daniel. 1991. Consciousness Explained. New York: Little Brown & Co.

    Dretske, Fred. 1995. Naturalizing the Mind. Cambridge, MA: MIT Press.

    Kentridge, Robert et al. 1999. “Attention without Awareness in Blindsight.” Proceedings of the Royal Society of London (B) 266: 1805-1811.

    Kriegel, Uriah. 2009. Subjective Consciousness: A Self-Representational Theory. Oxford: Oxford University Press.

    Lycan, William. 1996. Consciousness and Experience. Cambridge, MA: MIT Press.

    Nagel, Thomas. 1974. “What is it like to be a Bat?” Philosophical Review 83: 435-450.

    Prinz, Jesse. 2012. The Conscious Brain: How Attention Engenders Experience. New York: Oxford University Press.

    Rosenthal, David. 2005. Consciousness and Mind. Oxford: Oxford University Press.

    Tononi, Giulio. 2008. “Consciousness as Integrated Information: A Provisional Manifesto.” Biological Bulletin 215: 216-242.

    Travis, Charles. 2004. “The Silence of the Senses.” Mind 113: 57-94.

    Tye, Michael. 1995. Ten Problems of Consciousness: A Representational Theory of the Phenomenal Mind. Cambridge, MA: MIT Press.

    Tye, Michael. 2003. Consciousness and Persons: Unity and Identity. Cambridge, MA: MIT Press.

    Further Reading

    Blackmore, Susan and Emily Troscianko. 2018. Consciousness: An Introduction. Oxford: Routledge.

    Churchland, Patricia. 1989. Neurophilosophy: Toward a Unified Science of the Mind-Brain. Cambridge, MA: MIT Press.

    Hurley, Susan. 1998. Consciousness in Action. Cambridge, MA: MIT Press.


    1. For more on representation and content, see Chapter 7.

    2. For more concepts of consciousness, see Tye (2003).

    3. See Travis (2004), for example.

    4. Also see Lycan (1996).
