The Science and Philosophy of Consciousness
Cami Rosso: article published in Psychology Today.
Delving deeper into a mystery of the human mind:
Consciousness is one of the unsolved mysteries that great thinkers across many disciplines have attempted to elucidate. The Merriam-Webster Dictionary defines consciousness simply as “the quality or state of being aware.” Yet the true definition of consciousness has eluded great minds for centuries. Several attempts at defining consciousness have been made by philosophers, physicians, psychologists, neuroscientists and scientific researchers.
One way to unravel the mystery of consciousness is to examine its opposite: the state of unconsciousness. A person can be rendered unconscious through general anesthesia, a state akin to a medically induced coma. The history of applied anesthesia in humans in the Western Hemisphere is brief and relatively modern. Paracelsus (Theophrastus Bombastus von Hohenheim) observed in 1540 that liquid ether could induce sleep in animals; however, it was not until centuries later, in 1842, that American surgeon Dr. Crawford Williamson Long first used inhaled diethyl ether as an anesthetic on a human patient. Dr. Long later published his discovery in 1849. In 1846, a Boston dentist named William Morton also anesthetized a surgical patient using diethyl ether. The following year, Scottish obstetrician Dr. James Young Simpson published in the London Medical Gazette his use of inhaled chloroform on more than eighty patients. Today there is a variety of intravenous and inhaled anesthetics manufactured by various pharmaceutical companies. Anesthetic medication alters the activity and communication of various brain regions, bringing on a rapid onset of characteristic brain waves, or oscillations. Yet no one knows the precise mechanisms by which anesthesia renders a person unconscious; answering that question may require an understanding of the true nature of consciousness itself.
One theory is that anesthetics prevent the human brain from integrating information through a functional disconnection. Is consciousness a biomechanical phenomenon, intrinsically tied to the physical elements of the brain? In other words, does consciousness exist because of the brain? This biomechanical concept resonates with at least one prominent hypothesis of consciousness: the global workspace theory (GWT).
The global workspace theory was formulated by Bernard J. Baars, a Dutch-born neuroscientist at the Neurosciences Institute in La Jolla, California. Baars likened the human brain to a distributed society of computational specialists that continuously process information and share a working memory. In his paper “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience,” published in 2005 in Progress in Brain Research, Baars characterized this memory as fleeting in nature, with only one consistent content at a time. He states that consciousness “resembles a bright spot on the stage of immediate memory, directed there by a spotlight of attention under executive guidance.” Consciousness can amplify and broadcast the content of this memory to the whole of the system. In his metaphor, Baars posits that the overall theater is dark and unconscious, while the spotlighted area on stage represents consciousness. Consciousness is “the gateway to the brain” that “enables multiple networks to cooperate and compete in solving problems.”
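Baars’ theater metaphor maps naturally onto a blackboard-style software architecture. The toy Python sketch below is an illustration of the metaphor, not Baars’ own model (all names and the salience mechanism are invented for the example): unconscious specialists compete for a single workspace, and the winning content is broadcast to every specialist.

```python
import random

class Specialist:
    """An unconscious processor that bids to place content in the workspace."""
    def __init__(self, name):
        self.name = name
        self.received = []  # broadcasts seen so far

    def propose(self):
        # A random salience score stands in for "attention under
        # executive guidance" selecting what reaches the spotlight.
        return random.random(), f"signal from {self.name}"

    def receive(self, content):
        self.received.append(content)

def broadcast_cycle(specialists):
    """One workspace cycle: the most salient proposal wins the 'spotlight',
    and that single content is broadcast to the whole society."""
    salience, content = max(s.propose() for s in specialists)
    for s in specialists:
        s.receive(content)
    return content

specialists = [Specialist(n) for n in ("vision", "hearing", "memory")]
winner = broadcast_cycle(specialists)
# Every specialist now shares the same, single "conscious" content.
assert all(s.received == [winner] for s in specialists)
```

The key property the sketch captures is Baars’ “one consistent content at a time”: many processes run in the dark, but only one item occupies the workspace and is made globally available per cycle.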
Australian philosopher David Chalmers considers Baars’ theory one of “cognitive accessibility” that lacks an explanation of the subjective aspect of experience. Chalmers divides the conundrum of consciousness into “easy” and “hard” problems in a paper published in the Journal of Consciousness Studies in 1995. The “easy” problems are phenomena that can be explained by neural or computational mechanisms. For example, the difference between being awake and asleep would be considered by Chalmers an easy problem of consciousness, as it can be explained as a cognitive function. According to Chalmers, the “hard problem of consciousness” is the subjective nature of experience, which can be explained by neither neuroscience nor cognitive science.
One way to bypass Chalmers’ “hard problem of consciousness” is to treat consciousness as a given. The French philosopher, mathematician, and scientist René Descartes approached self-consciousness as having two parts: awareness of thought and awareness of existence itself. This view of an inner perception of self was also echoed in later British philosophy.
In a similar manner, Italian neuroscientist and psychiatrist Dr. Giulio Tononi of the University of Wisconsin-Madison evades Chalmers’ “hard problem” with an amalgamated mathematical and philosophical approach, accepting the existence of consciousness as a given in his Integrated Information Theory (IIT). In his 2004 paper “An information integration theory of consciousness,” published in BMC Neuroscience, Dr. Tononi defines consciousness as that which “corresponds to the capacity of a system to integrate information.” Dr. Tononi hypothesizes that the “quantity of consciousness available to a system can be measured as the Φ (“phi”) value of a complex of elements,” where Φ is “the amount of causally effective information that can be integrated across the informational weakest link of a subset of elements.” Mathematically, IIT implies that interconnected complexity is a prerequisite for higher consciousness. For example, the conscious portion of the human brain, with its highly integrated neuronal network, would have a high associated Φ value, whereas a conventional computer, whose architecture has sparse interconnectivity, with each transistor connected to only a few others, would have a low associated Φ value. By this measure, today’s robots powered by artificial intelligence (AI) are not conscious according to IIT.
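Tononi’s full Φ calculus is involved, but its core idea, measuring integration across the system’s weakest bipartition, can be illustrated with a toy calculation. The Python sketch below is a loose illustration, not Tononi’s actual measure: it uses plain mutual information as a stand-in for his “effective information,” and the two example distributions are invented. It contrasts a tightly coupled three-unit system with one made of independent units.

```python
from itertools import product
from math import log2

# All joint states of three binary units (A, B, C).
states = list(product([0, 1], repeat=3))

# "Integrated" system: units tend to agree (stand-in for a coupled network).
coupled = {s: (0.4 if s[0] == s[1] == s[2] else 0.2 / 6) for s in states}
# "Segregated" system: three independent fair coins.
independent = {s: 1 / 8 for s in states}

def mutual_information(p, part):
    """I(part ; rest) in bits: a crude surrogate for the 'effective
    information' flowing across one bipartition of the system."""
    rest = tuple(i for i in range(3) if i not in part)
    mi = 0.0
    for s, ps in p.items():
        if ps == 0:
            continue
        pa = sum(q for t, q in p.items() if all(t[i] == s[i] for i in part))
        pb = sum(q for t, q in p.items() if all(t[i] == s[i] for i in rest))
        mi += ps * log2(ps / (pa * pb))
    return mi

def toy_phi(p):
    """Phi-like value: information across the *weakest* bipartition."""
    return min(mutual_information(p, (i,)) for i in range(3))

print(f"coupled system:     {toy_phi(coupled):.3f} bits")      # > 0
print(f"independent system: {toy_phi(independent):.3f} bits")  # 0.000
```

Even in this caricature, the qualitative IIT prediction appears: the coupled system scores above zero because no unit can be split off without losing information, while the independent system scores exactly zero, like a machine whose parts do not integrate what they process.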
Christof Koch, president and chief scientific officer of the Allen Institute for Brain Science in Seattle, co-authored an article with Giulio Tononi, titled “Can We Quantify Machine Consciousness?”, that examines the implications of IIT for future machine intelligence. Presently, conventional computer systems lack the interconnected complexity of the human brain’s architecture and therefore are not capable of a conscious experience. However, neuromorphic computing architectures, modeled on the human brain, are being developed with highly interconnected logic and memory gates. A neuromorphic machine with a high Φ could potentially be characterized as conscious under the Integrated Information Theory. This could raise legal and ethical concerns as technology advances toward artificial general intelligence (AGI) fueled by neuromorphic hardware and artificial neural networks inspired by the architecture of the biological brain.
Consciousness remains an elusive concept that has yet to be fully explained. As scientists and researchers make progress in evidence-based studies of the biomechanics of the human brain, a greater understanding of the “easy” problems may one day be achieved. The nature of consciousness may prove to be as complex as the human mind itself.
“Consciousness cannot be accounted for in physical terms. For consciousness is absolutely fundamental. It cannot be accounted for in terms of anything else.”- Erwin Schrödinger, The Observer, 1931