A site on the future of psychology

What is the mind?

            As I sit here pondering what to write, what exactly is doing the pondering? Where do the thoughts come from? How does the three-pound mass of grey matter that is my brain give rise to the felt experience of sensations and thoughts? It sometimes seems inconceivable that the water of material processes could give rise to the wine of consciousness. Indeed, the conundrum is so famous that it has a name…the (in)famous mind-brain (or mind-body) problem. The failure to reach a consensual resolution of the mind-brain-body (MBB) problem remains at the heart of psychology and its difficulties as a fragmented discipline. My goal here is to briefly explain how the unified theory (TUT) resolves the MBB problem.

            TUT resolves the MBB problem by doing the following. First, it provides a clear taxonomy for the various phenomena that relate to the term mind; prior to TUT, there was massive definitional confusion. Second, via the ToK System, we get clear on the nature of the universe as the flow of Energy-Information (E-I), which in turn can be divided into four ontologically distinct categories. Third, the ToK System points to the human mind as consisting of two fundamentally different flows of E-I (or two fundamentally different kinds of behavioral processes); Behavioral Investment Theory (BIT) and the Justification Hypothesis (JH) frame the FUNCTIONAL nature of these two systems. Finally, note that I said TUT resolves the MBB problem…that is different from solving it. At the end of this post, I note that an important part of the problem, what I call the engineering problem, remains.

            We first need to get clear about what most folks mean when they use the term the mind. What, exactly, are they referring to? In common parlance, ‘the mind’ most often refers to the seat of human consciousness, the thinking-feeling ‘I’ that seems to be an agentic causal force somehow related to, yet also seemingly separable from, the body. The idea of life after death is intuitively plausible to so many because our mental life seems so different from our bodies that we can imagine our souls existing long after our bodies decompose. This leads to a common-sense dualism that is part and parcel of many religious worldviews.

            TUT suggests there are semantic problems with referring to the human self-consciousness system as ‘the mind’. One reason has to do with what Freud ‘discovered’ over a century ago and what is now well known to modern-day psychologists (see, e.g., Tim Wilson’s Strangers to Ourselves)–consciousness is only a small portion of mental processes. Consciousness and mind are thus not synonymous. We need to realize, then, that the MBB problem needs to be either the Consciousness-Brain-Body problem or the Consciousness-Mind-Brain-Body problem.

            Recognizing the need to separate the mind from consciousness is one of the keys to resolving the CMBB problem. What, then, is the relationship between mind and consciousness? TUT tells us we can turn to the cognitive revolution in psychology to ground our answer. The cognitive revolution was birthed as a mixture of work on information theory, artificial intelligence, and cybernetics. It gave rise to the computational theory of mind (Pinker, 1997), which does indeed offer a solution to a big piece of the puzzle. The computational theory of mind posits that the nervous system is an information-processing system. It works by translating changes in the body and the environment into a language of neural impulses that represent the animal-environment relationship. The computational theory of mind was a huge breakthrough because it allows us, for the first time, to conceptually separate the mind from the brain-body. How? Because we can now conceive of ‘the mind’ as the flow of information through the nervous system, and this flow of information can be conceptually separated from the biophysical matter that makes up the nervous system. To see how the information can be separated from the nervous system itself, think of a book. The book’s mass, its temperature, and its other physical dimensions can be considered roughly akin to the brain. Then think about the information content (i.e., the story the book tells or the claims it makes); in the computational theory, that is akin to the mind. The mind, then, is the information instantiated in and processed by the nervous system.
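The book analogy can be made concrete with a toy sketch (my own illustration, not part of TUT or the computational theory): the same information content can be instantiated in physically different substrates, which is what lets us conceptually separate the "mind" (information) from the "brain" (its physical carrier).

```python
# Illustrative sketch: one piece of information instantiated in two
# physically different "substrates" (encodings). The message (akin to
# the mind) is invariant; the physical form (akin to the brain) differs.

message = "the spider sees the fly"

# Substrate 1: a sequence of raw ASCII bytes
as_bytes = message.encode("ascii")

# Substrate 2: a list of integer code points
as_codepoints = [ord(c) for c in message]

# The physical representations are not the same object or the same form...
assert as_bytes != as_codepoints

# ...yet each decodes back to the identical information content.
assert as_bytes.decode("ascii") == message
assert "".join(chr(n) for n in as_codepoints) == message
```

The point of the sketch is only the conceptual separation: nothing about the message itself tells you which substrate carried it.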

            Although the cognitive revolution was a great move forward, problems emerged. This was in part because, now that mind could be separated from brain with relative ease, researchers became fascinated with models of disembodied, artificial algorithmic processors that had little connection with the other elements of mental phenomena, such as conscious experience, culture, overt behavior, or the brain. The problem was that these models were far removed from the human mind-brain system. With its macro-level view and its capacity to assimilate and integrate key perspectives, TUT allows us to build on the central insight of the cognitive revolution and simultaneously connect it back to the brain, evolution, human action, and culture.

              The depiction of four different dimensions of informational complexity offered by the ToK System (i.e., Matter, Life, Mind, and Culture) should immediately give us pause when we consider the problem of human cognition. Is human cognition a level-three (Mind/neuronal) or a level-four (Culture/linguistic) phenomenon? The answer, of course, is that human cognition is a function of dual modes of information processing. It is both neuronal and linguistic. Or, more technically, linguistic information processing develops/emerges out of and loops back upon neuronal information processing. After many years of research, this view of human cognition has finally ascended to a dominant position in mainstream cognitive science. It is useful to note that the ToK points us immediately in that direction.

            But what is the relationship between neuronal or linguistic information processing (the human mind) and consciousness? Consciousness is ‘experienced’ information flow. I will return to why experienced is in quotes. For now, let me note how congruent the dual-processing models of cognition (one fast, automatic, associative, and reflexive; the other slower, verbal, and analytic) are with our conscious experience. For although our conscious experience feels unitary, there is nonetheless an easy dichotomy to make. One aspect of our consciousness is our experience of first-order awareness: seeing red, being hungry, feeling scared. These nonverbal perceptual, motivational, and emotional gestalts are the sentient elements of consciousness that some call qualia. They are different in kind from the other seat of conscious awareness, found only in humans, which is the second-order level of conscious awareness. This is the position of a reflective narrator, the human self that explains one’s actions and decides what is and is not legitimate.

            It is, of course, Behavioral Investment Theory that, from a unified theory perspective, provides the conceptual frame for neuronal information processing and the sentient level of consciousness. BIT tells us the nervous system is a computational control system that guides action on the basis of an investment-value, cost-benefit ratio. Pleasure and pain are nature’s functional solution for networking perceptions, motives, and action procedures together to foster behavioral guidance toward benefits and away from costs.
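BIT's claim that the nervous system is a control system guiding action by cost-benefit computation can be caricatured in a few lines of code. This is a deliberately crude sketch; the function, the candidate actions, and the numbers are my own hypothetical illustration, not BIT's formalism.

```python
# Toy sketch of a cost-benefit behavioral controller in the spirit of
# Behavioral Investment Theory: the "organism" invests in whichever
# action has the best expected benefit relative to its cost.

def choose_action(options):
    """Return the option with the highest net value (benefit minus cost)."""
    return max(options, key=lambda o: o["benefit"] - o["cost"])

# Hypothetical action menu with made-up investment values.
options = [
    {"action": "approach food", "benefit": 10.0, "cost": 2.0},  # net +8.0
    {"action": "explore",       "benefit": 3.0,  "cost": 1.0},  # net +2.0
    {"action": "rest",          "benefit": 1.0,  "cost": 0.5},  # net +0.5
]

best = choose_action(options)
# With these invented values, "approach food" wins the investment calculus.
```

On BIT's account, pleasure and pain would play the role of the benefit and cost signals here, networking perception and motivation into a single guidance computation rather than being literal numbers in a table.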

            The Justification Hypothesis tells us that linguistic information processing is functionally organized into systems of justification. Moreover, TUT tells us that there will be dynamic tensions and filtering between the domains of experience, private narration and public action–a dynamic tension clinicians are (or should be) well aware of.

            Mind (with a capital M) on the ToK System is the set of mental behavior, which is the behavior of animals mediated by the nervous system. The mind is the information instantiated in and processed by the nervous system. Consciousness is an emergent phenomenon, a first-person experience that arises out of neuronal information processing. In humans, a language-based, second-order consciousness emerges out of and feeds back onto sentience.

            Finally, the famous physicist Richard Feynman once said that if you really want to show you understand how something works, you should build it. And it is here that we can clearly identify the limits of our knowledge regarding consciousness. I put experienced in quotes earlier because no one knows how to engineer the flow of information into emergent states of consciousness. The engineering problem of consciousness remains a great mystery.

Gregg


Comments on: "What is the mind?" (5)

  1. Cross-posted with other comments at: http://fixingpsychology.blogspot.com/

    Thoughts for here:
    One issue is what we mean by consciousness. It is especially crucial to distinguish (or explicitly not distinguish) consciousness from related terms like awareness or self-consciousness. There is certainly no need to enforce certain usages from above, but each author needs to be clear. For example, Skinner used the two terms “awareness” and “consciousness” in the ways that many other authors use “conscious” and “self-conscious”. This can lead to weird confusion. I think Gregg is using “conscious” both in the way Skinner would use “awareness” – that I respond to an object is (or is evidence of) my awareness of that object – and in the way Skinner would use “consciousness” – that I can respond to my awareness (typically verbally), is my consciousness of the object.

    Hopefully Gregg will let me know if my intuition is correct.

    • Gregg Henriques said:

      Eric,

      You are absolutely right to note the crucial importance of semantic clarity. Let me try to be clear about what I am referring to.

      I see the term consciousness as having two meanings, sentience and self-consciousness.

      Sentience is the first person, phenomenological experience of being. It is the capacity to have feeling states, like pleasure, pain, etc. Sentience relates to awareness, but I think phenomenological experience is more precise, and awareness is a much broader term. Also, awareness is possible to conceptualize from a third person perspective. It is relatively easy to conceptualize and study an animal’s awareness of a stimulus via its response to it. Sentience, in contrast, by its very ontological nature is impossible to observe directly (except our own). To see how they are separated, consider a spider feeding on a fly. By virtue of its actions, the spider must be aware of the fly. However, it certainly is possible that the spider is not a sentient creature–it might operate wholly via nonconscious (i.e., nonsentient) neurocomputational processes. In other words, it might be akin to a little neuronal robot. Do you agree with this in principle? Do you have thoughts on which animals are sentient?

      Self-consciousness is reflective awareness of one’s mental experience. Whereas sentience is the first person experience of, say, feeling pleasure or pain, self-consciousness is the reflective, second order narration of “Here I am feeling pain.” Self-consciousness, channeled through language, allows humans at least indirect access to others’ mental experience. If other animals have self-consciousness, it is only in the most rudimentary form.

      I did not directly answer your question pertaining to Skinner, because in reading Skinner (which I did deeply for several years about a decade ago), I would get quite confused on what exactly he meant, even though he tried to be quite clear.

      I hope this clarifies my taxonomy some…

      Thanks for the note.
      Gregg

  2. Gregg,
    That does clarify, thank you!

    As for your question, you said:
    “It is relatively easy to conceptualize and study an animal’s awareness of a stimuli via their response to it. Sentience, in contrast, by its very ontological nature is impossible to observe directly (except our own). To see how they are separated, consider a spider feeding on a fly. By virtue of its actions, the spider must be aware of the fly. However, it certainly is possible that the spider is not a sentient creature”

    I think this is what I would explicitly reject. I reject the existence of something that, by its ontological nature, is impossible to observe. I’m fine with the fact that some things are in practice difficult to observe, and I am fine with the fact that there are many things we do not yet have good ways to measure. But I reject the idea that you can have a “thing” that is literally impossible to observe under any conceivable circumstances. Those interested in psychology have, for centuries, privileged the first-person view, and it leads to a myriad of problems. There are still problems to be solved if we privilege the third-person view, but I suspect they are not as deep or sinister. I would say that if I can see that the spider is aware of the fly, then the spider is aware of the fly. Period.

    This is also my reply to all the weird ‘zombie’ problems that have been in vogue in philosophy for a while now. Whenever someone starts an argument by claiming, “Now we can imagine a creature that does everything we do, but without awareness. So…..”, before they can go on, I try to interject and argue, “No! We can’t imagine that.” I don’t think anyone being chased by a zombie would be able to imagine the zombie as not being aware of them, and not having a goal it is trying to achieve.

    Does that make sense?

    Eric

    • Gregg Henriques said:

      Hi Eric,
      Thanks for your note. I would generally advocate for a third person perspective for basic psychology (animal behavioral science), and I certainly agree that the scientific study of sentience in animals is tricky. But interesting arguments can be made, and I definitely would not agree that we should reject it, a priori, for philosophical reasons.

      So, do you reject, on philosophical grounds, the kind of question Victoria Braithwaite asks: Do Fish Feel Pain? (http://www.amazon.com/Fish-Feel-Pain-Victoria-Braithwaite/dp/0199551200)

      Gregg

  3. I certainly don’t reject the question (“Do fish feel pain?”), but I reject the oft-made assumption that the only way to answer the question is to be the fish. I am not sure how exactly I would approach the problem, but I’m not special enough that you should really care about that. The question is, “How might someone approach the question without privileging the first person account?” Below is an excerpt from chapter 10 in my book on New Realism (published last week!). I am doing the interview; Nicholas S. Thompson is giving answers. I can’t give the full context, but you should be able to get a feel for it from just a short excerpt.

    ——–

    Me: When you [Nick] get disgruntled, your arguments with students and colleagues follow a predictable format. Would you be willing to have me reproduce one of these arguments by playing the role of Devil’s Advocate?

    Nick: Sure. If you can stand thinking that way, I can stand answering.

    DEVIL’S ADVOCATE: If feelings are something that one does, rather than something that one “has inside”, then the right sort of robot should be capable of feeling when it does the sorts of things that humans do when we say that humans are feeling something. Are you prepared to live with that implication?

    Nick: Sure.

    DEVIL’S ADVOCATE: So a robot could be made that would feel pain?

    Nick: Well, you are cheating a bit, because you are asking me to participate in a word game I have already disavowed, the game in which pain is something inside my brain that I use my pain-feelers to palpate. To me, pain is an emergency organization of my behavior in which I deploy physical and social defenses of various sorts. You show me a robot that is part of a society of robots, becomes frantic when you break some part of it, calls upon its fellow robots to assist, etc., I will be happy to admit that it is “paining”.

    DEVIL’S ADVOCATE: On your account, non-social animals don’t feel pain?

    Nick: Well, not the same sort of pain. Any creature that struggles when you do something to it is “paining” in some sense. But animals that have the potential to summon help seem to pain in a different way.
