A site on the future of psychology

Archive for December, 2011

Update and Blog Transfer

Hi All,

As you may know, I consulted with Psychology Today and was given the opportunity to start a blog there based on the content of this one. I am in conversation with my fellow bloggers regarding this blog's future. The new blog is called Theory of Knowledge and is available via this link:

http://www.psychologytoday.com/blog/theory-knowledge.

Also, I wanted to let folks know that on December 12th I was on Virginia Insight, a local NPR radio show, talking about the unified theory. Here is the link to that:

 http://www.jmu.edu/wmra/pgm/insight/VI121211.mp3

Finally, I have found it useful to be able to access the unified theory book’s chapters, so here they are:

Preface

RacingHorses

ProblemofPsych

BIT

Matrix

The Justification Hypothesis

The Tree of Knowledge

Defining Psyc

UnifiedPsychther

FifthJP

Epilogue

Hope everyone has a happy holiday season…

Gregg



On the Origin of Language

What follows is a blog post from Eric La Freniere (lafrenea@dukes.jmu.edu), a JMU graduate student in rhetoric and communication interested in the evolution of language and human consciousness. We had some interesting exchanges and realized that we shared very similar ideas regarding the nature of culture and human consciousness, connecting the dots between his view of rhetoric and my view of justification. He shared with me his synopsis on the origin of language, along with his slides, and I thought they might make for an interesting blog post. Note that the Justification Hypothesis is a theory of how the problem of social justification would have altered human self-consciousness and culture. It is not, of course, a theory of the evolution of language itself. That is what Eric is trying to work out.

 Gregg

Preface to a General Theory of the Origin of Language as a Function of Brain Hemispheric Interaction.

Language is a signaling system that involves the abstraction of information from a gestalt field for purposes of representation and communication. But language is contingency-based, as opposed to instinctive: it allows for the spontaneous creation of recombinative information, and thus novel responses, within the brain of an individual organism rather than across generations within the gene pool of an entire species. How did language evolve?

By just over 500 million years ago, brains had evolved, and the extrusion of eyes kicked off the “Cambrian explosion,” a positive feedback-driven proliferation of organic complexity involving the emergence of most of the major animal groups existing today—including fish, from which all vertebrates evolved. Fish have non-overlapping vision and optic nerves that cross to connect each eye with the opposite side of the fish brain, which has no left-right neural bridges.

Fish engage their environments according to left-right preferences; for example, fish typically respond more quickly when predators are first seen through one eye, and they forage for food more efficiently when using the other eye (Giorgio Vallortigara has done basic research here). In other words, one eye and its contralateral half of the brain focus on what might be called “retreat” perceptions and behaviors, while the other eye and half of the brain focus on what might be called “approach” perceptions and behaviors. We may say that the retreat brain is neurologically geared towards recognizing changes in gestalt fields to generate more rapid “fight-or-flight” responses, while the approach brain is neurologically geared towards abstracting particular objects to generate less rapid “feed or f*ck” responses. (Note that in its basic form, abstraction simply means to draw or take away, to remove or isolate).

For 500 million years, evolution wrung as much adaptability—as much intelligence—as possible out of the animal brain. Yet for virtually all that vast stretch of time, there is no evidence of tool use or culture, and thus no evidence of language. Nonhuman signaling systems, including complex affective displays, seem largely incapable of recombination; they cannot, in and of themselves, generate novel responses to selective pressures.

Finally, just 2 ½ million years ago, a group of tool-making primates (H. habilis) emerged. They had big brains, but their flaked stone tool kit remained unchanged for hundreds of thousands of years. Then a new group of even bigger-brained primates (H. erectus) began reworking their own stone tools with an eye toward shape and symmetry—but other than that, again, no change for hundreds of thousands of years.

It is unlikely that language was involved in the production of these Lower Paleolithic technologies; the lack of novelty over time suggests a lack of abstract or representational recombination. We are probably looking at the maximum adaptability or intelligence possible through genetic evolution alone: a more-or-less instinctive, tool-using “culture” made possible by a perfect storm of organic factors such as binocular vision, bipedalism, an opposable thumb, complex vocalization, and a big brain. By the Middle Paleolithic, the increasingly expensive primate big brain experiment seemed to have reached its gradualist conclusion (Homo sapiens neanderthalensis).

 But evidence indicates that punctuated equilibrium intervened: a hopeful monster was born (Homo sapiens sapiens). Archeologists have characterized the Upper Paleolithic, the period beginning about fifty thousand years ago, as the “creative explosion.” Stone was fashioned into novel shapes for novel purposes, and art emerged, along with the mastery of fire, human burial, and the bone needle for sewing. Language had clearly arrived, but how?

Stephen Jay Gould’s hopeful monster idea suggests that a point mutation (a single base nucleotide replacement in the DNA sequence) could have enabled the emergence of language across the hemispheres of the already-lateralized proto-human cerebral cortex. How could a single mutation make such a dramatic difference? According to chaos theory’s famous “butterfly effect,” even a tiny initial change within a complex system can result in dramatic differences in eventual outcome. The proto-human brain was the most complex system on the planet; a single mutation could have snowballed change across its entire neurodevelopmental process, from embryogenesis to adulthood.

 Our retreat brain—nowadays, usually the right hemisphere—emphasizes gestalts and image-based processing to generate more immediate responses. Our approach brain—nowadays, usually the left hemisphere—emphasizes abstraction and word-based processing to generate more deliberative responses. Although proto-human abstraction had not been spontaneously recombinative, tool use had been gradually priming the pump for a couple million years.

Neurobiologist Roger Sperry and his protégé Michael Gazzaniga pioneered research teasing apart the cognitive differences between the hemispheres of the human cerebral cortex. Through a series of clever experiments involving epileptic patients whose seizures had been curtailed by the severing of the left-right neural bridges, Sperry and Gazzaniga established that the left hemisphere emphasizes relatively linear, logical, focused, grammatical processing functions, while the right emphasizes relatively associative, affective, holistic, contextual processing functions.

Functional lateralization is accompanied by structural lateralization at the molecular, cellular, tissue, and organ levels. The human planum temporale, for example, is a roughly triangular area of the cerebral cortex. On the left side, it is centered in Wernicke’s area—crucially involved in abstraction—and its tissues exhibit wider columns and more neuropil space, resulting in the most significant asymmetry of any brain on the planet: the human left planum temporale can be up to five times as large as the right.

The left-hemisphere-centered word, or contingency-based abstraction, is the most distinctive component of the distinctly human signaling system we call “language.” According to Ferdinand de Saussure—a founder of the science of linguistics and the founder of semiotics, or the study of signs—language is based on signs (which makes sense, since language is a signaling system that involves abstracting information from a gestalt field for purposes of representation and/or communication). For Saussure, every sign has two components: a signifier and a signified.

Here, the signifier is a more or less arbitrary word or contingency-based abstraction—think Wernicke’s area (LH)—and the signified is a more or less meaningful image or affectively prioritized sense data—think fusiform face area (RH). So, for purposes of representation, communication, and recombination, the word/image-conjoined “utterance” (Saussure’s term) is the building block of linguistic vocalization or speech.

 Simple nouns and verbs were the most concrete utterances (there is a difference in hemispheric emphasis between those basic parts of speech). They were used to create the primal justification narratives / rhetorical systems that selected (often sexually) for the evolution of ever more abstract nouns and verbs, other parts of speech, and eventually (spurred on by writing) linguistic self-reference, including syntactical-symbolical constructions such as “I Am Who I Am” and “I think, therefore I am.”

Eric La Freniere

The ToK as the Ultimate TOK

Consider your reaction to each of these justifications:

George W. Bush was an outstanding president.

Matter is made up of atoms.

Abortion should be illegal.

Science is more reliable than faith.

Joseph Smith was a prophet.

Although you might not have labeled it as such, your reaction to these statements will in part reflect your theory of knowledge (TOK). An individual’s TOK can be thought of in terms of both the content of their knowledge (the individual’s ontology—what they justify to be true) and the process by which they arrive at knowledge (the individual’s epistemology—how they justify what is true). Many people, of course, do not explicitly think in terms of their TOK, but there is a movement to change that, as there is now an explicit international course of study on TOK.

I believe it is crucial that we become reflective about our TOK, and perhaps one of the best ways to do that is to spend time with people who have very different worldviews. For example, I happen to be listening to The Adventures of Huckleberry Finn on tape, and it is striking to consider how different the characters’ worldviews are from my own.

If people have different TOKs, how do you know if your TOK is a good one? Although there is much debate about this, philosophers have offered four basic angles from which to analyze one’s TOK.

1) Coherence refers to the extent to which the knowledge system offers semantically clear constructs that relate to one another in a logically coherent way. In other words, is the system internally consistent?

2) Correspondence refers to the extent to which the system lines up with independent evidence. In other words, does the system make predictions about facts to be discovered? (For me, the difference between coherence and correspondence is seen by comparing the TOK of people with disorganized schizophrenia to that of people with delusional schizophrenia. Disorganized schizophrenics lack coherence—at the extreme, there simply is no way to make sense of the semantic network. In contrast, it is often easy to understand what individuals with delusions are saying, but it simply does not correspond with external evidence.)

3) Comprehensiveness refers to the scope (breadth and depth) of the TOK. In other words, to what extent does it incorporate the various domains of knowledge, or at least provide a potential explanatory framework for the various possible domains?

4) Conduciveness refers to the extent to which the TOK pragmatically fosters achieving one’s goals. Here the criterion for goodness is simply whether “it works.” Consider, for example, the contrast between myself and Huck Finn. While my TOK may well be more coherent, empirical, and comprehensive, if I were transported back into his time and attempted to live in his world, the conduciveness of my TOK relative to his might well be far lower. That is, I might well have floundered and died if confronted with the environmental (social and physical) stressors and affordances he was able to navigate.

What is your TOK? What do you think is the best TOK out there? Is there an ultimate TOK? Although I did not set out to develop the ultimate TOK, I now believe that is what the ToK System achieves.

With its novel claim that the universe is an unfolding wave of Energy-Information, and its depiction of that wave as consisting of four separable dimensions of complexity, divisible because of the emergence of novel information-processing systems (genetic, neuronal, symbolic), the ToK System finally gives us a deep understanding of why there are four separable classes of objects and causes: 1) the material (the behavior of things like atoms, rocks, and stars); 2) the organic (the behavior of cells and plants); 3) the mental (the behavior of animals like bees, rats, and dogs); and 4) the cultural (the behavior of people). With its joint points, the ToK System provides the best TOK of why and how these broad dimensions are connected, and with its characterization of human knowledge as human justification systems, it finally gives us a way to conceptualize the place of the human knower in relationship to the rest of the universe.

Gregg

 

