Book Excerpts
1. Table of Contents
2. Preface
3. Chapter 5
 
 
 

TABLE OF CONTENTS

Preface to Book II  xi

Chapter One - Hominid Cognition  1
    Introduction  1
    Relational Spatial Cognition  5
    Temporal Acoustic Cognition  16
    Hand Cognition  33
    Summary of What's Special about Humans  38

Chapter Two - The Natural History of Language  41
    About Language  41
    Origins of Spoken Language  48
    The History of Written Language  81
    Traditional View of How Language Works  87

Chapter Three - How Language Works (and Means)  97
    Model of Nonverbal Communication  97
    Towards a Model of Verbal Communication  100
    Word and Sentence Regularities  115
    A Model of Speech Communication  155
    What Is Thinking?  166
    Where's the Evidence?  170

Chapter Four - Language Evolution Revisited  171
    Recent Theories of Language Evolution  171
    My Hypothesis of Language Evolution  180
    Summary of Language Evolution  198

Chapter Five - How Language Means (and Works)  203
    A Brief History of Ideas  203
    Learning and Using Words  217
    Metaphor  224
    How Language Works—Revisited  255
    You Know What I Mean?  256

Chapter Six - Number Knowledge and Cognition  257
    Number Origins  258
    Innate Number Skills  260
    Counting Skills of Children  266
    Errors in Simple Arithmetic  268
    Cognitive Structures in Children's Arithmetic  273
    Deductive Reasoning  276
    Insight into Mind-Tools  283
    Mathematics - Is It Real?  284
    The Key Idea  293

Chapter Seven - Stories and Explanations  295
    Rationality  296
    What We Know That Just Ain't So  297
    If You Want to Get Ahead, Get a Story  300
    Schemes and Scripts  302
    Event Knowledge  308
    Cognitive Characteristics of Narrative  311
    Explanation  314
    The Explanatory Power of Causal Models  322
    Explanation of Chance Events  331
    Explanation in History  336
    Explanation in Psychology  339
    Explanation in Cognitive Science  345
    Explanation in Science Reconsidered  346
    Knowledge  356
    Explanation of Weird Beliefs  365
    The Role of Emotions  372
    I Am Not a Machine  375
    Conclusion  381

Endnotes  383
References  409
Index  417

PREFACE

You are a special person in many different ways. You are special to me because you have started reading my book, and you are special to your mother for some very complicated reasons. Although you could say similar things about me, neither you nor I are as special as people in our shoes once thought they were. Some very distant relatives of ours were greatly dismayed when they had to face the reality that the earth was not the center of the universe and that our great-great ancestors had evolved from apes. That last revelation about evolution was so tough that there are isolated communities of people who still deny it.

Perhaps because I grew up believing that humans evolved from apes, I am not so much humbled by the idea as fascinated by it. How is such a thing possible? How are we different from the apes? What makes us special? In Book I, subtitled Thinking without Words, I characterized animal cognition in order to understand the deep cognitive origins of our humanness. Now in Book II, subtitled Thinking with Words, I ask, And then what happened? That is, I try to link animal cognition with human cognition by tracing the ins and outs of language and by exploring what evolved with the hominids—those strange creatures that became us. My characterization of human cognition in Book II is almost entirely based on our use of language because this is the single defining aspect of our humanness. There are, however, many questions left unanswered and even unasked in Book II, including questions about the organization of the brain, how the neurochemical system of our brains affects our cognition, and how our emotions function. Many aspects of human cognition can best be explored as they unfold, and it is therefore important to study the developmental psychology of children. These topics comprise Book III: Rethinking Cognitive Psychology.

My ideas about what makes us humans special are not consistent with mainstream cognitive science. Please believe me when I say that I did not intentionally set out on a path that put me in the position of being the messenger of yet another ego-deflating scientific theory. You may be relieved to know that my ideas don’t yet have the status of theories—they are merely hypotheses—so you can either rest easy or vehemently argue a while. Basically, what I have to tell you is that our human species does not have a special part of our brain that is the seat of our intelligence. Of course, we are very intelligent creatures and I’m going to explain what it is about our minds that leads us to behave in ways that we can rightfully call intelligent. All I am saying is that a lot of very experienced and diligent people who try to explain human intelligence get caught up in an intellectual circle where they end up saying that we humans are intelligent because we have some intelligence module in our brain. To break out of such circular thinking, we have to develop models of how mental entities work. Even when we try to do this, we can sometimes get caught up in poetic metaphor just as a wine enthusiast can—This Cabernet Sauvignon is tight and powerful, with a gorgeous nose and plenty of creamy texture, but it finishes hard.

To a person unfamiliar with modern ideas, the explanation that a tornado’s destruction of his house was the result of angry demons may be more compelling than the explanation that it was merely the result of the wind. However, the phrase, “merely the wind,” implicitly refers to a complex set of natural phenomena that really do explain how tornadoes function. Of course, cognitive scientists and our reflective friends do not intentionally or explicitly introduce demons or a homunculus into their theories to mislead us or delude themselves, but this business about “thinking” and “intelligence” is so subtle that we all sometimes have explanations that are simply vacuous or circular upon close scrutiny.

I find the story of the homunculus interesting because it highlights a very pervasive and subtle problem. Apparently, not long after the microscope was invented, a curious person was looking at blood cells under high magnification and thought he saw little men inside each blood cell that deformed the cell’s shape to look human. He convinced many people that these little men, which have been called homunculi, were the seat of human intelligence and consciousness. Apparently, some people thought that they simply grew in size to make new human beings.

Homunculi also tacitly show up in some theories of perception. Understanding visual imagery has always been a problem because many theories of visual perception involve some sort of internal visual display—a kind of TV screen in our heads—and this leads one to ask who or what “watches” that display screen. The homunculus is tacitly built into such internal screen theories, and such theories are fundamentally vacuous because they simply displace our lack of understanding of the mind from the whole brain to a part of the brain, and most often a fictional part at that. Homunculus theories are pervasive because they are satisfying to the casual theorizer.

People love simple one-line theories just as they love one-line jokes. None of us can be experts on every conceivable topic, so we are quite content to accept the simple explanation on most things. We blindly trust that a one-line explanation is correct or at least a helpful summary of a verified theory. Sometimes a simple theory gets widely accepted when there wasn’t a professional theory checker doing his homework, and we all get “taken in” for a while. This is the state of affairs for many people in cognitive
science who believe that there is some sort of “intelligence module” in the brain that accounts for our human intelligence. This module has sometimes been called our “engine of reason,” and more recently it has been cast as the source of our “language of thought.” We’ll discuss these later.

Let me back up a bit here and say something about where I’m coming from. Starting in my senior year of engineering school and continuing through my master’s degree and then my doctoral studies, also in engineering, I studied what was known as communication theory. This was a very abstract, high-level mathematical analysis of signals used to communicate quantifiable information under adverse conditions introduced by interference, distortion and noise. The theory that I was taught grew out of research done during World War II for the development of radar, navigation, control of artillery directed at moving aircraft and, of course, radio communication systems. The big names in the field were Claude Shannon and Norbert Wiener. The 1960s buzzword “cybernetics” became well known because social scientists and psychologists became interested in applying these fundamental concepts to understanding human cognition, language and behavior.

From my perspective, this trend had some positive and some negative consequences. The good news is that it led social scientists and psychologists to develop more quantitative models in their research and this had the effect of making results easier to compare and critique, which consequently made the field flourish. The bad news is that a good deal of human behavior was characterized in quantifiable terms when it wasn’t really appropriate to do so. Defining human interpersonal communication as the abstract exchange of information would make it mathematically analyzable but would miss the point that a great deal of human communication is about emotion or simply what I’ll call social glue—for example, Good morning, how are you?
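
To give a rough feel for what “quantifiable information” means in the communication theory described above, here is a minimal sketch (my illustration, not from the book) of Shannon’s measure of information, which depends only on the probabilities of a source’s symbols. It also hints at the author’s point about “social glue”: a fully predictable message carries essentially zero Shannon information.

```python
import math

def entropy(probs):
    """Shannon entropy in bits: the average information carried per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally unpredictable: one bit per toss.
print(entropy([0.5, 0.5]))   # 1.0

# A heavily biased source is more predictable, so each symbol carries less.
print(entropy([0.9, 0.1]))   # about 0.469

# A ritual greeting like "Good morning, how are you?" is almost fully
# predictable in context -- near-zero information in Shannon's sense,
# which is exactly why its real value is social, not informational.
print(entropy([1.0]))        # 0.0
```

The measure captures surprise, not meaning, which is precisely the limitation the author complains about.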


You might think that as an engineer I would press hard to apply communication theory and “cybernetics” to anything I could. I suspect that I haven’t because I was first exposed to traditional psychology, psychoanalytic theory and the cognitive development of children at a time in my life when I was trying to learn about more humanistic concerns by taking evening courses at the Harvard Extension School. I was quite dismayed when I learned that Piaget sometimes thought of children as learning logic and that Chomsky thought that language was governed by a set of complex rules he called syntax. I discovered that even the ancient Greeks and many Renaissance scholars had the idea that the mind could be described as some sort of logic engine. This logic engine produced what people call human thought, although, amazingly, no one has been able to characterize what thought actually is. The word “intelligence” also gets used a lot to describe human mental activity, but it too has turned out to be an elusive concept.

I find that people also use terms like “problem solving capability,” “generalization” or “rational reasoning” to describe some human cognitive skills without having the slightest clue about what these terms mean in terms of underlying cognitive processes. These labels might be helpful to get started talking about human cognition, but they are no more helpful than saying a child is sick because one of his “humours” is out of balance. Modern medicine is based on understanding and modeling the processes that sustain health. That study of underlying processes is what is now being done in the field of cognitive science, and I consider this book part of that endeavor.

In trying to make sense of human cognition, I was drawn to what people call neural networks. The neat thing about them is that they are not programmed but trained, and, consequently, they don’t have rules, instructions or logic. That appealed to me a lot. I also liked the emphasis on what was happening at the local or bottom level—the level where sensory signals are processed. Since I had spent my engineering career working on signal processing of one kind or another, I felt pretty comfortable with this approach. What I discovered was that the application of neural networks to human cognition—what has come to be known as connectionist modeling because the networks have so many internal connections—had some strong pluses but also some serious negatives. Neural networks seemed to have limitations that restricted their use to some aspects of human cognition, despite the fact that some very bright people tried very hard to show that they could explain virtually all aspects of human cognition.
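
The “trained, not programmed” appeal can be made concrete with a minimal sketch (mine, not the author’s): a single artificial neuron that learns a classification from labeled examples by repeatedly nudging its connection strengths, with no rules written anywhere.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights from labeled examples -- no rules, only error-driven
    adjustment of connection strengths."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out
            # Each mistake nudges the weights -- this is the "training"
            # the connectionists contrast with programming.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Four examples of a simple pattern (output 1 only when both inputs are on).
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

# The trained network now classifies every example correctly.
for x, target in samples:
    assert (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == target
```

The limitation the author alludes to is real, too: a single neuron like this one can only learn linearly separable patterns, which is part of why connectionist models needed augmentation.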

I thought it might help me figure things out if I temporarily stopped worrying about human cognition and thought about animal cognition instead. One of the things that animals don’t do is talk, certainly not using anything like a natural language similar to, say, English or Farsi. So my idea was to try to model cognition that was done without the benefit of language. This task turned out not to be as simple as I had hoped, and I ended up writing a book about it—I Am Not a Machine—Book I: Thinking without Words. The short story is that I ended up developing a model of languageless thought that includes neural networks as pattern classification and association tools but had to augment that model with a subtle mental representation process and a planning and evaluation process. For primates, the model had to be further augmented by what is called a tertiary cognition process, where a creature can represent and act on the relationship between herself and two other entities (and baby makes three) and, significantly, the relationship between those entities, be they creatures or inanimate objects.

I would now like to say something about the word “model.” You know from the introduction to Book I about my ideas of what models are, what they are not and what their uses are. I now want to add some perspective based on some discussions I have had. A retired professor of biochemistry whom I recently met socially told me that the brain was so hopelessly complex that any model of it had to be wrong. He, of course, is absolutely correct. The brain has some hundred billion neurons and on the order of a thousand times that number of connections, so anything I can say in a few words in a book has to be a gross simplification. Hence, he asks, why bother? Why not wait 200 years for neuroscience to get a handle on things and then model the mind?

I can think of two answers to his question. One answer is that people will assume some sort of model of mind in their daily or professional affairs—this is often called folk psychology—and that model can be wrong, silly or harmful. I need not remind you that as few as seventy years ago some powerful people believed that people from races other than their own were inferior to their own race and should be eliminated or denied fundamental human rights. Some folk theories of mind are wonderfully insightful and some are ludicrous, but in order to tell the difference we must all think together and develop theories that make sense to many different people, are justified on some sort of theoretical grounds, and are at least consistent with some objective observations and experiments. A second answer is that some of us are just plain curious and want to learn whatever we can. We can learn some things about human nature without knowing what each neuron is doing. We won’t know what we can learn unless we try. Some of these theories are bound to be wrong, especially if there are many theories that fundamentally disagree with each other. The important thing is to discuss and contrast the different theories and not discard any because they disagree with our politics or are unpopular.

The other reaction to models that I have observed in many people is that they find them limiting. Poetry seems to expand our thinking and, in the case of poems concerning human nature, makes us feel good about ourselves. Models tend to be negative in nature and usually spell out limitations and constraints. The theory of gravity says that if you drop a glass it will fall or if you jump up you will come back down. However, people want to fly. I loved reading about Jonathan Livingston Seagull, the bird that was determined to fly higher than any other seagull, because I wanted to be just like him. For many people, fantasies are fun but models are depressing. I don’t want you to give up your fantasies and I have no intention of giving up mine. Nevertheless, I am suggesting that you use fantasy in appropriate situations and learn when it is appropriate and useful to use models.

For many people, it is challenging to learn to use models because they have little experience with them. I am sorry about that. I am sorry that you can graduate from most colleges with very little exposure to mathematics and science. However, just as I claim that virtually any adult can learn to sing and dance at a competent social level, I claim that any adult can learn the rudiments of elementary mathematics and science and can learn to understand and have fun with models. One way to do that is to read about them as they are being used in a problem that interests you, rather than abstractly from a textbook. Reading this book on the human mind is a little like hiking to the 3,166-foot peak of Mount Monadnock in southern New Hampshire. If you are a well-trained athlete or experienced hiker, you will find the hike enjoyable but rather easy; yet the hike is also accessible and enjoyable to six-year-olds, mobile seventy-five-year-olds and anybody in between who is willing to sweat a little and doesn’t mind feeling a little tired for a few hours.

The models that I discuss may be quite different from other models that you may have been exposed to, because my focus is on how the brain processes the signals from the senses. Signal processing models are different from, say, chemical models or models of how financial institutions work. In fact, even many cognitive scientists may not be familiar or comfortable with signal processing models, which is exactly why I have written my series of books. My claim is that these models are more appropriate for studying animal and human cognition, because the brain evolved to process signals from the body’s senses. This leads us to a tricky catch-22 situation: to understand cognition from a signal processing perspective you need to understand signal processing models, but to be motivated to learn those models you need to be convinced that they are useful in understanding animal and human cognition. All I can do is hope that you find learning new things fun and that you are willing to take the small risk of exploring a novel way of viewing cognition.

Having dealt with thinking without words in Book I, I am now ready in Book II to ask what happens to the cognitive process of languageless creatures if we add the ability to use a natural language such as French, Farsi or Finnish. This has turned out to be a very loaded and highly charged question, because many people say that you get garbage out because I have left out the magical ingredient, namely human intelligence—whatever that is. Under that rather popular view, language merely endows creatures with the ability to communicate their thoughts to other creatures. Language is thus seen as not thought per se but rather a window into thought. To give you a feel for the charged nature of this issue, I’d like to quote, out of context and without attribution, what one very intelligent and highly regarded (by me too) scientist had to say about it: “The claim is a rather extraordinary form of ‘languagitis,’ a disease prevalent among philosophers, anthropologists and linguists whose major symptom is the treatment of language as the unique causal or transforming factor in the cognitive world.”

I feel like a person who loves wine, food and congenial company who was granted one magical wish and decided that he wanted to live in France. The wish was fulfilled as promised, but the person magically materialized in Paris two days before the start of the French revolution—just a case of bad timing. Please don’t chop off my head; I’m not part of the current conflict. I’m sorry to have blundered onto such a strongly felt issue. However, I can’t turn back and so I’m going to go ahead and explain what sort of cognitive processes we can model by adding a language capability to a creature, actually a pretty complex nonhuman primate, who does not think with words.

It could turn out that my model of human cognition needs to be augmented by other fundamental cognitive processes. Of course, we would not be content with calling those additions human thought or intelligence, because those are the things we are trying to model. We just can’t be satisfied with the kind of argument that says what makes a car go is its engine and that an engine is what makes the car go. We want to know what an engine is and how it works. However, we would complicate my model of human cognition with additional functions only if the simpler model proved inadequate. Hence we first have to see how far we can go with the simpler model.

One of the problems we face is the same problem faced by Galileo and Darwin, namely that many people implicitly believe that God must have created the earth as the center of the universe and God must have created human beings from scratch and endowed them with a soul and a mind that lies beyond our comprehension. Although many scholars do not believe in the ideas taught in organized religion and may even profess to be agnostics or atheists, many of them still cling, perhaps subconsciously, to a view that holds that intelligence is special and mysterious. They appear to believe that it just isn’t right to dig deeply into matters regarding human intelligence. Well, I’m sorry, I have to try, and I’m hoping that you will join me by taking these ideas seriously and by helping me and others find the strengths and weaknesses of my ideas. This is a joint endeavor and I’m just a small part of the process.

Before I get started I’d like to clarify a few matters and then tell you the plan of this book. There are many differences between human beings and our chimp cousins. For the most part, the obvious differences don’t matter to our investigation of primate and human cognition. For example, our hairless skin is a big difference but it does not impinge on our underlying cognitive processes. At first glance our dexterous hands may seem equally uninteresting, but in fact I’ll be showing you the critical role played by our evolving hands in the evolution of language. Consequently, our evolved cognitive processes can possibly be described without incorporating motor control of our hands, but without our hands we would never have become language-using humans.

Humans have different tastes than our chimp cousins. Many of us love chocolate but I have not heard of this passion among chimps. For the most part we are born with many predispositions, such as our eye-blink response that evolved over hundreds of thousands and often millions of years. We are not blank slates. However, liking as much sugar and fat as we can consume may affect our life expectancy (increasing our life expectancy if sugar and fat are rare commodities in our environment, decreasing it if they are abundant and marketed to us), but disposition or taste is not an underlying cognitive process. It’s a preference supported by a pattern classification process that can be nicely modeled by neural networks, which can be trained over evolutionary time to increase the survival rate of a species using a wide variety of sensory inputs to a wide variety of ends.

What all this means is that it’s not just a simple matter of adding a language box to a chimp and then waiting a 200,000-year adjustment period before getting a human being. Some aspects of natural language draw upon cognitive capacities already present in some nonhuman primates and some cognitive capacities evolved with the hominids. Consequently, the first chapter on hominid cognition addresses some fundamental cognitive processes that appear to support speech or the human’s ability to learn speech. For example, human infants and some chimps without any training in human language can segment an acoustic speech stream into words, and they can distinguish some spoken languages but only if they are played forward. An example of a trait that facilitates the learning of language is our unique human predisposition to imitate others. We will see that none of these supporting processes are the missing magic ingredient that we would feel comfortable calling human intelligence. They are more the nuts and bolts required to get the language job done.

A rather different situation arises with music, which is supported by some unique underlying cognitive processes and which is also a rather high-level aspect of human behavior and intelligence. All human cultures have some form of music and dance, and they are highly elaborated in many cultures. The human brain, but not the chimp’s brain, is hardwired for music. Why? Again some people have run into the “don’t ask” problem because of a subconscious belief that songbirds and human music were “created” or “evolved” to please and entertain us humans. I’ll be arguing that although music may not be an essential part of the cognition of modern humans, it played a critical role in our becoming human. Music is not an accident and it isn’t just some pleasant art form but is intimately tied to human communication.

In the second chapter, we are going to trace as best we can the commonly understood version of the evolution of hominids from about 4 million years ago to the dawning of fully functional human beings about 70,000 years ago. This discussion will serve as the introduction to language, and I will also say a few things about traditional linguistics and how those ideas impact cognitive science. Although I still think of human language as some form of a miracle, in the same sense that I consider the universe a miracle, life a miracle and life-sustaining water a miracle, by sketching how language could have evolved, we will have even more respect for it and will be better able to understand its role in our modern thought processes.

The third chapter is called “How Language Works” and presents a nontraditional view of the mechanics of language. Noam Chomsky presented a compelling theory about fifty years ago which said that language was effortlessly learned by children and therefore had to be innate and, since language was governed by complex rules of syntax, that the brain had to be in some deep sense driven by the same kinds of rules. That is to say, the brain had to be a computational machine. I suspect you got the hint, from my titling all three books I Am Not a Machine, that I have some issues with this idea. As much as many people dislike the computational idea, it has been very hard to counter because language is so complex that anything short of a very large formal system—that is, what scientists call an abstract computational system—is woefully inadequate in explaining how speech gets processed by the brain. I see language processing as a gargantuan signal processing problem. The field of cognitive linguistics has been making great strides towards developing a noncomputational model of speech processing, and we are going to review what they have been up to. The answer, in a word, is patterns, not rules. I am aware that the idea of language patterns has been discredited many times—just as the idea of building a flying machine was discredited many times. It is as true now as in past debates that there are very bright, conscientious people on both sides of the aisle, so our responsibility is to sit back, enjoy and scrutinize what both sides have to say.
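
A toy sketch (my illustration, not the cognitive linguists’ actual models) can make the “patterns, not rules” idea concrete: a program that tallies which word follows which, purely from exposure to speech, acquires graded expectations about word sequences without a single rule of grammar anywhere.

```python
from collections import Counter, defaultdict

def learn_patterns(text):
    """Tally which word follows which -- pattern statistics, no syntax rules."""
    follows = defaultdict(Counter)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

# A tiny "exposure" corpus (echoing the nursery rhyme quoted in chapter five).
corpus = ("jack and jill went up the hill "
          "jack fell down and jill came tumbling after")
patterns = learn_patterns(corpus)

# After "jack" the model has seen both "and" and "fell": graded expectations
# learned from experience, not rules applied by a grammar engine.
print(patterns["jack"].most_common())
```

Real connectionist and cognitive-linguistic models are vastly richer than this word-pair tally, but the contrast it illustrates is the same one the chapter draws: statistics of encountered patterns versus formally specified rules.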

The fourth chapter, “Language Evolution Revisited,” is where I put the ideas of how language works together with what we know about our ancient past to develop a new theory of how language evolved. I start by discussing some very recent ideas that have been published as a result of a renewed intense international interest in the subject. I hope you will find the ideas of the modern scholars as exciting as I do. However, because of my focus on signal processing I have my own twists to submit for your review. The most important of these twists is that it is not the recursion or embeddedness of language that makes language processing special, but rather how the distinct speech-producing structures support discrete mental representations that facilitate the easy manipulation of recursive and embedded patterns.

Calling the fifth chapter “How Language Means” is a bit of a misnomer. That’s why the full title of chapter three is “How Language Works (and Means)” and the full title of chapter five is “How Language Means (and Works).” A key feature of the traditional linguistics view is that language structure and language meaning are independent. That traditional assertion simplified a hopelessly complex analysis problem into a merely very complex analysis problem. It was a brilliant move. The only problem, say the cognitive linguists, is that it just isn’t true. Syntax and semantics are inextricably intertwined. There are many processes involved in interpreting the meaning of language. Some are simple associations of routinized phrases, some are based on complex word associations and some are based on a complex use of metaphor and relational analogies, many of which are rooted in body movement and awareness.

Chapter six deals with number cognition and starts our study of the highly charged issue of the relationship between thought and language. We know that animals and humans can think without language, as I described in Book I, subtitled Thinking without Words. We now characterize what it means to think with words. The concreteness of arithmetic makes it an appropriate vehicle for understanding the tremendous power of words to augment our languageless cognition—in this case what is called our innate number sense. Of course, mathematics draws upon spatial competencies as well, but these too are aided by language. The bottom line is that there is no magical arithmetic or mathematical engine or module in our brains.

Chapter seven on human reason delves into many aspects of everyday and scientific thinking. Here I make the case that human brains do not have an Aristotelian logic processor. Instead we tell stories about our experiences, about how to do things or about how things work. We will find that family legends and our understanding of the physical world both depend on telling stories that are consistent with other stories we know (what some thinkers call theoretical import) and consistent with our interpretations of our observations in the world (empirical import). Our stories often need revision. Some stories are revised many times, and, because they ultimately come to be revised so little, we accept them as truth. Storytelling is the key that defines that part of human cognition that sets us apart from other animals, and that simple statement has rather deep and disturbing ramifications.

Truth and knowledge, to some idealists’ chagrin, are ultimately based on social consensus. The scientific community usually has a set of mutually held ideas, about theories and about how to do certain kinds of experiments, that allows its theories ultimately to converge, while other communities, say, a fundamentalist religious group, have their own shared set of ideas that guides their consensus building. Seldom will the two world views agree on basic concepts. Please understand that I am not arguing against the concept of science, mathematics or any rational products of the human mind. It simply is not the case that there is an abstract higher power that has determined a “truth” that humans aspire to, or that the human mind has evolved special capabilities to produce truthful statements and ideas.

The concepts of truth and knowledge are abstractions that are not well understood, because to date no one has understood how the mind works and what thought is well enough to say anything definitive about the products of the human mind. Another way to say this is that the very intelligent people who have studied epistemology, the philosophical study of human knowledge, have done so abstractly, assuming some sort of ideal world and ideal human mind, without basing their theories on the actual capabilities and limitations of the human mind.

A direct consequence of my theory of knowledge is that I am no more privy to absolute truth than you are or Socrates was. By collaborating, however, we can all better understand our human cognition. Therefore you do me as much honor by arguing against my views as by arguing for them. Please feel free to contact me through my website, NotaMachine.org, with any substantive comments or with references or pointers to what you or others have written on these matters. Thank you.

CHAPTER 5

HOW LANGUAGE MEANS (AND WORKS)

There is a world of difference between the sentence, “Jack and Jill went up the hill,” and the sentence, “The time is fast approaching when our paths will diverge.” The difference we will have to confront is the very basis of what we mean by “truth,” and how we communicate our experiences and ideas to other people. How language means is shaped by the biology of our brain and the structure of our mind.

A BRIEF HISTORY OF IDEAS

A student of linguistics at a modern university usually studies either syntax, the formal grammar we discussed in the previous chapter, or the cultural, historical and sociological aspects of language. However, there is another side of linguistics, called semantics, which is concerned with language meaning. Some scholars have tried to make a discipline out of the study of semantics, but their efforts have received mixed reactions. With the success of cognitive science, however, there has been a renewed attempt to understand meaning in language, although the word “semantics” is not necessarily used to label this new endeavor.

The new way of looking at things, which is sometimes termed the second cognitive revolution, has not yet gelled into a coherent discipline, so it has few widely accepted labels or even concepts. The new ideas challenge not only the ideas of the first cognitive revolution but also ideas that go back to the Enlightenment and ancient Greece. Although this reexamination of our basic guiding principles can be disorienting, I hope you will also find it exciting and fun.

In the previous chapter on how language works, we briefly discussed the new field of cognitive linguistics. We discussed how mental spaces and idealized cognitive models can be evoked by grammatical constructions. While this chapter will focus on how language means by concentrating on the role of metaphor, it will also fill out our notion of how mental spaces are evoked. How language works and how it means are interdependent. In other words, syntax and semantics must be understood as a single discipline.

SECOND-GENERATION COGNITIVE SCIENCE

Second-generation cognitive science is George Lakoff and Mark Johnson’s term for their research on metaphor and its implications for cognitive science.177 Most scholars call the beginning of first-generation cognitive science the cognitive revolution; see, for example, The Mind’s New Science: A History of the Cognitive Revolution178 by Howard Gardner. The first cognitive revolution, which occurred roughly from 1970 to 1985, unified several disciplines under a common goal and set of principles. The disciplines included linguistics, developmental psychology, computer science, artificial intelligence, general philosophy, epistemology, philosophy of science and neuroscience. The goal was to understand the human mind using a common set of principles centered on a computational symbolic-processing model of mind.

The (first) cognitive revolution was complete and successful. Still, as in any revolution or period of significant social or intellectual change, there are always some who are left behind and others who are far ahead (or at least think they are). Evaluating the ideas of the reactionaries and the avant-garde is never easy, but in cognitive science, where everyone claims to be innovative and far-thinking, even identifying the important actors is difficult.

In 1980, with the publication of Metaphors We Live By,179 Lakoff and Johnson made significant advances in one aspect of linguistics. They investigated language meaning—what I would call semantics—but from a very different perspective from that of the abstract thinkers and linguists who traditionally investigate that domain. By 1987, however, with the publication of Women, Fire, and Dangerous Things,180 Lakoff had begun to study linguistics in a way that challenged many of the fundamental and cherished views of both linguists and deep thinkers. By 1996, Lakoff had chosen the name “second-generation cognitive science” for the radically new perspective of cognitive science he envisioned. Before we discuss what is meant by second-generation cognitive science, we need to review how reality was envisioned by the Greeks about 2,500 years ago.

HOW GREEK THOUGHT ORGANIZES MODERN EXPERIENCE

In the following discussion I will summarize Lakoff’s big-picture perspective and focus on a modern interpretation of ancient ideas. Changing our beliefs about ideas we have held for a long time, ideas that serve as anchors for our perception of the world, can be extremely difficult. Learning the truth about Santa Claus, learning that it is okay to eat meat on Friday (for some people), and learning that our parents are imperfect human beings are some examples of shattering revelations. Perhaps the hardest one for me was learning that President John F. Kennedy was not the ideal person I wanted him to be. I still find it hard to believe the well-documented stories about him that are now widely accepted.

It is now my onerous task to tell you that parts of your beliefs about the physical and abstract world may require deep revision. I will try to show you that the very framework we often use to evaluate new ideas is itself faulty. Consequently, you may feel that everything you read in the following section is wrong. I suggest, if you possibly can, reserving judgment until all the pieces are in place before evaluating whether the presented ideas make sense as a new paradigm. Have you ever leaned back in a chair just a little bit too far and felt that panic that you were going to fall over backwards and hurt yourself terribly? Well, that’s how you might feel for the next few hours. So, please take a deep breath.

Try to imagine that you are 20 years old and trying to convince your parents that some course of action that is totally off their radar screen is, in fact, the right course of action for you. We could be talking about a career opportunity, graduate school, military service, marriage, divorce or a trip to Thailand. Not only does your discussion deal with the merits of the path you are considering, but it deals with the whole issue of autonomy, your ability to make mature decisions and the new role your parents must adopt in your emerging adult life. Now turn this around. Imagine you are the parent trying desperately but unsuccessfully to understand your 20-year-old son or daughter. Nothing makes sense to you, but you want to understand, and you realize that your whole view of life and the role you play in your children’s lives may have to change. You’re going to have to learn to let go. That’s a bit like what I am asking you to do now. I am asking you to reconsider all you know about knowledge. This is not going to be easy, but I hope you will try to see how the ideas we live by, which we inherited from the ancient Greeks, can be very misleading.

Let’s begin with an idea we all think we understand, namely truth. (I urge the postmodernist reader, familiar with attacks on the concept of truth, to wade through the following ideas, which may turn out to be different from what he or she expects.) Every socially responsible bone in your body is ready to defend the idea of truth, but what exactly is truth? Is it true that George Washington was the first president of the United States? That you don’t like Brussels sprouts? That right now you are feeling (choose one) anxious, angry, confused, bored or incredulous? That water consists of molecules containing two atoms of hydrogen and one atom of oxygen (H2O)? From these examples we can see that how you might apply standards of truth depends on the kind of statement being considered. There are no simple criteria for applying the term “truth.” We shall see that even the truth about truth is potentially confusing.

Let’s start with one dictionary’s attempt to define truth. I refer to Webster’s New Collegiate Dictionary: “Truth: the state of being the case; the body of real things, events, or facts; the property (as of a statement) of being in accord with fact or reality; fidelity to an original or a standard.” (The emphasis is mine.) I have left out the definitions that use the word “true,” since looking up “true” adds nothing to the list. Please pay special attention to the words I emphasized in the definition: real, fact, reality, original, standard. Herein lies the problem.

My name is Jack. Most people call me, and have always called me, “Jack,” and I respond to that. There is, however, nothing in writing anywhere that verifies that truth. My paperwork, driver’s license, credit cards and so forth say that my name is John, so that too must be true, perhaps even truer. I once had to go to court because a company I worked for refused to call me Jack or John. They insisted on calling me Francis because of something written on my birth certificate. They were convinced that was the truth, but I got them to accept a new truth by going to court. I realize the story of my name is frivolous, but it hints at a fundamental issue: is truth defined by my experience, or is it defined by an ideal standard that is independent of me?

To get a gut feeling for these ideas you might consider reading, if you haven’t already, Zen and the Art of Motorcycle Maintenance: An Inquiry into Values by Robert Pirsig.181 This book dramatizes the personal ramifications of choosing between the ideal and the experiential—two views Pirsig calls the “classical” and the “romantic.” Before clarifying the difference between classical and romantic, let’s look at what they have in common, namely “basic realism,” a view Lakoff describes as one he shares with many other thinkers. I also believe in basic realism and hope you do too. I quote from Women, Fire, and Dangerous Things.

Basic realism involves at least the following:
1. A commitment to the existence of a real world, both external to human beings and including the reality of human experience
2. A link of some sort between human conceptual systems and other aspects of reality
3. A conception of truth that is not merely based on internal coherence
4. A commitment to the existence of stable knowledge of the external world
5. A rejection of the view that ‘anything goes’—that any conceptual system is as good as any other.

SUBJECTIVIST WORLD VIEW

The view that Pirsig calls romantic is also known as “subjectivism.” In Metaphors We Live By, Lakoff and Johnson give a synopsis of subjectivism:

The myth of subjectivism says that:
1. In most of our everyday practical activities we rely on our senses and develop intuitions we can trust. . . .
2. The most important things in our lives are our feelings, aesthetic sensibilities, moral practices, and spiritual awareness. . . .
3. Art and poetry transcend rationality and objectivity and put us in touch with the more important reality of our feelings and intuitions. . . .
4. The language of the imagination, especially metaphor, is necessary for expressing the unique and most personally significant aspects of our experience. . . .
5. Objectivity is dangerous, because it misses what is most important and meaningful to individual people. . . .

 
