Book Excerpts
1. Table of Contents
2. Preface
3. Chapter 1
4. Chapter 6


Preface to Book III
Chapter One - Reflections on Cognition
Chapter Two - How Brain Topography and Chemistry Influence Thinking and Emotions

Left vs. Right
Top vs. Bottom
Inner vs. Outer
Front vs. Back
Brain Biochemistry and Emotion

Chapter Three - Sensory and Motor Cognition

Visual Perception
Speech Perception
Sensory-motor Development

Chapter Four - Cognitive Development

A Way to Study Thinking
Concrete Operations
Formal Operations
Concept Acquisition
Get a Theory
Toward a Model of Adolescent Thinking

Chapter Five - Cognitive Anomalies

My Anomalies
Others’ Cognitive Anomalies
Asperger’s Syndrome—Background
What’s Awry in High Functioning Autism?



Chapter Six - Savant Syndrome

What is a Savant?
Prodigious Savant Abilities
How Calendars Work
How Do Savants Do It?
Research on Calendrical Savants

Lightning Calculators
String Processing Model
How the Brain Hijacks Speech Processing to Crunch Meaningless Numbers


Chapter Seven - Concluding Reflections

How to Have a Conversation about the Mind
Spatial Cognition and Mathematics
Never Trust a Theoretician
Dual Modes of Cognition
Thinking with Words Revisited
Thinking without Words Revisited
Cognitive Psychology Revisited
The Future of Cognitive Science




PREFACE

Book III of I Am Not a Machine completes the trilogy on cognition and fills in many ideas that did not make the final cut for the first two books. I envisioned that Book III would be short and that it would follow quickly on the heels of Book II. I had no such luck, because I got distracted by studying autism. I found myself surrounded by discussions in the popular press, in books and at the dinner table. I had friends in public schools who were dealing with it, and some friends had afflicted family members. The milder forms of the disorder seemed to be primarily cognitive in nature and hence tapped into my interest in modeling human cognition. It seemed to me as an outsider that studies of autism would benefit from a deeper cognitive analysis. I thought that perhaps my models of human cognition might have failure mechanisms built in. That is, perhaps if some important aspect of a model were defective in a person, then that person would exhibit some of the symptoms of high functioning autism (HFA).

My interest in autism was constantly refueled by the amazing amount of research being done in hospitals, universities and schools. Public lectures were frequently scheduled and there was a prodigious amount of information available on the Web from private and professional sources. Furthermore, the high incidence of autism and its apparent rapid increase lent an atmosphere of significance to such studies. I wanted to join in and contribute in some way. The jury will be out for some time on whether my choice of path makes any sense, but I chose to stick with my own theory of cognition and see if the new information that was rapidly pouring in would inform that theory in some way, perhaps suggesting it was of limited value or perhaps offering key insights.

I think I am realistic enough to appreciate that a single person working alone is unlikely to develop new therapies for autism or develop new diagnostic tests or insights into the genetics and neurochemistry of autism. Consequently I am afraid that what I have to say will not be particularly helpful to those on the caretaker front lines. My hope, which is unrealistic enough, is to lend a hand to the researchers by providing a more coherent and deeper model of the cognitive aspects of high functioning autism.

In learning about autism, I accidentally stumbled upon Savant Syndrome. Savants are the rare individuals with debilitating cognitive disorders who nevertheless are outstanding in fields such as art, music or integer arithmetic. Oddly, there is little professional interest within cognitive science in how the amazing behavior of these individuals is accomplished. Of course the savants can’t tell us what they are doing, so research studies are extremely difficult to design. Furthermore, the feats of these individuals appear to be a counterexample to almost all theories of cognition, including my own, so sensible researchers simply keep their distance. The minds of savants appear to be very computational and hence appear to directly confront my noncomputational theory. In fact, a brilliant friend of mine remarked that he wasn’t surprised by the cognitive capacity of the savants at all and felt they did not need explaining. What needed explaining, he said, was the fact that the rest of us can’t do those feats. It looked like Book III was going to have a new focus—it was going to shoot down the theory I developed in Books I and II. Now that’s an exciting trilogy. Well, it didn’t happen.

Savant Syndrome is still a mystery, but I have developed a model of it that is consistent with the models in the previous books. It’s not a pretty model, because it seems that different savants use different cognitive processes. However, the models that I propose are testable with modern neuroscience techniques, so I am optimistic that in twenty years our understanding of savants will greatly increase—regardless of whether my models remain standing or others take their place. The bad news is that my models strongly suggest that you and I cannot use this body of research to become smarter or even do better tricks. In fact, the opposite seems to be true. Those savants who ultimately learn the rudiments of language and normal human interaction almost always lose their savant abilities. You can’t have it both ways. It looks like savants harness aspects of the early hominid cognitive system that got usurped as natural syntactic language evolved. Thus the practical implications of the theory of Savant Syndrome are negligible, while the theoretical impact is enormous and warrants considerable attention from anyone deeply interested in human cognition.

CHAPTER 1


This brings us to Book III. It would appear that whatever I had to say about animal and human cognition I said in Books I and II. After all, Thinking Without Words and Thinking With Words seem to cover the playing field pretty well. The problem is that in learning about a new way to conceptualize human cognition, there is a lot of old baggage that needs to be reconsidered. For the most part there is nothing wrong with the old baggage, but sometimes it needs to be recast in more modern terms. My original motivation for writing about cognition came from Melvin Konner’s The Tangled Wing, yet that book never got mentioned in Books I and II. His concern about the biological constraints on the human spirit mirrors my concern about the brain’s constraints on human behavior. Consequently, I thought it would be instructive to discuss some basic ideas that get classified as cognitive psychology.

Book III addresses many questions left unanswered and even unasked in Books I and II. These include questions dealing with the organization of the brain, how the neurochemical system of our brains affects our cognition, how our emotions function and how our sensory perception and motor control systems operate. Many aspects of human cognition can be best understood as they unfold and therefore Book III explores aspects of the developmental psychology of children.

Chapter Two is on Brain Topography. I analyze a variety of mental functions by considering the role played by the specialization of the brain into left and right hemispheres. A brief review of the top-bottom, inner-outer, and front-back brain dichotomies is given. I also discuss the role of the brain’s biochemical mechanisms in our cognition.

Chapter Three is on Sensory, Motor, and Emotional Cognition. I discuss the body’s input and output characteristics, namely, how it perceives the outside world through its senses, how it acts through its motor movements, and how emotions play a role in linking the two. Emotions can be understood as biochemically based processes that serve both as nutrients and as broadcast signals to coordinate the body’s physical activities.

Chapter Four is on Cognitive Development. Piaget’s stage theory of concrete and formal operations is reviewed in this chapter in order to establish a framework within which to contrast alternative ways of formulating human cognitive development of conscious thinking processes. Concept acquisition and “theory” formation are described in relation to pattern-classification neural networks to model aspects of the cognitive development of children and adolescents.

Book III then takes a sharp turn and deals with what I call the anomalies of cognition. I had held the view for decades that I should avoid studying what happens when the brain, and hence the mind, does not function properly. The human brain is so complex that its failure mechanisms must be impossibly complex, so any attempt to understand them must be futile given the contemporary understanding of cognition. Then two things happened. The first was that in my normal travels I met teachers and other school staff who were working with children who had a variety of learning and developmental problems. I found myself interested in what they were doing and wondering what was going wrong in the brains and minds of these children. Secondly, I wondered if my model of cognition had parts that could get broken in interesting ways. An interesting way would be one where a certain kind of simple failure in one part of the model led to a certain kind of systematic problematic behavior associated with a well-known mental condition.

Autism seemed like it might be interesting in this sense, but I found that full-blown autism was too much for me to handle. High functioning autism and Asperger’s Syndrome, on the other hand, offered the possibility that a cognitive view might offer a way of bridging low-level neural aspects of the brain with high-level characterization of behavior. These reflections gave rise to Chapter Five on modeling the cognitive aspects of high functioning autism.

There is, however, one aspect of heavy-duty autism that although rare is nevertheless a colossal puzzlement. There are rare people with autism who are very low functioning, perhaps with very modest language capabilities and with IQs estimated to be in the 25 to 50 range, but who for some mysterious reason are very good at one thing such as art, music or mental arithmetic. This condition is called Savant Syndrome. Savants who are calendrical calculators are a common form—they can tell you the day of the week of a wide range of dates. They might almost instantly know, for example, that Christmas in 1865 was on a Monday and that in 2065 it will be on a Friday. That these individuals have these strange capabilities defies conventional theories of cognition and they appear on the surface to defy my theories.

It’s not surprising that most books in cognitive science never even mention the phenomenon of Savant Syndrome. I should have followed their lead because, if anything, it looks like the great mental calculators have a capability that could only be endowed by some form of digit calculation—they really appear to be computers of some kind. This would be very bad for my theories; however, if it were true, I thought it best that I find it out rather than someone else. I’m glad I checked it out because I believe I have been able to develop a model of what is going on. My model is consistent with my other models, so I did not need to fall back on a computational theory of mind. There are, however, many unanswered questions, and I hope researchers will seize the opportunity to record in more detail the extraordinary capabilities of these rare individuals. My models of Savant Syndrome are covered in Chapter Six. I’m sorry that I cannot give you a two-sentence overview of what savants do. In fact, it appears that different savants use different strategies, and sometimes even the same savant may use a variety of strategies depending on the problem. In almost all cases, the savants have no idea how they do what they do—it just happens. My approach is simple, however. I asked myself what kind of behavior might ensue if some part of my model were compromised. I think the general answer to that question is “not much,” but if you add the proviso that the person also has access to a very large memory for raw sensory signals, then we can see the potential for some interesting behavior.

The so-called photographic memory capability of savants is legend and well documented, so the course I took was to see how that memory could be harnessed to do savant arithmetic skills without invoking a built-in computational process. I have a result to share with you but it is merely a hypothesis that needs to be evaluated by doing complex simulations and brain scan studies of savants in action. Modeling savant cognition turns out to be a good way to hone our understanding of cognitive modeling. And, speaking of cognitive modeling, the time has come to check out some aspects of the biochemistry of the brain.

CHAPTER 6


Calendars would be a whole lot easier to remember if each month and year had the same number of days. Because it made sense in early agricultural societies for the calendar to have personal meaning, the months are approximately linked to the moon’s revolution around the earth and the year is related to the earth’s revolution around the sun. Since the link is only approximate, small corrections are needed; consequently some months have 30 days and others have 31, except for February, which has 28 days, or 29 in a leap year. Despite these corrections, the Julian calendar used in medieval times had accumulated enough error that Pope Gregory declared that ten days would be eliminated in October of 1582. That correction still serves us today, which means that should you learn how to calculate calendar dates, you’d be fine as long as you never tried anything before 1583. To reduce the need for corrections in the future, Pope Gregory’s astronomers introduced a minor complication into the rule that every fourth year is a leap year: any year divisible by 100 is not a leap year, unless it is divisible by 400. Thus 1900 was not a leap year but 2000 was. Since many countries of the world were slow to adopt the new Gregorian calendar, 2000 was the first year the 400-year exception was invoked for those countries, which by the way include Great Britain and hence its American colonies. Who knew? I thought 2000 being a leap year was just following the simple rule.
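For readers who like rules stated precisely, the whole Gregorian leap-year rule fits in a few lines of Python. This is purely an illustration; the function name is mine:

```python
def is_leap(year):
    """Gregorian rule: every 4th year is a leap year, except that
    century years are not, unless they are divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1900))  # False: divisible by 100 but not by 400
print(is_leap(2000))  # True: the 400-year exception
```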

Despite the capriciousness of our calendar system, it nevertheless has amazing regularities. You probably haven’t noticed that Christmas always falls on the day of the week that precedes the last day of February, but I bet you have noticed that New Year’s Eve always falls on the same day of the week as Christmas Eve. (Please forget Easter, which is a calendrical nightmare.) You probably don’t care very much that the calendar for 2006 is exactly the same as the calendars for 1606 and 2406. This regularity occurs because calendars after 1583 repeat at intervals of 400 years. These and many other regularities found in calendars make the seemingly impossible task of figuring out a calendar date merely difficult.
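These regularities are easy to check by machine. Python’s standard datetime module uses the Gregorian calendar extended backward, so a short sketch can confirm both the 400-year repetition and the Christmas Eve/New Year’s Eve coincidence:

```python
import datetime

# The Gregorian calendar repeats on a 400-year cycle (146,097 days,
# an exact multiple of 7), so these three dates share a weekday.
for year in (1606, 2006, 2406):
    print(year, datetime.date(year, 3, 1).strftime("%A"))

# Christmas Eve and New Year's Eve are exactly 7 days apart,
# so they always fall on the same day of the week.
print(datetime.date(2006, 12, 24).weekday() == datetime.date(2006, 12, 31).weekday())
```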

To give you the flavor of how to calculate calendar dates, I’ll ever so briefly show you a method developed by a professor of mathematics at Princeton University named John Horton Conway. If you search on the Web under “perpetual calendar” and his name, you can find his fuller explanations. The basic approach is to figure out for any year what day of the week Doomsday falls on, where Doomsday is the last day of February (it’s the 28th in non-leap years; 29th in leap years). Once you know the day of the week of Doomsday, you can use a variety of tricks to find the day of any date for that year.

To find Doomsday, first divide the last two digits of the year of interest by 12 to get an integer, A, and a remainder, B. Divide that remainder by 4 and call the integer part C. Now add A, B and C to get D. Divide D by 7 and call the remainder E. Finally, look up the “anchor” day for the century of interest. These are some anchors of interest: 1800–1899, Friday; 1900–1999, Wednesday; and 2000–2099, Tuesday. We determine Doomsday by counting up E days from the anchor day. So, for example, to calculate Doomsday for 1966 we divide 66 by 12 and get A = 5 and B = 6; then we get C = 1; D = 12; and E = 5. Thus, Wednesday + 5 days is Monday, our answer.
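The A-through-E recipe translates almost line for line into Python. This sketch (the names are mine) covers just the three centuries whose anchors were given:

```python
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]
ANCHORS = {18: "Friday", 19: "Wednesday", 20: "Tuesday"}  # century -> anchor day

def doomsday(year):
    """Weekday of the last day of February, by Conway's A-E steps."""
    a, b = divmod(year % 100, 12)   # A = quotient, B = remainder
    c = b // 4                      # C
    e = (a + b + c) % 7             # D = A + B + C; E = D mod 7
    return DAYS[(DAYS.index(ANCHORS[year // 100]) + e) % 7]

print(doomsday(1966))  # Monday, matching the worked example
```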

This probably seems a bit complicated at first, but I bet if you did it five times a day for a few days, you’d be able to do it on paper without looking up the formula, and if you did it for a few weeks you’d be able to do it slowly in your head.103 But we are not done yet.

Now we need to get from the date for Doomsday to any date in the year. To do this we define a day called a “little-doomsday,” which occurs each month on the same day of the week as the real Doomsday. For some months it is easy. For the even months other than February, it falls on 4/4, 6/6, 8/8, 10/10 and 12/12. For the remaining odd months it falls on 5/9, 9/5, 7/11, and 11/7, which also has a simple pattern (I work from 5 to 9 at the 7-11). So, continuing our example, in 1966 Doomsday, February 28, is on a Monday, and December 12 (a little-doomsday) is also a Monday. Two weeks after that is 12 + 14, or the 26th; hence the 26th of December is also a Monday, and therefore Christmas, December 25th, was on a Sunday in 1966. You should check this, as I did, by using a Web-based program for calculating dates. Just search for “perpetual calendar” and you’ll find many easy-to-use sites.
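Putting Doomsday and the little-doomsdays together gives a complete day-of-the-week sketch for the months whose little-doomsdays were listed (January, February and March need extra handling not shown here; the helper names are mine):

```python
DAYS = ["Sunday", "Monday", "Tuesday", "Wednesday",
        "Thursday", "Friday", "Saturday"]
ANCHORS = {18: "Friday", 19: "Wednesday", 20: "Tuesday"}
LITTLE = {4: 4, 6: 6, 8: 8, 10: 10, 12: 12,   # even months
          5: 9, 9: 5, 7: 11, 11: 7}           # "from 5 to 9 at the 7-11"

def doomsday(year):
    """Conway's A-E steps, as in the previous paragraph."""
    a, b = divmod(year % 100, 12)
    e = (a + b + b // 4) % 7
    return DAYS[(DAYS.index(ANCHORS[year // 100]) + e) % 7]

def weekday_of(year, month, day):
    """Count forward (or back) from the month's little-doomsday."""
    offset = day - LITTLE[month]
    return DAYS[(DAYS.index(doomsday(year)) + offset) % 7]

print(weekday_of(1966, 12, 25))  # Sunday: Christmas 1966, as in the text
```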


We just learned one way to figure out the day of the week for a given date. There are likely to be other calculations of similar complexity that also get the job done. The question we need to answer is: do savants do it that way? Are they merely fast at doing mental arithmetic? I can do the calculation in my head in less than a minute, and I bet you could, too, with just a little bit of work. However, if we spent years practicing hours a day, you can be sure we’d get faster—but would we be as fast as a savant?

To get a handle on this, think about oyster shucking—splitting a live oyster with a special knife and removing the meat. Please do not try this experiment, because it would be easy to seriously cut yourself. I suspect I could shuck an oyster in one minute or, if motivated, do 40 to 60 in an hour. In one 2005 amateur contest, the winner did 24 oysters in 2 minutes and 47.9 seconds—about 8.6 oysters per minute, or roughly 515 oysters per hour. In a 1998 national contest, a fellow (a professional, I assume) set a world record by shucking two dozen oysters in one minute and 18 seconds; that’s 3.25 seconds per oyster, or about 1,107 oysters per hour. My point is that with practice, and I mean a whole lot of practice, the human body and mind can perform amazing feats very rapidly.

It’s easy to predict that with practice I could reduce my 45 seconds to figure out a date considerably, but how far down? Let’s say that I have ten elementary calculations to do and that I could do each one in a second. The entire calculation would take me ten seconds. I’m not going to do the experiment, but somebody else has. Two researchers, Barnett Addis and Oscar Parsons, from the psychiatry department of the University of Oklahoma engaged a psychology graduate student, Benj Langdon, to see if he could learn one of the methods of doing calendar calculations.104 Apparently, Langdon practiced day and night but, after much practice, could not match the speed of two savant twins used as a benchmark—until one special day. On that day something almost magical happened—he could do the date calculation without thinking about it. He just knew the answer and was able to match the speed of the benchmark savants. We’ll address what happened in a moment, but for now I conclude that consciously doing the sequential calculations as fast as humanly possible was not fast enough to match savants. We are still left not knowing how savants, or someone who acquires a savant skill, do the task subconsciously.

It might appear that Langdon is no help here because after he acquired the subconscious skill he no longer knew how he did it; however, we do know how he got there and it wasn’t by eating spinach or mind-enhancing pharmaceuticals. To explore how he might have done it, we need to step back and consider all possible mechanisms—at least in outline form. This is a useful exercise because we have no direct way to determine if other savants do it the way Langdon did it. So we need to understand the properties of alternative putative mental calendrical processes and then check if they make sense.

The simplest method to describe is to use memory for the whole set of calendars. If a savant can instantly memorize a major part of the Manhattan phone book, then memorizing fourteen different calendars (seven possible starting days x 2 for leap and non-leap years) and the years they apply to ought to be a piece of cake for a savant. There are likely to be other ways of memorizing the different calendars, so to determine how a certain savant actually did the task, we’d need to know exactly what the person had seen in the course of his or her life—likely to be impossible. Savants sometimes make systematic errors, however, so researchers can deduce a great deal about what they do and have seen by analyzing those errors. For the most part, memory comes in two flavors, a memory for pictures and actions and a memory for words, corresponding naturally enough to primary and secondary cognition. Think of the picture memory as being composed of pixels of color and intensity. If you can close your eyes and imagine the face of your mother, then that is a picture memory, often called an eidetic memory. I’m calling this model #1.

On the other hand, you probably can tell me something about your mother, stories of what she was like and things she did. You could even describe a picture of your mother. This is language-based memory. A calendar—don’t think of the photo of the snow scene—can be pictorially represented by the tabular layout of the numbers into days of the week listed across and weeks listed down so that you could reproduce the image with your digital camera. Alternatively, a calendar can be represented by language where each number is tagged by a meaning and a location. A typewriter or word processor can symbolically represent the calendar by typed numbers, spaces and carriage returns. Language memory is model #2.

Another way to make the day-date correspondence is by knowing extensive points of reference and knowing many calendar regularities. In the year following a non-leap year, each date falls one day of the week later than it did the year before. For example, in 2005 Christmas was on a Sunday, so in 2006 it has to be on a Monday. A larger-scale regularity is that within a century, the calendars for years 28 years apart are exactly the same. It is conceivable that there are enough of these regularities to make it possible to determine an arbitrary date given them and a large number of date facts. I am not aware, however, that anyone has actually enumerated a set of regularities and facts to back up this hypothesis. In 2006, the year I am writing this, Doomsday is the same as the 2000 anchor, which is Tuesday. Consequently, December 12 is a Tuesday, as are September 5 and 12. Although you and I can remember some of these facts if they follow a pattern, a savant can remember a vast number of unrelated facts. Hence simply by looking at calendars and figuring out dates, savants accumulate vast numbers of reference points in memory that make determining another date quite easy. This is model #3.
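The 28-year regularity behind model #3 is also easy to confirm by machine; a quick sketch using Python’s datetime module (the year pairs are arbitrary examples of mine):

```python
import datetime

# Within 1901-2099 every 4th year is a leap year, so 28 years
# (seven 4-year leap cycles) shift the weekday by an exact number
# of weeks and the two calendars line up perfectly.
for y1, y2 in ((1930, 1958), (1941, 1969)):
    same = all(
        datetime.date(y1, m, 1).weekday() == datetime.date(y2, m, 1).weekday()
        for m in range(1, 13)
    )
    print(y1, "and", y2, "share a calendar:", same)
```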

The hardest method to describe is by far the most interesting because I think it has been overlooked by most researchers. It’s based on subconsciously memorizing macro patterns in calculations. For example, one way to add 8 and 7 is to think of the 7 as 2 + 5, and add the 2 to the 8 to get 10, then add the 5 to the 10 to get 15. But you and I don’t have to do that because we just know the answer is 15. In other words, we could do a calculation but with experience we just memorized the answer. Savants could be doing the same thing on a much wider scale. Let’s look at the first part of the doomsday calculation—divide the last two digits of the year by 12. Instead of actually doing long division, it’s easier to use the ladder of 12s: 12, 24, 36, 48, 60, 72, 84 and 96. So to start the date calculation for 1975 you note that 75 is past 72 in the ladder and so the answer is 6, 3; that is, the integer part is 6 (since 6 x 12 = 72) and the remainder is 3. This approach to division could be quite difficult if we had to do it for all possible numbers, but dividing by 12 is the only hard division operation in the date calculation so there is just one ladder to memorize. The other division is by 4 and it’s only of numbers equal to or less than 11, so it is very easy.
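The ladder of 12s amounts to replacing division with a small memorized lookup. In Python the idea might look like this (the function name is mine, purely for illustration):

```python
LADDER = [12, 24, 36, 48, 60, 72, 84, 96]  # the memorized multiples of 12

def divide_by_12(n):
    """Quotient and remainder found by locating the highest rung
    at or below n, rather than by long division."""
    rungs = [r for r in LADDER if r <= n]
    biggest = rungs[-1] if rungs else 0
    return biggest // 12, n - biggest

print(divide_by_12(75))  # (6, 3): 75 is just past 72 on the ladder
```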

The trick here is based on parsing—grouping complex things into chunks. By noting patterns in the chunks, the brain has fewer things to memorize. To get a feel for the process, please memorize the following pattern of letters, “kwqbmpzw.” Now memorize this pattern, “thankyou.” That was a lot easier because there are hundreds of thousands of words in English and you readily recognize them even when they are not separated by spaces (if there aren’t too many words). That first combination had 8 places, each of which could be filled by one of 21 consonants, giving 21 to the 8th power, or nearly 40 billion (4 x 10^10), possibilities. The number of two-word combinations that form sentences is microscopic by comparison and hence far easier to recognize and recall. Thus parsing a complex pattern, when this is possible, can yield very large computational, memory and pattern-recognition gains.

The hypothesis I am proposing is that by repeatedly doing calendrical calculations, a savant eventually runs across all the alternatives and remembers the answers. The savant starts by doing the calculations but at some point the patterns of the parts are learned and the whole process speeds up immeasurably. There are too many number combinations for us to enumerate them here but a computer simulation of this process ought to be a convincing demonstration of this approach to computation and I hope it is tackled by somebody soon. This is model #4 and is my current hypothesis for how Benj Langdon, the normal graduate student who learned to do rapid calendrical calculations, did his apparent magic.
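In computing terms this hypothesis is memoization: do a calculation once, remember the answer, and thereafter just look it up. A minimal sketch, in which the function is my toy stand-in for one sub-step of the date calculation:

```python
from functools import lru_cache

computed = []  # records which inputs were actually worked out step by step

@lru_cache(maxsize=None)
def year_offset(yy):
    """The A + B + C steps for a two-digit year, done the slow way
    the first time and simply remembered ever after."""
    computed.append(yy)
    a, b = divmod(yy, 12)
    return (a + b + b // 4) % 7

year_offset(66)
year_offset(75)
year_offset(66)        # no work this time: a pure pattern lookup
print(len(computed))   # 2, because only two distinct inputs were computed
```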

The fifth and last alternative for date determination is internalized computation. On the surface, this model is analogous to internalizing a skill such as driving a stick-shift car. We start out very conscious of each movement of our foot on the clutch and hand on the gearshift lever, noting the speed of the car and whether we are going up or down a hill. But after twenty years of driving we do these actions very smoothly while not having a clue about how we do what we do. It’s a bit like learning a recipe. The words consciously guide our actions as we learn to bake a cake, but, eventually, we properly respond to each situation. We learn things that aren’t envisioned in the rule-based procedure. I think of it as a complex stimulus-response activity—it’s sophisticated pattern recognition. The results of the recognition are things we do with our bodies—motor-control signals. The process of internalization, then, is transitioning from a conscious, word-based rule system to a subconscious pattern-recognition and motor-control response system. We could think of it as going from what many people think of as a classical left-brain function to a classical right-brain function. This sort of dichotomy is worthwhile if it eases our conceptualizing, but we must remember that the brain is very complex and resists simple partitioning, so that in fact either side of the actual brain could perform the idealized left- or right-brain function.

I am very concerned that extending the idea of internalizing our ability to drive a stick-shift car to internalizing arithmetical calculations is a big leap. The problem is that the output of the activity isn’t exactly motor-control signals that operate on muscles. I’m slightly less bothered by the fact that the input comes not from external sensory signals but from signals or states inside the brain itself. In Book II we discussed metaphor theory and how the meaning of abstract words is embodied in actual actions we do with our bodies. So if we say that the senate sat on the health insurance bill, we are invited to think of sitting on an object on the ground, making it hard for someone else to move it or work with it. With this in mind, we can think of the mental process of adding the abstract digits two and three in more concrete terms as taking two salt shakers, moving them to a group of three salt shakers, and then counting the new combined group to discover we now have five salt shakers. Thus, if asked to be creative, I can envision simple abstract addition as being embodied by actual physical actions involving sensory signals as inputs and motor-control signals as outputs. I could then use the term “internalized” to mean transitioning from the physical embodied actions of adding real objects to doing the same operations in my head. That use of the word “internalized” is even better than the stick-shift example because here we are literally going from outside the brain to inside the brain, while in the stick-shift example we are going from the conscious part of our brain to the subconscious part.

If we are adding numbers that we have added before, then this internalized addition can make some sense, but that is exactly the situation in which we don’t need to invoke this model, because we could just say that the brain is recognizing a pattern it has seen before and associates the pattern with the correct answer. The interesting situation is when we do something new, such as add numbers we haven’t seen before. For example, to add 17 and 23, we could use a procedure that adds one to the 23 and subtracts one from the 17, repeating until the 17 gets to zero. This is a perfectly pleasant algorithm that works for any pair of numbers we can represent. In fact, we can envision doing this algorithm with any number of salt shakers or blocks. However, there is a problem. When we do the operation with physical blocks, we can see when one of the groups has gotten down to zero blocks, and then we can enumerate the big group to get our arithmetic answer. When we internalize the action, it is not clear how we keep track of the size of each pile. It doesn’t matter whether I pictorially represent 17 blocks or represent the words or symbols comprising the two digits of the number 17. You can think of the count as a variable, as we did in high school algebra, or as a pointer to one number in a long sequence of numbers strung out in a line. Reducing by one is just moving the pointer down one position. These are easy operations for a binary-logic silicon computer but not so easy for a neuron-based biological brain. Nevertheless, “not so easy” is not the same as impossible, so we are required to keep this hypothesis on the table. This is model #5.
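For concreteness, the counting procedure just described can be written down as an explicit algorithm with variables, which is what model #5 claims gets executed subconsciously:

```python
def add_by_counting(x, y):
    """Move one unit at a time from pile x to pile y until pile x
    is empty; the variables stand in for the two piles of blocks."""
    while x > 0:
        x -= 1
        y += 1
    return y

print(add_by_counting(17, 23))  # 40
```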

We can summarize the five possible models of doing calendrical calculations as follows.

1. Memorize the whole set of calendars as a set of pictures (eidetic memory).

2. Memorize the whole set of calendars as one big symbolically represented pattern (i.e. word memory).

3. Use calendar facts and regularities.

4. Memorize parts of the patterns of calendars and procedures to generate dates that can then be recombined to generate specific day-date correspondences.

5. Learn an algorithm with variables that can be executed subconsciously.

