Saturday, May 18, 2013

The Harkness Table and Educational Field-Trips of the Mind

[Museum photo of wooden benches and a blackboard: a traditional classroom]


Is knowledge still power? Is the ultimate goal to turn yourself into a walking encyclopedia? Will that ensure future success and a host of job opportunities? Will you be considered intelligent and resourceful because you have memorized loads of data?

Then you are, at best, no smarter than a smartphone. If all you have is knowledge, you can be replaced with modern bite-sized technology. For example, if you are able to memorize significant dates in history, you will be admired for your memory storage, but I can most likely find the same information in about ten seconds with a Google search. Put differently, you might be exhibited in a museum as a specimen of a bygone information age, but the link between knowledge and power has weakened with the advent of technology.

In fact, think about taking computer science classes, say, twenty-five years ago. What you learned then may still be somewhat useful, but seeing how things have changed, you definitely need a refresher. For one, computers have shrunk significantly in size but have expanded in memory; despite Bill Gates' reported comments, 640 kilobytes of memory is simply not enough (although I have heard that he denies ever making that statement in the '80s).

So what are the ramifications of this new outlook for the field of education? Certainly, the field has accepted a shift in style and method. We have moved from a teacher-centered, lecture-saturated class to skill- and performance-based outcomes that put the student at the center of the discourse. I believe this is connected to our changing perception of knowledge for its own sake, since simply knowing things does not cut it anymore. It also empowers students to learn in an active manner, which, according to recent studies, generally benefits overall learning.

But a student-centered approach is often easier said than done, and it may be more prevalent in theory than in actual practice. There remains a sense on both sides of the teacher-student divide that a teacher has to instill knowledge, and teachers more often than not switch almost automatically, or subconsciously, into lecture mode.

The students are often seen, and may even see themselves, as "empty vessels" that need to be filled and then sent out into the world. The idea suffers from the fact that we do not fully know whether any of the knowledge has actually sunk in. Our tests and exams are often knowledge-based, which means that students can memorize the answers, cram the night before, and then wipe the slate clean by hitting the delete button; come the end of the semester, they have forgotten practically everything they were taught.

In fact, students may in many cases give the answers the instructor expects of them, and then all you have is regurgitated knowledge and very little actual analysis or reflection on the part of the student. The only thing they have learned is how to please others and give them what they expect. Although somewhat useful in a practical sense, say for later job situations, that is not what education should be or strive for.

On the other hand, one should also keep in mind that students are not merely empty vessels; they come filled with all sorts of stuff, some of it useful for their education, some of it not. It is then the task of the teacher to activate the beneficial parts and not the others, such as prejudices, stereotypes, or pseudo-scientific claims, so that the latter do not interfere with the learning process.

Although I value knowledge, and I think it is a great idea to equip our students with skills for future success in work and life, I am not sure the current methods fully achieve those outcomes. First off, we are expected to plan our lessons along certain guidelines. Each section has a specific function: to arouse curiosity, to check students' prior knowledge, to provide new information, and then to follow it all up with a post-evaluation to see how much of the material has actually sunk in.

This is all well and good, and it is very valuable for teaching. But its somewhat rigid structure also becomes limiting. I advocate a teaching style that I call “framed spontaneity,” in which the lesson plan and its structure remain flexible; they ought to be adjusted along the way to the needs of the students, the class in general, the situation at hand, and the teacher's own needs. All these variables can interfere with the order and structure of the lesson, although the goal is still to reach the learning outcomes set in the first place. In other words, the destination is the same, but the path can be different.

I generally see the classroom as both exploration and experiment. (I also see it in the poetic and metaphysical terms of a sacred space in which knowledge is bred for all the members involved.) Exploration in the sense that my goal is not so much knowledge as giving students the guidance and motivation to explore their own world and ideas through the given topic. For example, we will engage in discussion and discourse on a given subject. The class will brainstorm, express, and evaluate ideas.

Knowledge is secondary but still necessary to fill the gaps and to enable general discussion; yet the focus is to equip students with the skills of critical thinking, analysis, and interpretation. As I tend to say in class (and remember, this is for humanities classes and most likely would not work in the sciences), there are no right or wrong answers as long as the students can back them up with clear and convincing logic, examples, and evidence.

This will be an open-ended discourse that needs, of course, to be focused. But in terms of knowledge and even subject, it can lead to unexpected insights and results. It is at the same time mostly student-centered, since it is directed mainly towards the students' needs and desires. It is an exploration, an adventure that we all embark upon, with the classroom as the ship and the teacher as its appointed captain. It is a field-trip of the mind, and on good days everyone will benefit from it and learn something new, including the teacher. On bad days, the teacher will have to put on his lecture hat for a bit to give the students time and opportunity to get those creative juices flowing again.

But I say good and bad days because there are other factors involved too. The first and most important one is a question of motivation and encouragement. A discussion without people willing to participate, or without them feeling comfortable sharing their ideas, leads nowhere productive. The teacher needs to show the significance of the issue and relate it to the lives of the students whenever possible (and you would be surprised how much is actually possible once the connection is established). At the same time, all opinions must be respected, and there should be a general atmosphere of acceptance and tolerance so that students feel comfortable sharing their opinions with others.

The other factor is, evidently, prior knowledge of or relation to the subject. You cannot discuss Plato without knowing at least the rudiments of his philosophy, so as to be able to put him into perspective with other philosophers while checking how his ideas link with one's own convictions and beliefs. Once these criteria are satisfied, we can engage in productive, educational, and academic discourse.

The view that I am expressing here has been in practice for over 80 years! I was surprised to learn that the Harkness Table, proposed by the oil magnate and philanthropist Edward Harkness, is a method built upon what I believe to be a very useful way of teaching relevant skills to students. It has been a staple of various boarding schools and colleges, having originated at Phillips Exeter Academy.

I stumbled upon this perspective while reading (and preparing an upcoming book review of) the memoirs of the successful business leader Peter Georgescu, the former CEO of Young & Rubicam. He stresses the importance and value of this type of education. It gives students not only necessary skills but also confidence in and reliance on their own abilities, something they can take with them into their later professional and personal lives. It is not about teaching them what the instructor wants them to know, but about instilling a perhaps lifelong curiosity about knowledge, research, and the critical analysis of relevant issues.

I have had previous discussions on this topic with directors and board members. They generally believe that it is of utmost importance to state the objectives first and then to show at the end that those objectives have been fulfilled. To me, that takes the thrill and exploration out of the whole deal. My method is along the lines of: let us get started, and at the end you will be surprised by what you turn out to be able to do. Its focus is on the students' own accomplishments. It is the aha-moment that holds up a mirror to their own capabilities.

Yet the way the field of education prefers to structure itself is to have a codified plan, with its focus mainly on what can be tested and evaluated: for example, statements that point toward an expected future outcome, such as "by the end of the course, students will be able to do the following."

This is all fine, but there will be classes (and also teachers) that stand out from the rest. And they will most likely be those classes that strayed from the codified restrictions and in which students got involved in exploring issues; by looking at those issues with different eyes, they may have learned something new and valuable.

It should not be a picnic, but a field-trip of the mind. It should include a sense of wonder and curiosity. There is too much focus on grades and outcomes, but the most important values and benefits are those that are permanently engraved in the hearts and minds of the students. And that is what, ideally and fundamentally, education is and should be about.

Monday, May 6, 2013

Free Will, Neuroscience and Personal Responsibility


[Photo of psychology professor Michael Gazzaniga]
Last week I was invited to two different talks. I willingly (or so I am led to believe) chose to go to both. The first one was an empty promise; neither of the keynote speakers showed up (or did they choose not to?). So the focus of my post will be on the second, a talk given by the renowned professor Michael Gazzaniga under the title Who's in Charge?: Free Will and the Science of the Brain.

Whenever I hear the words “free will,” a light flashes in my brain. I have been fascinated with this topic for many years, and my perspective has changed over time. From being convinced that we are free to be whoever we want to be (an overly optimistic and open-ended view, I admit), I have come to espouse a perspective that limits and restricts our freedom, especially since gaining a little more insight into biological and psychological processes.

However, judging from the abstract, it seemed that Gazzaniga believed in personal responsibility regardless of the influence of our brain-machine on our thoughts, actions, and behavior. I was curious to see how he was going to achieve that feat, particularly given his neuroscientific background.

The auditorium filled up rather quickly, and I chose a front seat after ensuring it was not reserved or taken. Even as a student, I always preferred the front row over the back; I tend to find the back rows more distracting. At the same time, in case I had questions, I would be more visible and audible than people behind me.

I was immediately impressed with the speaker, who, despite a proven scientific track record and a number of significant accomplishments, struck me as both a humble and humorous person. He interspersed his slides with sly comments and funny clips. He included various references to popular culture and Hollywood that illustrated and backed up his views in a clear and simple manner.

The talk started with a direct and predictable assumption: free will is an illusion. He then gave a general philosophical definition that revealed the fantasy element of such a concept. So far we were on the same page. I do not believe that we are free to do or think whatever we wish; there are evident limitations on whatever we wish to be or do.

For example, we are born with strengths and abilities that can be fostered through the environment, but we are not as plastic as B. F. Skinner once claimed. We cannot be anything or anyone we want to be; our choices and options are much more limited than that. I can never be a painter or a dancer (I have come to accept those facts); neither of those abilities is in my blood, so to speak.

In addition, free will would mean continuous conscious control, yet think how much of our body operates beyond our conscious scrutiny. If everything did require conscious control, it would lead to unmitigated disaster: we would forget to breathe, forget to create cells, and so on. Since our body already does most of the work for us, it shows that, whether we wish to accept it or not, we are not as free as we think we are.

Then Gazzaniga moved on to questions of blame and responsibility, hence moving out of the realm of neuroscience and into questions of morality and society. He claims that although we live in a deterministic world and our brain is a machine, we are still ultimately responsible for our actions.

He claimed that there is a social layer to our brain. Although the brain may be fixed (he used my own favorite analogy, the motherboard of a computer), we can still influence it in various ways through our experiences and social contact, which would be the software we download.

He believes that morality is mainly a social issue and gave the example of being the sole survivor on an island. Without social interaction, morality would not matter, but the moment another person arrives on the island, the fight over the coconut becomes of value (these were indeed his words, which I am paraphrasing here). To me this was a bit of a stretch because I think morality is more a personal and individual matter. If we are moral only because social forces are present, then our morality lacks a sound basis.

I will give two examples to show this. One is actually a study Gazzaniga himself talked about. Infants are already wired to recognize issues of justice and fairness. When two people or two moving objects (the latter being perceived as animated and hence alive) receive compensation, the infant will be content only when each receives the same portion, not more or less. This shows that we have, at least before significant contact with society itself, an ingrained sense of right and wrong, our own integral morality.

The second is a personal example. As a conscientious and law-abiding pedestrian, I will stop at a red traffic light no matter what. Even if there are no cars in sight, or if other people cross against the light, I still stick to the law. I would generally not jaywalk in this case unless it turns out that the traffic light is broken and stuck on red.

I think that my compulsion to follow the rules is an individual matter; it is not contingent on social contact, such as other people being present at the time. In an empty room without cameras, I would still turn in to the Lost and Found department the fat wallet somebody had left behind. My behavior is conditioned by my respect for the law and for doing the right thing, and it is not dependent upon the eyes of the other. (Please note that whether I actually had a choice in the matter is still up for debate.)

The talk then turned to criminal behavior. Are people who commit evil ultimately responsible for what they do? He claims so, and that put me in a state of disbelief. How can somebody who is insane still be considered responsible for their actions? Gazzaniga seems to believe they are.

That opens up a lot of questions, not to say a can of worms. If so, what would be the best manner of punishment? Is retribution acceptable? And who ought to decide on these matters? At this point, he could not resist some surprisingly snarky but pointed attacks on lawyers, who, he claimed, have little to no background in neuroscience and psychology, and hence are not the best people to deal with such matters.

A licensed lawyer made himself heard during the question period, and I too could not help but disagree with Gazzaniga's comment. Lawyers are simply doing their jobs. Their work is, sad to say, not so much about getting to the truth of the matter as about defending the client regardless. Sure, they generally make a handsome amount of money in the process, but their job is not to shed light on the issue but to find the best ways to uphold the human rights of the client, deserved or not. In other words, lawyers do not need a background in psychology but must be versed in rhetoric and the law. Yet when it comes to the decision-makers, the judge and/or jury, that is a different matter altogether.

To wrap up, it turns out that although we do not have free will, we are still responsible for what we do because of a type of social responsibility. During the reception, glass of red wine in hand, I had a chance to sit down with Gazzaniga and ask him some questions about his talk. Did he believe that we have a choice, then? In typical professor style, he retorted with a question for me: do chimps have a choice? I said, well, humans are a bit more developed in terms of reason, but he stuck to his question. All right, yes, in my own view, chimps have a choice. He nodded and smiled.

Then what we have is a rather limited form of free will, restricted by our brain and experiences, right? Although he seemed not to appreciate the term “free will,” he generally agreed.

My line of questioning continued nonetheless. If we have a choice, then why do we choose to do evil? To my knowledge, this question was either left unanswered, or he digressed in typical professor style. I came somewhat to his rescue (putting words into his mouth and answering my own question) by referring to Socratic ignorance: we do evil because we simply are not fully aware of the good. He seemed relieved to put that question to rest.

I asked him what he thought about Buddhism and its view of the ego. He answered that the perspective of the mythic “I” can be useful: it is a narrative that we use to make sense of the world, but it is not detrimental, and it is not necessarily untrue. In other words, it is a useful illusion we create that could indeed turn out to be true.

Would the same not apply to the illusion of free will then? This question remained unasked partly because we were interrupted by others and mostly because I felt that our discussion was not going anywhere in particular.

But I want to finish with an important observation he made during his talk that left me thinking about morality and the human need for punishment and retribution. If there were a pill that could cure Parkinson's, would we not all embrace and hail this discovery? Of course.

But what if there were a pill that could be given to the murderer or the shooter of innocent children, one that would keep him from committing violence again? Would society accept that as willingly? I think the answer is no. We want the perpetrator to suffer for his actions. Perhaps it would be best to give them the pill before they act, Minority Report-style (although that is highly controversial), or, more practically, to take away their opportunity to have dangerous weapons at their disposal.

And finally, we can also give those who seem lost, confused, or helpless, those who suffer from a dangerous cocktail of genes and environment, the empathy and care they need to escape the dark void within. That way we can bring a little light into tormented souls and hope that a Socratic light of wisdom will flicker at the end of their dark tunnel, dissuading them from going through with the evil deed.