Reading Group

Hey guys!  This post is going to be fairly short; I just wanted to keep you all updated on what we’ve decided to do regarding the monthly Reading Group.  I’ll also copy/paste this directly as a post in the discussion forum on the MeetUp website so you can see it and, if you like, respond to it there as well.

In any case, we will be having a reading group once a month, probably the 2nd or 3rd Sunday afternoon of the month.  We will be reading a mix of articles and sections from books.  We will try to keep them to a maximum of 12 pages so that they can be read fairly quickly and we can all take our time to think about them.  So even if you don’t have a lot of time to read in your busy schedule, that’s no problem: even just a weekend should suffice.  🙂

We are going to rotate between social issues, soft sciences (psychology, anthropology, etc.), hard sciences (physics, biology, technology, etc.) and philosophy every month, so hopefully we will cover some topics that everyone can enjoy.  The topics will be largely accessible (so probably no metaphysics or quantum theory!) but with the intention of challenging preconceived notions about them.  For instance, we may bring up discussions about problems with the golden rule or the scientific method, or the pros and cons of universal healthcare.  The intention is to broaden our minds about different sides of certain issues, and to develop stronger arguments for or against them through productive and civil discussion and reading.

As far as specific material goes, I will be choosing it every month, but I am very happy to include your suggestions if you have any.  Alternatively, if there is a specific section of a book or an article that you would like to read as a group, I’m very, very happy to help you organize a reading group on a day and time of your choosing for the purpose of reading and discussing it.  In fact, I encourage you all to do so!  I would love to see multiple reading groups (as well as other groups!) every month.

If you would like to join but are having trouble with the material, that’s fine!  Another purpose of the group is to help each other understand the material, so please prepare some questions to bring to the discussion or, if you’d prefer, you can simply email me and I’ll try to answer your questions – as well as post them in the meet up details – as best I can.

For now we’re going to keep the group size fairly small (six members) but if it seems doable we’ll expand it in the future.

The expectations for this group are pretty minimal:

1. Please read the material (duh).

2. Please prepare at least one question and/or one comment about the material for the group.  If you didn’t understand something or want to get other people’s opinion on something, a question would be great.  If you want to share an understanding or interpretation of the material, or want to express something you agreed with or disagreed with, a comment would be great.  Of course, you are more than welcome to offer multiple questions and/or comments.

3. If you organize the meet up, then it’s your job to prepare a number of pertinent questions regarding the topic, as well as some backup material that you think may be of use for discussing or presenting multiple sides of the issue.  For instance, if you are going to use a section of Descartes’ Meditations, you might want to have some additional material (videos, articles, etc.) that discusses dualism vs. monism, philosophy of mind, etc.

The tentative format will be as such:

1.  The organizer very briefly summarizes the Reading Discussion topic (what the material was, who the author was, some basic points brought up).

2. The organizer first fields all questions (no responses yet, just get them all down on paper first).

3. The organizer then offers some preliminary questions, or questions that are necessary for everyone to understand the material (please save more discussion-oriented questions for the end!)

4. The members all present their comments and everyone freely discusses the material and the comments that have been presented.

5. Finally, if any remain, the organizer presents the more discussion-oriented questions that have not been answered yet.

That’s a wrap!  Let me know if you have any questions about it, and I hope to see you at a future Reading Discussion Group!


March Meet Up: Consciousness

Every month we center on a particular theme for discussion.  This month’s topic was split into an “open” group (a meet up where anyone and everyone can join) and a “closed” group (for members only).  This blog post will, of course, be about this month’s open discussion group, so there will be no blog post for the same topic next month.  However, we will be starting a book/reading discussion next month as well, so expect to see a new blog post about that.  In any case, this month’s topic was consciousness.  We focused on how we define consciousness, animal consciousness, machine consciousness, philosophical ideas about consciousness and “higher” consciousness.  So, without further ado:

What is consciousness?

Consciousness isn’t just being awake, although we typically use it synonymously with that definition (He’s not conscious!  Wake him up!).  There doesn’t seem to be much agreement about what it is.  Instead, it appears to be a philosophical primitive, or something that most people have an intuitive and shared understanding about when we use the word.  Basically it’s defined as the ability to be aware of external objects as well as internal states.  More specifically, it is characterized as having:

  1. Sentience
  2. Awareness
  3. Subjectivity
  4. A sense of selfhood
  5. Executive control system
  6. Wakefulness

I had a hard time distinguishing those terms; a lot of them are really similar, if not the same.  One could easily make the argument that we ought to combine several of them, or even add to or subtract from the list.  In any case, I looked into how they are defined and came up with these descriptions of each:

1. Sentience: The ability to feel, or experience sensations (individually called qualia) subjectively.  It is distinguished from thinking or reasoning. It is distinct from creativity, intelligence, sapience, and self-awareness.  Sapience (not a necessary characteristic of consciousness) is the ability to act with appropriate judgment, more colloquially known as wisdom.

2. Awareness: The ability to perceive (or even experience or feel) events, objects or patterns.  It may be conscious, unconscious or even subconscious and, to this extent, animals have this capability as well.  It may be internal as well as external.

There was an interesting and very understandable confusion regarding sentience and awareness (understandable because I, too, had trouble distinguishing them!).  Sentience is the ability to interpret sensations whereas awareness is about perceptions.  Colloquially, these two words are sometimes used interchangeably, but in a philosophical or biological context, they are quite distinct.  A sensation is a subjective experience, like an emotion or an image in one’s mind.  It’s internal.  A perception, on the other hand, is an (arguably) objective experience.  It’s related to things that exist outside of the mind, externally, like seeing an apple or watching a movie.  How that movie makes you feel is your sensation of it, but the act of watching the movie itself is a matter of perception.

3. Subjectivity: Being a subject – me, I, myself, etc. – or the sense/feeling of being someone apart from others (I am me, and you are you, we’re not the same). The subject is the “form” or body that contains those things, and subjectivity is the feeling or understanding of that itself.  Subjectivity is a constantly changing quality and it is up for debate as to how permanent or transient it is, an issue tied into the idea of the “self”.

4. Selfhood: the subject in subjectivity is the self.  Being aware of the self means knowing that you are a separate entity from the other entities around you.  It means being able to introspect, or to examine one’s conscious thoughts.  This thing that thinks is the self.  The capacity to identify with one’s past actions and to consider one’s future actions just as well as one’s present ones is the capacity for selfhood.

This was also, even more understandably, a confusing distinction.  Some argued – and I partially agreed – that we could put them together.  The only real distinction was this: is it possible that certain organisms have subjectivity – the feeling of being a separate subject – but not selfhood?  Can other animals, for example, recognize that they are separate from other beings, but be unable to reflect on or examine that recognition?

5. Executive Control System: the management (regulation, control) of cognitive processes, including working memory, reasoning, task flexibility, and problem solving as well as planning, and execution.

6. Wakefulness: even our sleeping self, our unconscious self, is said to be different from our awake self, so wakefulness is an essential part of consciousness.  For example, if I murdered someone while sleepwalking we would not consider it an act of the self, at least not so much as if I did it awake and conscious.  As Locke postulated, it “would be no more right, than to punish one twin for what his brother-twin did, whereof he knew nothing, because their outsides were so like, that they could not be distinguished.”  Actions that one performed without any conscious awareness cannot be attributed to the person, at least not in the same way as if one had been aware of them.

We discussed which of these characteristics were necessary, and which may be sufficient.  Often, it depends on how consciousness is being used.  For example, while the sixth characteristic (wakefulness) may be very useful in a court case, is it useful in a philosophical discussion?

Do other animals have consciousness?

Since we cannot directly communicate with them, it is hard to say how exactly they experience the world around them.  They seem to exhibit certain characteristics, like sentience and perception, but what about subjectivity and selfhood?  Many of us intuitively feel that animals have them, but how can we know?  You may wonder why it even matters.  One member of the group brought up how he was proud that in his home country, dolphins are now considered “non-human persons,” meaning that while they are not humans, there are legal norms requiring them to be regarded as persons.  Whether or not animals have consciousness is the kind of question that leads to such conclusions; it is a pertinent question, on whose answer animal rights and the proper treatment of animals depend.  Depending on our conclusions, we can justify either the promotion of animal rights or even speciesist behavior towards animals.  Questions like these lead us directly to talking about animal rights.

What about machines?

Do machines have consciousness or could they, theoretically, have it?  Alan Turing came up with a hypothesis regarding consciousness and developed a test.  He posited that if a human could not differentiate between a human’s behavior and a machine’s, the machine could be said to have consciousness.  But is that true?  Is the ability to behave like a human indicative of consciousness?  Is it sufficient?  This question was tossed back and forth a bit in our group.  Some said yes: there is no way to distinguish between a human-like entity and an actual human one, so the best explanation is that such an entity has consciousness.  Others disagreed, saying it’s theoretically possible to give a machine enough information that it could simulate a human-like task without any actual characteristics of consciousness.  In any case, questions like these lead us to discussions of artificial intelligence.

Who cares about all this?  What about human consciousness?

Well, let’s look at it then!

Our exploration of consciousness begins with Plato and his separation of soul and body, but is properly formulated with Descartes, who posited that consciousness existed in another “realm” called the res cogitans, or the realm of thought.  This is called dualism.  Descartes divided consciousness (which he called mind), or mental contents, from physical ones (which he called body).  In other words, he considered the mind and the body separate entities, which instigated a centuries-long philosophical struggle called the mind-body problem that is still going on to this day.  While many scientifically minded philosophers have wished to reduce mental states to physical ones, some physicists believe quantum theory may provide an answer to the dualist problem, tentatively called the quantum mind (which, unfortunately, we didn’t get a chance to talk about… maybe in the closed discussion!).

The other side of the argument is monism, a view first posited by Parmenides but later espoused by Spinoza.  This stance posits that there is no duality or distinction between mind and body; rather, they are derived from the same substance.  Many monists believe that any concept of consciousness as something separate from the physical is incoherent or illusory, and thus non-existent.

In any case, the mind-body problem seeks to determine what relationship there is, if any, between mental and physical processes.

Interestingly, most of the members agreed with dualism more.  It was apparent that monism was the more scientifically inclined view, whereas dualism was more intuitive; it seemed to make more sense on the surface.

Let’s look at dualism.

There are two types:

1. Substance Dualism holds that mind, or mental properties, are distinct from physical ones and, as such, do not adhere to the laws of the physical universe.

2. Property Dualism holds that mental and physical properties belong to one substance (rather than being two separate ones) and that they do adhere to universal physical laws, but that physics cannot be used to explain consciousness or mind.  There are a variety of explanations regarding the relationship between the two kinds of properties.  So, instead of being two different things, they’re just two different ways of describing the same type of thing.

Dualism essentially looks at consciousness as a separate substance from anything physical like the brain or chemicals or anything like that.  For instance, Descartes argued that mental states, such as imagining a drink, had no extension; in other words, they did not take up any space and so could not be measured in terms of height, weight, length, etc.  Physical things, on the other hand, have extension.

The members were kind of divided as to which they agreed with more, but the leanings were definitely towards property dualism.

Let’s look at monism.

There are three main types:

Physicalism holds that mind is matter organized in a specific way.

1. Type Physicalism holds that mental activity is most likely equivalent to neural/electrical activity in the brain.  In other words, the feeling of fear, for example, is the same as whatever neural activity that occurs when we express the feeling of fear.  So a statement like “I’m scared” really means “My neural activity is creating certain emotional and physiological changes in my body that are impelling me to flee.”  It’s a lot easier to say “I’m scared” of course.

2. Functionalism modifies this stance slightly and holds that mental events are simply functions of physical ones and stand in causal relations to other states.  They are much like what I see on a screen that comes from a software program on my computer (my computer being my brain, the software being the faculties or properties of my brain). This particular image I am looking at on the screen is not actually the software itself, but rather the medium through which the software functions.  Whereas a computer would use electricity as a conduit for function, the brain uses neural activity.  It has been suggested that mental properties supervene on physical ones, or that they must be grounded in some kind of physical property.  In other words, they require physical properties to exist, so while they are dependent on physical properties, they are not reducible to them (they are not one and the same).  So, in other words, the physiological changes when we feel scared are the stuff going on in the hardware (the brain), whereas the feeling of fear is what’s happening on the screen (our consciousness) that impels us to act.

3. Epiphenomenalism disagrees with those stances.  It holds that mental states are simply “byproducts” of physical brain states, nothing more.  They have no useful function and do not affect physical states.  Only physical events can have effects on each other.  So in other words, they are like weird “glitches” in the software that don’t mess up the program, but certainly aren’t doing anything useful.

We only got as far as these three types during the discussion, but I included a couple of other related ideas below for you to look at as well.  It certainly raised a lot of discussion, mainly regarding epiphenomenalism.  Some didn’t understand quite what it was or how it worked, and even the most ardent scientist in the group was a bit put off by it.  It does seem a bit grim to consider that all of our conscious experiences are just unnecessary accidents, that our bodies could function just as well without them, but there are pretty solid arguments for why this may be the case.  One book I read about it was called Kluge.  Although I didn’t necessarily agree with all of the author’s conclusions, I recommend the book as an accessible introduction to the view.

In any case, here are a couple more ideas:

  • Idealism is the idea that there are no physical properties and matter is an illusion, so instead only mental ones exist.  In other words, everything we consider “real” is just a projection of our minds.  Whether or not they actually serve a useful function is debatable in this philosophy.
  • Neutral Monism holds that both mental and physical properties are actually derived from the same essential essence, but whether that essence is mental or physical is hard or even impossible to say, or irrelevant.

In a nutshell, monism takes the view that consciousness can be reduced or defined by physical properties and that any distinction between it and physical substances is illusory.  The difficulty with monism is in explaining why or how mental processes appear to be so different than physical ones, or how they were even able to emerge in the first place.

Some Problems with Both Views:

Dualism seems the most common-sense solution to the problem.  After all, don’t physical states feel different from mental ones?  Isn’t conscious experience – like the emotions I experience or what I am thinking – distinct from inanimate matter like rocks and my toe?  We tend to associate those conscious states more with our notion of a “self” than we do with physical properties like our hair or even our organs.  On top of this, physical properties do not seem to have a subjective quality, whereas consciousness does.  For instance, when we burn our fingers and are hit with an emotion, we say that we are feeling that emotion (pain), but when hair grows in my nose or our body releases a certain chemical, we generally don’t say “we” are doing that, but rather that the nose or the brain is.

This brought up a bit of discussion as well: the distinction between our physical properties and our mental ones can be pretty fuzzy, depending on the language we use and what we are talking about.  For example, when we eat a lot, we say both “I am full” and “my stomach is full.”  On the other hand, when we commit a communication faux pas, we will sometimes say “I must be going crazy” but sometimes say “My mind is playing tricks on me.”

We skipped this next part and jumped into higher consciousness, since it was of interest, but I’m going to include this information here as well:

– On the other hand, dualism can’t account for how mental events can cause changes in physical properties, like the physical creation of memories, or how physical changes can result in mental changes, like brain damage affecting personality and behavior.

A type of dualism called interactionist dualism has an answer to this.  It still holds that mind and body must be separate properties because of the extension issue (mental properties can’t be measured), but that mental properties must interact with, or influence or affect, physical ones, since we can detect physical changes based on mental events (like when I think of somebody I don’t like and get angry, my blood pressure increases and my heart rate goes up, etc.).  Mental and physical events interact with each other: My girlfriend sees a spider (physical) and feels a sense of fear (mental), which causes adrenalin to pour into her body and she screams (physical), which I hear and then feel alarmed (mental) and run over to see what happened (physical), and so on.

– One problem with this, though, is that it assumes that all of our conscious thoughts are clear and distinct; meaning we understand perfectly what they are and can distinguish them from other events.  But more and more both scientific and psychological discoveries are casting doubt on this (unconscious motivations, discovery methods, automatic heuristics, cultural customs and habits, neurological errors).

– Another problem is that the idea of cause and effect necessarily implies material impact, or two real or physical things actually contacting or touching each other, and yet if mental properties do not have extension (no height, width, weight, etc.), how can they be said to “impact” physical ones?  And if mental properties are different from physical ones, how can they affect the physical system without going against the law of the conservation of energy (they would add energy to the system without taking any out)?

– Going back to monist ideas, Type Physicalism cannot account for the idiosyncrasies in different organisms’ physical states when experiencing the same or similar subjective phenomenon.  For instance, if we both hear the same word, why is it that I might feel sad at hearing it but you don’t?  It’s the same physical stimulus, so why are our mental sensations of it different?  Or more aptly, when listening to music we both say we enjoy for similar reasons, why are our physical changes unique?  If two or more organisms are affected by the same external stimuli, why are there idiosyncrasies in their physical states?

– Many philosophers say the essence of consciousness is experience, which is necessarily subjective.  But if consciousness is essentially subjective, then how do we know other entities have or don’t have consciousness?  Maybe when I see a snake, I “feel” fear, I have an experience. But how do I know that you do?  How do I know that rocks don’t?  How can we know whether these experiences are the same, or even remotely similar?  This is called the problem of other minds.

Let’s talk about “higher” consciousness

This is also called the collective consciousness, God, or cosmic consciousness.  Generally speaking, this is considered the level of consciousness that a human can reach where he or she realizes reality as it really is, rather than how it may subjectively seem.  This type of reality is sometimes referred to as ultimate reality.  Many believe that evolution has endowed humans with the faculties to achieve this level of reality, but that it requires practice and development, and most people do not put in the concerted effort to achieve it.  One underlying assumption is that people with ordinary consciousness are only partially aware of reality; they are still ignorant of certain truth(s) and they are prone to lower, more impulsive drives and wants.

This generated a bit of discussion as well.  One member asked how it could even be defined.  Wouldn’t it depend on the person using the term?  What makes it better or worse than so-called lower consciousness?  One possible answer was that the less one’s thoughts were in connection with animal impulses and drives, the higher one’s consciousness could be said to be.  Another was that the less one’s thoughts were attached to worldly things (money, possessions, etc.), the higher it could be said to be.  Another argument was that higher consciousness was not necessarily better, but an inevitable outcome of ongoing evolutionary change.  Having higher consciousness was no “better” than being a chimpanzee as opposed to a single-celled organism; it was just a natural outcome of generations of evolution.

We ended the conversation here, but there is one more very important aspect to the scientific and philosophical question of what is consciousness, and I’d like to include it here:

The Hard Problem of Consciousness

There are various formulations of the “hard problem”, a problem that philosopher David Chalmers does not believe can be answered even if we find solutions to the easy problems (how we store information, how we report mental states, how we focus attention, etc.).  Some of these are:

“How is it that some organisms are subjects of experience?”

“Why does awareness of sensory information exist at all?”

“Why do qualia exist?”

“Why is there a subjective component to experience?”

“Why aren’t we philosophical zombies?”

Chalmers argues that it is fundamentally impossible to explain these phenomena by physical means, and that another solution altogether is necessary.  Philosopher Thomas Nagel agrees, stating that since physical events are objective and mental ones subjective, we cannot conflate them.  Philosopher Daniel Dennett disagrees, dismissing the notion that there even is a hard problem.  He speculates that once the easy problems are solved their solutions will offer a viable explanation to all the supposedly “hard” questions, and that there is no need to posit a need for other properties to explain them.  He equates the phenomena of mental events to magic tricks, tricks that the brain plays on us to make it seem as though there is something separate from the physical going on.

Thanks for reading!  If interested, please be sure to check this stuff out:

Susan Greenfield: What is consciousness?

https://www.youtube.com/watch?v=ZuGZhTYnlY4

Jane Goodall: Animal Consciousness

https://www.youtube.com/watch?v=ZNOom3O6Tkg

Sam Harris: Physicalism (vs. Dualism)

https://www.youtube.com/watch?v=Juriylw7B0g

John Searle: Problems with monism and dualism

https://www.youtube.com/watch?v=WFQ0Spu50Oc

Alan Watts: Human and Higher consciousness

https://www.youtube.com/watch?v=gSkDubMfsMA

February Meet-Up: Origins of Language

Every month we center on a particular theme which we discuss twice a month, once in an open group where anyone and everyone can join, and once in a closed group, which allows for a limited number of participants and is for group members only.  This month’s topic was language, particularly the origins of language.  We focused on what language is, how it differs between humans and other animals and how and why a language faculty may have evolved.  So, without further ado:

What is language?

Language is a tough term to pin down.  Essentially, language is the human capacity for acquiring and using complex systems of communication, and a language (like English or Japanese) is any specific example of such a system. When used more generally, it could refer to the cognitive ability (an ability which is related to learning, understanding, thinking about and remembering something) to use systems of complex communication, or to describe the rules that make up these systems (like grammar, for example), or the set of utterances that can be produced (in other words, speech) from those rules. All languages rely on semiosis (a sign process; any form of activity, conduct, or process that involves signs, including the production of meaning) to convey meanings. Linguistics and language often get confused, so as a side note, linguistics is the scientific study of language, rather than the communication system itself.  But linguistics is a discussion for another day.

Are languages restricted to humans only?

To answer this question, let’s talk about some different kinds of language first.  A natural language, also called an ordinary language, is any language which is created naturally (as opposed to a constructed language, which was created deliberately) as the result of our facility (or built-in ability) for language. Any normal human infant is able to learn any natural language without requiring instruction to do so. Both signed and spoken languages are considered natural languages.

Human language is open-ended and productive and based on a dual code, meaning it allows humans to produce infinite speech from finite elements (letters, words, sounds, grammar rules) and to create new words and sentences. Human language is modality-independent, which means it is not dependent on any one type of encoding (such as writing or sound) to be learned or acquired. The symbols and grammatical rules of a language are largely arbitrary, meaning that the system can only be acquired through social interaction. Human language is also unique in being able to refer to abstract concepts (like freedom) and to imagined or hypothetical events (like asking what we would do if we could live forever) as well as events that took place in the past or may happen in the future. It is unique because it has the properties of productivity, recursivity, and displacement.

Productivity, in this sense, refers to our capacity to form new words and grammatical expressions.  Our ability to produce novel sentences is evidence of our high level of productivity.  We don’t only repeat sentences we have picked up before; we can also produce our own sentences which have never been uttered before.

Recursion is the process of embedding items within other items of the same kind in a self-similar way.  When we put a phrase inside another phrase of the same type, as in “the dog that chased the cat that ate the mouse,” that’s recursion.  It is one of the features that lets a finite set of words and rules produce sentences of unbounded length and complexity.

Displacement is the capability of language to communicate about things that are not immediately present spatially or temporally, things that are either not here or are not here now.  So when we talk about things we did yesterday or are going to do tomorrow, or talk about what’s in our house, or abstract ideas, that’s displacement.
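If it helps to see the “finite elements, infinite output” idea concretely, here is a toy sketch in Python (my own illustration, not something from the meet up): a grammar with only a handful of words and a single recursive rule can already generate an unbounded number of novel sentences.

    import random

    NOUNS = ["the dog", "the cat", "the rat"]
    VERBS = ["chased", "saw", "feared"]

    def noun_phrase(depth):
        # A noun phrase is a noun, optionally extended by a relative clause
        # that itself contains another noun phrase (the recursive step).
        np = random.choice(NOUNS)
        if depth > 0 and random.random() < 0.7:
            np += " that " + random.choice(VERBS) + " " + noun_phrase(depth - 1)
        return np

    def sentence(depth=2):
        # Productivity: every call can yield a sentence nobody has produced before.
        return noun_phrase(depth) + " " + random.choice(VERBS) + " " + noun_phrase(depth) + "."

    for _ in range(3):
        print(sentence())
    # e.g. "the dog that chased the cat saw the rat that feared the dog."

Displacement, of course, is not something a toy generator captures; the point is only that recursion plus a finite vocabulary opens up an infinite space of possible sentences.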

How is this different from animal language?

Animal communication can only express a finite number of utterances that are mostly genetically transmitted.  Animals cannot produce novel sounds or symbols like we can, nor can they communicate thoughts or opinions or ideas.  None have been able to learn as many different signs as are known by an average 4-year-old human (and only some other primates and dolphins have been able to do even that much), nor have any acquired the complex grammar of human language. Human language, by contrast, relies entirely on social convention (a socially acceptable way of acting) and learning. Its complex structure allows a much wider range of possible expressions and uses than animal communication.

So where did language come from?

There are many theories about the origins of language, but the prevalent one holds that language started when early hominids’ primate communication gradually changed and they achieved the ability to form a theory of other minds and a shared intentionality. Theory of mind is the ability to attribute mental states to oneself and others as well as understand those states in others. It’s necessary for empathy, which helps us care about each other and work together.  Intentionality is the power of minds to be about something, or to represent things, properties and states of affairs.  So when two people express the same feeling, belief or opinion about something, they are sharing intentionality.

Languages evolve and diversify over time. All languages change as speakers adopt or invent new ways of speaking and pass them on to other members of their speech community (people who all speak the same language, basically). Language change happens at all levels, from the phonological level (the level of sounds, or what we refer to as accents and pronunciation) to the levels of vocabulary, morphology (the parts of words, including roots or even intonations or stresses), syntax (the formation and organization of words and sentences), and discourse. Language change is often looked down on, at first, by native speakers, who often call the changes “decay” or “degradation” or a sign of slipping norms of language usage, but it is natural and inevitable.

There are different theories about the origin of language depending on the assumptions about what language is. Continuity-based theories are based on the idea that language is so complex that it must have evolved from earlier systems of our pre-human ancestors. The opposite viewpoint, discontinuity-based theories, holds that language is such a unique human trait that it cannot be compared to anything found among non-humans, so it must therefore have appeared suddenly in our species. Most scholars agree with continuity-based theories, but they don’t agree on how exactly language developed. Some, like Steven Pinker, see language as being mostly innate and hold its precursors to be animal cognition, whereas others see language as a socially learned tool of communication and think it developed from animal communication, either primate gestural or vocal communication, working in conjunction with social learning.

So why do we have language at all?

One view sees our capacity for language as a mental faculty that allows humans to learn a means of communication and to produce and understand it.  Language is universal to all people and we have the neurological capacity to develop it, so it seems that this faculty is biologically innate. Proponents of this view often argue that this is supported by the fact that children who can access language in their environment acquire it even without instruction.  This view, as you can see, ties more strongly into the continuity-based theories.

Another view sees language as a formal system of signs governed by grammatical rules of combination to communicate meaning.  This is called the formal symbolic system theory. This view stresses that human languages are arbitrary, even man-made, rules and systems that map signs to meanings. It concedes that our capacity for communication is innate, but holds that the formal systems of language are not necessarily so.  Rather than focus on where language comes from historically, its proponents focus on how rules and systems were made.  This view, on the other hand, ties more strongly into the discontinuity-based theories.

Yet another view sees language as a system of cooperation.  Rather than a natural ability, it sees language as a cultural creation used to enable people to express themselves and manipulate their surroundings.  This view argues that while there is a connection between our animal heritage and our capacity for communication, that capacity did not necessarily have to take the form of language; rather, language was derived from a drive to cooperate.  This, too, ties more strongly into the discontinuity-based theories.

Communicative style is the ways that language is used and understood within a particular culture. Communicative style also becomes a way of displaying and constructing group identity. Some would go so far as to say language is, in this way, divisive: Linguistic differences may be a factor in the divisions between social groups (speaking a language with a particular accent may imply membership of an ethnic or social group or status as a second language speaker). These kinds of differences are not part of the linguistic system, but are an important part of how people use language as a social tool for constructing groups. However, many languages such as Spanish or Japanese also have grammatical conventions that signal the social position of the speaker in relation to others through the use of registers. In many languages, there are stylistic or even grammatical differences between the ways men and women speak, between age groups, or between social classes, just as some languages employ different words depending on who is listening.

How do we learn language?

The learning of one’s own native language, typically that of one’s parents, normally occurs spontaneously in early human childhood and is biologically, socially and ecologically driven. A crucial role of this process is the ability of humans from an early age to engage in speech repetition and so quickly acquire a spoken vocabulary from the pronunciation of words spoken around them.

 Stuff to check out:

Theories of origins of language:

http://www.infoplease.com/encyclopedia/society/linguistics-structural-linguistics.html

Steve Pinker on Language:

http://www.youtube.com/watch?v=3-son3EJTrU

Noam Chomsky on Language:

http://www.youtube.com/watch?v=Zg1bHzBoggk

Critical Thinking workshop

Every month we also have a study/discussion group (these are now called learning groups) about an aspect of critical thinking, logic or argumentation. Our first one in October was about informal logical fallacies; the previous one was on confirmation bias.  This month’s group was an overview of critical thinking.  Critical thinking is an enormous topic which encompasses numerous skills and philosophies, and I may consider making a whole separate meet up dedicated to it.  But at the very least I’d like to provide an introduction to it.  First off…

What is critical thinking not?

Critical thinking is not just thinking.  It is not even thinking a lot, nor thinking “deeply.”  One can be contemplative without being a critical thinker.  It is not knowing a lot.  Retaining vast amounts of knowledge can be credited to having a good memory, not being a good thinker.  In fact, a very good number of knowledgeable, even intelligent, people suffer from this misunderstanding: one’s amount and depth of knowledge does not automatically make one a good thinker.  Critical thinking includes skills that go beyond passive acquisition and retention of information.  It is about judgment, but not about being judgmental.  It is about having criteria, not about being a critic.

So what is critical thinking?

It is hard to pin down a precise definition of critical thinking.  First, to establish one thing: thinking is a means of learning, explaining and, basically, communicating.  Critical thinking, essentially, is a reasoned method for determining the truth or falsity of a claim.  It is also, thereby, a method for reaching valid conclusions.  It is a process, an ongoing one.

It is evidence-based, disciplined and rigorous.  It is domain-independent, or domain-general, meaning it can be applied to any and all domains of life.

The purpose of it is to organize and clarify reasoning, as well as to recognize errors and biases in reasoning (both one’s own and others’). Listening to and unequivocally accepting another’s belief or opinion as your own leads to “inherited” opinions: believing simply because someone told you so (such as what our families told us when we were children).  There is nothing wrong with such opinions per se, but those opinions only become strong when they are supported by reason.

It is critical – no pun intended – to the foundations of science and of a democratic, autonomous society.  Most, if not all, of us have the right to and the capacity for decision-making.  The less informed our decisions are, the more precarious their results are. While persuasion itself is not a skill of critical thinking, ideally critical thinking skills make arguments more persuasive.

It utilizes skepticism.  Skepticism is not the instant dismissal of any and all claims until they are irrefutably proven; it is a questioning attitude towards claims or positions, particularly when they are stated as having a factual nature.  It is a tentatively held suspension of judgment, or doubt, until sufficient evidence for a claim is presented.  A critical thinker tends to examine the reasoning as well as possible assumptions and biases behind claims before accepting them.  A critical thinker realizes that the truth value of factual claims is not determined by the emotional impact that accompanies them nor by people’s preferences, but by the strength of the reasoning and evidence.

It does not ignore emotion, but rather validates it. As pointed out in our bias group, emotion is a useful tool as well, but only in certain situations or for certain reasons.  However, regardless of their utility (or lack thereof), we cannot detach ourselves from emotions completely.  Critical thinking is a system through which we can determine whether or not an emotion is justified; whether it should be given credence or not. For example, fear of drinking poison is highly justified; it should be heeded.  Fear of talking to strangers at a party, on the other hand, is not justified and should not be heeded; in fact, one should behave in a way that overcomes it. What this means is that emotions are certainly efficient, but they are not necessarily an effective way of assessing all situations; through the use of reason and critical thinking, we can determine when they are, and this is quite important for decisions which will or may have a massive impact on our future.

It is also useful for knowing what not to do, which we have started to apply by studying logical fallacies and cognitive biases.  It also involves learning how to perform certain skills well, and recognizing when they are done poorly. Critical thinking involves the following, roughly in this order:

  1. Defining
  2. Conceptualization
  3. Listening
  4. Analysis
  5. Inquiry
  6. Examination
  7. Inference
  8. Synthesis

We will talk about each of these individually in a future meet up.

Why should we think critically?

There are more reasons to think critically beyond what has been stated already.  Consider this: we often hold our beliefs and ideas sacred, but are they?  Is there some magical quality about them?  What harm does it do to change them or to realize they were wrong and others are better?  I hold that there is nothing special about them, and that few, if any, of them are essential.  Concretely speaking, if we realized they were wrong we could abandon them or change them indiscriminately and no immediate harm would come to us.  Critical thinkers tend, therefore, not to have “cherished” beliefs.  They are willing to abandon beliefs should better arguments or evidence present themselves.

Conversely, thinking uncritically can lead us to a whole host of problems.  Among them are the possibility that we:

  • Become impulsive and rush to erroneous conclusions
  • Fail to consider the implications of our positions
  • Ignore, miss or distort biases, evidence, information, errors, unjustified assumptions and fallacies in other people’s arguments as well as our own
  • Forget the purpose of the discourse, utilize irrelevant arguments, and/or focus on the trivial
  • Unwittingly hold unrealistic positions
  • Communicate poorly, vaguely or with unwarranted presuppositions or respond to others’ arguments incompetently
  • Confuse and conflate meanings and statements
  • Basically think narrowly, imprecisely, irrationally, simplistically, superficially, egocentrically or in a contradictory manner.  We become passive thinkers, going with whatever pops into our minds and pursuing any thought or desire
  • Become egocentric or sociocentric
  • Most importantly, make poor decisions as a result of all this that affect our and possibly other people’s lives (which is why I think this is so important)

If we are to develop as thinkers, we must learn the art of clarifying thinking, of pinning it down, spelling it out, and giving it a specific meaning.  The whole purpose of thinking at all is communication, in particular discourse: conversation that takes into account the principles of critical thinking.  It is the most peaceful and civilized way of changing minds.  One could argue it’s the only way.

How do we become critical thinkers?

First, we must recognize and adhere to the standards of critical thinking.  Depending on which text you consult or which authority you ask, you will hear different ones.  However, I have put together what seem to be the most prominent.  They are, in order of priority:

  1. Relevance
  2. Clarity
  3. Precision
  4. Accuracy
  5. Depth

Relevance deals with whether or not the arguments are actually related to the topic at hand. For example, bringing up a politician’s marital status when discussing their foreign policy is most likely not relevant.

Clarity has to do with whether or not the arguments are understood by those listening to them. Life is concrete, not abstract; we don’t live life abstractly, so offering clear, concrete examples or explanations is important.  For example, a statement like “Hope is good” is less clear than a statement like “People who express optimism are less likely to suffer from depression.”

Precision is about detail and specificity.  It is contextual; it is dependent on the particular issue at hand.  For example, if I am ill, I don’t want my doctor saying “You’re sick”; I want him to tell me what, specifically, I have.  Context determines the level of precision required.  For example, if I’m taking someone’s temperature I want to know it to the tenth of a degree (36.8, for example), but if I’m measuring lead in drinking water, I want to know the concentration down to parts per million or finer; I need a much more precise measurement.

Accuracy is assessing whether or not the information is true, or accurate.  For example, I could say “I am 42 meters, 16 centimeters tall” and that would be clear and precise, but hardly accurate.

Depth is related to how well the argument or information deals with the complexities of a given issue.  Not all issues require a lot of depth to deal with.  When giving an explanation for why you are late, “I didn’t hear my alarm” is sufficiently deep.  However, when trying to understand why someone killed themselves, attributing it to “drugs” may not be.  The relevance of depth is in direct relation to the complexity of the issue.  The less complex, the less relevant depth is.  You cannot adequately deal with complex questions with superficial or shallow answers or reasoning.

Some examples of questions to ask yourself or others to ensure these standards, again in order:

  1. Does it have any bearing on the issue/question at hand? (relevance)
  2. What exactly is meant?  Can examples be provided? (clarity)
  3. How much/many?  How can we measure that? (precision)
  4. How do you/I know that?  How can we test, check or observe that? (accuracy)
  5. Does it completely deal with the intricacies or complexity of the issue/question? (depth)

Richard Paul and Linda Elder, two experts on practical critical thinking, talk about six stages of thinkers in their book Critical Thinking: Tools for Taking Charge of Your Learning. It’s a rather lengthy description, so I’m going to try and sum it up here:

Stage One: The Unreflective Thinker (we are unaware of significant problems in our thinking)

Stage Two: The Challenged Thinker (we become aware of problems in our thinking)

Stage Three: The Beginning Thinker (we try to improve but without regular practice)

Stage Four: The Practicing Thinker (we recognize the necessity of regular practice)

Stage Five: The Advanced Thinker (we advance in accordance with our practice)

Stage Six: The Master Thinker (skilled & insightful thinking become second nature to us)

To elaborate:

Stage One: Unreflective Thinkers

We start unaware of the role thinking plays in our lives, how it helps us or causes problems for us.  We assume what we believe and think of is true, we assume we are, at least compared to others, unbiased, objective and rational.  We assume experience, feelings and common sense are sufficient and base our beliefs and positions on them.  Basically, if it feels good, it must be right/true.

Our intuition may be quite competent at this stage, or we may even be good critical thinkers in particular domains.  But we have no sense of “metacognition”, the ability to think about our own thinking.

Stage Two: The Challenged Thinker

We stop denying we have a problem; we admit that we are weak thinkers and largely ignorant. We acknowledge and accept that, as Einstein once said, one cannot solve a problem at the level on which it was created.  We start to recognize our own fallacies and assumptions.  We begin to properly utilize information and concepts, make inferences, understand implications, define terms and problems and admit to our own fallibility.  We start to understand the tremendous long-term challenge before us.  It’s easy to retreat to the first stage at this point.

Stage Three: The Beginning Thinker

We embark on this challenge.  We are, in a sense, “beginning” critical thinkers.  We begin to take thinking seriously.  We simultaneously see the vast expanse of knowledge that is critical thinking but also feel invigorated by our new quest.  We begin to be more perceptive of our own flawed reasoning and biases.  We begin, maybe instinctively, to analyze the logic of situations and ideas.  We begin to question ourselves more.  We start to prioritize not only the information itself, but its accuracy and relevance.  We become aware of our own interpretations and vigilant of them. We pay attention to the meanings of words and the implications of our reasoning.  We begin to consider and respect alternatives.  We begin, most importantly, to apply standards to our thinking.  Our values shift.  We value reasoning, intellectual honesty and rigor more.  As tempting as it may be to give up on difficult problems, in order to move past this stage, we must change our values, lest we assume that we are “good enough” thinkers, or that we cannot improve as thinkers.

Stage Four: The Practicing Thinker

We begin to develop a systematic plan for how to think. We become intellectually organized and rigorous about changing our thinking.  We commit to this change.

Stage Five: The Advanced Thinker

Our regimen for thinking starts to pay off.  We can routinely identify and find solutions for problems in our thinking.  We are aware of the particular domains that need the most work and strive to improve them.  We find and begin to reduce bias in our thoughts.  We no longer have an ego investment in being right or winning arguments, but rather in learning and improving our thinking.  We begin to enjoy constructive criticism and entering others’ perspectives.  We find the process satisfying and fulfilling.  We continue to look for ways in which we need to improve.  We also, sadly, begin to see how egocentricity and bias can be destructive.  We acknowledge their innateness, but reject their necessity or benefit.  We regularly monitor and assess our own thinking as well as others’.

Stage Six: The Master Thinker

We have developed a systematic strategy for thinking which we are constantly improving.  The basic skills and standards of thinking are deeply internalized and have become intuitive.  Thoughts are now much more rigorous, objective and careful.  We are intellectually humble, honest, persevering, responsible and autonomous.

How do we further improve our critical thinking?

To move past Stage One we must first recognize and acknowledge that we are not perfect thinkers.  It’s that simple.  But many people do not even do this.  Our recognition must also be specific.  How is your thinking flawed?  What specific biases do you have?  What specific fallacies are you guilty of committing?  We continue to improve by practicing, such as we did during the meet up.  We assess the clarity of the questions and statements presented to us.  We ask questions for more clarification.  We even offer ways for others to clarify their points to us.  We assess the relevance of what is being said or asked.  Is it tangential?  Does it pertain, perhaps in a way I hadn’t recognized earlier?  We develop an intuition for irrationality and illogic in both our own and others’ arguments.  We strive to gather and assess all the relevant data.  We work to recognize and break down our own biases and prejudices.

Can you be more specific?

We tried out a couple exercises during the meet up that you can also try to do on your own or, preferably, with a partner or partners.  Here they are:

1. Analyze social norms

We can analyze the behavior that is expected of us in our social groups.  What is encouraged and discouraged?  What are we expected to believe and agree on, and what are we expected to deny, reject or disagree with?  Do you agree with those norms?  Why do you think we have them?

2. Tour Guide for an Alien

Pretend that you have been assigned the task of conducting a tour for aliens who are visiting earth and observing human life. You’re riding along in a blimp, and you float over a professional baseball stadium. One of your aliens looks down and becomes very confused, so you tell him that there is a game going on.

Try to answer the following questions for him.

1. What is a game?  What is a team?

2. Why are there no female players?

3. Why do people get so passionate watching other people play games?

4. Why can’t the people in the seats just go down on the field and join in?

Half of the group can take on the role of the incredulous and highly skeptical aliens, while the other half takes the role of the tour guides.

3. Consider a complex problem

Let’s consider a complex issue, an issue with some depth.  Let’s analyze the elements of this issue.  First, why is this issue important?  Why should anyone care about it?  What is/are the problem(s)?  What are our underlying assumptions about it?  Are they fair and rational ones?  Do others have any objections to them?  What questions do we have about our own position or the opposition’s?  What are we trying to solve?  What information do we need in order to do this?  How can we get it if we don’t already have it?  Be as specific as possible.  Our purpose is not necessarily to reach a conclusion; we probably can’t do that in one day.  It is, instead, to calmly and deliberately reason about the problem.

Some things you can do at home, by yourself:

1. Ask yourself some fundamental questions.  What are your values and beliefs, particularly ones that you or others haven’t questioned?  What do you take for granted or assume is common sense?  Identify them.  If you can, try to question them: Why do you believe those things?  Do you have reasons?  Can you rationally justify them?

2. A problem a day: Find some time each day, even just 5-10 minutes, and consider a problem, maybe an issue you’re not sure about, one you’ve found some opposition to, or one you realized you hadn’t fully thought out.  What exactly is the problem?  What relevant questions could you ask an opponent or yourself to determine the solution?  How does it relate to your assumptions, beliefs or ideas?  What are the implications of your ideas?  Where could you find some helpful information?  If you have time, look for it.  Share it at the next group meet up.

3. Keep a critical thinking journal.  Write situations or issues that are emotionally significant to you, maybe even hot button issues.  Keep it to one issue per entry.  Describe your mental reactions to this situation.  What do you think about it?  What are your beliefs and opinions in relation to it?  Be specific.  After, analyze them: how objective are they?  What are the implications of your positions?  Are you left with any questions or problems?

Your attitude is also important.  You must be willing to figure out the answers for yourself, rather than demanding that someone provide them for you.  You must be willing to exercise your mental energy in pursuit of a solution rather than indolently relying on feelings.  You must be willing to review assumptions, claims, and information over and over and over again.  You must be willing to change your position, perhaps several times, should better arguments or evidence present themselves.  You must be willing, most importantly, to be wrong and to be criticized (hopefully in an objective way) for it again and again.  You must be open-minded and prudent.  You must desire to be well-informed and knowledgeable.  You must be willing to make unbiased and objective judgments on the credibility of sources.  You must want to ask questions.

To start with, we are going to use Robert Ennis’ three underlying strategies, which he calls “RRA”.

Reflection, Reasons and Alternatives:

1. Reflection: stop and think rather than make snap judgments or go with the first idea that pops in your mind.  Give yourself some time and space to formulate your position, your counter argument or questions.

2. Reasons: Consider and question reasons.  Ask “How do you know?” or “What reason(s) do you have to believe that?” or “What are your sources?”

3. Alternatives: Remain alert for possible alternatives.  Offer them by asking “What about…?” or “Is it possible that…?”

Websites:

http://www.criticalthinking.net/index.html

http://philosophy.hku.hk/think/

http://timvangelder.com/


January Meet Up: Education in Japan

It’s been a while since I’ve updated this blog; my apologies.  I just got back from a great vacation, but now it’s time to think critically again!

Every month we center on a particular theme which we discuss twice a month, once in an open group where anyone and everyone can join, and once in a closed group, which allows for a limited number of participants and is for group members only.  This month’s topic was education, particularly at the primary and secondary levels in Japan.  We focused on cram schools, differences between Japanese and Western education and bullying.  We also talked a bit about the history of education in Japan, philosophy of education and people’s personal experiences either attending or working at Japanese schools.

 What is education?

Education is the passing of knowledge or skills from one generation to the next through some form of instruction (generally teaching or training). Education usually takes place under the guidance of others, but may also be autodidactic. It can take place in formal or informal settings.

What is the history of education in Japan?

When the Tokugawa period began, few common people in Japan could read or write, but by the period’s end learning had become widespread.  The period began around 1600 and ended in 1868.  During the Tokugawa period the role of many of the samurai changed from warrior to government bureaucrat, and as a consequence their formal education and their literacy increased proportionally. Traditional samurai curricula for elites stressed morality, the martial arts and the Confucian classics. Arithmetic and calligraphy were also studied. Education of commoners was generally practically oriented, providing the basic 3 Rs (reading, writing and arithmetic), calligraphy and use of the abacus. By the 1860s, 40-50% of Japanese boys, and 15% of the girls, had some schooling outside the home. These rates were comparable to those of major European nations at the time.

The Meiji period facilitated Japan’s transition from feudal society to modern nation, paying close attention to Western science, technology and educational methods. Reformers set Japan on a rapid course of modernization, with a public education system. The Iwakura Mission was sent abroad to study the education systems of leading Western countries. Elementary school enrollment climbed from about 40 or 50 percent of the school-age population in the 1870s to more than 90 percent by 1900, despite strong public protest, especially against school fees. After 1870, school textbooks based on Confucianism were replaced by westernized texts. By the 1890s, however, a reaction set in and a more authoritarian approach was imposed. Traditional Confucian and Shinto precepts were again stressed, especially those concerning the hierarchical nature of human relations, service to the new state, the pursuit of learning, and morality. In the early 20th century, education at the primary level was egalitarian and virtually universal, but at higher levels it was highly selective and elitist.

Occupation policy makers and the United States Education Mission, set up in 1946, made a number of changes aimed at democratizing Japanese education: instituting the six-three-three grade structure (six years of elementary school, three of lower-secondary school, and three of upper-secondary school) and extending compulsory schooling to nine years. They replaced the prewar system of higher-secondary schools with comprehensive upper-secondary schools (high schools). Curricula and textbooks were revised, the nationalistic morals course was abolished and replaced with social studies, locally elected school boards were introduced, and teachers’ unions were established. After the restoration of full national sovereignty in 1952, Japan immediately began to modify some of these changes to reflect Japanese ideas about education and educational administration. The postwar Ministry of Education regained a great deal of power. A course in moral education was reinstituted in modified form, despite substantial initial concern that it would lead to a renewal of heightened nationalism.

What is the Japanese education system like now?

Japanese education has been run as a nationwide standardized system under the full control of the Ministry of Education. The only alternative is private schools, which have more freedom to offer different curricula, including the choice of textbooks (public schools can use only government-approved textbooks) and foreign languages. However, almost all of these private schools require students to take an entrance examination and pay high tuition. Japan has 100% enrollment in the compulsory grades and near-zero illiteracy. High school enrollment is over 96% nationwide and nearly 100% in the cities, even though high school is not compulsory. The high school drop-out rate is about 2% but has been increasing. Almost half of high school graduates go on to university or junior college. The average school day is 6 hours, one of the longest in the world, and vacations are 6 weeks in the summer and about 2 weeks each for winter and spring breaks. There is often homework over these vacations.

According to the material I encountered, elementary school students spend a lot of time on music, art and physical education. In 1959 students started taking moral lessons again, as part of the holistic education that is seen as the main task of the elementary school. However, a number of the Japanese attendees said that their personal experiences were different.  For example, they recalled spending more time on academic subjects and not an especially large amount of time on physical education, or they didn’t recall taking any classes on morals.

Also, even though the sources I read said that the middle school and high school curricula still include music, art, physical education, field trips, clubs and home room time, the Japanese attendees said these things had diminished significantly and been replaced with more academic subjects and learning: Japanese, mathematics, social studies, science, and English. The pace is quick and intense and instruction is structured, fact-heavy and routine-based, because teachers have to cover a lot of ground in preparation for high school entrance examinations. Hierarchical teacher-peer and senior-to-junior relationships, as well as highly organized, disciplined and hierarchical work environments such as the various established student committees, are observed at middle schools.

There is some evidence that teachers feel it is their duty to develop children “holistically”, focusing on health, nutrition, sleep, manners and so on.  Again, however, a number of the Japanese people who attended stated that not all of their teachers focused on these things. Students are also taught how to speak politely and how to talk to their teachers and peers appropriately (in Japanese, the appropriate level of politeness differs depending on who is being addressed).

Is it a good system?  Some critics say that since creativity and critical thinking are not developed, little learning actually occurs. The education system was designed in an era when most people would finish high school and work in factories, either in management or labor, which was fine when Japan was industrialized and mass production and consumption drove the economy. But these critics say the world is becoming a post-consumerist, global society where creative ideas and solutions are increasingly important, and Japan does not seem to be adapting to, or possibly even understanding, this.

As I perused the internet for critical insights into possible problems in the Japanese education system, I came across the following examples:
1. Lack of competition
Japanese education is often rigid and uniform. For materials to be used at a public school, they must be approved by the Ministry of Education.  Anecdotal evidence suggests that while this may be the case in theory, a lot of teachers actually apply diverse methods in practice, unbeknownst to the government.  In any case, the diversity of school books and other materials is limited, and there is little room for developing new educational materials and methods.

2. Exam wars

High school, junior high school and, increasingly, even elementary school students are spending more time at cram schools and coming home later. A survey has shown that 27% of elementary school students and 64% of junior high school children feel fatigue in their daily lives. Critics argue that these examination wars prevent children from growing up with sound minds, which makes the future of Japan look gloomy.  Many people worry that students are thereby forfeiting time needed for play, leisure and the natural development of social skills.
3. The risks of nationally unified education

Since a government agency decides educational content, if the agency makes a mistake, all schools are forced to go along with it.  Such a system is also more susceptible to national indoctrination than a more diversified one.
4. Japanese education rejects individual differences

In the United States, students who achieve excellent results in a subject can frequently progress faster or skip to the next grade; the absence of a national curriculum allows such flexibility. No educational theory or educational psychology argues that every child at each grade develops at the same speed.  For instance, none of the Japanese students had heard of anything like AP classes or skipping grades, as we have in the U.S.
5. An educational system that restricts freedom of thought and education

The description and interpretation of history in school books has long been argued over in Japan. Strictly speaking, there are about 120 million Japanese nationals, and accordingly there could be just as many historical views, since all of them were born at different times in different environments. Today, Japanese schools nationwide teach a unified historical view.  However, one of the Japanese people who attended claimed that she was taught to feel guilty for what Japan had done in history, while another claimed she was not taught to feel this way about it.

So what are these “cram schools”?

A lot of students nowadays go to cram schools, called “juku”. These are specialized schools whose ostensible purpose is to help students prepare for entrance exams to get into preferred or more prestigious schools.  According to the Japanese members of our group, these tests are quite strict: students may select only one school and have only one chance to pass its entrance exam; otherwise the next school they attend will be decided for them.  Japan even has entrance exams for prestigious private kindergartens.  Some Japanese parents are eager to send their children to such kindergartens, which are associated with prestigious universities and in most cases guarantee that the students can go on all the way to university. There was a time in Japan when people relied on private tutors, but private tutors could not cope with the intensifying entrance examination competition.

Many students go to juku to prepare themselves to pass entrance examinations. These exams weed out applicants through rote-learning-style written tests, and special training is required; the standard education received at school alone is not enough to survive the examination war, and juku makes the difference. It is not unusual to see children going to juku 2-3 hours a day after school, 3-4 days a week. A fiscal 1993 Ministry of Education study found that 24% of elementary school children, 50% of junior high school students and 60% of high school students were going to juku. A diploma from a first-rate university is one of the important requirements for quick promotion at work, which is what lies behind this entire obsession. While cram schools are ostensibly for passing these exams, the Japanese people who attended the meeting said they had attended cram school either as an opportunity to make friends or because they felt pressured to do so since their friends were also going.  It has, apparently, become quite the norm to attend at least some cram school, which is not surprising, considering more than half of high school students do so.

Are cram schools good or bad?

Anne Conduit, author of “Educating Andy. The Experience of a Foreign Family in the Japanese Elementary School System”, seems to think it is the combination of the primary school and the cram school that produces such high achievement in mathematics for the Japanese. Monbusho (the Education Ministry), however, seems to think differently. For many parents and students, it seems, cram schools are a necessary part of life. There is disagreement as to whether cram schools are serving an educational need or are responsible for manufacturing such a need.  Whichever the case may be, cram school attendance is on the rise, despite the Ministry of Education’s best efforts.

There are ambivalent attitudes towards the commercial nature of cram schools as well.  Many feel that their profit-driven motives are detrimental to any real educational aims.  The necessarily high costs have also driven a wedge between haves and have-nots.  Proponents of cram schools counter that if juku did not produce results that parents and students were happy with, they would lose profits; by that logic they must be producing educationally beneficial results, or their profits would inevitably drop. The results are easy to measure, since they depend on how many graduates pass the examinations for private schools. The profit motive, in other words, provides an incentive to create an atmosphere in which students want to learn.  Proponents also point to the rise of juku as an example of Japanese success, a reflection of a system of meritocratic advancement.

Critics, however, also point out that cram school forces children to surrender their childhood to an adult-like obsession with status and achievement. Others say juku cut into a child’s free time to play and develop social skills. It is not healthy to become completely caught up in competition and status at such a young age.  The exam war and the intense preparation leading up to entrance exams are said to be the chief cause of stress for most middle-class children. Children suffer from study stress, from bullying at school, from the effects of kireru (a sudden, unexpected explosion of hostility or even violence), as well as from social withdrawal: “acting in” rather than acting out.

Ijime (Bullying)

Ijime, or bullying, has been the most publicly discussed educational problem of recent decades, especially since a wave of suicides beginning in 1994: 11 cases over an 18-month period. Nationwide, 60,096 incidents of bullying were reported in 1995. Legal affairs bureaus took up nearly 4,000 cases of bullying in 2012. The National Police Agency fully investigated 260 cases of school bullying that year, double the number in 2011 and the highest in 25 years. The report said 511 students were arrested or taken into custody for bullying, more than twice as many as the year before. The results of a 2004 survey showed that about 55% of both girls and boys replied that whenever they witnessed bullying, they pretended not to notice.

According to investigations by MOE’s specialist research group on bullying, 12% of students had been bullied and 17% had inflicted bullying on others. According to this study, any student could be a target of bullying, and bullying occurs among friends and ordinary classmates.  Around half of home room teachers incorrectly thought that their classes had no incidence of bullying. Bullied children were very unlikely to tell their teachers what was happening: only 40% of the bullied thought their teachers knew, and 30-50% thought their parents were unaware. About 80% of parents did not know that their own children were bullies.

Japanese students are only half as likely to intervene in bullying as their counterparts in other countries (only 1 in 5 said they would). Japanese youth are also more ambivalent about the nature of bullying: only 2 out of 3 said you shouldn’t bully. 10.7% of Japanese boys and 3.8% of girls surveyed said they would join in the bullying, compared to 5.2% and 2.6% of their respective counterparts in the US. Moving up the grades, bullying tends to change from exclusion to violence. Bullying escalates at middle school, with more incidents than at elementary school, while the opposite trend is observed in other countries.

Some 80% of bullying among school students in Japan qualifies as “collective” violence, meaning entire classrooms vs. a single victim, and 90% of the cases are considered ongoing, lasting more than a week.  Some examples are pretty horrific. One student was taunted, then beaten, then forced to shoplift items for the bullies, and eventually forced to eat dead bees over a period of months. That student sparked a recent national outcry on bullying when he committed suicide at the age of 13. Teachers at the school were aware of the problem, but had only responded with a verbal warning.

Thoughts:

  • While Japanese education appears to be superior by quantitative measures, is it really?  Are the statistics that unusual for education in any developed country?  And are they sufficient grounds for pointing to the Japanese education system as a model of superior education?  What about qualitative measurements?
  • To what extent does Japan’s collectivism influence its educational system and vice versa?  Is holistic education more common in collectivist countries than in individualist ones?  Most of the members who joined this discussion were Western; are we simply biased into thinking that our more progressive educational systems are “superior”?
  • In relation to that, what exactly should a teacher’s role be?  Students at the primary and secondary level arguably spend most of their time at school; should teachers be responsible for teaching more than just academic knowledge?  Should they be responsible for educating a child holistically?  Isn’t it almost inevitable, given how much time teachers spend with children, that these lessons will be taught, inadvertently or otherwise?
  • Are cram schools really to “blame” for any educational problems, or are the problems more central to educational systems in Japan or even societal issues?
  • While the data seem to indicate that Japanese children’s reported attitudes towards bullying are not as enlightened as those of children in other societies, does this necessarily tell us anything about real life?  Is actual bullying any worse than it is in other countries?  Or is it that educational problems in other countries are much more serious (school shootings, lack of attendance, teen pregnancy, drop-out rates, etc.), so they do not have the resources to focus on bullying like Japan – a country which does not suffer from these more severe issues – does?

Further reading:

Juku

http://www.japantimes.co.jp/community/2013/03/05/issues/juku-an-unnecessary-evil-or-vital-steppingstone-to-success/#.UoYkxFaCjIU

Bullying:

http://thisjapaneselife.org/2013/06/12/japan-ijime-bullies/

http://japandailypress.com/tag/bullying/

Problems with Japanese education:

http://www.ronperrier.net/2013/05/04/problems-with-the-japanese-education-system/

December learning groups: Confirmation Bias

Every month we also have a study/discussion group (now called learning groups) about an aspect of critical thinking, logic or argumentation. Our first one, in October, was about informal logical fallacies; the previous one was on evidence.  Initially this one was going to be on bias in general, but as usual, I discovered the scope to be much too wide to fit into just one meet up.  In fact, all told it may take about a year to cover. So I decided to narrow it down to confirmation bias.

But first, let’s look at what bias is, exactly.

Bias

We all work within a subjective social reality, a way of looking at the social world from our personal perspective.  But it is potentially rife with distortion, as well as inaccurate judgments and interpretations; in other words, irrationality.

Bias is a subjective and often flawed desire or tendency to hold a particular perspective or outlook, often by dismissing or denying other points of view.  It is generally held as a stance towards a particular object: an individual person (a boss, for example), an individual non-human object (black cats, say) or even a group of people (a nationality or even a gender).  There are, as I discovered, several types of biases and numerous examples of each type.  Some affect judgment or decision-making (one of which we will be covering today); these are called cognitive biases.  Others affect our social behavior, others our memory.  Some biases exist externally as well, such as statistical bias and media bias.

Biases are not necessarily always detrimental.  They lead to more efficient – if sometimes less effective – decisions. They also enable us to be more proactive, rather than wallowing in perpetual over-analysis.

So why do people have bias?

Quite frankly, it would seem that we don’t have much choice:

  1. Our rationality is bounded, meaning we have cognitive limitations: a finite amount of time within which to acquire a finite amount of information and to make decisions.  Often our decisions need to be quick, and a deliberate and analytical process is unproductive or even unfeasible.  We often seek satisficing (“good enough”) decisions rather than optimal ones.
  2. Most of the time evidence is neither simple nor clear-cut; it is often complex, ambiguous, confusing or even contradictory.  Again, because of bounded rationality, we can’t possibly weigh all the evidence efficiently, much less objectively.
  3. We are biased for our own sanity: we are bombarded with stimuli every day, particularly in modern society, much too much to take in completely, at least at a conscious level. We have developed mental shortcuts called heuristics, which we will talk about in a future meet up, to compensate.
  4. It pays to be selectively perceptive.  There are intrinsic social costs to being (or appearing to be) wrong about your beliefs, as opposed to holding objectively correct ones.  Being wrong makes us look, validly or not, foolish, unintelligent, gullible or dishonest.  It discredits us as potential leaders, thus lowering our status.  It ostracizes us from groups we have identified with.  For this reason, we benefit from forgetting or ignoring stimuli that are emotionally discomforting or that contradict our paradigms, and we focus on the information or stimuli that confirm our current paradigm.
  5. Unfortunately it is more or less reflexive: confirmation bias is often the result of the Semmelweis reflex, a tendency to instinctively reject any evidence or knowledge that contradicts our own norms, beliefs or paradigms. There is reason to think that an admission of error can be socially costly: you lose face, you lose reputation and trustworthiness, and so on.
  6. On the other hand, self-verification and self-enhancement are to our advantage. For this reason we tend to be one-sided, looking for evidence or arguments that bolster our ideas rather than ones that may contradict them, even though the latter might lead us to a more likely conclusion. This leads to presumptuous and loaded questions and inquiry, as well as fallacious appeals and other logically flawed thought processes.
  7. Wishful thinking is a concept based on the “Pollyanna Principle”, which dictates that if a conclusion is pleasant it is therefore preferable to an unpleasant one, no matter how cogent or sound the unpleasant one may be.  Basically, it’s the idea that if we simply wish something were true, that should be sufficient to make it true, and anything that contradicts it is to be dismissed, ignored or ridiculed.

Another related bias which helps explain confirmation bias is called subjective validation: people consider a piece of information more valid if it has personal significance for them.  So, for instance, a superstitious person feels validated seeing a correlation between getting fired and the date being the 13th (and consequently relieves himself of personal responsibility), whereas another person would not see the same validation.  Related is the Forer effect, the tendency to accept vague, positive statements as accurate descriptions of ourselves, which psychics, mentalists, cold readers and other scammers exploit.

  • Example: This was shown using a fictional child custody case. Subjects read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take him or her away for long periods. When asked, “Which parent should have custody of the child?” the subjects looked for positive attributes and a majority chose Parent B. However, when the question was, “Which parent should be denied custody of the child?” they looked for negative attributes, but again a majority answered Parent B, implying that Parent A should have custody. (This could be a good reason why it’s beneficial to ask positive questions rather than negative ones, as opposed to the act of asking positive questions being “idealistic” or “naive”)

So what is confirmation bias?

We are going to focus on cognitive biases, specifically judgment and decision-making biases, starting with confirmation bias, and moving on to illusory biases which make us look at the world in an overly positive or negative way, as well as attentional biases, probability biases and what I call “comfort” biases, biases that motivate us to retain a status quo.

Confirmation bias is a tendency of people to favor information that confirms their beliefs or hypotheses.  It may be a bias in collecting or searching for information (we tend to read books or magazines or check out websites that go along with our own views), in remembering certain information (we tend to remember information that supports our views better than information that contradicts it), or even in paying attention to or accepting certain information (when people talk to us we tend to pick out the information that confirms positions we hold).  It can lead a person to interpret ambiguous information as favorable towards their position.  For instance, when disaster strikes an atheist might say “See, a loving God wouldn’t do that” whereas a theist might say “See, that’s God showing his disapproval.”  We tend to interpret information in a biased way when we have a strong opinion one way or another. Our standards are more lenient for evidence that supports our beliefs, and stricter for opposing evidence. With regard to opposing evidence, this is also called “disconfirmation bias”.

  • Example: A team at Stanford University ran an experiment with subjects who felt strongly about capital punishment, with half in favor and half against. Each of these subjects read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of each study’s procedure and had to rate how well-conducted and convincing that research was. In fact, the studies were fictional: half the subjects were told that one kind of study supported the deterrent effect and the other undermined it, while for the other subjects the conclusions were swapped. The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Subjects described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, “The research didn’t cover a long enough period of time”, while an opponent’s comment on the same study said, “No strong evidence to contradict the researchers has been presented”. In a separate study on political partisans, subjects made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As those subjects evaluated contradictory statements by their favored candidate, the emotional centers of their brains were aroused; this did not happen with statements by other figures.

Yeah… so what?

Well, for one thing, confirmation bias can lead to attitude polarization, the phenomenon by which people’s attitudes drift further and further apart from those who disagree with them; to belief perseverance, the persistence of a belief even after it has been demonstrated to be false; and to illusory correlation, falsely seeing correlations where there actually are none.  Conspiracy theorists are a good example of attitude polarization and belief perseverance at work: even if they find contradictory evidence, they chalk it up to being part of the conspiracy, and feel that much more assured that their beliefs are true.  They are also a good example of illusory correlation, seeing signs that the Illuminati are at work in the supposed “symbols” and gestures that celebrities exhibit.

  • Example of attitude polarization: A study was done at Stanford in which subjects with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. Another example: researchers measured the attitudes of their subjects towards controversial issues such as gun control before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read from a list prepared by the experimenters. For example, they could read the National Rifle Association’s and the Brady Anti-Handgun Coalition’s arguments on gun control. Subjects were more likely to read arguments that supported their existing attitudes.
  • The belief perseverance effect has been shown by a series of experiments using what is called the “debriefing paradigm”: subjects read fake evidence for a hypothesis, their attitude change is measured, and then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level. A typical finding is that at least some of the initial belief remains even after a full debrief. In one experiment, subjects had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.

Confirmation bias can also lead us to see relationships where none exist. This is called illusory correlation or illusory association.

  • Example: A study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero. People rely heavily on the number of positive-positive cases when judging correlation: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).
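To make the arithmetic concrete, here is a minimal sketch in Python. The counts are my own illustration, not data from the study: pain is equally likely whatever the weather, so the correlation over the 2x2 table is exactly zero, even though the “pain on a bad-weather day” cell is the biggest and most memorable one.

```python
import math

# Hypothetical counts for 100 days of a pain diary (illustrative only,
# not data from the arthritis study described above).
a = 42   # days with pain AND bad weather  <- the cell people fixate on
b = 28   # days with pain AND good weather
c = 18   # days with no pain AND bad weather
d = 12   # days with no pain AND good weather

# Phi coefficient: the standard correlation measure for a 2x2 table.
phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print("correlation:", phi)                      # 0.0

# Pain is exactly as likely on bad-weather days as on good-weather days,
# so there is no association, despite the large pain-and-bad-weather cell.
print("P(pain | bad weather) =", a / (a + c))   # 0.7
print("P(pain | good weather) =", b / (b + d))  # 0.7
```

Judging only by the biggest cell, the link feels obvious; computing over all four cells shows there is none.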

Confirmation bias can also lead to the “backfire effect”; when people’s confirmation bias is quite strong they will tend to be more repelled by contradictory evidence than someone who is more objective.  In other words, contradictory evidence makes them more adamant about their current beliefs!  This effect explains the behavior of the conspiracy theorist, as well.

In more practical terms, it can lead to bad financial decisions; it can result in overconfidence in one’s decisions, which leads one to ignore evidence that those decisions may be bad ones.  It can inhibit medical advances when professionals are certain that their current knowledge is optimal.  It is associated with depression, leading depressed individuals to seek out and confirm evidence that fits and reinforces their negative paradigms.  It can lead to a tendency to believe in the paranormal, such as psychic readings: if one wants to believe in such phenomena, he or she will make a cognitive effort to make connections between himself or herself and the reading.  It can also exacerbate or extend conflict, whether between individuals or even nations, when both sides are certain of the veracity of their side of the argument.

The underlying idea is that, optimally, we want to make rational rather than irrational decisions, or at least inject as much rationality as we can into our judgments and decisions.  While it may be impossible to eliminate emotions from decisions, especially those which affect us personally, ideally we want our decisions to be based on an optimal, formal process, particularly decisions that are going to impact us and those around us most strongly and for the longest term.  Emotions tend to lead towards decisions that are detrimental in the long run.  The more intense and immediate the anticipated results and the emotions themselves are, the more impact we feel they have.  We tend to focus on short-term effects and anticipated negative emotions in order to alleviate our present emotional state or the currently perceived negative impact, rather than look at the objective long-term results of our decisions. In these cases, our desires or fears override our reasoning, and our beliefs or actions suffer.

Fear and sadness tend to lead to irrational pessimism; anger tends to lead to overly quick and insufficiently analytical decisions.  Stress generated from emotional upset can add to cognitive “load”, which makes it difficult to remain rigorous when making decisions.  Also, fear of potential regret or disappointment in the future can negatively affect a decision in the present. Emotions can affect not only trivial decisions but major financial or even medical ones.  Neuroscience experiments have shown how emotion and cognition, which involve different areas of the human brain, interfere with each other in the decision-making process, often resulting in a primacy of emotion over reasoning.  Biases are not only detrimental on such a small scale: consider that many social institutions, like courts, rely on individuals to make rational judgments, and that people in leadership positions are susceptible to bias as well.

Examples:

1. An investor who imagines losing a small amount of money even after a big gain will generally focus with disappointment on the lost investment, rather than with pleasure on the overall amount still owned.

2. A dieter who anticipates losing two pounds may imagine feeling pleasure even though those two pounds are a very small percentage of what needs to be lost overall.

3. Game participants who could win $1000 and end up with nothing base their disappointment on the loss of the hoped-for prize, rather than on the fact that they have no less money than they had when they began the game.

4. A fear of flying experienced while deciding how to travel may lead a person to choose driving, even though air safety statistics show air travel to be far less likely to present a danger. The fear may be enhanced by the vividness of the mental image of a plane crash in the mind of the decision-maker.

Confirmation bias, among other biases which we will discuss in the future, can affect our memory.  This is called “selective recall”, “confirmatory memory” or “access-biased memory”.  Information that matches expectations or beliefs is more easily stored and recalled.

  • Example: In one experiment, a group of subjects was shown evidence that extroverted people are more successful than introverts; another group was told the opposite. In a subsequent, apparently unrelated, study, the subjects were asked to recall events from their lives in which they had been either introverted or extroverted. Each group provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.

So, as we can see, bias can affect us pretty severely.

Ok, so what can we do about it?

Some would say very little to nothing.  However, some studies have shown that awareness of a bias tends to decrease the likelihood of that bias.  Simply put, the more conscious we become of our unconscious, the less influence it has.  By refocusing our attention on our behaviors, rather than trying to decipher their inner workings, we can become more objective, and thus more accurate, about our own biases.

In the future we will discuss cognitive bias mitigation, or ways to handle bias when it arises, and cognitive bias modification, ways that we can change our own biases to prevent them from interfering when we need to think clearly and rationally.

 Recommended Books

A Mind of Its Own: How Your Brain Distorts and Deceives – Cordelia Fine

Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgment and Memory – Rudiger Pohl

Don’t Believe Everything You Think – Thomas E. Kida

Mistakes Were Made (But Not By Me) – Carol Tavris

Social Cognition: Making Sense of People – Ziva Kunda

Strangers to Ourselves – Timothy Wilson

Thinking, Fast and Slow – Daniel Kahneman

Thinking and Deciding – Jonathan Baron

When Prophecy Fails – Leon Festinger

 Recommended Videos and Websites

Selective perception videos:

http://www.youtube.com/watch?v=vJG698U2Mvo

http://www.youtube.com/watch?v=RzwZ0kwhYyo

Illusory Correlation:

http://www.youtube.com/watch?v=2I1n8-zpvMI

Amusing video on confirmation bias:

http://www.youtube.com/watch?v=hcucGn_X8AA

Website on confirmation bias (as well as other biases):

http://www.sciencedaily.com/articles/c/confirmation_bias.htm

December Meet-Up: Meta-ethics (semantical theories)

This month’s topic: Meta-ethics

First question first: what is morality?

Morals and ethics are very similar, even the same under some definitions.  In a general sense, morals refer to the internally created differentiation between good and bad, right and wrong, as well as our personal intentions, decisions and actions with regard to those determinations, while ethics is the application of morals in a specific context.  For the purposes of discussing metaethics, however, it is fine to conflate the two, as metaethics entails discussing the presumptions which underlie normative morals and ethics.

There are several branches of ethics.  One of them, meta-ethics, is about knowing, understanding and defining ethics. Normative and applied ethics focus on what is moral (and how we can act morally), while metaethics focuses on morality itself.  So while normative or applied ethics might ask “What should we do?” or “How should we handle this situation?”, metaethics asks “What is goodness?” or “What does moral mean?” or “How do we distinguish right and wrong?”  It seeks to understand the nature of morality.

We’re going to talk about metaethics today, specifically what are called semantic theories.  That might sound quite lofty and nebulous, but it’s actually a simple aspect of ethics and morality with complex implications.  It’s about what we mean when we say “good” and “bad”.

I assume that it goes without saying why a discussion of morals and ethics is so important, but the same may not necessarily be said about metaethics, so I’d like to elaborate on why I chose this topic: our expanding knowledge regarding science, particularly evolution and genetics, has led us to a wide variety of conclusions about morals and ethics, some less substantiated than others.  For example, the conclusion that certain races are more deserving of life than others (so-called social Darwinism, taken to a more extreme form).  I believe that the ability to define morality, especially in practical deliberation, is an increasingly important one in terms of evaluating morality and making moral decisions.

I realized there are a lot of questions regarding ethics and morals, and each of them entails some pretty heavy philosophical consideration, so today I’d like to focus on one central question: What is the meaning of good/bad, right/wrong?  What exactly do people mean when they say something is moral or immoral?  Is there some factual, truth-related basis to it, or is it just a matter of preference or social prescription?  Hopefully in the future we can address the other very important questions as well.

To discuss it further, I’d like to talk about two broad schools of thought called cognitivism and non-cognitivism.

Cognitivism holds that right and wrong are factual matters, just like saying “Water is H2O”.  Moral claims are truth-apt statements, or statements that can be either true or false. Because of this, cognitivism can logically accommodate the connection between moral and non-moral thought and talk, but it has a harder time figuring out the nature of morality.  Cognitivists hold that the burden of proof lies on the non-cognitivists to explain why we speak of morality in terms of truth-apt statements.

Non-cognitivism has the opposite problem.  Non-cognitivism is, of course, the opposing view: moral statements cannot be true or false; rather, they are statements of feeling or preference.  It implies that moral “knowledge” is impossible. Hume noticed that moral disputes often carry heavy emotions, and we can’t appeal to verification or falsification the way we can with scientific statements.  For example, with water we can point to it, give it qualities like “wet”, or measure it, but we can’t do that when someone says “Slavery is wrong!”  Non-cognitivists are interested in the attitudes which are expressed and in what people are actually doing when making such claims (for example, reacting to the world or expressing a desire for an ideal world).  They posit that language, and thus a moral statement, is simply a tool for influencing others. They also hold that the burden of proof lies on the cognitivists to demonstrate that moral statements can be true or can denote a property.

Let’s look at some cognitivist ideas now.

Moral realism is the stance that moral statements are mind-independent (they don’t have to exist in a mind), objective facts about the world.  One form of this is Ethical Naturalism. Ethical Naturalism suggests that we can gain moral knowledge by inquiring into the natural world, just as we do with scientific knowledge. It assumes that there is a natural justification for morality, and it looks to scientific models of reductionism (breaking things down into parts, like from an object to its atoms) to understand moral reality.  In other words, it holds that there is a “science of morality”: moral concepts are natural properties of the natural world, much like “hardness” or “dampness”.  It suggests that we can base ethics on rational and empirical consideration of the natural world, and it supposes that there are objective answers to moral questions.  For example, why is it right to protect family members and loved ones?  Ethical naturalists would point to our biological make-up creating the imperative.  It does not necessarily suggest that the answers we find will be absolutely certain, just as we do not suppose scientific facts are absolutely certain, either.

Ethical Non-Naturalism, also a type of cognitivism, holds that moral statements are factual, mind-independent and objective, but while Ethical Naturalism holds that they are properties of nature, Ethical Non-Naturalism holds that they are simply undefinable, non-natural (not to be confused with supernatural) properties, kind of like the laws of logic.  It allows for the justification of moral beliefs to be grounded in brute facts, or facts that are true a priori, such as “killing someone innocent is wrong”. It holds that moral properties are sui generis (a concept or property of its own kind). What this means is that, unlike the Naturalist, an Ethical Non-Naturalist cannot reduce, for example, “goodness” to a need or a want or a pleasure (as naturalists do with natural properties like “hardness” or “dampness”), but can only hold that goodness is goodness; it is undefinable beyond that.

Ethical Subjectivism is the stance that moral statements are made facts by individual people or societies.  In other words, moral statements are true statements about the attitudes of people or societies, but not about nature or the world itself.  So if the members of a community hold that “lying is completely and always wrong”, then that particular statement becomes factual for that community: we could hold “Lying is completely and always wrong in that community” as a factual statement.

Ideal Observer Theory holds that we can determine what is objectively right by imagining a hypothetical “ideal observer” and assuming that what this observer evaluates as right is, in fact, right, regardless of how we feel about it.  So when we consider a situation, we can determine its objective moral value by imagining a third, disinterested party observing and evaluating it.

Divine Command Theory holds that a unique being, such as God, is necessary for determining what is factually right.

Ok, now on to the non-cognitivist theories:

Prescriptivism holds that moral statements are merely prescriptions or authoritative recommendations.  So a statement like “killing is bad” may seem like a truth-apt statement on the surface, but actually it is equivalent to “you shouldn’t kill”, which is not a statement of fact but a prescription.  Therefore, morality isn’t about “knowing” what’s right or wrong, but about judging the character of people and actions and then prescribing an action.  It is the feeling – for example, of disgust towards murder – that keeps us from behaving badly, not any “facts” about the world or any property of “wrongness”. Universal Prescriptivism holds that moral statements are universalized imperatives or commands.  So while cognitivists treat “Murder is wrong” as a statement of fact, Universal Prescriptivists treat it like a command: it means “do not murder” and it is expected to be obeyed universally.

Kant’s Categorical Imperative holds that one reasoned imperative entails all subsequent obligations.  This imperative is absolute and unconditional.  It is an intrinsic property; an end-in-itself: “Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”   Kant believed that through this type of reasoning we could derive all morality.  So whether or not killing was an “objectively” bad thing to do was irrelevant; we would just have to ask ourselves “Does it contradict the imperative for me?”  If it does, it’s immoral.  If not, it’s moral.  Empirical experience from the natural world was unnecessary.

Emotivism holds that ethical sentences serve merely to express emotions.  Emotivists hold that moral statements cannot be empirically verified, and so are meaningless in any factual sense.  The only empirically verifiable aspect is the action, so if I say “stealing that money was bad” this breaks down to “You stole that money.  It was bad.”  The only fact is “you stole that money” and “it was bad” is meaningless.  Instead, it is simply a declaration of my disapproval, like saying “Boo to stealing money!” which is not factual at all.

Quasi-realism holds that moral statements “act” like factual statements but are not really factual.  So, for all intents and purposes we treat them like facts and properties, but they actually just express emotional attitudes.

Problems with cognitivism:

1. What does it mean to say a moral statement is “true”?  Does it mean it actually describes the external state of the world, independent of the proposition?  In other words, when we say “Murder is wrong” does it correspond to some fact in the world (a la the correspondence theory of truth)?  Can we “point to” wrongness?  Is it a spatial-temporal object, like a cat or a tree?

2. Response to the Frege-Geach Problem (the Embedding Problem): Blackburn’s quasi-realism holds that we project our attitudes onto the world as if they were true (and thus susceptible to verification and falsification), thus accounting for their apparent truth-apt, logical structure. Blackburn recognizes that any form of expressivism is susceptible to the equivocation problem if we accept the conditional as being logical, but he asks, “What commitment do we put into conditionals?”  Although the quasi-realist accepts that moral discourse has a surface logical structure, this does not mean that moral statements have an actual, classically logical structure. Once one sincerely holds that, for example, “Lying is wrong,” this is just a different way of saying, for example, “Lying has the property of being wrong,” or “I believe that lying is wrong,” or “It is true that lying is wrong,” or “It is a fact that lying is wrong.”  The solution Blackburn proposes is the introduction of an expressive language with a corresponding logic of attitudes (Hooray! and Boo!). This language is meant to express approval or disapproval of an action, rather than a logical relationship: for example, “Lying is wrong” translates as “Boo to lying!”  It can be argued that this serves no practical purpose, though; while it reconciles moral discourse with non-cognitivism, people will still continue to conduct discourse in the ordinary manner.

3. (This is a problem with naturalism.) G.E. Moore agreed that morals have properties, but held that it was wrong to assign them natural or even supernatural properties.  Basically, the idea is that the very fact that we can meaningfully ask whether X is good means it is not possible to definitively and conclusively state that X is good as though it were indisputable.  Because the goodness of X is an open question, a question that cannot be settled from the conceptual terms alone, it cannot be answered as though goodness were a natural or supernatural property.  It is still controversial whether good is the same thing as pleasure, for example.  If this is true, then moral facts cannot be reduced to natural properties, and therefore ethical naturalism is false.  Moore used this in defense of ethical non-naturalism.  One contention with this is that if goodness is a property discovered a posteriori, then the open question is not a meaningless one, as it requires inquiry and discovery to answer, much like the statement “Water is H2O” (and hence it remains open). This is done by invoking rightness and wrongness to explain certain empirical phenomena, and then discovering a posteriori whether, say, maximizing utility occupies the relevant explanatory role; the assumption is that we can find an empirical explanation of what we mean by “rightness” or “goodness”. Only if goodness is considered an a priori property can we say the question is closed. However, this sort of a posteriori moral search is arguably unsatisfactory, in that ordinary, non-moral value can be used to explain the relevant events. Such value arises from the relationship between desire and a state of affairs, and people tend to objectify it into categorical moral value, though this is fallacious. So a situation that can be explained by the existence of real moral value (e.g. the fulfillment of preferences, the tendency towards social stability) can also be explained by non-moral value, and this explanation is far simpler, given the ontological difficulties surrounding moral value. Another general problem is that Moore assumes the open question is a meaningful one; some say this begs the question (how does he know?).  One attempt to answer this utilized psychology: if X is good, then X should act as an intrinsic motivator, yet a person can understand that an action will produce X and still not perform it, so whether X is good remains an open question.  The argument also assumes that morals are properties at all, or that moral statements even express beliefs (as opposed to feelings or imperatives or prescriptions), which is exactly what non-cognitivists deny: they would argue that the question is open simply because we aren’t attributing any properties at all.

Problems with non-cognitivism:

1. If moral statements are simply statements of emotion, attitude or prescription, then why is it so plausible to express them as beliefs or attribute them as properties to actions or people?

2. If moral statements are imperatives or prescriptions, then why can we apply rules of logic to them and come to understand them better, even though imperatives, feelings and prescriptions are not ordinarily subject to logic?

3. If morals are just quasi-realistic expressions of attitudes or emotions, then why can I feel positively about something yet know it is immoral, or vice versa?  How can I like/prefer something and yet simultaneously hold that it is immoral, and vice versa?

Frege-Geach Problem (Embedding Problem): Indeed, we tend to express morality cognitively when we say “Lying is bad”, for instance.  If cognitivism is not true, then how can we make logical inferences based on such statements?  According to Geach, the sentence “Telling lies is wrong” has the same meaning regardless of whether it occurs on its own or as the antecedent of “If telling lies is wrong, then getting your little brother to tell lies is also wrong”. This must be so, since we may derive “Getting your little brother to tell lies is wrong” from them both by modus ponens without any fallacy of equivocation. For example, the statement “it is wrong to get someone else to lie” can be derived logically by this modus ponens: “1. It is wrong to lie. 2. If it is wrong to lie, then it is wrong to get someone else to lie. Therefore 3. It is wrong to get someone else to lie.”  Sure, taken separately these could be read as emotional attitudes or imperatives, but how do you explain the ability to infer one from the other, regardless of how we may feel about the proposition?  In the conditional “if lying is bad…” we are not expressing a feeling or an imperative, but rather a conditional truth statement, so to say that “lying is bad” expresses an attitude or imperative in one case but a truth statement in another is a logical fallacy (equivocation).  So how can we arrive at a conclusion like “therefore getting other people to lie is also bad” unless it is a logical statement?  We either have to accept that or reject modus ponens.

The problem with Blackburn’s response is his appeal to (logical) consistency among attitudes. Hale argues that such an appeal requires explaining what that consistency consists in, and it is hard to see how the quasi-realist could do so without appealing to logical notions such as non-contradiction or “belonging to a set”, which involve descriptive statements like “x belongs to set S” or “x is equal to y” (and hence involve truth-aptness and evaluation). The quasi-realist cannot allow this, and it is a problem, since whether or not some attitude is a member of some set is a matter of fact. The onus is on the quasi-realist to provide a more detailed account of inconsistent attitudes which does not involve truth-aptness, which would seem to require appealing to logic. The Frege-Geach Problem is a question about the role of normative/expressive sentences and is a challenge to non-cognitivism. A positive solution to the challenge would open room for the rationality of non-cognitive discourse in ethics; a negative one would show that the only option for rationalism in ethics is cognitivism or, in the worst-case scenario, irrationality and ethical nihilism.
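For what it’s worth, here is a minimal formal sketch of the inference Geach points to; the sentence letters are just my own labels, not part of his argument. The non-cognitivist has to explain how premise 1 and the antecedent of premise 2 can mean the same thing if premise 1 merely expresses an attitude:

```latex
% P: "Telling lies is wrong"
% Q: "Getting your little brother to tell lies is wrong"
% (Labels are my own, used only to display the structure of the inference.)
\[
\begin{array}{rl}
1. & P \\
2. & P \rightarrow Q \\
\hline
3. & Q \qquad \text{(modus ponens, from 1 and 2)}
\end{array}
\]
```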

Some questions and issues that were raised:

1. Can’t we state a preference or desire or feeling that is in contradiction to our moral stance?  For example, I may not like telling the truth but still think it is the right thing to do, no?  If so, doesn’t it follow that moral statements are not necessarily equivalent to emotive statements or statements of preference?

2. Couldn’t morality have non-natural properties (i.e. transcendent properties or logical properties or conceptual properties) that do not necessarily exist empirically but exist rationally?

3. Being that morality ultimately needs to be applied to individual situations each with their own unique set of circumstances, how could we, even theoretically, develop an objective morality to govern or even just guide us in determining what is right and wrong in each and every case?  Isn’t each and every case necessarily related to its unique set of circumstances and thus relative?

4. Bear in mind that morality is NOT about evaluating people as people, but rather people’s actions and behavior.  It is unnecessary and even meaningless to call a person bad or immoral, but it may serve a purpose to call an action bad or immoral.

5. What role does a person’s intention or motivation play in determining morality?

6. Being that values are subjective and individual and that we derive a sense of morality or moral code from them, doesn’t it follow that morality is then subjective as well?  What exactly is the connection between values and morality?  What would a cognitivist say in regards to this?

What do you think?  Any ideas or questions or comments?  Please leave them below!