Reading Group

Hey guys!  This post is going to be fairly short, I just wanted to keep you all updated on what we’ve decided to do regarding the monthly Reading Group.  I’ll also copy/paste this directly as a post in the discussion forum on the MeetUp website so you can see it and, if you like, respond to it there as well.

In any case, we will be having a reading group once a month, probably the 2nd or 3rd Sunday afternoon of the month.  We will be reading a mix of articles and sections from books.  We will try to keep them to a maximum of 12 pages so that they can be read fairly quickly and we can all take our time to think about them.  So even if you don’t have a lot of time to read in your busy schedule, that’s no problem: even just a weekend should suffice.  🙂

We are going to rotate between social issues, soft sciences (psychology, anthropology, etc.), hard sciences (physics, biology, technology, etc.) and philosophy every month, so hopefully we will cover some topics that everyone can enjoy.  The topics will be largely accessible (so probably no metaphysics or quantum theory!) but with the intention of challenging preconceived notions of those topics.  For instance, we may bring up discussions about problems with the golden rule or the scientific method, or pros and cons of universal healthcare.  The intention is to broaden our minds by considering different sides of an issue, and to develop stronger arguments for or against it through productive, civil reading and discussion.

As for the specific material, I will be choosing it every month, but I am very happy to include your suggestions if you have any.  Alternatively, if there is a specific section of a book or an article that you would like to read as a group, I’m very, very happy to help you organize a reading group on a day and time of your choosing for the purpose of reading and discussing it.  In fact, I encourage you all to do so!  I would love to see multiple reading groups (as well as other groups!) every month.

If you would like to join but are having trouble with the material, that’s fine!  Another purpose of the group is to help each other understand the material, so please prepare some questions to bring to the discussion or, if you’d prefer, you can simply email me and I’ll try to answer your questions – as well as post them in the meet up details – as best I can.

For now we’re going to keep the group size fairly small (six members) but if it seems doable we’ll expand it in the future.

The expectations for this group are pretty minimal:

1. Please read the material (duh).

2. Please prepare at least one question and/or one comment about the material for the group.  If you didn’t understand something or want to get other people’s opinion on something, a question would be great.  If you want to share an understanding or interpretation of the material, or want to express something you agreed with or disagreed with, a comment would be great.  Of course, you are more than welcome to offer multiple questions and/or comments.

3. If you organize the meet up, then it’s your job to prepare a number of pertinent questions regarding the topic, as well as some backup material that you think may be of use for discussing or presenting multiple sides of the issue.  For instance, if you are going to use a section of Descartes’ Meditations, you might want to have some additional material (videos, articles, etc.) that discusses dualism vs. monism, philosophy of mind, etc.

The tentative format will be as such:

1.  The organizer very briefly summarizes the Reading Discussion topic (what the material was, who the author was, some basic points brought up).

2. The organizer first fields all questions (no responses yet, just get them all down on paper first).

3. The organizer then offers some preliminary questions, or questions that are necessary for everyone to understand the material (please save the more discussion-oriented questions for the end!).

4. The members all present their comments and everyone freely discusses the material and the comments that have been presented.

5. Finally, if any remain, the organizer presents the more discussion-oriented questions that have not been answered yet.

That’s a wrap!  Let me know if you have any questions about it, and I hope to see you at a future Reading Discussion Group!

March Meet Up: Consciousness

Every month we center on a particular theme.  This month’s topic will be split into an “open” group (a meet up where anyone and everyone can join) and a “closed” group (for members only).  This month’s blog post will, of course, be about this month’s open discussion group, so there will be no blog post for the topic next month, since it will be the same topic.  However, we will also be starting a book/reading discussion next month, so expect to see a new blog post about that.  In any case, this month’s topic was consciousness.  We focused on how we define consciousness, animal consciousness, machine consciousness, philosophical ideas about consciousness and “higher” consciousness.  So, without further ado:

What is consciousness?

Consciousness isn’t just being awake, although we typically use it synonymously with that definition (He’s not conscious!  Wake him up!).  There doesn’t seem to be much agreement about what it is.  Instead, it appears to be a philosophical primitive, or something that most people have an intuitive and shared understanding about when we use the word.  Basically it’s defined as the ability to be aware of external objects as well as internal states.  More specifically, it is characterized as having:

  1. Sentience
  2. Awareness
  3. Subjectivity
  4. A sense of selfhood
  5. Executive control system
  6. Wakefulness

I had a hard time distinguishing those terms; a lot of them are really similar, if not the same.  One could easily make the argument that we ought to combine several of them, or even add to or subtract from the list.  In any case, I looked into how they are defined and came up with these descriptions of each:

1. Sentience: The ability to feel, or experience sensations (individually called qualia) subjectively.  It is distinguished from thinking or reasoning. It is distinct from creativity, intelligence, sapience, and self-awareness.  Sapience (not a necessary characteristic of consciousness) is the ability to act with appropriate judgment, more colloquially known as wisdom.

2. Awareness: The ability to perceive (or even experience or feel) events, objects or patterns.  It may be conscious, unconscious or even subconscious and, to this extent, animals have this capability as well.  It may be internal as well as external.

There was an interesting and very understandable confusion regarding sentience and awareness (understandable because I, too, had trouble distinguishing them!).  Sentience is the ability to interpret sensations whereas awareness is about perceptions.  Colloquially, these two words are sometimes used interchangeably, but in a philosophical or biological context, they are quite distinct.  A sensation is a subjective experience, like an emotion or an image in one’s mind.  It’s internal.  A perception, on the other hand, is an (arguably) objective experience.  It’s related to things that exist outside of the mind, externally, like seeing an apple or watching a movie.  How that movie makes you feel is your sensation of it, but the act of watching the movie itself is a matter of perception.

3. Subjectivity: Being a subject – me, I, myself, etc. – or the sense/feeling of being someone apart from others (I am me, and you are you, we’re not the same). The subject is the “form” or body that contains those things, and subjectivity is the feeling or understanding of that itself.  Subjectivity is a constantly changing quality and it is up for debate as to how permanent or transient it is, an issue tied into the idea of the “self”.

4. Selfhood: The subject in subjectivity is the self.  Being aware of the self means knowing that you are a separate entity from the other entities around you.  It means being able to introspect, or to examine one’s conscious thoughts.  This thing that thinks is the self.  The capacity to identify with one’s past actions and to consider future actions just as readily as one’s present actions is the capacity for selfhood.

This was also, even more understandably, a confusing distinction.  Some argued – and I partially agreed – that we could put them together.  The only real distinction was this: is it possible that certain organisms have subjectivity but not selfhood?  Can other animals, for example, feel that they are separate from other beings without being able to recognize or examine that feeling?

5. Executive Control System: The management (regulation and control) of cognitive processes, including working memory, reasoning, task flexibility, problem solving, planning, and execution.

6. Wakefulness: Even our sleeping self, our unconscious self, is said to be different from our awake self, so wakefulness is an essential part of consciousness.  For example, if I murdered someone while sleepwalking, we would not consider it an act of the self, at least not so much as if I did it awake and conscious.  As Locke postulated, it “would be no more right, than to punish one twin for what his brother-twin did, whereof he knew nothing, because their outsides were so like, that they could not be distinguished.”  Whatever actions one has done without any conscious awareness of them cannot be said to be appropriated to the person, at least not in the same way as if one were aware of them.

We discussed which of these characteristics were necessary, and which may be sufficient.  Often, it depends on how consciousness is being used.  For example, while the sixth characteristic (wakefulness) may be very useful in a court case, is it useful in a philosophical discussion?

Do other animals have consciousness?

Since we cannot directly communicate with them, it is hard to say how exactly they experience the world around them.  They seem to exhibit certain characteristics, like sentience and perception, but what about subjectivity and selfhood?  Many of us intuitively feel that animals have them, but how can we know?  You may wonder why it even matters.  One member of the group brought up how he was proud that in his home country, dolphins are now legally regarded as “non-human persons,” meaning that while they are not humans, legal norms require them to be regarded as persons.  Whether or not animals have consciousness is the kind of question that leads to such a conclusion; it is a pertinent question, on whose answer animal rights and the right treatment of animals depend.  We can either justify the promotion of animal rights or even speciesist behavior towards them depending on our conclusions.  Questions like these lead us directly to talking about animal rights.

What about machines?

Do machines have consciousness or could they, theoretically, have it?  Alan Turing took up this question and developed a test, now known as the Turing test.  He posited that if a human judge could not differentiate between a human’s behavior and a machine’s, the machine could be said to think – and, by extension, to have consciousness.  But is that true?  Is the ability to behave like a human indicative of consciousness?  Is it sufficient?  This question was tossed back and forth a bit in our group.  Some said yes: there is no way to distinguish between a human-like entity and an actual human one, so the best explanation is that such an entity has consciousness.  Others disagreed, saying it’s theoretically possible to give a machine enough information that it could simulate a human-like task without any actual characteristics of consciousness.  In any case, questions like these lead us to discussions on artificial intelligence.
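For the programmers among us, the “simulation without consciousness” objection is easy to illustrate.  Here is a minimal, hypothetical ELIZA-style responder (the patterns and canned replies are invented for illustration): it produces superficially human-like replies purely by pattern matching, with nothing we would recognize as understanding behind it.

```python
import re

# Canned pattern/reply pairs -- made up for illustration.
RULES = [
    (r"\bI feel (.+)", "Why do you feel {0}?"),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\byou\b", "Let's talk about you, not me."),
]

def respond(message: str) -> str:
    # Try each pattern in turn; reply from the first match, else fall back.
    for pattern, template in RULES:
        match = re.search(pattern, message, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # generic fallback when nothing matches

print(respond("I feel anxious"))   # -> Why do you feel anxious?
print(respond("What about you?"))  # -> Let's talk about you, not me.
```

A few dozen rules like these can feel eerily conversational for a minute or two, which is exactly why some in the group doubted that passing a behavioral test says anything about consciousness.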

Who cares about all this?  What about human consciousness?

Well, let’s look at it then!

Our exploration of consciousness begins with Plato and his separation of soul and body, but it is properly formulated with Descartes, who posited that consciousness existed in another “realm” called the res cogitans, or the realm of thought.  This is called dualism.  Descartes divided consciousness (which he called mind), or mental contents, from physical ones (which he called body).  In other words, he considered the mind and the body separate entities, which instigated a centuries-long philosophical struggle called the mind-body problem that is still going on to this day.  While many scientifically minded philosophers have wished to reduce mental states to physical ones, some physicists believe quantum theory may provide an answer to the dualist problem, tentatively called the quantum mind (which, unfortunately, we didn’t get a chance to talk about… maybe in the closed discussion!).

The other side of the argument is monism, a view first posited by Parmenides but later espoused by Spinoza.  This stance posits that there is no duality or distinction between mind and body; rather, they are derived from the same substance.  Some monists go further, holding that any concept of consciousness is incoherent or illusory, and thus non-existent.

In any case, the mind-body problem seeks to determine what relationship there is, if any, between mental and physical processes.

Interestingly, most of the members agreed with dualism more.  It was apparent that monism was the more scientifically inclined view, whereas dualism was more intuitive; it seemed to make more sense on the surface.

Let’s look at dualism.

There are two types:

1. Substance Dualism holds that mind, or mental properties, are distinct from physical ones and, as such, do not adhere to the laws of the physical universe.

2. Property Dualism holds that mental and physical properties belong to one substance (rather than being two separate ones) and that they do adhere to universal physical laws, but that physics cannot be used to explain consciousness or mind.  There are a variety of explanations regarding the relationship between the two properties.  So, instead of being two different things, they’re just two different ways of describing the same type of thing.

Dualism essentially looks at consciousness as a separate substance from anything physical like the brain or chemicals or anything like that.  For instance, Descartes argued that mental states, such as imagining a drink, had no extension; in other words, they did not take up any space and so could not be measured in terms of height, weight, length, etc.  Physical things, on the other hand, have extension.

The members were kind of divided as to which they agreed with more, but the leanings were definitely towards property dualism.

Let’s look at monism.

There are three main types:

Physicalism holds that mind is matter held in a specific way.

1. Type Physicalism holds that mental activity is most likely equivalent to neural/electrical activity in the brain.  In other words, the feeling of fear, for example, is the same as whatever neural activity that occurs when we express the feeling of fear.  So a statement like “I’m scared” really means “My neural activity is creating certain emotional and physiological changes in my body that are impelling me to flee.”  It’s a lot easier to say “I’m scared” of course.

2. Functionalism modifies this stance slightly and holds that mental events are simply functions of physical ones, defined by their causal relations to other states.  They are much like what I see on a screen that comes from a software program on my computer (my computer being my brain, the software being the faculties or properties of my brain).  This particular image I am looking at on the screen is not actually the software itself, but rather the medium through which the software functions.  Whereas a computer would use electricity as a conduit for function, the brain uses neural activity.  It has been suggested that mental properties supervene on physical ones, or that they must be grounded in some kind of physical property.  In other words, they require physical properties to exist, so while they are dependent on physical properties, they are not reducible to them (they are not one and the same).  So, in other words, the physiological changes when we feel scared are the stuff going on in the hardware (the brain), whereas the feeling of fear is what’s happening on the screen (our consciousness) that impels us to act.

3. Epiphenomenalism disagrees with those stances.  It holds that mental states are simply “byproducts” of physical brain states, nothing more.  They have no useful function and do not affect physical states.  Only physical events can have effects on each other.  So in other words, they are like weird “glitches” in the software that don’t mess up the program, but certainly aren’t doing anything useful.

We only got as far as these three types during the discussion, but I included a couple of other related ideas below for you to look at as well.  It certainly raised a lot of discussion, mainly regarding epiphenomenalism.  Some didn’t understand quite what it was or how it worked, and even the most ardent scientist in the group was a bit put off by it.  It does seem a bit grim to consider that all of our conscious experiences are just unnecessary accidents, and that our bodies could function just as well without them, but there are pretty solid arguments for why this may be the case.  One book I read about it was called Kluge.  Although I didn’t necessarily agree with all of the author’s conclusions, I recommend the book as an accessible introduction to the view.

In any case, here are a couple more ideas:

  • Idealism is the idea that there are no physical properties and matter is an illusion, so instead only mental ones exist.  In other words, everything we consider “real” is just a projection of our minds.  Whether or not they actually serve a useful function is debatable in this philosophy.
  • Neutral Monism holds that both mental and physical properties are actually derived from the same essential essence, but whether that essence is mental or physical is hard or even impossible to say, or irrelevant.

In a nutshell, monism takes the view that consciousness can be reduced to or defined by physical properties and that any distinction between it and physical substances is illusory.  The difficulty with monism is in explaining why or how mental processes appear to be so different from physical ones, or how they were even able to emerge in the first place.

Some Problems with Both Views:

Dualism seems the most common-sense solution to the problem.  After all, don’t physical states feel different from mental ones?  Isn’t conscious experience – the emotions I experience or the thoughts I am thinking – distinct from inanimate matter like rocks and my toe?  We tend to associate those conscious states more with our notion of a “self” than we do with physical properties like our hair or even our organs.  On top of this, physical properties do not seem to have a subjective quality, whereas consciousness does.  For instance, when we burn our fingers and are hit with an emotion, we say that we are feeling that emotion (pain), but when hair grows on our nose or our body releases a certain chemical, we generally don’t say “we” are doing that, but rather that the nose or the brain is.

This brought up a bit of discussion as well: the distinction between our physical properties and our mental ones can be pretty fuzzy, depending on the language we use and what we are talking about.  For example, when we eat a lot, we say both “I am full” and “my stomach is full.”  On the other hand, when we commit a communication faux pas, we will sometimes say “I must be going crazy” but sometimes say “My mind is playing tricks on me.”

We skipped this next part and jumped into higher consciousness, since it was of interest, but I’m going to include this information here as well:

– On the other hand, dualism can’t account for how mental events can cause changes in physical properties, like the physical creation of memories, or how physical changes can result in mental changes, like brain damage affecting personality and behavior.

A type of dualism called interactionist dualism has an answer to this.  It still holds that mind and body must be separate properties because of the extension issue (mental properties can’t be measured), but that mental properties must interact with, or influence or affect, physical ones, since we can detect physical changes based on mental events (like when I think of somebody I don’t like and I get angry: my blood pressure increases, my heart rate goes up, etc.).  Mental and physical events interact with each other: my girlfriend sees a spider (physical) and feels a sense of fear (mental), which causes adrenaline to pour into her body and she screams (physical), which I hear and then feel alarmed (mental) and run over to see what happened (physical), and so on.

– One problem with this, though, is that it assumes that all of our conscious thoughts are clear and distinct; meaning we understand perfectly what they are and can distinguish them from other events.  But more and more both scientific and psychological discoveries are casting doubt on this (unconscious motivations, discovery methods, automatic heuristics, cultural customs and habits, neurological errors).

– Another problem is that the idea of cause and effect seems to imply material impact, or two real or physical things actually contacting or touching each other, and yet if mental properties do not have extension (no height, width, weight, etc.), how can they be said to “impact” physical ones?  And if mental properties are different from physical ones, how can they affect the physical system without violating the law of the conservation of energy (adding energy to the system without taking any out)?

– Going back to monist ideas, Type Physicalism cannot account for the idiosyncrasies in different organisms’ physical states when they experience the same or similar subjective phenomena.  For instance, if we both hear the same word, why is it that I might feel sad at hearing it but you don’t?  It’s the same physical stimulus, so why are our mental sensations of it different?  Or, more aptly, when listening to music we both say we enjoy for similar reasons, why are our physical changes unique?  If two or more organisms are affected by the same external stimuli, why are there idiosyncrasies in their physical states?

– Many philosophers say the essence of consciousness is experience, which is necessarily subjective.  But if consciousness is essentially subjective, then how do we know other entities have or don’t have consciousness?  Maybe when I see a snake, I “feel” fear, I have an experience. But how do I know that you do?  How do I know that rocks don’t?  How can we know whether these experiences are the same, or even remotely similar?  This is called the problem of other minds.

Let’s talk about “higher” consciousness

This is also called the collective consciousness, or God consciousness, or cosmic consciousness.  Generally speaking, this is considered the level of consciousness that a human can reach where he or she realizes reality as it really is, rather than how it may subjectively seem.  This type of reality is sometimes referred to as ultimate reality.  Many believe that evolution has endowed humans with the faculties to achieve this level of consciousness, but that it requires practice and development, and most people do not put in the concerted effort to achieve it.  One underlying assumption is that people with ordinary consciousness are only partially aware of reality; they are still ignorant of certain truth(s) and they are prone to lower, more impulsive drives and wants.

This generated a bit of discussion as well.  One member asked how it could even be defined.  Wouldn’t it depend on the person using the term?  What makes it better or worse than so-called lower consciousness?  One possible answer was that the less one’s thoughts were in connection with animal impulses and drives, the higher one’s consciousness could be said to be.  Another was that the less one’s thoughts were attached to worldly things (money, possessions, etc.), the higher it could be said to be.  Another argument was that higher consciousness was not necessarily better, but simply an inevitable outcome of ongoing evolutionary change.  It was no more “better” to have higher consciousness than it is “better” to be a chimpanzee as opposed to a single-celled organism; it was just a natural outcome of generations of evolution.

We ended the conversation here, but there is one more very important aspect to the scientific and philosophical question of what is consciousness, and I’d like to include it here:

The Hard Problem of Consciousness

There are various formulations of the “hard problem”, a problem that philosopher David Chalmers does not believe can be answered even if we find solutions to the easy problems (how we store information, how we report mental states, how we focus attention, etc.).  Some of these are:

“How is it that some organisms are subjects of experience?”

“Why does awareness of sensory information exist at all?”

“Why do qualia exist?”

“Why is there a subjective component to experience?”

“Why aren’t we philosophical zombies?”

Chalmers argues that it is fundamentally impossible to explain these phenomena by physical means, and that another solution altogether is necessary.  Philosopher Thomas Nagel agrees, stating that since physical events are objective and mental ones subjective, we cannot conflate them.  Philosopher Daniel Dennett disagrees, dismissing the notion that there even is a hard problem.  He speculates that once the easy problems are solved their solutions will offer a viable explanation to all the supposedly “hard” questions, and that there is no need to posit a need for other properties to explain them.  He equates the phenomena of mental events to magic tricks, tricks that the brain plays on us to make it seem as though there is something separate from the physical going on.

Thanks for reading!  If interested, please be sure to check this stuff out:

Susan Greenfield: What is consciousness?

Jane Goodall: Animal Consciousness

Sam Harris: Physicalism (vs. Dualism)

John Searle: Problems with monism and dualism

Alan Watts: Human and Higher consciousness

February Meet-Up: Origins of Language

Every month we center on a particular theme which we discuss twice a month: once in an open group, where anyone and everyone can join, and once in a closed group, which allows for a limited number of participants and is for group members only.  This month’s topic was language, particularly the origins of language.  We focused on what language is, how it differs between humans and other animals, and how and why a language faculty may have evolved.  So, without further ado:

What is language?

Language is a tough term to pin down.  Essentially, language is the human capacity for acquiring and using complex systems of communication, and a language (like English or Japanese) is any specific example of such a system.  When used more generally, it could refer to the cognitive ability (an ability related to learning, understanding, thinking about and remembering something) to use systems of complex communication, or to the rules that make up these systems (like grammar, for example), or to the set of utterances that can be produced (in other words, speech) from those rules.  All languages rely on semiosis (a sign process: any form of activity, conduct, or process that involves signs, including the production of meaning) to convey meanings.  Linguistics and language often get confused, so as a side note: linguistics is the scientific study of language, rather than the communication system itself.  But linguistics is a discussion for another day.

Are languages restricted to humans only?

To answer this question, let’s talk about some different kinds of language first.  A natural language, also called an ordinary language, is any language which is created naturally (as opposed to a constructed language, which was created deliberately) as the result of our facility (or built-in ability) for language. Any normal human infant is able to learn any natural language without requiring instruction to do so. Both signed and spoken languages are considered natural languages.

Human language is open-ended and productive and based on a dual code, meaning it allows humans to produce infinite speech from finite elements (letters, words, sounds, grammar rules) and to create new words and sentences. Human language is modality-independent, which means it is not dependent on any one type of encoding (such as writing or sound) to be learned or acquired. The symbols and grammatical rules of a language are largely arbitrary, meaning that the system can only be acquired through social interaction. Human language is also unique in being able to refer to abstract concepts (like freedom) and to imagined or hypothetical events (like asking what we would do if we could live forever) as well as events that took place in the past or may happen in the future. It is unique because it has the properties of productivity, recursivity, and displacement.

Productivity, in this sense, refers to our capacity to form new words and grammatical expressions from a finite set of elements.  Our ability to produce novel sentences is evidence of our high level of productivity.  We don’t only repeat sentences we have picked up before; we can also produce our own sentences, ones which have never been created before.

Recursion is the process of embedding items within items of the same kind, in a self-similar way.  Our ability to nest one clause inside another – as in “the cat that chased the mouse that ate the cheese” – is an example of this, and it is part of what lets finite rules produce sentences of unbounded length.

Displacement is the capability of language to communicate about things that are not immediately present spatially or temporally, things that are either not here or are not here now.  So when we talk about things we did yesterday or are going to do tomorrow, or talk about what’s in our house, or abstract ideas, that’s displacement.
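For those who like code, productivity and recursion can be sketched with a toy grammar (the nouns, verbs and rules below are invented for illustration): a handful of finite rules in which a noun phrase can embed another noun phrase, generating ever-longer novel sentences.

```python
import random

# A tiny, invented lexicon -- finite elements.
NOUNS = ["the cat", "the dog", "the mouse"]
VERBS = ["chased", "saw", "feared"]

def noun_phrase(depth: int) -> str:
    """A noun phrase, optionally containing an embedded relative clause."""
    noun = random.choice(NOUNS)
    if depth <= 0:
        return noun
    # Recursion: the noun phrase embeds another noun phrase of the same type.
    return f"{noun} that {random.choice(VERBS)} {noun_phrase(depth - 1)}"

def sentence(depth: int) -> str:
    # Productivity: these few rules can generate sentences never produced before.
    return f"{noun_phrase(depth)} {random.choice(VERBS)} {noun_phrase(0)}."

print(sentence(2))
# e.g. "the dog that saw the cat that chased the mouse feared the dog."
```

Raising the depth produces longer and longer grammatical sentences from the same three nouns and three verbs, which is the point: finite elements, unbounded output.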

How is this different from animal language?

Animal communication can only express a finite number of utterances, most of which are genetically transmitted.  Animals cannot produce novel sounds or symbols like we can, nor can they communicate thoughts or opinions or ideas.  None have been able to learn as many different signs as are known by an average 4-year-old human (and only some other primates and dolphins have been able to do even that much), nor have any acquired the complex grammar of human language.  Human language, by contrast, relies entirely on social convention (a socially acceptable way of acting) and learning.  Its complex structure allows a much wider range of possible expressions and uses than animal communication.

So where did language come from?

There are many theories about the origins of language, but the prevalent one holds that language started when early hominids’ primate communication gradually changed and they achieved the ability to form a theory of other minds and a shared intentionality. Theory of mind is the ability to attribute mental states to oneself and others as well as understand those states in others. It’s necessary for empathy, which helps us care about each other and work together.  Intentionality is the power of minds to be about something, or to represent things, properties and states of affairs.  So when two people express the same feeling, belief or opinion about something, they are sharing intentionality.

Languages evolve and diversify over time.  All languages change as speakers adopt or invent new ways of speaking and pass them on to other members of their speech community (people who all speak the same language, basically).  Language change happens at all levels, from the phonological level (the level of sounds, or what we refer to as accents and pronunciation) to the levels of vocabulary, morphology (the parts of words, including roots and even intonations or stresses), syntax (the formation and organization of words and sentences), and discourse.  Although language change is often looked down on at first by native speakers, who call the changes “decay” or “degradation” or a sign of slipping norms of language usage, it is natural and inevitable.

There are different theories about the origin of language depending on the assumptions about what language is. Continuity-based theories hold that language is so complex that it must have evolved from earlier systems of our pre-human ancestors. The opposite viewpoint, that of discontinuity-based theories, holds that language is such a unique human trait that it cannot be compared to anything found among non-humans and must therefore have appeared suddenly in our species. Most scholars agree with continuity-based theories, but they don’t agree on how exactly language developed. Some, like Steven Pinker, see language as mostly innate and hold its precedents to lie in animal cognition, whereas others see it as a socially learned tool of communication that developed from animal communication (either primate gestural or vocal communication) in conjunction with social learning.

So why do we have language at all?

One view sees our capacity for language as a mental faculty that allows humans to learn a means of communication and to produce and understand it.  Language is universal to all people and we all have the neurological capacity to develop it, so it seems that this faculty is biologically innate. Proponents of this view often point out that children who can access language in their environment acquire it even without instruction.  This view, as you can see, ties more strongly into the continuity-based theories.

Another view sees language as a formal system of signs governed by grammatical rules of combination to communicate meaning.  This is called the formal symbolic system theory. This view stresses that human languages are arbitrary, even man-made, rules and systems that map signs to meanings. It concedes that our capacity for communication is innate, but holds that the formal systems of language are not necessarily so.  Rather than focus on where language comes from historically, its proponents focus on how its rules and systems were made.  This view, on the other hand, ties more strongly into the discontinuity-based theories.

Yet another view sees language as a system of cooperation.  Rather than a natural ability, it sees language as a cultural creation used to enable people to express themselves and manipulate their surroundings.  It argues that while there is a connection between animal communication and our capacity for communication, that capacity did not have to result in language; rather, language derived from a drive to cooperate.  This, too, ties more strongly into the discontinuity-based theories.

Communicative style refers to the ways that language is used and understood within a particular culture. It also becomes a way of displaying and constructing group identity. Some would go so far as to say language is, in this way, divisive: linguistic differences may be a factor in the divisions between social groups (speaking a language with a particular accent may imply membership in an ethnic or social group, or status as a second-language speaker). These kinds of differences are not part of the linguistic system, but they are an important part of how people use language as a social tool for constructing groups. However, many languages, such as Spanish or Japanese, also have grammatical conventions that signal the social position of the speaker in relation to others through the use of registers. In many languages there are stylistic or even grammatical differences between the ways men and women speak, between age groups, or between social classes, just as some languages employ different words depending on who is listening.

How do we learn language?

The learning of one’s own native language, typically that of one’s parents, normally occurs spontaneously in early human childhood and is biologically, socially and ecologically driven. A crucial part of this process is the ability of humans from an early age to engage in speech repetition and so quickly acquire a spoken vocabulary from the pronunciation of words spoken around them.

Stuff to check out:

Theories of origins of language:

Steve Pinker on Language:

Noam Chomsky on Language:

Critical Thinking workshop

Every month we also have a study/discussion group (these are now called learning groups) about an aspect of critical thinking, logic or argumentation. Our first one, in October, was about informal logical fallacies; the previous one was on confirmation bias.  This month’s group was an overview of critical thinking.  Critical thinking is an enormous topic which encompasses numerous skills and philosophies, and I may consider making a whole separate meet up dedicated to it.  But at the very least I’d like to provide an introduction to it.  First off…

What is critical thinking not?

Critical thinking is not just thinking.  It is not even thinking a lot, nor thinking “deeply.”  One can be contemplative without being a critical thinker.  It is not knowing a lot.  Retaining vast amounts of knowledge can be credited to having a good memory, not to being a good thinker.  In fact, a good number of knowledgeable, even intelligent, people suffer from this misunderstanding: the amount and depth of one’s knowledge does not automatically make one a good thinker.  Critical thinking includes skills that go beyond the passive acquisition and retention of information.  It is about judgment, but not about being judgmental.  It is about having criteria, not about being a critic.

So what is critical thinking?

It is hard to pin down a precise definition of critical thinking.  First, to establish one thing: thinking is a means of learning, explaining and, basically, communicating.  Critical thinking, essentially, is a reasoned method for determining the truth or falsity of a claim.  It is also, thereby, a method for reaching valid conclusions.  And it is a process, an ongoing one.

It is evidence-based, disciplined and rigorous.  It is domain-independent, or domain-general, meaning it can be applied to any and all domains of life.

The purpose of it is to organize and clarify reasoning, as well as to recognize errors and biases in reasoning (both one’s own and others’). Listening to and unequivocally accepting another’s belief or opinion as your own leads to “inherited” opinions: believing simply because someone told you so (as with our families when we were children).  There is nothing wrong with such opinions per se, but those opinions only become strong when they are supported by reason.

It is critical – no pun intended – to the foundation of science and of a democratic, autonomous society.  Most, if not all, of us have the right to and the capacity for decision-making.  The less informed our decisions are, the more precarious their results.  While persuasion itself is not a skill of critical thinking, ideally critical thinking skills themselves make arguments more persuasive.

It utilizes skepticism.  Skepticism is not the instant dismissal of any and all claims until they are irrefutably proven; it is a questioning attitude towards claims or positions, particularly when they are stated as having a factual nature.  It is a tentatively held suspension of judgment, or doubt, until sufficient evidence for a claim is presented.  A critical thinker tends to examine the reasoning as well as the possible assumptions and biases behind claims before accepting them.  A critical thinker realizes that the truth value of factual claims is determined not by the emotional impact that accompanies them, nor by people’s preferences, but by the strength of the reasoning and evidence.

It does not ignore emotion, but rather validates it. As pointed out in our bias group, emotion is a useful tool as well, but only in certain situations or for certain reasons.  However, regardless of their utility (or lack thereof), we cannot detach ourselves from emotions completely.  Critical thinking is a system through which we can determine whether or not an emotion is justified; whether it should be given credence or not. For example, fear of drinking poison is highly justified; it should be heeded.  However, fear of talking to strangers at a party, for instance, should not be; it is not justified, and one should in fact behave in a way that overcomes the fear. That is to say, emotions are certainly efficient, but they are not necessarily an effective way of assessing all situations; through the use of reason and critical thinking, we can determine when they are, and this is quite important for decisions which will or may have a massive impact on our future.

It is also useful for knowing what not to do, which we have started to apply by studying logical fallacies and cognitive biases.  It also involves learning how to do certain skills well, and recognizing when they are done poorly. Critical thinking involves the following, roughly in this order:

  1. Defining
  2. Conceptualization
  3. Listening
  4. Analysis
  5. Inquiry
  6. Examination
  7. Inference
  8. Synthesis

We will talk about each of these individually in a future meet up.

Why should we think critically?

There are more reasons to think critically beyond what has been stated already.  Consider this: we often hold our beliefs and ideas sacred, but are they?  Is there some magical quality about them?  What harm does it do to change them, or to realize they were wrong and others are better?  I hold that there is nothing special about them, and that few, if any, of them are essential.  Concretely speaking, if we realized they were wrong we could abandon them or change them and no immediate harm would come to us.  Critical thinkers tend, therefore, not to have “cherished” beliefs.  They are willing to abandon beliefs should better arguments or evidence present themselves.

Conversely, thinking uncritically can lead us to a whole host of problems.  Among them are the possibility that we:

  • Become impulsive and rush to erroneous conclusions
  • Fail to consider the implications of our positions
  • Ignore, miss or distort biases, evidence, information, errors, unjustified assumptions and fallacies in other people’s arguments as well as our own
  • Forget the purpose of the discourse, utilize irrelevant arguments, and/or focus on the trivial
  • Unwittingly hold unrealistic positions
  • Communicate poorly, vaguely or with unwarranted presuppositions or respond to others’ arguments incompetently
  • Confuse and conflate meanings and statements
  • Basically think narrowly, imprecisely, irrationally, simplistically, superficially, egocentrically or in a contradictory manner.  We become passive thinkers, going with whatever pops into our minds and pursuing any thought or desire
  • Become egocentric or sociocentric
  • Most importantly, make poor decisions as a result of all this that affect our and possibly other people’s lives (which is why I think this is so important)

If we are to develop as thinkers, we must learn the art of clarifying thinking: pinning it down, spelling it out, and giving it a specific meaning.  The whole purpose of thinking at all is communication, in particular discourse: conversation that incorporates the principles of critical thinking.  It is the most peaceful and civilized way of changing minds.  One could argue it’s the only way.

How do we become critical thinkers?

First, we must recognize and adhere to the standards of critical thinking.  Depending on which text you consult or which authority you ask, you will hear different ones.  However, I have put together what seem to be the most prominent.  They are, in order of priority:

  1. Relevance
  2. Clarity
  3. Precision
  4. Accuracy
  5. Depth

Relevance deals with whether or not the arguments are actually related to the topic at hand. For example, bringing up a politician’s marital status when discussing their foreign policy is most likely not relevant.

Clarity has to do with whether or not the arguments are understood by those listening to them. Life is concrete, not abstract; we don’t live life abstractly, so offering clear, concrete examples or explanations is important.  For example, a statement like “Hope is good” is less clear than a statement like “People who express optimism are less likely to suffer from depression.”

Precision is about detail and specificity.  It is contextual; it is dependent on the particular issue at hand.  For example, if I am ill, I don’t want my doctor saying “You’re sick,” I want him to tell me what, specifically, I have.  Context determines the level of precision required.  For example, if I’m taking someone’s temperature I want to know it to the tenth of a degree (36.8, for example), but if I’m measuring lead in water, I want to know the concentration down to parts per million or finer; I need a much more precise measurement.

Accuracy is assessing whether or not the information is actually true.  For example, I could say “I am 42 meters, 16 centimeters tall” and that would be clear and precise, but hardly accurate.

Depth is related to how well the argument or information deals with the complexities of a given issue.  Not all issues require a lot of depth to deal with.  When giving an explanation for why you are late, “I didn’t hear my alarm” is sufficiently deep.  However, when trying to understand why someone killed themselves, attributing it to “drugs” may not be.  The relevance of depth is in direct relation to the complexity of the issue.  The less complex, the less relevant depth is.  You cannot adequately deal with complex questions with superficial or shallow answers or reasoning.

Some examples of questions to ask yourself or others to ensure these standards, again in order:

  1. Does it have any bearing on the issue/question at hand? (relevance)
  2. What exactly is meant?  Can examples be provided? (clarity)
  3. How much/many?  How can we measure that? (precision)
  4. How do you/I know that?  How can we test, check or observe that? (accuracy)
  5. Does it completely deal with the intricacies or complexity of the issue/question? (depth)

Richard Paul and Linda Elder, two experts on practical critical thinking, describe six stages of thinkers in their book Critical Thinking: Tools for Taking Charge of Your Learning and Your Life. It’s a rather lengthy description, so I’m going to try and sum it up here:

Stage One: The Unreflective Thinker (we are unaware of significant problems in our thinking)

Stage Two: The Challenged Thinker (we become aware of problems in our thinking)

Stage Three: The Beginning Thinker (we try to improve but without regular practice)

Stage Four: The Practicing Thinker (we recognize the necessity of regular practice)

Stage Five: The Advanced Thinker (we advance in accordance with our practice)

Stage Six: The Master Thinker (skilled & insightful thinking become second nature to us)

To elaborate:

Stage One: Unreflective Thinkers

We start unaware of the role thinking plays in our lives, of how it helps us or causes problems for us.  We assume that what we believe and think is true, and we assume we are, at least compared to others, unbiased, objective and rational.  We assume experience, feelings and common sense are sufficient and base our beliefs and positions on them.  Basically, if it feels good, it must be right/true.

Our intuition may be quite competent at this stage, or we may even be good critical thinkers in particular domains.  But we have no sense of “metacognition”, the ability to think about our own thinking.

Stage Two: The Challenged Thinker

We stop denying we have a problem; we admit that we are weak thinkers and largely ignorant. We acknowledge and accept that, as Einstein once said, one cannot solve a problem at the level on which it was created.  We start to recognize our own fallacies and assumptions.  We begin to properly utilize information and concepts, make inferences, understand implications, define terms and problems and admit to our own fallibility.  We start to understand the tremendous long-term challenge before us.  It’s easy to retreat to the first stage at this point.

Stage Three: The Beginning Thinker

We embark on this challenge.  We are, in a sense, “beginning” critical thinkers.  We begin to take thinking seriously.  We simultaneously see the vast expanse of knowledge that is critical thinking, but also feel invigorated by our new quest.  We begin to be more perceptive of our own flawed reasoning and biases.  We begin, maybe instinctively, to analyze the logic of situations and ideas.  We begin to question ourselves more.  We start to prioritize not only the information itself, but its accuracy and relevance.  We become aware of our own interpretations and vigilant about them. We pay attention to the meanings of words and the implications of our reasoning.  We begin to consider and respect alternatives.  We begin, most importantly, to apply standards to our thinking.  Our values shift.  We value reasoning, intellectual honesty and rigor more.  As tempting as it may be to give up on difficult problems, in order to move past this stage we must change our values, lest we assume that we are “good enough” thinkers, or that we cannot improve as thinkers.

Stage Four: The Practicing Thinker

We begin to develop a systematic plan for how to think. We become intellectually organized and rigorous about changing our thinking.  We commit to this change.

Stage Five: The Advanced Thinker

Our regimen for thinking starts to pay off.  We can routinely identify and find solutions for problems in our thinking.  We are aware of the particular domains that need the most work and strive to improve them.  We find and begin to reduce bias in our thoughts.  We no longer have an ego investment in being right or winning arguments, but rather in learning and improving our thinking.  We begin to enjoy constructive criticism and entering others’ perspectives.  We find the process satisfying and fulfilling.  We continue to look for ways in which we need to improve.  We also, sadly, begin to see how egocentricity and bias can be destructive.  We acknowledge their innateness, but reject their necessity or benefit.  We regularly monitor and assess our own thinking as well as others’.

Stage Six: The Master Thinker

We have developed a systematic strategy for thinking which we are constantly improving.  The basic skills and standards of thinking are deeply internalized and have become intuitive.  Thoughts are now much more rigorous, objective and careful.  We are intellectually humble, honest, persevering, responsible and autonomous.

How do we further improve our critical thinking?

To move past stage one we must first recognize and acknowledge that we are not perfect thinkers.  It’s that simple.  But many people do not even do this.  Our recognition must also be specific.  How is your thinking flawed?  What specific biases do you have?  What specific fallacies are you guilty of committing?  We continue to improve by practicing, as we did during the meet up.  We assess the clarity of the questions and statements presented to us.  We ask questions for more clarification.  We even offer ways for others to clarify their points to us.  We assess the relevance of what is being said or asked.  Is it tangential?  Does it pertain, perhaps in a way I hadn’t recognized earlier?  We develop an intuition for irrationality and illogic in both our own and others’ arguments.  We strive to gather and assess all the relevant data.  We work to recognize and break down our own biases and prejudices.

Can you be more specific?

We tried out a couple exercises during the meet up that you can also try to do on your own or, preferably, with a partner or partners.  Here they are:

1. Analyze social norms

We can analyze the behavior that is expected of us in our social groups.  What is encouraged and discouraged?  What are we expected to believe and agree on, and what are we expected to deny, reject or disagree with?  Do you agree with those norms?  Why do you think we have them?

2. Tour Guide for an Alien

Pretend that you have been assigned the task of conducting a tour for aliens who are visiting earth and observing human life. You’re riding along in a blimp, and you float over a professional baseball stadium. One of your aliens looks down and becomes very confused, so you tell him that there is a game going on.

Try to answer the following questions for him.

1. What is a game?  What is a team?

2. Why are there no female players?

3. Why do people get so passionate watching other people play games?

4. Why can’t the people in the seats just go down on the field and join in?

Half of the group can take on the role of the incredulous and highly skeptical aliens, while the other half takes the role of the tour guides.

3. Consider a complex problem

Let’s consider a complex issue, an issue with some depth.  Let’s analyze the elements of this issue.  First, why is this issue important?  Why should anyone care about it?  What is/are the problem(s)?  What are our underlying assumptions about it?  Are they fair and rational ones?  Do others have any objections to them?  What questions do we have about our own position or the opposition’s?  What are we trying to solve?  What information do we need in order to do this?  How can we get it if we don’t already have it?  Be as specific as possible.  Our purpose is not necessarily to reach a conclusion; we probably can’t do that in one day.  It is, instead, to calmly and deliberately reason about the problem.

Some things you can do at home, by yourself:

1. Ask yourself some fundamental questions.  What are your values and beliefs, particularly ones that you or others haven’t questioned?  What do you take for granted or assume is common sense?  Identify them.  If you can, try to question them: Why do you believe those things?  Do you have reasons?  Can you rationally justify them?

2. A problem a day: Find some time each day, even just 5-10 minutes, and consider a problem: maybe an issue you’re not sure about, or that you’ve found some opposition to, or that you realize you haven’t fully thought out.  What exactly is the problem?  What relevant questions could you ask an opponent or yourself to determine the solution?  How does it relate to your assumptions, beliefs or ideas?  What are the implications of your ideas?  Where could you find some helpful information?  If you have time, look for it.  Share it at the next group meet up.

3. Keep a critical thinking journal.  Write situations or issues that are emotionally significant to you, maybe even hot button issues.  Keep it to one issue per entry.  Describe your mental reactions to this situation.  What do you think about it?  What are your beliefs and opinions in relation to it?  Be specific.  After, analyze them: how objective are they?  What are the implications of your positions?  Are you left with any questions or problems?

Your attitude is also important.  You must be willing to figure out the answers for yourself, rather than demanding that someone provide them for you.  You must be willing to exercise your mental energy in pursuit of a solution rather than indolently relying on feelings.  You must be willing to review assumptions, claims, and information over and over and over again.  You must be willing to change your position, perhaps several times, should better arguments or evidence present themselves.  You must be willing, most importantly, to be wrong and to be criticized (hopefully in an objective way) for it again and again.  You must be open-minded and prudent.  You must desire to be well-informed and knowledgeable.  You must be willing to make unbiased and objective judgments on the credibility of sources.  You must want to ask questions.

To start with we are going to use Robert Ennis’ three underlying strategies he calls “RRA”.

Reflection, Reasons and Alternatives:

1. Reflection: stop and think rather than making snap judgments or going with the first idea that pops into your mind.  Give yourself some time and space to formulate your position, your counter-argument or your questions.

2. Reasons: Consider and question reasons.  Ask “How do you know?” or “What reason(s) do you have to believe that?” or “What are your sources?”

3. Alternatives: Remain alert for possible alternatives.  Offer them by asking “What about…?” or “Is it possible that…?”



January Meet Up: Education in Japan

Been a while since I’ve updated this blog, my apologies.  Just got back from a great vacation, but now it’s time to think critically again!

Every month we center on a particular theme which we discuss twice a month, once in an open group where anyone and everyone can join, and once in a closed group, which allows for a limited number of participants and is for group members only.  This month’s topic was education, particularly at the primary and secondary levels in Japan.  We focused on cram schools, differences between Japanese and Western education and bullying.  We also talked a bit about the history of education in Japan, philosophy of education and people’s personal experiences either attending or working at Japanese schools.

What is education?

Education is the passing of knowledge or skills from one generation to the next through some form of instruction (generally teaching or training). It usually takes place under the guidance of others, but may also be autodidactic, and it can take place in formal or informal settings.

What is the history of education in Japan?

When the Tokugawa period began, few common people in Japan could read or write, but by the period’s end learning had become widespread.  The period started around 1600 and ended around the middle of the 19th century.  During it, the role of many of the samurai changed from warrior to government bureaucrat, and as a consequence their formal education and their literacy increased proportionally. Traditional samurai curricula for elites stressed morality, the martial arts and the Confucian classics. Arithmetic and calligraphy were also studied. Education of commoners was generally practically oriented, providing the basic 3 Rs (reading, writing and arithmetic), calligraphy and use of the abacus. By the 1860s, 40-50% of Japanese boys and 15% of the girls had some schooling outside the home. These rates were comparable to those of major European nations at the time.

The Meiji period facilitated Japan’s transition from feudal society to modern nation, paying close attention to Western science, technology and educational methods. Reformers set Japan on a rapid course of modernization, with a public education system. The Iwakura mission was sent abroad to study the education systems of leading Western countries. Elementary school enrollment climbed from about 40 or 50 percent of the school-age population in the 1870s to more than 90 percent by 1900, despite strong public protest, especially against school fees. After 1870, school textbooks based on Confucianism were replaced by westernized texts. By the 1890s, however, a reaction had set in and a more authoritarian approach was imposed. Traditional Confucian and Shinto precepts were again stressed, especially those concerning the hierarchical nature of human relations, service to the new state, the pursuit of learning, and morality. In the early 20th century, education at the primary level was egalitarian and virtually universal, but at higher levels it was highly selective and elitist.
Occupation policy makers and the United States Education Mission, set up in 1946, made a number of changes aimed at democratizing Japanese education: instituting the six-three-three grade structure (six years of elementary school, three of lower-secondary school, and three of upper-secondary school) and extending compulsory schooling to nine years. They replaced the prewar system of higher-secondary schools with comprehensive upper-secondary schools (high schools). Curricula and textbooks were revised, the nationalistic morals course was abolished and replaced with social studies, locally elected school boards were introduced, and teachers unions established. After the restoration of full national sovereignty in 1952, Japan immediately began to modify some of the changes in education, to reflect Japanese ideas about education and educational administration. The postwar Ministry of Education regained a great deal of power. A course in moral education was reinstituted in modified form, despite substantial initial concern that it would lead to a renewal of heightened nationalism.

What is the Japanese education system like now?

Japanese education has been run as a nationwide standardized system under the full control of the Ministry of Education. The only alternative is private schools, which have more freedom to offer different curricula, including the choice of textbooks (public schools can use only government-approved textbooks) and foreign languages. However, almost all of these private schools require students to take an entrance examination and pay high tuition. Japan has 100% enrollment in the compulsory grades and near-zero illiteracy. High school enrollment is over 96% nationwide, and nearly 100% in the cities, even though high school is not compulsory. The high school drop-out rate is about 2% but has been increasing. Almost half of high school graduates go on to university or junior college. The average school day on weekdays is 6 hours, one of the longest in the world, and vacations are 6 weeks in the summer and about 2 weeks each for winter and spring breaks. There is often homework over these vacations.

According to the material I encountered, in elementary school students spend a lot of time on music, art and physical education. In 1959, students started taking moral lessons again as part of holistic education, which is seen as the main task of the elementary school. However, a number of Japanese attendees said that their personal experiences were different.  For example, they recalled spending more time on academic subjects and no especially large amount of time on physical education, or they didn’t recall taking any classes on morals.

Also, even though the sources I read said that the middle school and high school curriculums still have music, art, physical education, field trips, clubs and home room time, the Japanese who attended the meet up said these things had diminished significantly and were replaced with more academic subjects and learning: Japanese, mathematics, social studies, science, and English. The pace is quick and intense and instruction is structured, fact-heavy and routine-based because teachers have to cover a lot of ground in preparation for high-school entrance examinations. Hierarchical teacher-peer and senior-to-junior relationships as well as highly organized, disciplined and hierarchical work environments such as various established student committees, are observed at middle schools.

There is some evidence that teachers feel it is their duty to develop children “holistically,” focusing on health, nutrition, sleep, manners and so on.  Again, however, a number of the Japanese people who attended stated that not all of their teachers focused on these things. Students are also taught how to speak politely and how to address their teachers and peers appropriately (Japanese has distinct polite registers for this).

Is it a good system?  Some critics say that since creativity and critical thinking are not developed, little learning actually occurs. The education system was designed in an era when most people would finish high school and work in factories, either in management or labor, which was fine when Japan was industrialized and mass production and consumption drove the economy. But these critics say the world is becoming a post-consumerist, global society where creative ideas and solutions are increasingly important. Japan does not seem to be adapting to, or possibly even understanding, this.

As I perused the internet for critical insights into possible problems in the Japanese education system, I came across the following examples:
1. Lack of competition
Japanese education is often rigid and uniform. For materials or methods to be used at a public school, they must be approved by the Ministry of Education.  Anecdotal evidence suggests that while this may be the case in theory, in practice many teachers actually apply diverse methods unbeknownst to the government.  In any case, the diversity of school books and other materials is limited, and there is little room for developing new educational materials and methods.

2. Exam wars

High school, junior high school and, increasingly, even elementary school students are spending more time at cram schools and coming home later. A survey has shown that 27% of elementary school students and 64% of junior high school students feel fatigue in their daily lives. Critics argue that these examination wars prevent children from growing up with sound minds, which makes Japan’s future look gloomy.  Many people worry that students are thereby forfeiting time needed for play, leisure and the natural development of social skills.
3. The risk of the nationally unified education

Since a government agency decides educational content, if the agency makes a mistake, all schools are forced to go along with it.  On top of that, such a system is more susceptible to national indoctrination than a more diversified one would be.
4. Japanese education rejects individual differences

In the United States, students who achieve excellent results in a subject can frequently progress faster or even skip to the next grade; the absence of a national curriculum allows such flexibility. No educational theory or educational psychology argues that every child at each grade develops at the same speed.  By contrast, none of the Japanese students had heard of anything like the AP classes or grade-skipping we have in the U.S.
5. Educational system disturbing freedom of thought and education

The description and interpretation of history in school textbooks has long been debated in Japan. Strictly speaking, there are about 120 million Japanese nationals, and accordingly there could be just as many historical views, since all of them were born at different times in different environments. Today, however, Japanese schools nationwide teach a unified historical view.  Interestingly, one of the Japanese people who attended claimed that she was taught to feel guilty for what Japan had done in history, while another claimed she was not taught to feel this way about it.

So what are these “cram schools”?

A lot of students nowadays go to cram school, called “juku”. These are specialized schools whose ostensible purpose is to help students prepare for entrance exams to get into preferred or more prestigious schools.  According to the Japanese members of our group, these tests are quite strict: students may select only one school and have only one chance to pass its entrance exam; otherwise the next school they attend will be decided for them.  Japan even has cram schools for entering prestigious private kindergartens.  Some Japanese parents are eager to send their children to such kindergartens, which are associated with prestigious universities and in most cases guarantee that the student can go on all the way to university.

There was a time in Japan when people relied on private tutors, but private tutors could not cope with the intensifying entrance examination competition. Many students now go to juku to prepare themselves to pass entrance examinations. These exams weed out applicants with a rote-learning type of written test, and special training is required; the standard education received at school alone is not enough to survive the examination war, and juku makes the difference. It is not unusual to see children going to juku 2-3 hours a day after school, 3-4 days a week. A fiscal 1993 Ministry of Education study found that 24% of elementary school children, 50% of junior high school students and 60% of high school students were attending juku. Behind this entire obsession is the fact that a diploma from a first-rate university is one of the important requirements for quick promotion in a job.

While cram schools are ostensibly for passing these exams, the Japanese people who attended the meeting claimed that they had attended cram school either as an opportunity for making friends, or because they felt pressured to do so because their friends were also going.
It has, apparently, become quite the norm to attend at least some cram school, which is not surprising, considering more than half of high school students do so.

Are cram schools good or bad?

Anne Conduit, author of “Educating Andy: The Experience of a Foreign Family in the Japanese Elementary School System”, seems to think it is the combination of the primary school and the cram school that produces such high achievement in mathematics among the Japanese. Monbusho (the Education Ministry), however, seems to think differently. For many parents and students, it seems, cram schools are simply a necessary part of life. There is disagreement as to whether cram schools are serving an educational need or manufacturing one.  Whichever the case may be, cram school attendance is on the rise, despite the Ministry of Education’s best efforts.

There are ambivalent attitudes towards the commercial nature of cram schools, as well.  Many feel that their profit-driven motives are detrimental to any actual educational aims, and the necessarily high costs drive a wedge between haves and have-nots.  Proponents of cram schools counter that if juku did not produce results that parents and students were happy with, their profits would inevitably drop; the results are easy to measure, since they depend on how many graduates pass the examinations for private schools. The profit motive, in other words, provides an incentive to create an atmosphere in which students want to learn.  Proponents also point to the rise of juku as an example of Japanese success, a reflection of a system of meritocratic advancement.

Critics, however, also point out that this system forces children to surrender their childhood to an adult-like obsession with status and achievement. Others say juku eat into a child’s free time to play and develop social skills; it is not healthy to become completely caught up in competition and status at such a young age.  The exam war and the intense preparation leading up to entrance exams are said to be the chief cause of stress for most middle-class children. Children suffer from study stress, from bullying at school, from the effects of kireru (a sudden, unexpected explosion of hostility or even violence), as well as from social withdrawal; “acting in” rather than acting out.

Ijime (Bullying)

Ijime, or bullying, has been Japan’s most publicly discussed educational problem since a wave of suicides beginning in 1994: 11 cases over an 18-month period. Nationwide, 60,096 incidents of bullying were reported in 1995. Legal affairs bureaus took up nearly 4,000 cases of bullying in 2012. The National Police Agency fully investigated 260 cases of school bullying that year, double the number in 2011 and the highest in 25 years. The report said 511 students were arrested or taken into custody for bullying, more than twice the number the year before. A 2004 survey found that about 55% of both girls and boys replied that whenever they witnessed bullying, they pretended not to notice.

According to investigations by the MOE’s specialist research group on bullying, 12% of students had been bullied and 17% had bullied others. According to this study, any student could be a target of bullying, and bullying occurs among friends and ordinary classmates.  Around half of home room teachers incorrectly believed that there was no bullying in their classes. Bullied children were very unlikely to tell their teachers what was happening: only 40% of the bullied thought their teachers knew, and 30-50% thought their parents were unaware. About 80% of parents were actually unaware that their own children were bullies.

Japanese students are half as likely to intervene in bullying as their counterparts in other countries (only 1 in 5 said they would). Japanese youth are also more ambivalent about the nature of bullying: only 2 out of 3 said you shouldn’t bully. 10.7% of Japanese boys and 3.8% of girls surveyed said they would join in the bullying, compared to 5.2% and 2.6% of their respective counterparts in the US. Moving up the grades, bullying tends to change from exclusion to violence. Bullying escalates at middle school, with more incidents than at elementary school, while the opposite trend is observed in other countries.

Some 80% of bullying among school students in Japan qualifies as “collective” violence, meaning an entire classroom versus a single victim, and 90% of cases are considered ongoing, lasting more than a week.  Some examples are pretty horrific: one student was taunted, then beaten, then forced to shoplift items for the bullies, and eventually forced to eat dead bees, over a period of months. That student sparked a recent national outcry on bullying when he committed suicide at the age of 13. Teachers at the school were aware of the problem, but had responded only with a verbal warning.


  • While Japanese education appears to be superior on the quantitative surface, is it really?  Are the statistics that unusual for education in any developed country?  And are they sufficient grounds for pointing to the Japanese education system as a model of superior education?  What about qualitative measurements?
  • To what extent does Japan’s collectivism influence its educational system, and vice versa?  Is holistic education more common in collectivist countries than in individualist ones?  Most of the members who joined this discussion were Western; are we simply biased into thinking that our more progressive educational systems are “superior”?
  • In relation to that, what exactly should a teacher’s role be?  Students at the primary and secondary level arguably spend most of their time at school; should teachers be responsible for teaching more than just academic knowledge?  Should they be responsible for educating a child holistically?  Isn’t it almost inevitable, given how much time teachers spend with children, that these lessons will be taught, inadvertently or otherwise?
  • Are cram schools really to “blame” for any educational problems, or are the problems more central to educational systems in Japan or even societal issues?
  • While the data seems to indicate that Japanese children’s professed attitudes towards bullying are not as enlightened as those of children in other societies, does this necessarily point to anything in real life?  Is actual bullying any worse than it is in other countries?  Or is it that educational problems in other countries are much more serious (school shootings, lack of attendance, teen pregnancy, drop-out rates, etc.), so they do not have the resources to focus on bullying the way Japan – a country which does not suffer from these more severe issues – does?


December learning groups: Confirmation Bias

Every month we also have a study/discussion group (now called learning groups) about an aspect of critical thinking, logic or argumentation. Our first one, in October, was about informal logical fallacies; the previous one was on evidence.  Initially this one was going to be on bias in general, but as usual, I discovered the scope was much too wide to fit into just one meet up.  In fact, in total it may take about a year to cover. So I decided to narrow it down to confirmation bias.

But first, let’s look at what bias is, exactly.


We all work within a subjective social reality, a way of looking at the social world from our personal perspective.  But it is potentially rife with distortion, as well as inaccurate judgments and interpretations; in other words, irrationality.

Bias is a subjective and often flawed desire or tendency to hold a particular perspective or outlook, often by dismissing or denying other points of view.  It is generally held as a stance towards a particular object: an individual person (such as a boss), an individual non-human object (black cats, for example), or even a group of people (a nationality or even a gender).  There are, as I discovered, several types of biases and numerous examples of each type.  Some affect judgment or decision-making (one of which we will be covering today); these are called cognitive biases.  Others affect our social behavior, others our memory.  Some biases exist externally as well, such as statistical bias and media bias.

Biases are not necessarily always detrimental.  They lead to more efficient – if sometimes less effective – decisions. They also enable us to be more proactive, rather than wallowing in perpetual over-analysis.

So why do people have bias?

Quite frankly, it would seem that we don’t have much choice:

  1. Our rationality is bounded, meaning we have cognitive limitations: a finite amount of time within which to acquire a finite amount of information and make decisions.  Often our decisions need to be quick, and a deliberate, analytical process is unproductive or even unfeasible.  We often settle for satisfactory (“satisficing”) decisions rather than optimal ones.
  2. Most of the time evidence is neither simple nor clear-cut; it is often complex, ambiguous, confusing or even contradictory.  Again, because of bounded rationality, we can’t possibly weigh all the evidence efficiently, much less objectively.
  3. We are biased for our own sanity: we are bombarded with stimuli every day, particularly in modern society, far too much to take in completely, at least at a conscious level. To compensate, we have developed mechanisms called heuristics, which we will talk about in a future meet up.
  4. It pays to be selectively perceptive.  There are intrinsic social costs to being wrong about your beliefs, as opposed to holding objectively correct ones.  Being wrong makes us look, validly or not, foolish, unintelligent, gullible or dishonest.  It discredits us as potential leaders, thus lowering our status, and it can ostracize us from the groups we identify with.  For this reason, we benefit from forgetting or ignoring stimuli that are emotionally discomforting or that contradict our paradigms, and we focus on the information that confirms our current paradigm.
  5. Unfortunately, it is more or less reflexive: confirmation bias is often the result of the Semmelweis reflex, a tendency to instinctively reject any evidence or knowledge that contradicts our own norms, beliefs or paradigms. There is reason to think that an admission of error is socially costly: you lose face, you lose reputation and trustworthiness, and so on.
  6. On the other hand, self-verification and self-enhancement work to our advantage. For this reason we tend to be one-sided, looking for evidence or arguments that bolster our ideas rather than ones that may contradict them, even though the latter might lead us to a more likely conclusion. This leads to presumptuous and loaded questions and inquiry, as well as appeals and other logically fallacious thought processes.
  7. Wishful thinking is a concept based on the “Pollyanna Principle”, which dictates that if a conclusion is pleasant, it is therefore preferable to an unpleasant one, no matter how cogent or sound the unpleasant one may be.  Basically, it’s the idea that if we simply wish something were true, that should be sufficient to make it true, and anything that contradicts it is to be dismissed, ignored or ridiculed.

Another related bias which helps explain confirmation bias is called subjective validation.  People consider a piece of information more valid if it has personal significance for them.  So, for instance, a superstitious person feels validated when seeing a correlation between getting fired and the date being the 13th (and consequently relieves themselves of personal responsibility), whereas another person would not see the same validation.  Related is the Forer effect, the tendency for us to accept positive attributions about ourselves of the kind that psychics, mentalists, cold readers and other scammers use.

  • Example: This was shown using a fictional child custody case. Subjects read that Parent A was moderately suitable to be the guardian in multiple ways, while Parent B had a mix of salient positive and negative qualities: a close relationship with the child, but a job that would take him or her away for long periods. When asked “Which parent should have custody of the child?”, the subjects looked for positive attributes and a majority chose Parent B. However, when the question was “Which parent should be denied custody of the child?”, they looked for negative attributes, and again a majority answered Parent B, implying that Parent A should have custody. (This could be a good reason why it is beneficial to ask positive questions rather than negative ones, as opposed to the act of asking positive questions being “idealistic” or “naive”.)

So what is confirmation bias?

We are going to focus on cognitive biases, specifically judgment and decision-making biases, starting with confirmation bias and later moving on to illusory biases, which make us look at the world in an overly positive or negative way, as well as attentional biases, probability biases and what I call “comfort” biases, those that motivate us to retain the status quo.

Confirmation bias is the tendency of people to favor information that confirms their beliefs or hypotheses.  It may be a bias in collecting or searching for information (we tend to read books or magazines or check out websites that go along with our own views), in remembering certain information (we tend to remember information that supports our views better than information that contradicts it), or even in paying attention to or accepting certain information (when people talk to us, we tend to pick out the information that confirms positions we hold).  It can lead a person to interpret ambiguous information as favorable to their position.  For instance, when disaster strikes, an atheist might say “See, a loving God wouldn’t do that” whereas a theist might say “See, that’s God showing his disapproval.”  We tend to interpret information in a biased way when we have a strong opinion one way or another: our standards are more lenient for evidence that supports our beliefs, and stricter for opposing evidence. With regard to opposing evidence, this is also called “disconfirmation” bias.

  • Example: A team at Stanford University ran an experiment with subjects who felt strongly about capital punishment, half in favor and half against. Each subject read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the subjects were asked whether their opinions had changed. They then read a much more detailed account of each study’s procedure and had to rate how well-conducted and convincing that research was. In fact, the studies were fictional: half the subjects were told that one kind of study supported the deterrent effect and the other undermined it, while for the other subjects the conclusions were swapped. The subjects, whether proponents or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Subjects described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways. Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, “The research didn’t cover a long enough period of time”, while an opponent’s comment on the same study said, “No strong evidence to contradict the researchers has been presented”. In a related study, subjects made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity: as subjects evaluated contradictory statements by their favored political candidate, the emotional centers of their brains were aroused. This did not happen with statements by other figures.

Yeah… so what?

Well, for one thing, confirmation bias can lead to attitude polarization, the phenomenon by which people’s attitudes drift further and further apart from those who disagree with them; to belief perseverance, the persistence of a belief even after it has been demonstrated to be false; and to illusory correlation, falsely seeing correlations where there actually are none.  Conspiracy theorists are a good example of attitude polarization and belief perseverance at work: even when they find contradictory evidence, they chalk it up to being part of the conspiracy, and feel that much more assured that their beliefs are true.  They are also a good example of illusory correlation, seeing signs that the Illuminati are at work in the supposed “symbols” and gestures that celebrities exhibit.

  • Example of attitude polarization: A study was done at Stanford in which subjects with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the subjects reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes. Another study measured subjects’ attitudes towards controversial issues before and after reading arguments on each side of the debate. Two groups of subjects showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, subjects chose which information sources to read from a list prepared by the experimenters. For example, they could read the National Rifle Association’s and the Brady Anti-Handgun Coalition’s arguments on gun control. Subjects were more likely to read arguments that supported their existing attitudes.
  • The belief perseverance effect has been shown by a series of experiments using what is called the “debriefing paradigm”: subjects read fake evidence for a hypothesis, their attitude change is measured, and then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level. A typical finding is that at least some of the initial belief remains even after a full debrief. In one experiment, subjects had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, subjects were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.

Confirmation bias can also lead us to see relationships where none exist. This is called illusory correlation or illusory association.

  • Example: A study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero. People rely heavily on the number of positive-positive cases when judging correlation: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).
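This over-weighting of positive-positive cases is easy to demonstrate with a quick simulation (a minimal sketch in Python; the 40% rates and 450-day window here are arbitrary assumptions for illustration, not figures from the study):

```python
import math
import random

random.seed(42)

# Simulate 450 days on which joint pain and bad weather each occur
# INDEPENDENTLY, on roughly 40% of days: the true correlation is zero.
days = [(random.random() < 0.4, random.random() < 0.4) for _ in range(450)]

# Build the 2x2 contingency table of (pain, bad weather) outcomes.
both      = sum(pain and rain for pain, rain in days)
pain_only = sum(pain and not rain for pain, rain in days)
rain_only = sum(rain and not pain for pain, rain in days)
neither   = sum(not pain and not rain for pain, rain in days)

# Phi coefficient: the actual correlation between the two binary events.
n = both + pain_only + rain_only + neither
phi = (both * neither - pain_only * rain_only) / math.sqrt(
    (both + pain_only) * (rain_only + neither)
    * (both + rain_only) * (pain_only + neither))

print("pain-and-bad-weather days:", both, "of", n)
print("actual correlation (phi): %.3f" % phi)
```

Even with zero underlying correlation, roughly 0.4 × 0.4 × 450 ≈ 70 days feature both pain and bad weather, and those salient coincidences are what the patients remember, while the pain-without-bad-weather and bad-weather-without-pain days that would disconfirm the link go largely unnoticed.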

Confirmation bias can also lead to the “backfire effect”: when people’s confirmation bias is quite strong, they tend to be more repelled by contradictory evidence than a more objective person would be.  In other words, contradictory evidence makes them more adamant about their current beliefs!  This effect explains the behavior of the conspiracy theorist, as well.

In more practical terms, confirmation bias can lead to bad financial decisions: it can result in overconfidence in one’s decisions, which leads one to ignore evidence that those decisions may be bad ones.  It can inhibit medical advances when professionals are certain that their current knowledge is optimal.  It is also a factor in depression, leading depressed individuals to seek out and confirm evidence that fits and reinforces their negative paradigms.  It can lead to a tendency to believe in the paranormal, such as psychic readings: if one wants to believe in such phenomena, he or she will make a cognitive effort to find connections between himself or herself and the reading.  And it can exacerbate or extend conflict, whether between individuals or even nations, when both sides are certain of the veracity of their side of the argument.

The underlying idea is that, optimally, we want to make rational rather than irrational decisions, or at least inject as much rationality as we can into our judgments.  While it may be impossible to eliminate emotions from decisions, especially those which affect us personally, ideally we want our decisions to be based on a sound, deliberate process, particularly decisions that will impact us and those around us most strongly and for the longest term.

Emotions tend to lead to detrimental decisions when considered in the long run.  The more intense and immediate the anticipated results, and the emotions themselves, the more impact we feel they have.  We tend to focus on short-term effects and anticipated negative emotions in order to alleviate our present emotional state or a currently perceived negative impact, rather than look at the objective long-term results of our decisions. In these cases, our desires or fears override our reasoning, and our beliefs or actions suffer. Fear and sadness tend to lead to irrational pessimism; anger, to overly quick and necessarily unanalytical decisions.  Stress generated from emotional upset can add to cognitive “load”, which makes it difficult to remain rigorous when making decisions.  Also, fear of potential regret or disappointment in the future can negatively affect a decision in the present. Emotions can affect not only trivial decisions, but major financial or even medical ones.  Neuroscience experiments have shown how emotion and cognition, which reside in different areas of the human brain, interfere with each other in the decision-making process, often resulting in a primacy of emotion over reasoning.

Biases are not only detrimental on such a small scale, either.  Consider that many social institutions, like courts, rely on individuals to make rational judgments, and that people in leadership positions are susceptible to bias as well.


1. An investor who imagines losing a small amount of money even after a big gain will generally focus with disappointment on the lost investment, rather than with pleasure on the overall amount still owned.

2. A dieter who anticipates losing two pounds may imagine feeling pleasure even though those two pounds are a very small percentage of what needs to be lost overall.

3. Game participants who could win $1000 and end up with nothing base their disappointment on the loss of the hoped-for prize, rather than on the fact that they have no less money than they had when they began the game.

4. A fear of flying experienced while deciding how to travel may lead a person to choose driving, even though air safety statistics show air travel to be less likely to present a danger. A fear of flying may also be enhanced by the vividness of the mental image of a plane crash in the mind of the decision-maker.

Confirmation bias, among other biases which we will discuss in the future, can affect our memory.  This is called “selective recall”, “confirmatory memory” or “access-biased memory”.  Information that matches expectations or beliefs is more easily stored and recalled.

  • Example: In one study, a group of subjects was shown evidence that extroverted people are more successful than introverts, while another group was told the opposite. In a subsequent, apparently unrelated, study, they were asked to recall events from their lives in which they had been either introverted or extroverted. Each group provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.

So, as we can see, bias can affect us pretty severely.

Ok, so what can we do about it?

Some would say very little to nothing.  However, some studies have shown that awareness of a bias tends to decrease its likelihood.  Simply put, the more conscious we become of our unconscious, the less influence it has.  By refocusing our attention on our behaviors, rather than trying to decipher their inner workings, we can become more objective, and thus more accurate, about our own biases.

In the future we will discuss cognitive bias mitigation, or ways to handle bias when it arises, and cognitive bias modification, ways that we can change our own biases to prevent them from interfering when we need to think clearly and rationally.

 Recommended Books

A Mind of Its Own: How Your Brain Distorts and Deceives – Cordelia Fine

Cognitive Illusions: A Handbook on Fallacies and Biases in Thinking, Judgment and Memory – Rüdiger Pohl

Don’t Believe Everything You Think – Thomas E. Kida

Mistakes Were Made (But Not By Me) – Carol Tavris & Elliot Aronson

Social Cognition: Making Sense of People – Ziva Kunda

Strangers to Ourselves – Timothy Wilson

Thinking, Fast and Slow – Daniel Kahneman

Thinking and Deciding – Jonathan Baron

When Prophecy Fails – Leon Festinger


December Meet-Up: Meta-ethics (semantical theories)

This month’s topic: Meta-ethics

First question first: what is morality?

Morals and ethics are very similar, even the same under some definitions.  In a general sense, though, “morals” refers to our internally created differentiation between good and bad, right and wrong, as well as our personal intentions, decisions and actions with regard to those determinations, while “ethics” is the application of morals in a specific context.  For the purposes of discussing metaethics, however, it is fine to conflate the two, as metaethics examines the presumptions which underlie normative morals and ethics.

There are several branches of ethics.  One of them, meta-ethics, is about knowing, understanding and defining ethics. Normative and applied ethics focus on what is moral (and how we can act morally), while metaethics focuses on morality itself.  So where normative or applied ethics might ask “What should we do?” or “How should we handle this situation?”, metaethics asks “What is goodness?”, “What does ‘moral’ mean?” or “How do we distinguish right from wrong?”  It seeks to understand the nature of morality.

We’re going to talk about metaethics today, specifically what are called semantic theories.  That might sound quite lofty and nebulous, but it’s actually a simple aspect of ethics and morality with complex implications.  It’s about what we mean when we say “good” and “bad”.

I assume that it goes without saying why a discussion of morals and ethics is so important, but the same may not necessarily be said about metaethics, so I’d like to elaborate on why I chose this topic: our expanding knowledge regarding science, particularly evolution and genetics, has led us to a wide variety of conclusions about morals and ethics, some less substantiated than others.  For example, the conclusion that certain races are more deserving of life than others (so-called social Darwinism, taken to a more extreme form).  I believe that the ability to define morality, especially in practical deliberation, is an increasingly important one in terms of evaluating morality and making moral decisions.

I realized there are a lot of questions regarding ethics and morals, and each of them entails some pretty heavy philosophical consideration, so today I'd like to focus on one central question: What is the meaning of good/bad, right/wrong?  What exactly do people mean when they say something is moral or immoral?  Is there some factual, truth-related basis to it, or is it just a matter of preference or social prescription?  Hopefully in the future we can address the other very important questions as well.

To discuss this further, I'd like to talk about two broad schools of thought called cognitivism and non-cognitivism.

Cognitivism holds that right and wrong are factual matters, just like saying "Water is H2O".  Moral statements are truth-apt, meaning they can be either true or false.  Because of this, cognitivism can logically accommodate the connection between moral and non-moral thought and talk, but it has a harder time explaining the nature of morality itself.  Cognitivists hold that the burden of proof lies on the non-cognitivists to explain why we speak of morality in terms of truth-apt statements.

Non-cognitivism has the opposite problem.  It is, of course, the opposing view: moral statements cannot be true or false, but are rather expressions of feeling or preference.  It implies that moral "knowledge" is impossible.  Hume observed that moral disputes often carry heavy emotion, and that we cannot appeal to verification or falsification as we can with scientific statements.  For example, we can point to water and ascribe qualities to it like "wet", or measure it, but we cannot do that when someone says "slavery is wrong!"  Non-cognitivists are interested in the attitudes being expressed and in what people are actually doing when making such claims (for example, reacting to the world or expressing a desire for an ideal world).  They posit that language, and thus a moral statement, is simply a tool for influencing others.  They also hold that the burden of proof lies on the cognitivists to demonstrate that moral statements can be true or that moral properties exist.

Let’s look at some cognitivist ideas now.

Moral realism is the stance that moral statements describe mind-independent (not dependent on any mind to exist), objective facts about the world.  One form of this is Ethical Naturalism, which suggests that we can gain moral knowledge by inquiring into the natural world, just as we do with scientific knowledge.  Ethical Naturalists assume that there is a natural justification for morality.  They look to scientific models of reductionism (breaking things down into parts, like from an object to its atoms) to understand moral reality.  In other words, it holds that there is a "science of morality": moral concepts are natural properties of the natural world, much like "hardness" or "dampness", and we can base ethics on rational and empirical consideration of that world.  It supposes that there are objective answers to moral questions.  For example, why is it right to protect family members and loved ones?  A naturalist would point to our biological make-up creating the imperative.  It does not necessarily suggest that the answers we find will be absolutely certain, just as we do not suppose scientific facts are absolutely certain, either.

Ethical Non-Naturalism, also a type of cognitivism, holds that moral statements are factual, mind-independent and objective, but while Ethical Naturalism holds that moral properties are properties of nature, Ethical Non-Naturalism holds that they are simply undefinable, non-natural (not to be confused with supernatural) properties, somewhat like the laws of logic.  It allows the justification of moral beliefs to be grounded in brute facts, or facts that are true a priori, such as "killing someone innocent is wrong".  It holds that moral properties are sui generis (of their own kind).  What this means is that, unlike a Naturalist, an Ethical Non-Naturalist cannot reduce, for example, "goodness" to a need or a want or a pleasure the way natural properties like "hardness" or "dampness" can be reduced, but can only hold that goodness is goodness; it is undefinable beyond that.

Ethical Subjectivism is the stance that moral statements are made facts by individual people or societies.  In other words, moral statements are true about the attitudes of people or societies, but not about nature or the world itself.  So if the members of a community hold that "lying is completely and always wrong", then that particular statement becomes factual for that community: we could hold "Lying is completely and always wrong in that community" as a factual statement.

Ideal Observer Theory holds that we can determine what is objectively right by imagining a hypothetical “ideal observer” and assuming that what this observer evaluates as right is, in fact, right, regardless of how we feel about it.  So when we consider a situation, we can determine its objective moral value by imagining a third, disinterested party observing and evaluating it.

Divine Command Theory holds that what is factually right is determined by a unique being, such as God.

Ok, now on to the non-cognitivist theories:

Prescriptivism holds that moral statements are merely prescriptions or authoritative recommendations.  So a statement like "killing is bad" may seem truth-apt on the surface, but it is actually equivalent to "you shouldn't kill", which is a statement not of fact but of opinion.  Therefore, morality isn't about "knowing" what's right or wrong, but about judging the character of people and actions and then prescribing an action.  It is feeling (for example, disgust towards murder) that keeps us from behaving badly, not any "facts" about the world or any property of "wrongness".  Universal Prescriptivism holds that moral statements are universalized imperatives or commands.  So while cognitivists treat "Murder is wrong" as a statement of fact, Universal Prescriptivists treat it as a command: it means "do not murder", and they expect it to be obeyed universally.

Kant’s Categorical Imperative holds that one reasoned imperative entails all subsequent obligations.  This imperative is absolute and unconditional; it is an end in itself: "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law."  Kant believed that through this type of reasoning we could derive all of morality.  So whether or not killing is "objectively" bad is irrelevant; we need only ask "Does it contradict the imperative?"  If it does, it's immoral; if not, it's moral.  Empirical experience of the natural world is unnecessary.

Emotivism holds that ethical sentences serve merely to express emotions.  Emotivists hold that moral statements cannot be empirically verified, and so are meaningless in any factual sense.  The only empirically verifiable aspect is the action, so if I say “stealing that money was bad” this breaks down to “You stole that money.  It was bad.”  The only fact is “you stole that money” and “it was bad” is meaningless.  Instead, it is simply a declaration of my disapproval, like saying “Boo to stealing money!” which is not factual at all.

Quasi-realism holds that moral statements "act" like factual statements but are not really factual.  So for all intents and purposes we treat them like facts and properties, but they actually just express emotional attitudes.

Problems with cognitivism:

1. What does it mean to say a moral statement is "true"?  Does it mean it actually describes the external state of the world, independent of the proposition?  In other words, when we say "Murder is wrong", does it correspond to some fact in the world (a la the correspondence theory of truth)?  Can we "point to" wrongness?  Is it a spatio-temporal object, like a cat or a tree?

2. Response to the Frege-Geach Problem (the Embedding Problem): Blackburn's quasi-realism holds that we project our attitudes onto the world as if they were true (and thus susceptible to verification and falsification), thus accounting for their apparent truth-apt, logical structure.  Blackburn recognizes that any form of expressivism is susceptible to the equivocation problem if we accept the conditional as genuinely logical, but he asks, "What commitment do we put into conditionals?"  While the quasi-realist accepts that moral discourse has surface logical structure, this does not mean it has actual, classically logical structure.  Once one sincerely holds that, for example, "Lying is wrong," this is just a different way of saying "Lying has the property of being wrong", "I believe that lying is wrong", "It is true that lying is wrong" or "It is a fact that lying is wrong."  The solution Blackburn proposes is the introduction of an expressive language with a corresponding logic of attitudes (Hooray and Boo).  This language is meant to express approval or disapproval of an action rather than a logical relationship.  For example, "Lying is wrong" translates as "Boo to lying!"  It can be argued that this serves no practical purpose, though; while it reconciles moral discourse with non-cognitivism, people will still continue to conduct discourse in the usual manner.

3. (This is a problem with naturalism.)  G.E. Moore agreed that morals are properties, but held that it is wrong to assign them natural or even supernatural status.  His open question argument runs roughly as follows: for any natural property (pleasure, say), it always remains a meaningful, open question to ask "But is pleasure good?"  The goodness of X cannot be deduced from the conceptual terms alone, so it cannot be answered as though goodness simply were some natural or supernatural property.  It is still controversial whether good is the same thing as pleasure, for example.  If this is right, then moral facts cannot be reduced to natural properties, and ethical naturalism is false.  Moore used this in defense of ethical non-naturalism.  One contention with this is that if goodness is a property discovered a posteriori, then the question is not meaningless, as it requires inquiry and discovery to answer, much like the statement "Water is H2O" (and hence it can remain open yet still have a factual answer).  This is done by invoking rightness and wrongness to explain certain empirical phenomena, and then discovering a posteriori whether, say, maximizing utility occupies the relevant explanatory role, the assumption being that we can find an empirical explanation of what we mean by "rightness" or "goodness".  Only if goodness is considered an a priori property can we say the question is closed.  However, this sort of a posteriori moral search is unsatisfactory in that normal value, not moral value, can be used to explain the relevant events.  Normal value arises from the relationship between desire and a state of affairs.  People also tend to objectify such value into categorical moral value, though this is fallacious.  So a situation that can be explained by the existence of real moral value (e.g. the fulfillment of preferences, the tendency towards social stability) can also be explained by non-moral value.
This explanation is far simpler, given the ontological difficulties surrounding moral value.  Another general problem is that Moore assumes the open question is a meaningful one; some say this begs the question (how does he know?).  One attempt to answer this utilized psychology: if X is good, then X should act as an intrinsic motivator, but a person can understand that an action will produce X and yet still not perform it, so the question remains open.  The argument also assumes that morals are properties at all, or that moral statements express beliefs (as opposed to feelings, imperatives or prescriptions), an assumption that all non-cognitivists reject.  They would argue that the question is open simply because we aren't attributing any properties at all.

Problems with non-cognitivism:

1. If moral statements are simply statements of emotion, attitude or prescription, then why is it so plausible to express them as beliefs or attribute them as properties to actions or people?

2. If moral statements are imperatives or prescriptions, then why can we apply rules of logic to them and come to understand them better, even though imperatives, feelings and prescriptions don't ordinarily obey such rules?

3. If morals are just quasi-realistic expressions of attitudes or emotions, then why can I feel positively about something yet know it is immoral, or vice versa?  How can I like or prefer something and yet simultaneously hold that it is immoral?

The Frege-Geach Problem (Embedding Problem): Indeed, we tend to express morality cognitively when we say "Lying is bad", for instance.  If cognitivism is not true, then how can we make logical inferences based on such statements?  According to Geach, the sentence "Telling lies is wrong" has the same meaning whether it occurs on its own or as the antecedent of "If telling lies is wrong, then getting your little brother to tell lies is also wrong".  This must be so, since we may derive "Getting your little brother to tell lies is wrong" from them both by modus ponens without any fallacy of equivocation: 1. It is wrong to lie.  2. If it is wrong to lie, then it is wrong to get someone else to lie.  Therefore 3. It is wrong to get someone else to lie.  Sure, taken separately these could be read as emotional attitudes or imperatives, but how do you explain the ability to infer one from the other, regardless of how we feel about the propositions?  In the conditional "if lying is bad…" we are not expressing a feeling or an imperative, but a conditional truth statement, so to say that "lying is bad" expresses an attitude or imperative in one occurrence but a truth statement in another is a fallacy of equivocation.  So how can we arrive at a conclusion like "therefore getting other people to lie is also bad" unless these are logical statements?  We either have to accept that, or reject modus ponens.  The problem with Blackburn's response is his appeal to logical consistency among attitudes.  Hale argues that such an appeal requires explaining that consistency, and it is unclear how the quasi-realist could do so without appealing to logical notions such as set membership or non-contradiction, which involve statements like "x belongs to set S" and "x is equal to y", and these are descriptive (and hence truth-apt and open to evaluation).
The quasi-realist cannot allow this.  This is a problem, as whether or not some attitude is a member of some set is a matter of fact.  The onus is on the quasi-realist to provide a more detailed account of inconsistent attitudes which does not involve truth-aptness, which would seem to require appealing to logic.  The Frege-Geach Problem is a question about the role of normative/expressive sentences and is a challenge to non-cognitivism.  A positive solution would make room for the rationality of non-cognitive discourse in ethics; a negative one would show that the only option for rationalism in ethics is cognitivism, or, in the worst case, lead to irrationality and ethical nihilism.
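As an aside, the validity of the modus ponens form that the Frege-Geach argument leans on can be checked mechanically. Here is a minimal sketch (plain Python; the names are mine, not from any source): it enumerates every truth assignment and confirms that whenever both premises hold, the conclusion holds too.

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

# Modus ponens: from P and (P -> Q), infer Q.
# The form is valid iff Q holds in every row of the truth table
# where both premises are true.
valid = all(
    q
    for p, q in product([True, False], repeat=2)
    if p and implies(p, q)
)
print(valid)  # True
```

Of course, Geach's point is precisely that this table only makes sense if "Lying is wrong" is the kind of sentence that can take the truth values `p` ranges over.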

Some questions and issues that were raised:

1. Can’t we state a preference or desire or feeling that is in contradiction to our moral stance?  For example, I may not like telling the truth but still think it is the right thing to do, no?  If so, doesn’t it follow that moral statements are not necessarily equivalent to emotive statements or statements of preference?

2. Couldn’t morality have non-natural properties (i.e. transcendent properties or logical properties or conceptual properties) that do not necessarily exist empirically but exist rationally?

3. Given that morality ultimately needs to be applied to individual situations, each with its own unique set of circumstances, how could we, even theoretically, develop an objective morality to govern, or even just guide, us in determining what is right and wrong in each and every case?  Isn't each case necessarily tied to its unique set of circumstances, and thus relative?

4. Bear in mind that morality is NOT about evaluating people as people, but rather people’s actions and behavior.  It is unnecessary and even meaningless to call a person bad or immoral, but it may serve a purpose to call an action bad or immoral.

5. What role does a person’s intention or motivation play in determining morality?

6. Given that values are subjective and individual, and that we derive a sense of morality or a moral code from them, doesn't it follow that morality is subjective as well?  What exactly is the connection between values and morality?  What would a cognitivist say about this?

What do you think?  Any ideas or questions or comments?  Please leave them below!

November Study Group: Evidence

Every month we also have a study/discussion group about an aspect of critical thinking, logic or argumentation. Our second one in November was about evidence.

“Hitchens’s razor: What can be asserted without evidence can be dismissed without evidence.”


What is evidence?

He who makes an assertion has the burden of proof, or the obligation to produce evidence that leads to a conclusion.  Without evidence, any assertion can, maybe even should, be dismissed.  So what is evidence?  In layman's terms, evidence is tangible, visible objects which lead us to a definitive conclusion about the veracity of something, like a crime.  In reality, evidence covers much more than this.  Even within the bounds of the law, there are other types of evidence: consider witness testimony or expert evaluation.  Outside the boundaries of law, philosophy considers such things as experiences, observations, states or propositions as evidence.  Evidence verifies or refutes our beliefs.  When our beliefs are justified, we can then use them as evidence in some cases.

So when we ask "what's your evidence?" what we're really asking is "how do you know?" or "what reason do you have to believe?"  When we assert something, we have the "evidential burden": it's up to us to provide evidence to back up our assertions.  Evidence is essentially a means of distinguishing between knowledge and belief, which falls under a branch of philosophy called "epistemology."  In an argument, the conclusion should follow from the evidence presented.  For the sake of simplicity, we're not going to get too deep into epistemology, philosophy of science or argumentation theory today.  Instead, we're going to focus on what evidence is (and isn't).


Knowledge vs. Belief


First, a little on the distinction between knowledge and belief.  While this topic is by no means completely settled, knowledge can basically be described as a subset of belief.  We may believe many things that are not true, but we cannot claim to "know" something that is false.  For example, if a person believes that a bridge is safe enough to support him, and attempts to cross it, but the bridge then collapses under his weight, it could be said that he believed the bridge was safe but that his belief was mistaken; he didn't know it was safe.  In fact, we now know that the bridge was not safe.  By contrast, if the bridge actually supported his weight, then he might say that he believed the bridge was safe, and now, after proving it to himself (by crossing it), he knows it is safe.  Knowledge, specifically, is "justified true belief", an account often traced back to Plato, which means that knowledge is belief backed by sufficient, justifying evidence of its truth.  That is: P is true, S believes P, and S is justified in believing P; only then does S know P.  So you could say the difference between belief and knowledge IS evidence, a stance called "evidentialism."  When we speak of justification in this context, we are not necessarily speaking of moral or political justification, but rather intellectual justification, an important distinction.  For instance, while scientists may be intellectually justified in performing experiments on animals, whether or not they are morally justified is a separate question.
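The three-part definition above can be written out as a toy sketch (plain Python; the class and field names are my own illustration, not standard terminology):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    is_true: bool    # P is actually the case
    believed: bool   # S believes P
    justified: bool  # S has adequate evidence for P

def knows(c: Claim) -> bool:
    # Justified true belief: all three conditions must hold.
    return c.is_true and c.believed and c.justified

# The bridge example from the text:
collapsed = Claim(is_true=False, believed=True, justified=False)  # mistaken belief
crossed = Claim(is_true=True, believed=True, justified=True)      # justified true belief
print(knows(collapsed))  # False
print(knows(crossed))    # True
```

The sketch makes the asymmetry visible: dropping any one condition turns knowledge back into mere belief (or into an accidentally true guess).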

According to evidentialism, any two individuals who possessed the same evidence would be equally justified in their beliefs.  As we will see later, this is not necessarily the case.

Without evidence, one can only believe something, not know it.  This, in fact, is often what happens, and it is why I think this topic is so important: people often form beliefs first and then seek out evidence to support them later.  This often results in cognitive dissonance when contradictory evidence is found; in confirmation bias when they selectively search only for evidence which confirms their beliefs; or in ignorance, intellectual laziness, fallacious beliefs or even dishonesty when they cannot find evidence, or simply don't bother to look.  While a person may feel very certain about a belief, that does not mean they actually know it.  With strong and sufficient evidence, we can call a belief true and thereby justified, and then it can be called knowledge.  Consider two similar situations in which I believe someone has broken into my home.  In the first scenario, nothing was stolen, nothing was moved, and my doors and windows are all locked and unbroken.  I have no evidence.  I may vehemently believe my claim, I may feel certain, but do I have cause to?  Not really.  So I can't accurately say I know someone broke into my home.  On the other hand, if my door was broken open, my room is a mess, things are missing and the police find strangers' fingerprints, then with this much evidence I have good cause to believe my claim.  I could claim that I "know" someone did, particularly if someone was arrested and later convicted of doing so.  In science in particular, evidence becomes important in discriminating between scientific theories and speculation or conjecture.  On the basis of evidence, among other things, scientific theories are verified or refuted.

How do we determine what beliefs we are justified in having, or what evidence to adhere to?  There are several theories which I will present very briefly here: Coherentism states that one’s belief should correspond to and not contradict other beliefs they hold.  Foundationalism states that “basic” beliefs, or self-evident beliefs, justify other beliefs.  Externalism states that we should base our beliefs on what is evident externally.  Internalism states that we should base beliefs on our internal knowledge.    While many seem to be opposed to each other, some philosophers have found compatibility between the varying theories.


Now a couple more terms, real quick:


Axioms are statements that all parties involved agree upon.  Loosely, they may simply be statements that two people who are discussing something agree on even if others wouldn’t (like “People want to be happy”).  If they are not agreed upon, then they are no longer axioms and are subject to scrutiny.  More broadly, axioms are statements that people in general agree on, like the existence of reality, the existence of natural laws such as gravity and the constancy of natural laws.  They are the starting points for arguments.  In the broad sense, evidence is not required for axioms; they are part of our shared reality.  On the contrary, if one wishes to reject an axiom, it would require that person to provide evidence for their rejection, as it opposes everything we understand about the universe.

Justification: it is easy to conflate this with an explanation, but they are not the same.  Justification is the reason to hold a belief, or to consider a belief knowledge.  It tells us (possibly erroneously) why a belief is true or how one knows a belief is true.  An explanation, on the other hand, simply describes facts to clarify context, cause or consequence.  Science, for instance, is interested in explaining natural phenomena, like where the universe came from, what forces are at work when you drop an object from a height, or what is going to happen to our universe.  Explanations offer new understanding or facts.  One could use explanations to justify something, but they are not the same thing.  An example: Joe believes Tim's dog has fleas.  When asked why he thinks so, Joe says Tim's dog is scratching himself a lot.  That's justification.  If, based on that belief, Joe and Tim examine Tim's dog and find out that he does, in fact, have fleas, and Tim asks Joe how dogs get fleas, Joe might suggest that the cause is the damp weather.  That's an explanation.  Similarly, a criminal profiler may explain a criminal's background (for example, by saying that he was recently evicted or grew up in a violent home), but this is not justification.  The profiler may even explain what happened (the criminal broke into someone's home and stole something, for instance), but this is still not justification.  Justification would be saying WHY a criminal did what he did by, for example, connecting the background to the behavior.

Skepticism: This is another term that gets thrown around and misused a lot.  Typically, when people think of skepticism, they think of stubbornly refusing to believe something, regardless of the evidence.  Skepticism, if anything, is the antithesis of that.  Skepticism is simply an approach to knowledge and belief.  It requires beliefs to be supported by evidence and, if not, that they should be dismissed.  As long as the evidence is available and in accordance with the belief, a skeptic would accept it.  For instance, if I told a skeptic I had a disease but presented no physical symptoms and claimed to have no hospital records, he would reject the claim.  However, if I had skin lesions and documents signed by doctors with diagnosis as well as medicine that I could show him and he could verify is for the particular disease, then he would easily accept it as true.

Self-evidence: if something is self-evident, we do not require outside evidence; we only have to understand the thing to "know" it.  Self-evident statements may be said to be "tautologous", or true by virtue of form alone, like "What we don't know is unknown" or "All men are male."  2+2=4 is a common example.  It is not true "because of" some evidence; it is true simply because the subject and predicate express the same thing.  Therefore, we do not require any external experience or observation or evidence of any kind to verify its truth, only to recognize and understand the concepts.  However, no matter how obvious it may seem, not all knowledge is self-evident.  For example, while we may take it for granted nowadays that the earth is round, this is not "self-evident."  It required gathering evidence before it was believed, but there is no evidence against it, so it is accepted as knowledge.  Earth's roundness is not self-evident, but it is a fact: it is actually the case.  In science, this means it is objective and verifiable.


Ok, back to evidence:


The general rule is that the more evidence I have that corresponds to my belief, and the stronger it is, the more likely my belief is true.  The less evidence I have, or the more evidence I have that contradicts my belief, the less likely.  Consilience, independent sources converging to verify something, is also important.  It strengthens a claim, and lack of it weakens one.  Consilience is an essential part of science as well as history.  In science, the more methods that converge on the same conclusion, the stronger the evidence.  The convergence of geological, biological, psychological and physiological (as well as other) evidence is strong evidence for evolution, whereas the bible by itself is weaker.  In history, the more varied the sources that confirm the same conclusion, the stronger it is.


Propositional Evidence


Claims and assertions are similar; they are statements that something is true or false.  We often use these to justify a belief; whichever belief has the best justification is most likely true.  Evidence leads us to make correct, or at least more likely, conclusions, which lead to correct or more likely true beliefs.  Deductive arguments are necessarily based upon this line of reasoning: if the premises are true, then it follows that the conclusion is also necessarily true.  It is for this reason that evidence is responsible for justified beliefs.  If the evidence demonstrates a belief to be true, then we are justified in calling it knowledge.  As long as our line of thinking relies on a method which has produced more true beliefs than not, we can call it reliable.  We can then often use this same method to come to conclusions about other beliefs as well.


So how much evidence is enough?


So what kind of evidence do we need for what kind of arguments?  The type and scope of evidence necessary depends on the type of claim being made.  The broader and more extraordinary the claim, the more evidence will be necessary.  For example, the claim “I have a pet dog” (a very ordinary and limited claim) should require little evidence.  After all, it’s pretty common to own dogs and it’s only about me.  Whether or not it is true has little to no impact on your or anyone else’s life.  However, if the claim were “I have a pet dragon”, this would require more evidence; it is much more extraordinary and, if true, could impact our reality.  The existence of dragons would raise innumerable questions.

By the same token, saying "At least one swan is white" would require little evidence: it is very limited, requiring the verification of only one white swan.  "Some swans are white" would require more, since it is broader, while "All swans are white" would require a copious amount of evidence, since it is as wide as a claim can get.  Sometimes we cannot assert that a claim is absolutely true, only that it is more and more likely based on the evidence, since it is essentially impossible to check the color of all swans that have existed, exist and will ever exist (but this is another problem altogether).  Conversely, it is also easier to disprove a claim with such a wide scope.  Even if you had 10,000 examples of white swans, all you would need is ONE black swan, a counterexample, to render the claim false.  On the other hand, one counterexample would not render the other two claims false.  They are less dependent on large amounts of evidence; on the contrary, disproving them would be difficult, if not impossible.  In the same vein, if we said "Humans and monkeys are exactly the same (in all ways)", all we would need is one counterexample to refute the claim, whereas "Humans and monkeys are similar in at least one regard" would be easier to establish but harder to refute.
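The asymmetry between universal and existential claims can be shown with a toy sketch (plain Python over hypothetical observations I've made up for illustration): one counterexample sinks the universal claim, while a single confirming instance settles the existential one.

```python
# Hypothetical observations: 10,000 white swans and a single black swan.
observed = ["white"] * 10_000 + ["black"]

# "All swans are white": a universal claim; one counterexample falsifies it.
all_white = all(s == "white" for s in observed)

# "At least one swan is white": an existential claim; one instance confirms it.
some_white = any(s == "white" for s in observed)

print(all_white)   # False: the lone black swan refutes the universal claim
print(some_white)  # True: one white swan is enough
```

Note that the result does not depend on how many white swans were observed; adding another million white swans leaves `all_white` false as long as the one black swan remains.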


Our responsibility towards the evidence


Sometimes, however, not only is evidence responsible for justified beliefs, but we must be responsible for our evidence when forming beliefs.  What do I mean by this?  First, it means one must be aware that evidence is evidence, and what it is evidence for.  For instance, if I hadn't had anything to drink for a long time and started feeling a headache, but wasn't aware that headaches are a sign of dehydration, then I would not "know" that I was dehydrated.  I would not know that the evidence (the headache) is, in fact, evidence.  Alternatively, if I assumed it was from stress instead of dehydration, I would be wrong about my conclusion, even though I would be right in saying there was evidence.  I would know that it is evidence, but not what it is evidence for.  In neither case was I being irrational, but my ignorance or faulty reasoning led me to incorrect conclusions.  What this means is that we must be in a position to understand the evidence in order to recognize it and come to correct conclusions; we are responsible for our background knowledge.  To this end, two people may have nearly or even exactly the same evidence and come to very different conclusions.  Take the social issue of poverty.  While two academics may possess exactly the same evidence, due to their background knowledge (as well as biases), one may conclude that poverty is the fault of psychological states while another may attribute it to social conditions.  One may state that the remedy is a freer market, another that it is more government intervention.  This may be the case even if both people are impeccably rational, owing not to bias but to lack of knowledge or to misuse or misunderstanding of the evidence.  Ideally, both parties would at least agree about what the evidence is.  Who is right?  Based on the available evidence, both.  However, one conclusion must either be correct or more likely than the other, and determining which depends on additional evidence and background knowledge.

Evidence and beliefs do not exist in a vacuum.  One cannot hold a belief simply because some evidence can be marshaled for it; the belief must be “well-founded” – based on the evidence, rather than on one’s preferences.  There are always other beliefs out there, and more evidence out there.  Simply having some evidence for a belief is not sufficient for holding it; the total sum of available evidence must be taken into account.  Let’s say I want to believe in alien life.  I may accumulate evidence that suggests alien life is possible and use it to justify my belief, possibly while ignoring or denying other evidence.  However, let’s say I then state that, based on my belief that alien life is possible, the “rivers” on Mars were probably made by aliens.  This belief is unjustified.  As long as there is contradicting evidence out there, all of it must be weighed before we can be sure which belief is most likely true.  Evidence, in fact, can be said to be a neutral “arbiter” in determining the truth: evidence itself is not predisposed one way or the other. If there is no contradictory evidence, then your position is “indefeasible” and thereby stands.  But there usually is.  For instance, someone may tell me that his name is Bob, but if an associate of his tells me his name is actually Fred (a type of evidence called a “defeater” or “defeating evidence”), then I must take that into consideration as well.  Should better evidence arise – for example, information that the associate is a pathological liar – that, too, must be taken into consideration.  Often, people stick to unjustified and fallacious beliefs precisely because they refuse to acknowledge evidence outside of their preferred subset.  Of course, being aware of all the possible beliefs and evidence out there is another matter entirely.


Is personal experience evidence?


Beliefs based on personal experience are highly problematic.  For one thing, while we often rely on the testimony of others for trivial matters, testimony is prone to several problems and is thus weak when it comes to more extraordinary claims.  It may be implausible or lack credibility.  Personal anecdotes can be embellished or even fabricated.  They also provide a very small sample (one person’s personal experience).  In fact, they often point to idiosyncratic instances rather than an objective, generalized group of instances, and are thus highly unreliable as evidence. Hearsay is worse: not only does it have the same problems as personal anecdotes, but the conveyer of the information may himself be biased, in error, or have ulterior motives for conveying false information.  Personal experience is also a problem.  We can sometimes be said to be justified in our own beliefs based on experience, but how can we justify them to other people?  Personal perceptions of, for example, pain or things we see are verified by our own experience, but should other people believe on that basis?  After all, the pain could be psychosomatic, or the vision a hallucination.  Our senses are also limited and subject to bias and error.  While we may make some concessions for less extraordinary claims, like someone telling us they have a headache or a friend telling us he or she had a bad day at work, when it comes to more extraordinary and impactful claims, such as scientific theories or criminal cases, something that can be independently verified is crucial.  Scientific instruments, such as X-rays, or physical evidence of what you have seen, are more objective and less prone to error (not to mention easier to correct) than personal experience.  The same is not true of one’s beliefs, or experiences, or feelings of certainty.
If we are to agree that evidence should serve as a neutral arbiter of the truth, then it must be public – accessible to anyone – if it is to remain objective and neutral.  Personal evidence is necessarily subjective and cannot serve this function; it cannot be shared in an objective manner.  Fossil evidence, for example, is stronger evidence of evolution than a personal belief that God exists is for creationism.  By the same token, it is important to use language which is as objective, comprehensible and transmittable – in other words, as coherent – as possible when making philosophical claims.  A term like “transcendent” or “soul” is none of these things, but a term like “necessary” is.


A few types of (legal) evidence


If we trace a true belief back to the sources by which we verified its truth, we will find not beliefs, not personal experiences, not perception, but evidence.  In legal terms, the most substantial evidence is called “real” or “material” or “physical” evidence.  It is concrete, objective, neutral and requires little to no inference.  Documentary evidence, while not as strong as physical evidence, is also helpful.  It is essentially information that points to one conclusion or another.  However, it requires inference, which is subject to problems of interpretation.  Other types of evidence, such as the personal experiences above, are circumstantial and quite weak in and of themselves; generally, a single piece of circumstantial evidence can support a number of competing explanations.  However, if compiled with other evidence (producing consilience) – in other words, if the pieces corroborate one another – then they become stronger.
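One simple way to picture how corroboration strengthens individually weak evidence is a toy Bayesian sketch (my own illustration, not legal doctrine): assuming the pieces of evidence are independent, their likelihood ratios multiply, so several weak clues can add up to a strong case.

```python
# Toy sketch of consilience: independent pieces of evidence
# combine by multiplying their likelihood ratios into the odds.

def posterior_odds(prior_odds, likelihood_ratios):
    """Multiply prior odds by each piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 1.0        # even odds before any evidence
weak = [2.0] * 5   # five weak clues, each only 2:1 in favor

combined = posterior_odds(prior, weak)
print(combined)                    # 32.0 -- together, the clues are strong
print(combined / (1 + combined))   # about 0.97 probability
```

The independence assumption is doing real work here: if the clues all trace back to the same biased source (as with hearsay), they do not genuinely corroborate, and multiplying them overstates the case.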


Wrap up


That’s it for today.  Keep in mind, you may be labelled as stubborn or closed-minded or radically skeptical for trying to adhere to the evidence.  You may be accused of “naively” trusting the evidence, and with some cause: evidence can be misleading.  But making mistakes is not the same thing as being irrational.  As long as you are willing to concede that you were in error and make the appropriate changes, you are still being rational.  The more you follow the evidence where it leads, the more correct beliefs you will have, at least compared to someone who ignores or selectively follows evidence.  As W.V. Quine, one of the most prominent logical thinkers, once said, “insofar as we are rational, the intensity of our beliefs will correspond to the firmness of the available evidence.  Insofar as we are rational, we will drop a belief after we have tried in vain to find evidence for it (emphasis added).”  In other words, as long as the evidence – as opposed to our feelings or preferences – leads us to a certain conclusion, we should stick by it.  On the other hand, if the evidence does not lead us to that conclusion, no matter how much we may like it or want it to be true, or if there is no evidence for our preferred belief, we should drop it.  This is not radical skepticism or closed-mindedness or stubbornness; this is rationality.  It is the exact opposite of closed-mindedness and stubbornness.


Axiom: a statement accepted as true as the basis for argument or inference

Belief: a feeling of being sure that something is true

Counter-example: an example that refutes or disproves something

Epistemology: the study or a theory of knowledge

Evidence: something which shows that something else exists or is true

Evidentialism: a theory of justification; states that the justification of a belief depends on the evidence for it

Knowledge: something that is known OR the sum of what is known

Reliabilism: a theory of justification; states that a belief is justified if it is produced by a reliable belief-forming process

Skepticism: the method of suspended judgment, doubt, or criticism in regards to a claim




Knowledge and Its Limits by Timothy Williamson

The Book of Evidence by Peter Achinstein

Epistemology: A Contemporary Introduction to the Theory of Knowledge by Robert Audi

Evidentialism by Earl Conee and Richard Feldman

October Study Group: Informal Logical Fallacies

Every month we also have a study/discussion group about an aspect of critical thinking, logic or argumentation. Our first one in October was about informal logical fallacies. Because there are a vast number of them, we only covered some of the more common and problematic ones.

Informal Logical Fallacies

I firmly believe that one of the main causes of strife in the world, both on a personal and global level, is a lack of critical thinking skills. We don’t rationally and critically evaluate what we hear or are taught, and often make decisions and form beliefs for irrational reasons without really criticizing or inquiring ourselves as to why. The purpose of this group is to learn, teach and practice evaluating both our own and other people’s arguments, whether in a formal debate setting or a casual one.

When we say “argument”, what do you think of? Do you think of two people verbally going at it, maybe ready to throw things? Well, some people think of it that way, but when our group talks about arguments, we are talking about something entirely different.

An argument is a rational and logical way of trying to uncover the truth behind an idea by asking critical questions, understanding how logic works and recognizing fallacies. It applies to both serious debates and everyday conversations and discussions. In informal logic, the purpose isn’t to demonstrate a fact, so we don’t say arguments are “true” or “false”; rather, an argument advances an opinion or an evaluation – something that is accurate, or likely, or better than something else – so we say arguments are “strong” or “weak”. So, for example, “cats are mammals so my pet cat is a mammal” is not an informal argument because it states a fact: it’s either true or it isn’t. But “you should stop smoking because smoking increases the chance of lung cancer” is one. While the supporting statement (smoking increases the chance of lung cancer) is a statement of fact, the argument (you should stop smoking) is either strong (good) or weak (bad).

One aspect of good argumentation skills is recognizing informal logical fallacies. First let’s look at those words:


Informal

Does this mean “casual”? No, it doesn’t. Formal logic is logic relating to the form of the statements themselves. For example, saying “Tomorrow I went to the beach” would be a formal grammatical error. So informal means logic that isn’t related to the form itself, but rather to the thinking behind it.


Logic

This can basically be understood as reasoning or thinking based on a strict set of rules. Imagine you asked someone to play a sport with you and explained all the rules to them, and they proceeded to break them in order to score a goal. Even if they scored the goal, would it still count? No, because you need to adhere to the rules in order to score a valid goal. It’s basically the same with arguments and logic: it doesn’t matter if your opinion happens to be “right”; if you get to it without using logic, you’ve broken the rules.


Fallacies

These are arguments that use bad or “false” reasoning.

So, informal logical fallacies are mistakes in the rules of arguing that have to do with the thinking behind the argument.

That might still be confusing, so I’d like to break it down a little more.

There are basically four types of informal logical fallacies:
1. Fallacies of ambiguity
2. Fallacies of presumption
3. Fallacies of weak inference
4. Fallacies of relevance


Fallacies of ambiguity

Arguments that commit fallacies of ambiguity manipulate language in misleading ways; they appear to support their conclusions only due to their imprecise use of language.

The most common logical fallacy of ambiguity is probably equivocation.

This is when a word is taken to mean something different from the way it was originally used in the argument. In other words, the same word is used to mean two different things.

a) The sign said “fine for parking here”, and since it was fine, I parked there.
b) The laws imply lawgivers. There are laws in nature. Therefore there must be a cosmic lawgiver.
c) Sure philosophy helps you argue better, but do we really need to encourage people to argue? There’s enough hostility in this world.

Another very common one is called a strawman argument, where the person misrepresents the argument and attacks that misrepresentation.

a) We can’t change the way we live to prevent global warming! What would we do without cars or electricity or gas?
b) We can’t reduce the military budget; what are we going to do without any military defense?
c) My teacher said I need to study grammar more, but I don’t have time to do that every day.
d) You’re an atheist, so you believe everything came from nothing.


Fallacies of presumption

Fallacies of presumption begin with an assumption which hasn’t been proven. Probably the most common example is the bald assertion, also called proof by assertion: simply stating that a thing is true without giving any reasons.

a) It’s true!
b) That’s just the way it is.
c) It just IS!

When asked why, the person will usually just assert the same thing again, which is called a tautology (saying it’s true because it’s true) and is not an argument.

Another similar fallacy is the circular argument, also called begging the question. This is when the premises and the conclusion simply restate and support each other!

a) I know he’s honest because he told me he’s honest.
b) This medicine cures colds because it’s cold medicine.
c) I know the bible is God’s word because the bible says so (the bible says it’s God’s word, so the bible is God’s word).

Another one is the false dilemma or false dichotomy. This is when the arguer offers only two options when there are actually more. This is not always a fallacy; sometimes there really are only two options, like “if Bob is not dead, then he’s alive” or “if she’s not a man, then she’s a woman”. But in many cases there are more options than the ones we are offered.

a) If we don’t reduce public spending, our economy will collapse.
b) If you don’t love your country, then leave.
c) If you’re not with us, then you’re against us.

The final common fallacy of presumption is called the argument from ignorance. We are all ignorant about some things to some degree, and the best thing to do is learn until we know more. Some people, however, think that if they can’t think of anything else, then what they CAN think of must be true. This is the argument from ignorance. It is actually similar to the false dilemma, in that its inherent argument is “I can only think of these options, so there CANNOT be any other possible option.”

a) No one can actually prove that God doesn’t exist; therefore God exists.
b) No one on the council objected to the idea that he proposed, so everyone must think it’s a great idea.
c) I don’t see how evolution could increase the complexity of an organism (therefore, evolution is not true).

Fallacies of weak inference

Arguments require evidence or good reasons to support them. When the evidence or reasoning offered is weak, we have a fallacy of weak inference, or weak induction. Certain common appeals rely on weak evidence:

Appeal to authority: My dad said it’s true (therefore it is true).
Appeal to popularity: Everybody’s doing it (therefore you should too).
Lots of people smoke (therefore smoking is good).

Careful with the appeal to authority, though. It is NOT a fallacy if the person actually is an authority on the subject or topic, is not biased about the subject, and is named specifically. If the authority is not named (“Authorities say…”), is biased, or is not an authority on the topic at hand, it is fallacious. A consensus among other experts is also important; one scientist who believes there is a God is not sufficient as evidence for a God, for example.

Another type of weak inference is a generalization, specifically either a hasty generalization or a sweeping generalization. A sweeping generalization takes a broad generalization and assumes it definitely holds in a specific case. A hasty generalization is the opposite: it takes a specific example – which is not enough evidence – and attributes it to a broad group.

Sweeping generalization: All the men I’ve ever met were liars, so I don’t trust Bob, a man I met yesterday.
Every American I’ve ever met was individualist, and you’re American, so that means you’re an individualist.
Hasty generalization: Rachel, a feminist, says all men are sexist pigs, so feminists hate men.
I read a story in the news about this congressman who had an affair. Why do politicians always cheat on their wives?


Fallacies of relevance

When engaging in argumentation, it is important to stick to the topic at hand rather than wandering off on tangents. Arguments that commit fallacies of relevance rely on premises that aren’t relevant to the truth of the conclusion: they attempt to prove a conclusion by offering considerations that simply don’t bear on its truth, or to distract from the issue at hand.

A classic informal logical fallacy of relevance is the ad hominem attack, which focuses not on the evidence for a view but on the character of the person advancing it. The argument or counter-argument targets the person rather than the argument itself. The ad hominem does not have to be a rude insult; often it appears logical, but is not.

a) YOU would say that.
b) Of course you think affirmative action is good, you’re a liberal/democrat.
c) How can you tell me smoking is bad, you used to smoke!
d) Sarah Palin said it, so it must be stupid.

Often, arguers use appeals to consequences or emotions, rather than logic or reason, to try to convince others that they are right. An appeal to consequences argues that a claim is false (or true) because of its bad (or good) actual, potential or imagined consequences. However important those consequences may be, they are not relevant to the truth of the claim. The positive form of this fallacy is sometimes called wishful thinking.

a) Men and women can’t be different because if they are, then people will become more sexist.
b) I don’t want my existence to end, so there must be an afterlife.
c) An objective morality must exist, otherwise we would be free to do whatever we want.

Other arguments appeal to emotion, but again, while our emotions are important, they can’t tell us whether an argument is true or not. Manipulating other people’s emotions is an effective way to win an argument, but an ineffective way of reaching the truth. Even if your conclusion happens to be true, your argument isn’t sound if it has to appeal to an emotion.

Appeal to envy: Rich people have more than enough money, they should be giving it to the poor!
Appeal to fear: If you don’t graduate from college, you’ll live in poverty your whole life.
Appeal to flattery: You’re smart, so you must know Obama is a good president.
Appeal to pity: Look at that sweet, young girl. How could you think she could kill anyone?
Appeal to pride: You’re really smart, that’s why you agree with us.
Appeal to ridicule: You actually think there are UFOs? How silly.
Appeal to spite/hatred: You should cheat on him with his best friend because he’s a liar and an asshole.

A final one is the red herring. This is a very common fallacy that can be very hard to spot. It’s when someone introduces a totally different argument from the one you are talking about. While some people do it on purpose in order to distract you from your argument, often people don’t even realize they are doing it, which is why it’s important to recognize it.

A) Person A: Murder is a criminal act, and should be punished. Person B: Yeah but lots of murderers have mental issues, and we really need to address that.
B) Person A: How can you eat all that junk food you know is bad for you? Person B: What should I do, let it go to waste? Think of all the poor people who can’t even eat anything.
C) Person A: I think the democrat’s presidential candidate has some bad economic policy ideas. Person B: Oh yeah? Well the republican’s candidate was accused of sexual harassment!
D) Person A: I think the president should have let those banks fail. Person B: But in tough times, we need to support our president.

Ok, now can you figure out the fallacy?

1. Most people like the TV show “Friends”, so it’s a good TV show.
2. The philosophy class I took was hard, so philosophy must be hard.
3. Bono from U2 talked about how bad capitalism is, so I realized it’s a bad system.
4. You can’t punish her for that, she has children!
5. Nobody has been able to prove that we don’t have psychic powers, so I think we do.
6. You’re a feminist, and feminists just want to get revenge on men, so why should I listen to your opinions on sexual harassment laws?
7. Giving your money to help out those in need is right, so we have a right to tax the wealthy.
8. You can’t ask a woman her age, it’s just wrong!
9. He’s a good communicator because he speaks well.
10. You’re American, so you don’t understand Japanese culture, so your arguments about Japan don’t count.
11. Mining might be bad for the environment, but what about all the miners’ jobs?
12. If I don’t smoke, I’ll get fat, so I have to smoke.
13. If you care about the starving kids in Africa you will make a donation.
14. Everybody I’ve talked to doesn’t understand, so you won’t understand.
15. Evolution can’t be true, that means we’re just animals.


1. Appeal to popularity
2. Hasty generalization
3. Appeal to authority
4. Appeal to pity
5. Argument from ignorance
6. Straw man
7. Equivocation
8. Bald assertion
9. Circular argument
10. Ad Hominem
11. Red Herring
12. False dilemma
13. Appeal to pity
14. Sweeping generalization
15. Appeal to consequences

For more information, check out these websites:


Logical Fallacies and Constructing a Logical Argument
Informal Logic
More Informal Logic

Critical Thinking

What is Critical Thinking?


General Fallacies
Informal Fallacies Wiki
Fun Site on Fallacies
What are Informal Fallacies?

November Topic: Freedom and Equality

Every month we center on a particular theme, which we discuss twice a month: once in an open group, where anyone and everyone can join, and once in a closed group, which allows a limited number of participants and is for group members only. I’ve decided – at the suggestion of some of the members – to start a blog which will include the notes I developed prior to the discussions, the questions and problems that arose regarding the particular topic (not about the discussions themselves, which are going really well), and any pertinent points, problems or questions brought up during the discussion.

This month’s topic: freedom and equality

In particular, political/social freedom and equality, although there are certainly legal, philosophical, moral and perhaps even metaphysical ties. The questions discussed were these:

What is political freedom? What are negative and positive freedom/liberties and are they compatible? Which is closer to an accurate definition of freedom?

What is equality? What are equality of opportunity and equality of outcome? How is equality different from, synonymous with, or related to freedom? What type of equality should we strive for?

In a broad sense, freedom is defined as having agency over one’s person. Agency is the capacity to act in accordance with one’s voluntary choices. It is to be free from coercion, interference or restraint, to be able to make one’s own choices. It is being allowed to say, do or think as one wishes (to an extent, discussed a bit later).

Freedom in any “true” or “real” sense is impossible; we cannot achieve the freedom to, for example, breathe in outer space without apparatus or to grow wings. To this end, we are not going to discuss freedom in any metaphysical or philosophical sense today, but rather in a political or social sense.

Even so, this is of course still not practical enough, because there are certain actions you may take which may inhibit my freedom, and we live in a society where we must get along. For this reason there are necessarily limitations on political/social freedom. In this regard, freedom can be broken down into two different concepts:
A. Freedom as autonomy/independence.
B. Freedom as the capacity to initiate new beginnings.

Liberty can be basically divided into two types: negative and positive.

Negative liberty is basically the idea that one should be free FROM outside coercion. It is the idea that one is completely responsible for one’s own actions. It is an opportunity concept, asking to be allowed any opportunity equally with others. It is freedom from artificial restraints, not natural ones (ex: from someone telling me I can’t get on a plane because I’m black, not nature preventing me from growing wings to fly). Negative liberty obliges inaction (do not act against me).

Positive liberty is the idea that one should be free TO fulfill one’s potential, free from internal constraints (fear, addiction, weakness, ignorance, etc). It is an exercise concept, asking to be free to fulfill one’s wishes equally with everyone else. It is the idea that one has the right to possess the necessary power or resources in pursuit of one’s dreams. It is freedom from poverty, starvation, treatable disease, oppression, etc. Positive liberty obliges action (act to enhance my freedom).

In essence, negative liberty is freedom FROM while positive liberty is freedom TO. A proponent of negative liberty might say “I am a slave to no man” while a proponent of positive liberty might say “I am my own master.”

Potential problems:

1. If we work to enact type A (freedom as autonomy/independence), couldn’t type B – albeit not immediately actual – be an inevitable, if only potential, consequence of it? However, if we prioritize type B, wouldn’t that necessarily inhibit type A? In other words, doesn’t the achievement of negative liberty potentially allow for positive liberty? And yet, how can you enact positive liberty without infringing on negative liberty? For instance, if I want the freedom to have time off from work, how am I to achieve this? By asking others to provide it for me. Likewise, if I am a drug addict and I want to be free from my addiction (an example of positive liberty), how can I achieve it? By requiring others to labor (an infringement on their negative liberty) – to teach me how, to provide the funds to go through the process, and so on.

2. Is it possible to be unaware of liberties that you have, and thus be unaware that they are being infringed upon? For example, if you grew up in an environment where you were religiously indoctrinated and had no basis for comparison via which you could assess information or ideas, would you be aware that your positive liberty is being infringed upon?

Some interesting conclusions we reached based on this:

1. Perfect liberty, negative or positive, is not achievable. After all, I must be inhibited FROM taking another person’s life in order to ensure that the other person is free FROM my taking of his life. So the question is one of degree, not of which one is the perfect ideal. Perfect positive liberty, on the other hand, seems necessarily to lead to a totalitarian state.

2. While certain negative liberties may seem ideal, are they practical or beneficial to society as a whole? After all, if I am free from being forced to pay taxes, who is going to take care of the institutions and infrastructure we enjoy today?

Equality ties into freedom. It seems to be beneficial for society overall: according to Richard Wilkinson and Kate Pickett in their book “The Spirit Level”, egalitarian nations have fewer societal problems (ex: mental illness, homicide, teen pregnancy, obesity, incarceration, etc.) and better social goods (life expectancy, educational performance, trust, the status of women, social mobility, patents issued, etc.). Equality is also divided into two types: equality of opportunity and equality of outcome.

1. Equality of opportunity: a state where everyone starts at the same point. The stipulation that all people should be treated similarly, uninhibited by artificial barriers, prejudices or preferences in pursuit of goals. It means offering all an equal chance to compete within established and agreed-upon rules. It means applying fairness to the selection process.

2. Equality of outcome: a state in which people have equal wealth or economic conditions. It entails reducing or eliminating material inequality between individuals and/or households. It involves transferring or redistributing wealth. Ideally, this would reflect the necessary interdependence of citizens. It seems to lead to increased social cohesion and reduced jealousy.

Having said that, there are two types of equality of opportunity:

1. Formal equality of opportunity: this dictates that the “starting point” is the application for a desired position. An evaluation process subsequently commences in which all applicants are judged on their qualifications, not on arbitrary or irrelevant criteria. There are three “stages”:
1. Open call: The application should be available to all potential applicants and all applications should be accepted.
2. Fair evaluation process: The applicants should be judged on their merits, following a set procedure in order to evaluate who is the best qualified.
3. The selection itself.

2. Fair equality of opportunity: this dictates that the starting point is prior to the application process. It examines whether or not applicants have equal abilities or talents before they venture to compete and deems that authorities should take steps to remedy any inequalities.

The core problems are these: striving for equal outcomes may require discriminating against certain groups in favor of others (against the wealthy in favor of the poor, against whites in favor of other races, against men in favor of women, etc.). However, striving for equal opportunity leads to unequal outcomes.

When discussing fairness, we came to the conclusion that equality has to do with quantifiable or qualitative states, and fairness has to do with the rules or processes through which we reach these states. For example, while the outcome of a basketball game may not be equal, the process through which the results are achieved should follow a set of rules that applies to everyone in order to make it a “fair” game.

Problems with equality of outcome:

1. How is it possible to bring about equality of outcome without coercion or interference? In other words, do the ends really justify the means?

2. Even if equality of outcome is helpful, how do we go about transforming society?

3. How do we determine equal outcomes? After all, isn’t it unrealistic to expect, for example, a six-year-old to have equal outcomes with a 70-year-old? How about a 30-year-old with no children and a 30-year-old with four children?

Problems with equality of opportunity:

1. How do you measure it? Outcome is easier to measure than opportunity and is often used as a measure of opportunity or equality in general. That being said, many factors can potentially contribute to outcome, so it is not necessarily a reliable measure of opportunity.

2. How do we ensure that all children start at an equal starting point? Is providing equal distribution of material wealth sufficient, or should government also control the distribution of immaterial wealth, like knowledge or expertise or advice? What about an individual’s innate or genetic abilities? How do we “equalize” that?

Other problems in relation to equality in general:

1. Proving unequal treatment is difficult. How do we overcome the interpretative and methodological difficulties inherent in the problem?

2. As mentioned above, measuring equality seems problematic. How do we determine levels of equality? For example, should everyone be paid equally, no matter what job they do? What if certain individuals put in more risk, effort or time? What if certain individuals’ work results in greater benefits than others?