I used to teach research methods. Now I teach critical thinking. Research is about creating knowledge. Critical thinking is about assessing knowledge. In research methods, the goal is to create well-designed studies that allow us to determine whether something is true or not. A well-designed study, even if it finds that something is not true, adds to our knowledge. A poorly designed study adds nothing. The emphasis is on design.
In critical thinking, the emphasis is on assessment. We seek to sort out what is true, not true, or not proven in our info-sphere. To succeed, we need to understand research design.
We also need to understand the logic of critical thinking — a stepwise progression through which we can discover fallacies, biases, and self-serving arguments. It takes time. In fact, the first rule I teach is “Slow down. Take your time. Ask questions. Don’t jump to conclusions.”
In both research and critical thinking, a key question is: how do we know if something is true? Further, how do we know if we’re being fair-minded and objective in making such an assessment? We discuss levels of evidence that are independent of our subjective experience.
Over the years, thinkers have used a number of different schemes to categorize evidence and evaluate its quality.
Today, the research world seems to be coalescing around a classification of evidence that has been evolving since the early 1990s as part of the movement toward evidence-based medicine (EBM).
The classification scheme (typically) has four levels, with 4 being the weakest and 1 being the strongest. From weakest to strongest, here they are:
- 4 — evidence from a panel of experts. There are certain rules about such panels, the most important of which is that the panel consist of more than one person. Level 4 may also contain what are known as observational studies without controls.
- 3 — evidence from case studies, observed correlations, and comparative studies. (It’s interesting to me that many of our business schools build their curricula around case studies — fairly weak evidence. I suspect you can find a case to prove almost any point.)
- 2 — quasi-experiments — well-designed but non-randomized controlled trials. You manipulate the independent variable in at least two groups (control and experimental). That’s a good step forward. Since subjects are not randomly assigned, however, a hidden variable could be the cause of any differences found — rather than the independent variable.
- 1b — experiments — controlled trials with randomly assigned subjects. Random assignment isolates the independent variable. Any effects found must be caused by the independent variable. This is the minimum proof of cause and effect.
- 1a — meta-analysis of experiments. Meta-analysis is simply research on research. Let’s say that researchers in your field have conducted thousands of experiments on the effects of using electronic calculators to teach arithmetic to primary school students. Each experiment is a data point in a meta-analysis. You categorize all the studies and find that an overwhelming majority showed positive effects. This is the most powerful argument for cause and effect.
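The “categorize all the studies and count” step described above can be sketched in a few lines of Python. This is a simplified vote-counting illustration with made-up study outcomes, not a real meta-analysis (which would weight studies by sample size and pool effect sizes):

```python
from collections import Counter

def vote_count(outcomes):
    """Tally study outcomes ('positive', 'negative', 'null') and
    return the tally plus the share of studies finding a positive effect."""
    tally = Counter(outcomes)
    total = len(outcomes)
    share_positive = tally["positive"] / total if total else 0.0
    return tally, share_positive

# Hypothetical outcomes for studies of calculator use in arithmetic teaching
studies = ["positive"] * 70 + ["null"] * 20 + ["negative"] * 10
tally, share = vote_count(studies)
print(f"{share:.0%} of {len(studies)} studies found a positive effect")
# → 70% of 100 studies found a positive effect
```

Vote counting is the crudest form of research synthesis — it ignores study quality and effect magnitude — which is why formal meta-analyses compute weighted pooled estimates instead.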
You might keep this guide in mind as you read your daily newspaper. Much of the “evidence” that’s presented in the media today doesn’t even reach the minimum standards of Level 4. It’s simply opinion. Stating opinions is fine, as long as we understand that they don’t qualify as credible evidence.
How Do We Know What Is True?
How do we know if something is true?
It seems like a simple enough question. We know something is true if it is in accordance with measurable reality. But just five hundred years ago, this seemingly self-evident premise was not common thinking.
Instead, for much of recorded history, truth was rooted in scholasticism. We knew something was true because great thinkers and authorities said it was true. At the insistence of powerful institutions like the Catholic Church, dogma was defended as the ultimate source of wisdom.
But by the 1500s, this mode of thinking was increasingly being questioned, albeit quietly. Anatomists were discovering that the human body did not function as early physicians described.
Astronomers were finding it hard to reconcile their measurements and observations with the notion that the Sun revolves around the Earth.
A select few alchemists were starting to wonder if everything really was composed of earth, water, air, fire, and aether.
Then a man came along who refused to question quietly. When the Italian academic Galileo Galilei looked through his homemade telescope and saw mountains on the Moon, objects orbiting Jupiter, and the phases of Venus showing the Sun's reflected light — all sights that weren't in line with what authorities were teaching — he decided to speak out, regardless of the consequences.
In The Starry Messenger, published in 1610, Galileo shared his initial astronomical discoveries. He included drawings and encouraged readers to gaze up at the sky with their own telescopes.
Thirteen years later, in The Assayer, Galileo went even further, directly attacking ancient theories and insisting that it was evidence wrought through experimentation that yielded truth, not authoritarian assertion.
Finally, in 1632, Galileo penned the treatise that would land him under house arrest and brand him a heretic.
In Dialogue Concerning the Two Chief World Systems, Galileo cleverly constructed a conversation between two fictional philosophers concerning Copernicus' heliocentric model of the Solar System.
One philosopher, Salviati, argued convincingly for the Sun-centered model, while the other, Simplicio, stumbled and bumbled while arguing against it. At the time, “Simplicio” was commonly taken to mean “simpleton.” Simplicio also used many of the same arguments the Pope had employed against heliocentrism. The Catholic Church was not opposed to researching the topic at the time, but it did have a problem with teaching it. Thus, the Vatican banned the book and placed Galileo under house arrest.
By stubbornly refusing to be silent, Galileo irrevocably altered the very definition of truth. Scientists today forge breakthroughs in all sorts of fields, but their successes can ultimately be attributed to Galileo's breakthrough in thought. In her recent book, Galileo's Middle Finger, historian of science Alice Dreger paid tribute to the legendary astronomer.
“Galileo actively argued for a bold new way of knowing, openly insisting that what mattered was not what the authorities… said was true but what anyone with the right tools could show was true. As no one before him had, he made the case for modern science — for finding truth together through the quest for facts.”
Primary Source: Galileo's Middle Finger: Heretics, Activists, and One Scholar's Search for Justice, by Alice Dreger. 2015. Penguin Books.
How do you know that what you know is true? That’s epistemology
How do you know what the weather will be like tomorrow? How do you know how old the Universe is? How do you know if you are thinking rationally?
These and other questions of the “how do you know?” variety are the business of epistemology, the area of philosophy concerned with understanding the nature of knowledge and belief.
Epistemology is about understanding how we come to know that something is the case, whether it be a matter of fact such as “the Earth is warming” or a matter of value such as “people should not just be treated as means to particular ends”. It’s even about interrogating the odd presidential tweet to determine its credibility.
Epistemology doesn’t just ask questions about what we should do to find things out; that is the task of all disciplines to some extent. For example, science, history and anthropology all have their own methods for finding things out.
Epistemology has the job of making those methods themselves the objects of study. It aims to understand how methods of inquiry can be seen as rational endeavours.
Epistemology, therefore, is concerned with the justification of knowledge claims.
The need for epistemology
Whatever the area in which we work, some people imagine that beliefs about the world are formed mechanically from straightforward reasoning, or that they pop into existence fully formed as a result of clear and distinct perceptions of the world. But if the business of knowing things were so simple, we’d all agree on a bunch of things that we currently disagree about – such as how to treat each other, what value to place on the environment, and the optimal role of government in society. That we do not reach such agreement means there is something wrong with that model of belief formation.
It is interesting that we individually tend to think of ourselves as clear thinkers and see those who disagree with us as misguided. We imagine that the impressions we have about the world come to us unsullied and unfiltered. We think we have the capacity to see things just as they really are, and that it is others who have confused perceptions.
As a result, we might think our job is simply to point out where other people have gone wrong in their thinking, rather than to engage in rational dialogue allowing for the possibility that we might actually be wrong.
But the lessons of philosophy, psychology and cognitive science teach us otherwise. The complex, organic processes that fashion and guide our reasoning are not so clinically pure.
Not only are we in the grip of a staggeringly complex array of cognitive biases and dispositions, but we are generally ignorant of their role in our thinking and decision-making.
Combine this ignorance with the conviction of our own epistemic superiority, and you can begin to see the magnitude of the problem. Appeals to “common sense” to overcome the friction of alternative views just won’t cut it.
We need, therefore, a systematic way of interrogating our own thinking, our models of rationality, and our own sense of what makes for a good reason. It can be used as a more objective standard for assessing the merit of claims made in the public arena.
This is precisely the job of epistemology.
Epistemology and critical thinking
One of the clearest ways to understand critical thinking is as applied epistemology. Issues such as the nature of logical inference, why we should accept one line of reasoning over another, and how we understand the nature of evidence and its contribution to decision making, are all decidedly epistemic concerns.
Just because people use logic doesn’t mean they are using it well.
The American philosopher Harvey Siegel points out that these questions and others are essential in an education towards thinking critically.
By what criteria do we evaluate reasons? How are those criteria themselves evaluated? What is it for a belief or action to be justified? What is the relationship between justification and truth? […] these epistemological considerations are fundamental to an adequate understanding of critical thinking and should be explicitly treated in basic critical thinking courses.
To the extent that critical thinking is about analysing and evaluating methods of inquiry and assessing the credibility of resulting claims, it is an epistemic endeavour. Engaging with deeper issues about the nature of rational persuasion can also help us to make judgements about claims even without specialist knowledge. For example, epistemology can help clarify concepts such as “proof”, “theory”, “law” and “hypothesis” that are generally poorly understood by the general public and indeed some scientists. In this way, epistemology serves not to adjudicate on the credibility of science, but to better understand its strengths and limitations and hence make scientific knowledge more accessible.
Epistemology and the public good
One of the enduring legacies of the Enlightenment, the intellectual movement that began in Europe during the 17th century, is a commitment to public reason. This was the idea that it’s not enough to state your position, you must also provide a rational case for why others should stand with you. In other words, to produce and prosecute an argument.
This commitment provides for, or at least makes possible, an objective method of assessing claims using epistemological criteria that we can all have a say in forging. That we test each other’s thinking and collaboratively arrive at standards of epistemic credibility lifts the art of justification beyond the limitations of individual minds, and grounds it in the collective wisdom of reflective and effective communities of inquiry. The sincerity of one’s belief, the volume or frequency with which it is stated, or assurances to “believe me” should not be rationally persuasive by themselves.
Simple appeals to believe have no place in public life.
If a particular claim does not satisfy publicly agreed epistemological criteria, then it is the essence of scepticism to suspend belief. And it is the essence of gullibility to surrender to it.
A defence against bad thinking
There is a way to help guard against poor reasoning – ours and others’ – that draws from not only the Enlightenment but also from the long history of philosophical inquiry.
So the next time you hear a contentious claim from someone, consider how that claim can be supported if they or you were to present it to an impartial or disinterested person:
- identify reasons that can be given in support of the claim
- explain how your analysis, evaluation and justification of the claim and of the reasoning involved are of a standard worth someone’s intellectual investment
- write these things down as clearly and dispassionately as possible.
In other words, make the commitment to public reasoning. And demand of others that they do so as well, stripped of emotive terms and biased framing.
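The advice to identify reasons, evaluate them, and write everything down dispassionately can be captured as a simple written record. Here is a hypothetical sketch in Python — the class and field names are my own invention, not anything from the text:

```python
from dataclasses import dataclass, field

@dataclass
class ClaimAssessment:
    """A written, dispassionate record of a claim and its support,
    following the three steps: reasons, evaluation, written record."""
    claim: str
    reasons: list = field(default_factory=list)  # step 1: supporting reasons
    evaluation: str = ""                         # step 2: why the reasoning holds up

    def summary(self) -> str:
        """Step 3: write it down plainly, one item per line."""
        lines = [f"Claim: {self.claim}"]
        lines += [f"  Reason: {r}" for r in self.reasons]
        if self.evaluation:
            lines.append(f"  Evaluation: {self.evaluation}")
        return "\n".join(lines)

record = ClaimAssessment(
    claim="Reading improves vocabulary more than watching TV",
    reasons=["Vocabulary studies show larger gains from print exposure"],
    evaluation="The reason cites empirical studies rather than anecdote",
)
print(record.summary())
```

The point of the structure is only that the claim, its reasons, and their evaluation are separated and stated in neutral language, so a disinterested reader could inspect each part on its own.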
If you or they cannot provide a precise and coherent chain of reasoning, or if the reasons remain tainted with clear biases, or if you give up in frustration, it’s a pretty good sign that there are other factors in play. It is the commitment to this epistemic process, rather than any specific outcome, that is the valid ticket onto the rational playing field.
At a time when political rhetoric is riven with irrationality, when knowledge is being seen less as a means of understanding the world and more as an encumbrance that can be pushed aside if it stands in the way of wishful thinking, and when authoritarian leaders are drawing ever larger crowds, epistemology needs to matter.
How do you know if a claim is true?
How do you know if a claim is true? There are many ways arguments can go wrong, but only a few ways to make them logical. Logical arguments provide convincing evidence for claims. What kind of evidence counts depends on what kind of claim has been made.
Opinions are never false, because the evidence is in the mind of whoever is giving the opinion. For example:
I don't like to eat green vegetables. Is that true or false? To find out, you'd have to be inside the body of the person who said it. Since that's impossible, there is no reason to question it. Of course, opinions don't count for much when someone is trying to persuade you. You can always answer, “I have a different opinion.”
To decide if the evidence is convincing, you first have to know what sort of claim has been made. Claims come in at least four types.
An empirical claim makes a statement about the world. For example:
The moon is made of green cheese. We need scientific knowledge about the world to test an empirical claim. Scientific knowledge is public information gained by careful observations and experiments. We have lots of evidence that the moon is made of rock, including the close-up observations of astronauts, so we know that the green-cheese claim is false.
An analytical claim makes a statement about the meaning of words or other symbols. For example:
The Constitution gives us freedom of speech. We need knowledge about words and symbols to test an analytical claim. We might consult a document and use a dictionary or other reference to find out how people have agreed to interpret a word. In this case, the claim is true because free speech is guaranteed in the First Amendment to the Constitution.
A valuative claim makes a statement about what is good or bad, right or wrong. For example:
People should read books instead of watching so much TV. To test a valuative claim, we appeal to standards of value. In this case, the standard might be the value of literacy. Valuative claims often carry unstated assumptions about empirical claims. Here, we are assuming that reading books makes us more literate than watching TV, which, according to scientific studies of vocabulary growth, is also true. Answering valuative claims requires us to decide which value standard is higher. In this case, we might argue that literacy is a higher standard than relaxation or pleasure.
A metaphysical claim makes a statement about our very existence. For example:
All men are created equal. To test a metaphysical claim, we appeal to revelation, that is, to statements of faith. Reconciling conflicting metaphysical claims usually requires that we appeal to a common revelation. For example, if we understand that the introduction to the Declaration of Independence expresses an essential truth about our existence on earth, then it is true that all men are created equal. But if someone disputes the authority of the Declaration, we might not be able to resolve the question of whether all people are equal or not. We may have to agree to disagree, because our opponent does not share our faith.
When we've stripped down an argument to the bare essentials–when it's stated in neutral, unemotional language, it's free of opinions, and we are willing to grant the authority and impartiality of the speaker–then our final questions are:
1. What kind of claim is being made?
2. What evidence supports that claim?
This is how we get at the truth, the whole truth, and nothing but the truth.
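The pairing of claim types with the evidence that counts for each can be summarized as a small lookup table, sketched here in Python. The function name and the condensed phrasing are my own, not from the text:

```python
# Each claim type (from the section above) mapped to the kind of
# evidence that counts when testing it.
EVIDENCE_FOR = {
    "empirical":    "scientific knowledge: careful observation and experiment",
    "analytical":   "knowledge of words and symbols: documents and dictionaries",
    "valuative":    "standards of value, plus any empirical assumptions they rest on",
    "metaphysical": "revelation: statements of faith shared by both parties",
}

def how_to_test(claim_type: str) -> str:
    """Return the kind of evidence that counts for a given claim type."""
    try:
        return EVIDENCE_FOR[claim_type]
    except KeyError:
        raise ValueError(f"Unknown claim type: {claim_type!r}")

print(how_to_test("empirical"))
```

The useful habit the table encodes is asking the first question (“what kind of claim is this?”) before arguing about evidence at all, since evidence of the wrong kind settles nothing.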
How Do You Know if What You Read Online Is True?
I sent the lady a glass of wine and a note pic.twitter.com/GttnmQI25P
— elan gale (@theyearofelan) November 28, 2013
One of several viral Twitter posts that told the tale of a Thanksgiving feud on a plane. It later turned out the feud never happened, and the posts were described by the writer as a short story.
How many viral stories on the Internet do you read, share, reference and comment on each week? How much do you care if they are real or not? How would you find out if they were real if you wanted to?
In “If a Story Is Viral, Truth May Be Taking a Beating,” Ravi Somaiya and Leslie Kaufman write:
Truth has never been an essential ingredient of viral content on the Internet. But in the stepped-up competition for readers, digital news sites are increasingly blurring the line between fact and fiction, and saying that it is all part of doing business in the rough-and-tumble world of online journalism.
Several recent stories rocketing around the web, picking up millions of views, turned out to be fake or embellished: a Twitter tale of a Thanksgiving feud on a plane, later described by the writer as a short story; a child’s letter to Santa that detailed an Amazon.com link in crayon, but was actually written by a grown-up comedian in 2011; and an essay on poverty that prompted $60,000 in donations until it was revealed by its author to be impressionistic rather than strictly factual.
Their creators describe them essentially as online performance art, never intended to be taken as fact. But to the media outlets that published them, they represented the lightning-in-a-bottle brew of emotion and entertainment that attracts readers and brings in lucrative advertising dollars.
When the tales turned out to be phony, the modest hand-wringing that ensued was accompanied by an admission that viral trumps verified — and that little will be done about it as long as the clicks keep coming.
“You are seeing news organizations say, ‘If it is happening on the Internet that’s our beat,’ ” said Joshua Benton, director of the Nieman Journalism Lab at Harvard.
“The next step of figuring out whether it happened in real life is up to someone else.”
… Elan Gale, 30, a television producer and the author of the invented article on the feud on the plane, is not convinced. His fictitious Twitter tale of exchanging increasingly hostile notes with a fellow passenger spread rapidly — a compilation of his posts got 5.6 million views.
BuzzFeed sensed the tremor in the web and posted it, attracting nearly 1.5 million views to its site. (The New York Times travel section blog also linked to their story but labeled it as imaginary when it was discovered to be untrue.) Finally, Mr. Gale revealed that the entire exchange was fake, and BuzzFeed posted an update describing the story as a lie and a hoax.
“I really have an issue with the word hoax,” said Mr. Gale, who says nobody called him to verify his story. “I was broadcasting to my followers who know what I do. It’s the people who reported it who are deceiving their audience.”
Students: Read the entire article, then tell us …
- How many viral posts — whether articles, videos or photographs — do you click on each week? How many on average do you share on social media?
- How often do you check to make sure what you are sharing or commenting on is real? How do you go about finding that out?
- How much do you care if a story purporting to be real actually is?
- What responsibility do journalists and news outlets who post or link these stories have to make sure they are true? Is it their job to make sure something is not a hoax before they cover or link to it?
- Can embellished, or outright fake, stories have real-world consequences? This article gives one example in which someone who wrote an essay on poverty received $60,000 in donations before the piece was revealed not to be “strictly factual,” but can you think of others?
- How much more careful are you with online sources when you are doing work for school than when you are simply surfing the Web for fun? How do you decide what is a reliable source for your schoolwork?
How We Recognize What Is True And What Is False
A recent neuroimaging study reveals that the ability to distinguish true from false in our daily lives involves two distinct processes.
Previous research relied heavily on the premise that true and false statements are both processed in the left inferior frontal cortex.
Carried out by researchers from the Universities of Lisbon and Vita-Salute, Milan, the June Cortex study found that we use two separate processes to determine the subtle distinctions between true and false in our daily lives.
Deciding whether a statement is true involves memory; determining that a statement is false relies on reasoning and problem-solving processes.
The study examined the impact of true and false sentences on brain activity using a feature-verification task and functional magnetic resonance imaging (fMRI). Participants were asked to read simple sentences composed of a concept–feature pair (e.g. 'the plane lands') and to decide whether the sentence was true or false. Importantly, true and false statements were equated in terms of ambiguity, and exactly the same concepts and features were used across the two types of sentences.
False statements differentially activated the right fronto-polar cortex in areas that have been previously related to reasoning tasks. The activations related to true statements involved the left inferior parietal cortex and the caudate nucleus bilaterally.
The former activation may be hypothesized to reflect continued thematic semantic analysis and a more extended memory search.
The caudate activation may also reflect these search and matching processes, as well as the fact that recognizing a sentence as true is in itself a positive reward for the subject, as this area is also involved in processing reward-related information.
Epistemology is the study of knowledge. Epistemologists concern themselves with a number of tasks, which we might sort into two categories.
First, we must determine the nature of knowledge; that is, what does it mean to say that someone knows, or fails to know, something? This is a matter of understanding what knowledge is, and how to distinguish between cases in which someone knows something and cases in which someone does not know something. While there is some general agreement about some aspects of this issue, we shall see that this question is much more difficult than one might imagine.
Second, we must determine the extent of human knowledge; that is, how much do we, or can we, know? How can we use our reason, our senses, the testimony of others, and other resources to acquire knowledge? Are there limits to what we can know? For instance, are some things unknowable? Is it possible that we do not know nearly as much as we think we do? Should we have a legitimate worry about skepticism, the view that we do not or cannot know anything at all?
While this article provides an overview of the important issues, it leaves the most basic questions unanswered; epistemology will continue to be an area of philosophical discussion as long as these questions remain.
1. Kinds of Knowledge
The term “epistemology” comes from the Greek “episteme,” meaning “knowledge,” and “logos,” meaning, roughly, “study, or science, of.” “Logos” is the root of all terms ending in “-ology” – such as psychology, anthropology – and of “logic,” and has many other related meanings.
The word “knowledge” and its cognates are used in a variety of ways. One common use of the word “know” is as an expression of psychological conviction.
For instance, we might hear someone say, “I just knew it wouldn’t rain, but then it did.” While this may be an appropriate usage, philosophers tend to use the word “know” in a factive sense, so that one cannot know something that is not the case. (This point is discussed at greater length in section 2b below.)
Even if we restrict ourselves to factive usages, there are still multiple senses of “knowledge,” and so we need to distinguish between them.
One kind of knowledge is procedural knowledge, sometimes called competence or “know-how”; for example, one can know how to ride a bicycle, or one can know how to drive from Washington, D.C. to New York.
Another kind of knowledge is acquaintance knowledge or familiarity; for instance, one can know the department chairperson, or one can know Philadelphia.
Epistemologists typically do not focus on procedural or acquaintance knowledge, however, instead preferring to focus on propositional knowledge.
A proposition is something which can be expressed by a declarative sentence, and which purports to describe a fact or a state of affairs, such as “Dogs are mammals,” “2+2=7,” “It is wrong to murder innocent people for fun.” (Note that a proposition may be true or false; that is, it need not actually express a fact.) Propositional knowledge, then, can be called knowledge-that; statements of propositional knowledge (or the lack thereof) are properly expressed using “that”-clauses, such as “He knows that Houston is in Texas,” or “She does not know that the square root of 81 is 9.” In what follows, we will be concerned only with propositional knowledge.
Propositional knowledge, obviously, encompasses knowledge about a wide range of matters: scientific knowledge, geographical knowledge, mathematical knowledge, self-knowledge, and knowledge about any field of study whatever.
Any truth might, in principle, be knowable, although there might be unknowable truths.
One goal of epistemology is to determine the criteria for knowledge so that we can know what can or cannot be known; in other words, the study of epistemology fundamentally includes the study of meta-epistemology (what we can know about knowledge itself).