Profile on epistemology: from mindreading to intellectual autonomy

Published: July 25, 2018

The tri-campus Department of Philosophy is very proud of the breadth of specialists we have in all branches of philosophy, and we’ve started to feature brief reflections from a diverse range of our scholars on key issues in their particular fields. In the last issue of our annual magazine, Philosophy News, we focused on epistemology, the branch of philosophy concerned with the theory of knowledge and related concepts such as truth, belief, and justification.

In addition to Professors Benj Hellie (UTSC), Jennifer Nagel (UTM), Gurpreet Rattan (UTM), Jonathan Weisberg (UTM), Franz Huber (St. George), Lecturer Kenneth Boyd (UTSC), and PhD students Jessica Wright and Julia Smith, our complement of epistemologists was expanded by the arrival of Professor David Barnett (St. George) last year and Postdoctoral Fellow Stefan Lukits (St. George) this year. In what follows, four of these scholars share some of their reflections on various problems in epistemology.

Professor Jennifer Nagel: Knowledge and Mindreading

Marvelous feats in mind reading. Poster print, lithograph, B&W, 1900.

The word “mindreading” suggests a theatrical trick: the stage magician presses his hand to your forehead and mysteriously detects what you are thinking. But mindreading is also the standard term in social psychology for our natural capacity to attribute mental states to others. When you watch someone reaching for something, you see another person who wants something—the salt, say—and is trying to get it. On the basis of facial expression, speech, and gesture, we instinctively attribute goals, traits, desires, beliefs, and knowledge. My current project focuses on the difference between belief and knowledge, and on what we can learn about these states from studying the ways they are instinctively tracked by our everyday, non-magical social instincts.

There’s something puzzling about our instinctive tracking of knowledge and belief. If someone wants the salt, it will make no difference whether he knows or just believes that it is to his left: he will make the same motion either way. However, if you dig into big data on how we talk about other people, you see that we keep marking the distinction between believing and knowing, and use both of these terms heavily in describing what people are doing. It’s not obvious why we do this—and indeed, many philosophers who work on social navigation just focus on belief attributions and belief-desire explanations of action, passing over the hard fact that we speak more often of knowledge than belief.

You might think that knowledge would be harder to track, because the knower has to meet a higher standard. But sometimes high standards make things easier: tracking knowledge involves recognizing both its presence and absence. If your view of an event is blocked, I can tell that you don’t know what is happening, even when it’s a really open question what you might believe. Meanwhile, knowledge is in one key respect simpler than belief: while agents can believe almost anything, they can only know what is true. Young children talk about knowledge well before they can talk about belief, and non-human primates also spot knowledge and ignorance in their competitors even when they can’t keep track of any false (or accidentally true) beliefs that their competitors might have.

My own view is that the complex rules naturally used for instinctive belief attributions are a systematic expansion of a simpler set of rules used for knowledge detection. My current project aims to explain the nature of these rules, drawing on cross-linguistic work on mental state attribution, developmental and comparative psychology, and also on some very old-fashioned theoretical work in epistemology. And, although my central aim is to demystify what is going on in natural social intelligence, I have to confess that sometimes I do feel there is something almost magical about the way we are able to detect invisible states like knowledge, on our way to making sense of each other.

Associate Professor Franz Huber: Epistemology and Beyond

One way to engage with epistemology is as a normative discipline: to study how one should believe. For instance, we might propose the norm that one’s beliefs be consistent. This raises the question of why one’s beliefs should be consistent. That is, we need to justify this norm.

To do so requires clarifying the nature of normativity. According to one view, normativity consists in taking the means to one’s ends: a norm is a hypothetical imperative telling one what to do conditional on the assumption that one has a certain end. We justify such a hypothetical imperative by showing that obeying the norm in question really is a means to attaining the end the norm is conditional upon. In other words, we justify a norm by showing that some means-end relationship obtains.

For instance, we can justify the norm that one’s beliefs be consistent by showing that one’s beliefs are true only if they are consistent. That is, we justify the norm of consistency by showing it to be a necessary means to attaining the end of holding only true beliefs—an end one may, or may not, have.
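Put schematically, and only roughly, the logical point behind this means-end claim is the following: if all of one’s beliefs are true, then the actual world makes them true together, so the set of one’s beliefs has a model and is therefore consistent.

    All of one’s beliefs B1, …, Bn are true  ⇒  {B1, …, Bn} has a model (the actual world)  ⇒  {B1, …, Bn} is consistent.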

Three features of this way of engaging with epistemology are worth stressing.

First, the bad news. Showing that a means-end relationship obtains requires carrying out a proof or argument. No sweet without sweat.

Next, the sobering news. Engaging with epistemology in this way tells one which means to take in order to achieve various ends one may, or may not, have. However, it does not tell one which ends to have. To do so would be to succumb to dogmatism.

Finally, the good news. We can consider norms that go beyond epistemology and relate one’s beliefs to information about non-epistemological things. One such norm concerns degrees of belief and chances from metaphysics: it requires that, in special circumstances, one’s degrees of belief equal the chances, provided one is certain what those chances are. Another norm requires one’s degrees of belief to be probabilities.
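Put schematically, and simplifying considerably, with Cr standing for one’s degrees of belief and ch for the chances, these two norms come to roughly the following:

    Chance norm: if one is certain that ch(A) = x, then Cr(A) = x.
    Probabilism: Cr(A) ≥ 0 for every proposition A, Cr(T) = 1 for any tautology T, and Cr(A ∨ B) = Cr(A) + Cr(B) whenever A and B are mutually exclusive.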

Once these norms are justified by a means-end argument, one can explore their consequences. It turns out that some of these consequences—such as the thesis that chances are probabilities—are entirely metaphysical. These metaphysical consequences are necessary conditions for the satisfiability of said norms, and thus for the attainability of certain ends.

The upshot of this is that, by engaging with epistemology in this way, we can go beyond it and also make progress in metaphysics.

PhD student Jessica Wright: Epistemic Evaluation and Responsibility

Consider two common ways in which we ethically assess other people. First, we evaluate others’ actions, calling them good or bad, altruistic or selfish, and so on. Second, we hold others responsible for their actions, blaming them when they act badly and praising them when they act as they should.

An interesting problem in epistemology is analogous to these ethical practices. It concerns how we should evaluate others’ beliefs and attitudes, and whether we can hold others responsible for them.

We can and do evaluate others’ explicit beliefs, calling them true or false, rational or irrational. But what about our other mental states? Unlike our actions, the content of our mental states is not always clear, even to the agent herself. The question is especially pressing, as recent work in cognitive science tells us that many of our attitudes are deviant—introspectively inaccessible, associative, or outside of typical (reflective) avenues of control. Are these attitudes the proper subjects of epistemic evaluation, or do they fall outside this normative realm altogether?

It is also unclear how we can justifiably hold others responsible for their beliefs and attitudes (even the nondeviant ones). Many theorists have argued that we can be held responsible only for what we do intentionally and voluntarily. But is this the right model to apply to the epistemic realm? If none of our beliefs are under our voluntary control, it may mean that we cannot be held responsible for any of our mental states; or it may mean that epistemic responsibility needs to be reconceived.

My own view is that epistemic evaluation and responsibility are not best founded on voluntarist assumptions, which are strongly internalist—requiring introspective awareness and control. A hybrid picture, where evaluation is external to the agent but responsibility requires some level of reflective control, is the best solution to these thorny problems.

Assistant Professor David Barnett: The Puzzle of Intellectual Autonomy

An intellectually autonomous agent is one who thinks for herself, and doesn’t just go along with received opinion. We usually think of autonomy as a rational ideal. But autonomous agents face the charge of chauvinism. If you have no independent evidence that you, of all people, are the one whose judgment is objectively most reliable, then trusting your own judgment can seem like objectionable chauvinism.

This challenge to autonomy arises most obviously in social epistemology. Conciliationists about disagreement charge you with chauvinism unless you give the beliefs of your peers the same weight as your own. And anti-reductionists about testimony say it is chauvinistic not to trust others’ beliefs by default, as you allegedly must trust your own.

But I think the local problems they identify with their opponents’ views are just symptoms of a deeper challenge. Even the most basic requirements of rationality would have us grant special authority to our own beliefs. For example, rationality requires that you see to it that your belief is consistent with your other beliefs, rather than with other people’s beliefs. But the charge of chauvinism can be raised against even this fundamental requirement of rationality. If you have no independent evidence that consistency with your beliefs is a better guide to the truth, then why aim for consistency with your beliefs rather than mine?

I think a solution to these challenges requires a better understanding of how beliefs (and other mental states figuring into rational requirements) contribute to the subjective perspective of the agent. Beliefs are transparent, in the sense that when you believe that it will rain, from your perspective it appears to be a fact about the world that it will rain. But if someone else believes that it will rain, then from your perspective this appears merely to be a fact about that person’s state of mind.

This contrast is important, because the puzzle of intellectual autonomy only arises when we consider an agent’s beliefs from a third-person perspective. Because an agent typically does not adopt this perspective on her own beliefs, exercising intellectual autonomy does not involve chauvinistically privileging her own beliefs over others’. Instead, it requires only privileging the truth over what is merely believed. When you try to see to it that your belief is consistent with the truth, you will of course end up making it consistent with what you believe to be the truth, rather than with what some other person believes. But from your perspective, this is not a matter of privileging your beliefs over another person’s, but simply a matter of privileging the truth.


Read another profile on epistemology.
