Wednesday, September 2, 2009

Fallibilism

Fallibilism is the epistemological thesis that no belief (theory,
view, thesis, and so on) can ever be rationally supported or justified
in a conclusive way. Always, there remains a possible doubt as to the
truth of the belief. Fallibilism applies that assessment even to
science's best-entrenched claims and to people's best-loved
commonsense views. Some epistemologists have taken fallibilism to
imply skepticism, according to which none of those claims or views are
ever well justified or knowledge. In fact, though, it is fallibilist
epistemologists (which is to say, the majority of epistemologists) who
tend not to be skeptics about the existence of knowledge or justified
belief. Generally, those epistemologists see themselves as thinking
about knowledge and justification in a comparatively realistic way —
by recognizing the fallibilist realities of human cognitive
capacities, even while accommodating those fallibilities within a
theory that allows perpetually fallible people to have knowledge and
justified beliefs. Still, although that is the aim of most
epistemologists, the question arises of whether it is a coherent aim.
Are they pursuing a coherent way of thinking about knowledge and
justification? Much current philosophical debate is centered upon that
question. Epistemologists generally seek to understand knowledge and
justification in a way that permits fallibilism to be describing a
benign truth about how we can gain knowledge and justified beliefs.
One way of encapsulating that project is by asking whether it is
possible for a person ever to have fallible knowledge and
justification.

1. Introduction

The term "fallibilism" comes from the nineteenth-century American
philosopher Charles Sanders Peirce, although the basic idea behind the
term long predates him. According to that basic idea, no beliefs (or
opinions or views or theses, and so on) are so well justified or
supported by good evidence or apt circumstances that they could not be
false. Fallibilism tells us that there is no conclusive justification
and no rational certainty for any of our beliefs or theses. That is
fallibilism in its strongest form, being applied to all beliefs
without exception. In principle, it is also possible to be a
restricted fallibilist, accepting a fallibilism only about some
narrower class of beliefs. For example, we might be fallibilists about
whatever beliefs we gain through the use of our senses — even while
remaining convinced that we possess the ability to reason in ways that
can, at least sometimes, manifest infallibility. Thus, one special
case of this possible selectivity would have us be fallibilists
about empirical science even while exempting mathematical reasoning
from that verdict. For simplicity, though (and because it represents
the thinking of most epistemologists), in what follows I will
generally discuss fallibilism in its unrestricted form. (The exception
will be section 6, where a particularly significant, but seemingly
narrower, form of fallibilism will be presented.)

Fallibilism is an epistemologically pivotal thesis, and our initial
priority must be to formulate it carefully. Almost all contemporary
epistemologists will say that they are fallibilists. Yet the vast
majority of them also wish not to be skeptics. They would rather not
be committed to embracing principles about the nature of knowledge and
justification which commit them to denying that there can be any
knowledge or justified belief. This desire coexists, nonetheless, with
the belief that fallibility is rampant. Many epistemological debates,
it transpires, can be understood in terms of how they try to balance
these epistemologically central desires. So, can we find a precise
philosophical understanding of ourselves as being perpetually fallible
even though reassuringly rational and, for the most part,
knowledgeable?

2. Formulating Fallibilism: Preliminaries

An initial statement of fallibilism might be this:

All beliefs are fallible. (No belief is infallible.)

But what, exactly, is that saying? Here are three claims it is not making.

(1) Fallible people. It is not saying just that all believers — all
people — are fallible. A person as such is fallible if, at least
sometimes, he is capable of forming false beliefs. But that is
compatible with the person's often — on some other occasions —
believing infallibly. And that is not a state of affairs which is
compatible with fallibilism.

(2) Actually false beliefs. Nor is fallibilism the thesis that in fact
all beliefs are false. That possibility is allowed — but it is not
required — by fallibilism. Hence, it is false to portray fallibilism —
as commentators on science, in particular, sometimes do — in these
terms:

All scientific beliefs are false. This includes all scientific
theories, of course. (After all, even scientific theories are only
theories. So they are fallible — and therefore false.)

Regardless of whether or not that is a correct claim about scientific
beliefs and theories, it is not an accurate portrayal of what
fallibilism means to say. The key term in fallibilism, as we have so
far formulated it, is "fallible." And this conveys — through its use
of "-ible" — only some kind of possibility of falsity, rather than the
definite presence of actual falsity.

(3) Contingent truths. Take the belief that there are currently at
least one thousand kangaroos alive in Australia. That belief is true,
although it need not have been. It could have been false — in that the
world need not have been such as to make it true. So, the belief is
only contingently true (as philosophers say). By definition, any
contingent truth could have failed to be true. But even if we were to
accept that all truths are only contingently true, we would not be
committed to fallibilism. The recognition that contingent truths exist
is not what underlies fallibilism. The claim that any contingent truth
could instead have been false is not the fallibilist claim, because
fallibilism is not a thesis about truths in themselves. Instead, it is
about our attempts in themselves to accept or believe truths. It
concerns a kind of fundamental limitation first and foremost upon our
powers of rational thought and representation. And although a truth's
being contingent means that it did not have to be true, this does not
mean that it will, or even that it can, be altering its truth-value
(by becoming false) in such a way as to deceive you. For instance, the
truth that there are now at least one thousand kangaroos alive in
Australia is not made false even by there being only five kangaroos
alive in Australia in two days' time.
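The contrast drawn in (3) can be put compactly in standard modal
notation (a gloss of my own, using the possibility operator and the
necessity operator):

```latex
% p is contingently true: p holds, but p could have failed to hold.
p \wedge \Diamond \neg p

% p is necessarily true: p could not have failed to hold.
\Box p
```

Fallibilism, on this way of putting it, is not a claim about which of
these schemas a given truth satisfies; it concerns the believer's
epistemic position with respect to p, not the modal status of p
itself.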

3. Formulating Fallibilism: A Thesis about Justification

Given section 2's details, a better (and routine) expression of
fallibilism is this:

F: All beliefs are only, at best, fallibly justified.

F's main virtue, as a formulation of fallibilism, is its locating the
culprit fallibility as arising within the putative justification that
is present on behalf of a given belief. The kind of justification in
question is called "epistemic justification" by epistemologists. And
the suggested formulation, F, of fallibilism is saying that there is
never conclusive justification for the truth of a given belief.

There are competing epistemological theories of what, exactly,
epistemic justification is. Roughly speaking, though, it is whatever
would make a belief more, rather than less, rationally well supported
or established. This sort of rationality is meant to be
truth-directed. For example (as Conee and Feldman 2004 would argue),
whenever some evidence is providing epistemic support — justification
— for a belief, this is a matter of its supporting the truth of that
belief. In that sense, the evidence provides good reason to adopt the
belief — to adopt it as true. Or (to take another example, such as
would be approved of by the kind of theory from Goldman 1979) a
believer might have formed her belief within some circumstance or in
some way that — regardless of whether she can notice this — makes her
belief likely to be true. (And when are these kinds of justificatory
support present? In particular, are they only ever present if they are
guaranteeing that the belief being supported is true? Are any actually
false beliefs ever justified? Section 10 will focus on the question of
whether fallible justification is ever present, either for true or for
false beliefs.)

Just as there are competing interpretations of the nature of epistemic
justification, epistemologists exercise care in how they read F.
Perhaps the most natural reading of it says that no one is ever so
situated — even when possessing evidence in favor of the truth of a
particular belief — that, if she were to be rational in the sense of
respecting and understanding and responding just to that evidence, she
could not proceed to doubt that the belief is true. More generally,
the idea behind F is that, no matter how good one's justification is
in support of a particular belief's being true, that justification is
never so good as to be conclusive — where conclusive justification
would leave no room for anyone rationally attending to it to refrain
from holding the belief it supports. At any stage, according to F,
doubt could
sensibly (in some relevant sense of "sensibly") arise as to the truth
of the particular belief.

Often, therefore, this kind of possible doubt is called a rational
doubt. This is not to say that, necessarily, the most rational
reaction is to be swayed by the doubt, accepting it as decisive;
whether one should react like that is a separate issue, probably
deserving to be decided only after some subtle argument. The term
"rational doubt" is meant only to distinguish this sort of actual or
possible doubt from a patently irrational one — a doubt that is
psychologically, but not even prima facie rationally, available. How
might a doubt that is not even prima facie rational arise? Here is one
possible way. Imagine a person who is attending to evidence for the
truth of a particular belief, yet who refuses to accept the belief's
being true. Suppose that this refusal is due either (i) to her
misunderstanding the evidence or (ii) to some psychological quirk such
as a general lack of respect for evidence at all or such as mere
obstinacy (without her supplying counter-reasons disputing the truth
or power of the evidence). There is no accounting for why some people
will in fact doubt a given belief: psychologically, doubt could be an
option even in the face of rationally conclusive evidence.
Nevertheless, fallibilism is not a thesis about that psychological
option. The option it describes concerns rationality. Fallibilism is
about what it claims to be the ever-present availability of rational
doubt.

Accordingly, one possible way of misinterpreting F would involve
confusing the concept of a rational doubt with that of a subjectively
felt doubt or, maybe more generally, a psychologically present doubt.
Rational doubts need not be psychologically actual doubts, just as
psychologically actual ones need not be rational. In theory, a person
might have or feel some doubt as to whether a particular claim is true
— some doubt which she should not have or feel. (Perhaps she is
misevaluating the strength of the evidence she has in support of that
claim.) Equally, someone might have or feel no doubt as to the truth
of a belief he has — when he should have or feel some such doubt.
(Perhaps he, too, is misevaluating the strength of the evidence he has
in support of his belief.) In either case, the way in which the person
is in fact reacting — by having, or by not having, an actual doubt —
does not determine whether his or her evidence is in fact providing
rationally conclusive support. That is because a particular reaction —
of doubting or of not doubting — might not be as justified or rational
in itself as is possible. (By analogy, we may keep in mind the case —
unfortunately, all too common a kind of case — of a brutal tyrant who
claims, sincerely, to have a clear conscience at the end of his life.
The morality of his actions is more obviously to be explicated in
terms of what his conscience should be telling him rather than of what
it is telling him.) In effect, F is saying that no matter what
evidence you have, no matter how carefully you have accumulated it,
and no matter how rationally you use and evaluate it, you can never
thereby have conclusive justification for a belief which you wish to
support via all that evidence. Equally, F is saying that no matter
what circumstance you occupy, and no matter how you are forming a
particular belief, no guarantee is thereby being provided of your
belief being true. In those respects (according to F), any
justification you have is fallible — and it will remain so, no matter
what you do with it, no matter how assiduously you attend to it, no
matter what the circumstances are in which you are operating. The
problem will also remain, no matter how you might supplement or try to
improve your evidence or circumstances. Any possible addition or
alteration that you might make will continue leaving open at least a
possibility — one to which a careful and rational thinker would in
principle respond respectfully if she were to notice it — of your
belief's being false.

In that way, fallibilism — as a thesis about justification — travels
more deeply into the human cognitive condition than it would do if it
were a point merely about logic, say. It is not saying that no belief
is ever supported by evidence whose content logically entails the
first belief's content. An example of that situation would be provided
by a person's having, as evidence, the belief that he is a living,
breathing Superman — from which he infers that he is alive. The
evidence's content ("I am a living, breathing Superman") does
logically entail the truth of the inferred content ("I am alive").
(This attribution of logical validity or entailment means — from
standard deductive logic — that it is impossible for the first content
to be true without the second one also being true.) But the
justification being supplied is fallible, because — obviously — the
person will have, at best, inconclusive justification for thinking
that he is a living, breathing Superman in the first place. The
putative justification is the belief (about being Superman) and its
history, not only its content and the associated logical relations.
Yet fallibilism says that, even when all such further features are
taken into account, some potential will remain for rational doubt to
be present.

4. Formulating Fallibilism: Necessary Truths

Nevertheless, a modification of F (in section 3) is required, it
seems, if fallibilism is to apply to beliefs like mathematical ones or
to beliefs reporting theses of pure logic, for instance. Most
philosophers would accept that it is possible to be fallible in
holding such a belief — and that this is so, even given that there is
a sense in which such a belief, when true, could not ever be false.
Thus, perhaps mathematical believing is a fallible process, able to
lead to false beliefs. Perhaps this is so, even if mathematical truths
themselves never "just happen" to be true — never depending upon
changeable surrounding circumstances for their truth, hence never
being susceptible to being rendered false by some change in those
surrounding circumstances. How should we modify F, therefore, so as to
understand the way in which fallibility can nonetheless be present in
such a case? More generally, how should we modify F, so as to
understand the prospect of a person ever having fallible beliefs (let
alone only fallible ones) in what philosophers call necessary truths?

By definition, any truth which is not contingent is necessary. The
class of necessary truths is the class of propositions or contents
which, necessarily, are true. They could not have failed to be true.
And that class will generally be thought to contain — maybe most
significantly — mathematical truths. Consider, then, the belief that 2
+ 2 = 4. In itself (almost every philosopher will concur), there is no
possibility of that belief's being false. However, if it is impossible
for that belief to be false, then there is also no possible evidence
on the basis of which — in coming to believe that 2 + 2 = 4 — a person
could be forming a false belief. In this way, no belief that 2 + 2 = 4
could be merely fallibly justified — at least as this phenomenon has
been portrayed in F. Yet it is clear — or so most epistemologists will
aver — that mathematical believing can be fallible. Indeed, if
fallibilism is true, all mathematical beliefs will be subject to some
sort of fallibility: even mathematical beliefs would, at best, be only
fallibly justified. How, therefore, is this to be understood?

Here is one suggestion — F* — which modifies F by drawing upon some
standard epistemological thinking. The aim in moving from F to F*
would be to allow for the possibility of having a fallible belief in a
necessary truth:

F*: All beliefs are, at best, only fallibly justified. (And a
belief is fallibly justified when — even if the belief, considered in
itself, could not be false — the justification for it exemplifies or
reflects some more general way or process of thinking or forming
beliefs, a way or process which is itself fallible due to its capacity
to result in false beliefs.)

Sections 5 and 7 will describe a few possible reasons for a
fallibilist to regard your belief that 2 + 2 = 4 as being fallible. In
the meantime, we need only note schematically how F* would accommodate
those possible reasons. The basic approach would be as follows.
Although your belief that 2 + 2 = 4 cannot be false (once it is
present), your supposed justification for it is fallible. This could
be so in a few ways. For a start, maybe you are merely repeating by
rote something you were told many years ago by a somewhat unreliable
school teacher. (Imagine the teacher having been poor at making
accurate claims within most other areas of mathematics. Even with
respect to the elements of mathematics about which she was accurate,
she might have been merely repeating by rote what she had been told by
her own early — and similarly unreliable — teachers.) The fallibility
of memory is also relevant: over the years, one forgets much. Still,
your current belief that 2 + 2 = 4 seems accurate. And it need not be
present only because of your fallible memory of what your fallible
teacher told you. Suppose that you are now very sophisticated in your
mathematical thinking: in particular, your justification for your
belief that 2 + 2 = 4 is purely mathematical in content. That
justification involves clever representation, via precisely defined
symbols, of abstract ideas. Nevertheless, even such purely
mathematical reasoning can mislead you (no matter that it has not done
so on this occasion). Really proving that 2 + 2 = 4 is quite
difficult; and when people are seeking to grasp and to implement such
proofs, human fallibility may readily intrude. Actual attempts to
establish mathematical truths need not always lead to accurate or true
beliefs.
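The difficulty of really proving that 2 + 2 = 4 can be glimpsed in a
formal setting. Here is a minimal sketch in the Lean 4 proof language
(my illustration, with Peano-style natural numbers defined from
scratch; the names are illustrative only):

```lean
-- Peano-style natural numbers: zero, and a successor operation.
inductive Nat' where
  | zero : Nat'
  | succ : Nat' → Nat'

open Nat'

-- Addition, defined by recursion on the second argument.
def add : Nat' → Nat' → Nat'
  | n, zero   => n
  | n, succ m => succ (add n m)

-- "2 + 2 = 4", spelled out in successor notation:
-- S(S(0)) + S(S(0)) = S(S(S(S(0)))).
theorem two_plus_two :
    add (succ (succ zero)) (succ (succ zero))
      = succ (succ (succ (succ zero))) := rfl
```

Even here, the definitions, the proof checker, and the human
transcribing them are all fallible links in the chain — which is just
the fallibilist's point.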

At any rate, that is how a fallibilist might well analyze the case.

5. Empirical Evidence of Fallibility

How can we ascertain which of our ways of thinking are fallible? Both
ordinary observation and sophisticated empirical research are usually
regarded as able to help us here, by revealing some of the means by
which fallibility enters our cognitive lives. I will list several of
the seemingly fallible means of belief-formation and
belief-maintenance that have been noticed.

(1) Misusing evidence. Apparently, people often misevaluate the
strength of their evidence. By taking it to be stronger or weaker
support than in fact it is for the truth of a particular belief, a
person could easily be led to adopt or retain a false, rather than
true, belief. Indeed, there are many possible ways not to use evidence
properly. For example, people do not always notice, let alone compare
and resolve, conflicting pieces of evidence. They might overlook some
of the evidence available to them. There can be inattention to details
of their evidence. And so forth.

(2) Unreliable senses. How many of us have wholly reliable — always
accurate — senses? Shortsightedness is not so rare. The same is true
of long-sightedness. People can have poor hearing, not to mention
less-than-perfectly discerning senses of smell, taste, and so on.
Sensory illusions and hallucinations affect us, too. The road seems to
ripple under the heat of the sun; the stick appears to bend as it
enters the glass of water; and so forth. In such cases we will think,
upon reflection, that what we seem to sense is something we only seem
to sense.

(3) Unreliable memory. At times, people suffer lapses of memory; and
they can realize this, experiencing "blanks" as they endeavor to
recall something. They can also feel as though they are remembering
something, when actually this feeling is inaccurate. (A "false memory"
is like that. The event which a person seems to recall, for instance,
never actually happened.)

(4) Reasoning fallaciously. To reason in a logically invalid way is to
reason in a way which, even given the truth of one's premises or
evidence, can lead to falsity. It is thereby to reason fallibly. Do we
often reason like that? Seemingly, yes. Of course, often we and others
realize that we are doing so. And we and those others might generally
be satisfied with our admittedly fallible reasoning. (But should we
ever regard it with satisfaction? Section 10 will consider this kind
of question.) There are times, though, when we and others do not
notice the fallibility in our reasoning. On those occasions, we are —
without realizing this about ourselves — reasoning fallaciously. That
is, we are reasoning in ways which are logically invalid but which
most people mistakenly, albeit routinely, regard as being logically
valid.
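One stock example of such a fallacy (my illustration; the text above
does not name a specific one) is "affirming the consequent," which is
easily mistaken for the valid form modus ponens:

```latex
% Valid (modus ponens): from "if P then Q" and P, infer Q.
\frac{P \rightarrow Q \qquad P}{Q}

% Invalid (affirming the consequent): from "if P then Q" and Q, infer P.
\frac{P \rightarrow Q \qquad Q}{P}
```

The second form can lead from true premises to a false conclusion: let
P be false and Q true, and both premises hold while the conclusion
fails.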

(5) Intelligence limitations. Is each of us so intelligent as never to
make mistakes which a more intelligent person would be less likely
(all else being equal) to make? Presumably none of us escape that
limitation. Do we notice people making mistakes due to their
exercising (and perhaps possessing) less intelligence than was needed
not to make those mistakes? We appear to do so. Sometimes (often too
late), we observe this in ourselves, too.

(6) Representational limitations. We use language and thought to
represent or describe reality — hopefully, to do this accurately. But
people have often, we believe, made mistakes about the world around
them because of inadequacies in their representational or descriptive
resources. For example, they can have been applying misleading and
clumsily constructed concepts — ones which could well be replaced
within an improved science. (And this sort of problem — at least to
judge by the apparent inescapability of disputes among its
practitioners — might be even more acute within such areas of thought
as philosophy.)

(7) Situational limitations. It is not uncommon for people to make
mistakes of fact because they have biases or prejudices that impede
their ability to perceive or represent or reflect accurately upon
those facts. Such mistakes may be made when people are manifesting an
insufficiently developed awareness of pertinent aspects of the world.
Maybe a person's early upbringing, and how she has subsequently lived
her life, has not exposed her to a particularly wide range of ideas.
Perhaps she has not encountered what are, as it happens, more accurate
ideas or principles than the ones she is applying in her attempts to
understand the world. All of this might well prevent her even noticing
some relevant aspects of the world. (When both I and a doctor gaze at
an X-ray, only one of us notices much of medical relevance.)

That list of realistically possible sources of fallibility —
philosophers will suspect — could be continued indefinitely. And its
scope is disturbingly expansive. Thus, even when you do not feel as
though a belief of yours has been formed or maintained in some way
that manifests any of those failings, you could be mistaken about
that. This is a factual matter; or so most philosophers will say. On
any given occasion, it is an empirical question as to whether in fact
you are being fallible in one of those ways. (Notably, it is not
simply a matter of whether you are feeling fallible.) Accordingly,
many epistemologists have paid attention to pertinent empirical
research by psychologists, neurologists, biologists, anthropologists,
and the like, into actual limitations upon human cognitive powers.
Data uncovered so far have unveiled the existence of much fallibility.
(See, for example, Nisbett and Ross 1980; Kahneman, Slovic, and
Tversky 1982.)

Some epistemologists have found this to be worrying in itself. Still,
has enough fallibility thereby been uncovered to justify an acceptance
of fallibilism? (Remember that fallibilism, in its most general form,
is the thesis that all of our beliefs are fallible.) This, too, is at
least partly an empirical question. It is the question of just how
fallible people are as a group — and, naturally, of just how much a
given individual ever manages to transcend such limitations upon
people in general. How fallibly, as it happens, do people ever form
and maintain beliefs? Is every single one of us fallible enough to
render every single one of our beliefs fallible?

It is difficult, perhaps impossible, to use personal observations and
empirical research to answer those questions conclusively. (And
fallibilism would deny that this is possible anyway.) For presumably
such fallibilities would also afflict people as observers and as
scientific inquirers. Hence, this would occur even when theorists —
let alone casual observers — are investigating those fallibilities.
The history of science reveals that many scientific theories which
were at one time considered to be true have subsequently been
supplanted, with later theories deeming the earlier ones to have been
false.

Is science therefore especially fallible as a way of forming beliefs
about the world? That is a matter of some philosophical dispute.
Empirical science is performed by fallible people, often involving
much fallible coordination among themselves. It relies on the fallible
process of observation. And it can generate quite complicated theories
and beliefs — with that complexity affording scope for marked
fallibility. Yet in spite of these sources of fallibility nestling
within it (when it is conceived of as a method), science might well
(when it is conceived of as a body of theses and doctrines) encompass
the most cognitively impressive store of knowledge that humans have
ever amassed. Even if not all of its theories and beliefs are true
(and therefore not all of them are knowledge), a significant
percentage of them seem to have a strong case for being knowledge. Is
that compatible with science's fallibility, even its inherent
fallibility, as a method? Or are none of its theories and beliefs
knowledge, simply because (as later scientists will realize) some of
them are not? Alternatively, are none of them knowledge, because none
of them are conclusively justified? That depends on what kind of
knowledge scientific knowledge would be. This is a subtle matter,
asking us first to consider in general whether there can be
inconclusively justified knowledge at all. Section 9 will indicate how
epistemologists might take a step towards answering that question. It
will do so by discussing the idea of fallible knowledge. (And
section 10 will comment on science and fallible justification.)

6. Philosophical Sources of Fallibilism: Hume

Section 5 indicated some empirical grounds on which fallibilism might
be thought to be true. Epistemologists have also provided
non-empirical arguments for fallibilism, both in its strongest form
and in important-but-weaker forms. This section and the next will
present two of those arguments.

One of them comes from the eighteenth-century Scottish philosopher
David Hume's classic invention of what is now called inductive
skepticism. (For a succinct version of his argument, see his 1902
[1748], sec. IV. For some sense of the philosophical and historical
dimensions of that notion, see Buckle 2001: part 2, ch. 4.) At the
core of his skeptical argument was an
important-even-if-possibly-not-wholly-general fallibilism. Hume's
argument showed, at the very least, the inescapable fallibility of an
extremely significant kind of belief — any belief which either is or
could be an inductive extrapolation from observational data. According
to Hume, no beliefs about what is yet to be observed (by a particular
person or some group) can be infallibly established on the basis of
what has been observed (by that person or that group). Consider any
use of present and past observations, perhaps to derive, and at least
to support, some view that aims to describe aspects of the world that
have not yet been observed. (Standard examples include people's
seeking to justify the belief that the sun will rise tomorrow, by
using past observations of it having risen, and people's many
observations of black ravens supposedly justifying the belief that all
ravens are black.) Hume noticed that observations can never provide
conclusive assurance — a proof — that the world is not about to change
from what it has thus far been observed to be like. Even if all
observed Fs have been Gs, say, this does not entail that any, let
alone all, of the currently unobserved Fs are also Gs. No such
guarantee can be given by the past observations. And this is so, no
matter how many observations of Fs have been made (short of having
observed all of them, while realizing that this has occurred).
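Hume's point about the absence of entailment can be stated
schematically (the notation is mine, not Hume's). Writing Ox for "x
has been observed":

```latex
% Premise: every observed F has been a G.
\forall x\, (Ox \wedge Fx \rightarrow Gx)
% ...does not logically entail...
\not\models
% Conclusion: every F is a G.
\forall x\, (Fx \rightarrow Gx)
```

A single countermodel suffices: a world containing one unobserved F
that is not a G makes the premise true and the conclusion false, no
matter how many observed Fs it also contains.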

Hume presents his argument as one that uncovers a limitation upon the
power or reach of reason — that is, upon how much can be revealed to
us by reason as such. Possibly, this is in part because that is the
non-trivial aspect of his argument. Overall, his argument is
describing a limitation upon the power or reach both of reason and of
observation — upon how far these faculties or capacities can take us
towards proving the truth of various beliefs which, inevitably, we
find ourselves having. But that limitation reflects both a point that
is non-trivially true (about reason) and one that is trivially true
(about observation). Hume combines those two points (as follows) to
attain his fallibilism. (1) It is trivially true that any observations
that have been made at and before a given time have not been of what,
at that time, is yet to be observed. (2) It is true (although not
trivially so) that our powers of reason face a limitation of their
own, one that leaves them unable to overcome (1)'s limitation upon
observation. Our capacity to reason — our powers simply of reflection
— must concede that, however unlikely this might seem at
the time, the unobserved Fs could be different in a relevant way from
those that have been observed. Hence, in particular, whatever powers
of reason we might use in seeking to move beyond our observations will
be unable to eliminate the possibility that the presently unobserved
Fs are quite different (as regards being Gs) from the Fs that have
been observed. Our powers of reason must concede — again, even if this
seems unlikely at the time — that continued observations of Fs might
be about to begin giving results that are quite different from what such
observations have previously revealed about Fs being Gs. Obviously,
the past observations of Fs (all of which, we are supposing, were Gs)
do not tell us that this is likely to occur, let alone that it is
about to do so. But, crucially, pure reason tells us that it could be
about to occur. (3) Consequently, if we combine (1) and (2), we reach
this result:

Neither observation nor reason can reveal with rational certainty
anything about the nature of any of the Fs that are presently
unobserved.

In other words, there is always a "logical gap" between the
observations of Fs that have been made (either by some individual or a
group) and any conclusion regarding Fs that have not yet been observed
(by either that individual or that group).

Our appreciation of that gap's existence is made specific — even
dramatic — by the Humean thought that the world could be about to
change in the relevant respect. We thus see that fallibility cannot be
excluded from any justification which we might think is present for a
belief that either is or could be an extrapolation from some
observations. Such a belief could be about the future ("The sun will
rise tomorrow"), the presently unobserved past ("Dinosaurs used to
live here"), populations ("The cats in this neighborhood are
vicious"), and so on. Beliefs like that are pivotal in our mental
lives, it seems.

Indeed, as some philosophers argue, they can be all-but-ubiquitous —
even surprisingly so. When you believe that you are seeing a cat, is
this an extrapolation from observations? At first glance, it seems
straightforwardly observational itself. Yet maybe it is an
extrapolation in a less obvious way. Perhaps it is an extrapolation
from both your present sensory experience and similar ones that you
have had in the past. Perhaps it is implicitly a prediction that the
object in front of you is not about to begin looking and acting like a
dog, and that it will continue looking and acting like a cat. (Is this
part of what it means to say that the object is a cat — a
genuine-flesh-and-blood-physical-object cat?) Are even simple
observational beliefs therefore concealed or subtle extrapolations? If
they are to be justified, will this need to be inductive
justification?

If so, the Humean verdict (when formulated in contemporary
epistemological language) remains that, even at best, such beliefs are
only fallibly justified. Any justification for them would need to rest
upon observations from which they might have been extrapolated (even if in
fact this is not, psychologically speaking, how they were reached).
And no such justification could ever rationally eliminate the
possibility that any group of apparently supportive observations is
misleading as to what the world would be found to be like if further
observations were to be made.

That is Hume's inductive fallibilism — a fallibilism about all actual
or possible inductive extrapolations from observations. Many
interpreters believe that his argument established — or at least that
Hume meant it to establish — more than a kind of fallibilism. This is
why it is generally called an argument for inductive skepticism, not
just for inductive fallibilism. (On Hume's transition from fallibilism
to skepticism, see Stove 1973.) Accordingly, his conclusion is
sometimes presented more starkly, as saying that observations never
rationally show or establish or support or justify at all any
extrapolations beyond observational data, even ones that purport only
to describe a likelihood of some observed pattern's being perpetuated.
At its most combative, his conclusion might be said — and sometimes
is, especially by non-philosophers — to reveal that predictions are
rationally useless or untenable, or that any beliefs "going beyond"
observational reports are, rationally speaking, nothing more than
guesses. Whether or not that skeptical thesis is true depends, for a
start, upon whether there can be such a thing as fallible
justification — or whether, once fallibility is present, justification
departs. Section 10 will consider that issue.

In any case, Hume's fallibilism is generally considered by
philosophers (for instance, see Quine 1969; Miller 1994: 2-13; Howson
2000: ch. 1) to have struck a serious blow against the otherwise
beguiling picture of science as delivering conclusive knowledge of the
inner continuing workings of the world. It is not uncommon for people
to react to this interpretation of Hume's result by inferring that
therefore science — with its reliance upon observations as data, with
which it supports its predictions and more general principles and
posits — never really gives us knowledge of a world beyond those
observations. The appropriateness of that skeptical inference depends
on whether or not there can be such a thing as fallible knowledge — or
whether, once fallibility is present, knowledge departs. Section 9
will consider that issue.

7. Philosophical Sources of Fallibilism: Descartes

Does Hume's reasoning (described in section 6) support fallibilism in
its most general form? It does, if all beliefs depend for their
justification upon extrapolations from observational experience. And
section 6 also indicated briefly how there can be more beliefs like
that than we might realize. Nevertheless, the usual philosophical
reading of Hume's argument does not assume that the argument shows
that all beliefs are to be supported either fallibly or not at all. We
should therefore pay attention to another equally famous philosophical
argument, one whose conclusion is definitely that no beliefs at all
are conclusively justified.

This argument comes to us from the seventeenth-century French
philosopher René Descartes. In his seminal Meditations on First
Philosophy (1911 [1641]), Descartes ended Meditation I skeptically,
denying himself all knowledge. How was that skeptical conclusion
derived? It was based upon a fallibilism — a wholly general
fallibilism. And his argument for that fallibilism — the Evil Genius
(or Evil Demon) argument, as it is often called — may be presented in
this way:

Any beliefs you have about … well, anything … could be present
within you merely because some evil genius or demon has installed them
there. And they might have been installed so as to deceive you: maybe
any or all of them are false. Admittedly, you do not feel as if this
has happened within you. Nonetheless, it could have done so. Note that
the evil genius is not simply some other person, even an especially
clever one. Rather, it would be God-like in pertinent powers although
malevolent in accompanying intent — mysteriously able to implant any
false beliefs within you so that their presence will feel natural to
you, leaving you unaware that any of your beliefs are bedeviled by
this untoward causal origin. You will never notice the evil genius's
machinations. All will seem normal to you within your mind. It will
feel just as it would if you were observing and thinking carefully and
insightfully.

Is that state of affairs possible? Indeed it is (said Descartes, and
most epistemologists have since agreed with him about that). Moreover,
if it is always present as a possibility, then one pressing part of it
— your being mistaken — is always possible too, as a possibility
afflicting each of your beliefs. And what is true of you in this
respect is true of everyone: the evil genius could be manipulating all
of our minds. Hence, any belief could
be false, no matter who has it and no matter how much evidence they
have on its behalf. Even the evidence, after all, could have been
installed and controlled by an evil genius.

Interestingly, the reference to an evil genius as such, provocative
though it is, was not essential even to Descartes' own reasoning. In
Meditation I, he had already — immediately prior to outlining the Evil
Genius argument — presented a sufficiently fallibilist worry. It
concerned the possibility of his having been formed or created in some
way — whatever way that might be — which would leave him perpetually
fallible. He wanted to believe that God was his creator. However (he
wondered), would God create him as a being who constantly makes
mistakes, or who is at least always liable to do so? God would be
powerful enough to do this. But (Descartes also thought) surely God
would have had no reason to allow him to make even some mistakes. Yet
manifestly Descartes does make them. So (he inferred), he could not
take for granted at this early stage of his inquiry (as it is
portrayed in his Meditations) that he has actually been formed or
created by a perfect God. The evidence of his fallibility opens the
door to the possibility that he does not have that causal background.
So (he continues), maybe his causal origins are something less than
perfect, as of course they would be if anything less than a perfect
God were involved in them. In that event, however, he is even more
likely to make mistakes than he would be if God was his creator. In
one way or the other, therefore (concludes Descartes), fallibility is
unavoidable for him: no belief of his is immune from the possibility
of being mistaken. Thus, fallibilism is thrust upon Descartes by this
reasoning. (He realizes, nonetheless, that it is subtle reasoning. He
might not retain it in his thinking. He might overlook his
fallibility, if he is not mentally vigilant. Hence, he proceeds to
describe the evil genius possibility to himself, as a graphic way of
holding the fallibilism fast in his mind. The Evil Genius argument is,
in effect, a philosophical mnemonic for him.)

Descartes himself did not remain a fallibilist. He believed that (in
his Meditation II) he had found a convincing answer to that
fallibilist argument. This answer was his Cogito, one of philosophy's
emblematic moments, and it arose via the following reasoning.
Descartes thought that if ever he is in fact being deceived by an evil
genius, at least he thereby exists at those moments. (It
is impossible to be an object of deception without existing.) The
deception would be inflicted upon him while he exists as a thinker —
specifically, as someone thinking whatever false thoughts are being
controlled within him by the evil genius. But this entails (reasoned
Descartes) that there is a kind of thought about which he cannot be
deceived, even by an evil genius. Because he can know that he is
having a particular thought, he can know that he exists at that time.
And so he thought, "I think, therefore I am." (This is the usual
translation into English of the "Cogito, ergo sum" from Latin. The
latter version is from Descartes' Discourse on Method.) He would
thereby know that much, at any rate (inferred Descartes). He need not
— and at this point in his inquiry he does not think that he can —
know which, if any, of his beliefs about the wider world are true.
Nonetheless, he has knowledge of his inner world — knowledge of his
own thinking. He would know not only that he is thinking, but even
what it is that he is thinking. These beliefs about his mental life
are conclusively supported, too, because — as he has just argued —
they are beyond the relevant reach of any evil genius. No evil genius
can give him these thoughts (that he is thinking and hence existing)
and thereby be deceiving him.

But most subsequent epistemologists have been more swayed by the
fallibilism emerging from the Evil Genius argument than by Descartes'
reply to that argument. (For a discussion of these issues in
Descartes' project, see Curley 1978; Wilson 1978.) One common
epistemological objection to his use of the Cogito is as follows: How
could Descartes have known that it was he in particular who was
thinking? Shouldn't he have rested content with the more cautious and
therefore less dubitable thought, "There is some thinking occurring" —
instead of inferring the less cautious and therefore more dubitable
thought, "I am thinking"? That objection was proposed by Georg
Lichtenberg in the eighteenth century. (For a criticism of it, see
Williams 1978: ch. 3.) An advocate of it might call upon such
reasoning as this:

In order to know that it is his own thinking, as against just some
thinking or other, Descartes has to know already — on independent
grounds — that he exists. However, in that event he would not know of
his existing, only through his knowing of the thinking actually
occurring: he would have some other source of knowledge of his
existence. Yet his Cogito had been relied upon by him because he was
assuming that his knowing of the thinking actually occurring was (in
the face of the imagined evil genius) the only way for him to know of
his existence.

That reasoning would claim to give us the following results. (1)
Descartes does not know that he is thinking — because he would have to
know already that he exists (in order to be the subject of the
thinking which is noticed), and because he can know that he exists
only if he already knows that he is thinking (the latter knowledge
being all that is claimed to be invulnerable to the Evil Genius
argument). (2) Similarly, Descartes does not know that he exists —
because he would have to know already that he is thinking (this being
all that is claimed to be invulnerable to the evil genius argument),
and because he could know that he is thinking only by already knowing
that he exists (thereby being able to be the subject of the thinking
that is being noticed). (3) And once we combine those two results, (1)
and (2), what do we find? The objection's conclusion is that Descartes
knows of his thinking and of his existence all at once — or not at
all. In short, he is not entitled — as a knower — to the "therefore"
in his "I think, therefore I exist."

That is one possible objection to the Cogito. Still, even if it
succeeds on its own terms, it leaves open the following question. Can
Descartes have all of that knowledge — the knowledge of his thinking
and the knowledge of his existence — all at once? This depends on
whether, once he has doubted as strongly and widely as he has done, he
can have knowledge even of what is in his own mind. In the
mid-twentieth century, the Austrian philosopher Ludwig Wittgenstein
mounted a deep challenge to anything like the Cogito as a way of
grounding our thought and knowledge. Was Descartes legitimately using
words at all so as to form clearly known thoughts, such as "I am
thinking"? How could he know what these even mean, unless he is
applying some understood language? And Wittgenstein argued that no one
could genuinely be thinking thoughts which do not depend upon an
immersion in a "public" language, presumably a language shared by
other speakers, certainly one already built up over time. In which
case, Descartes would be mistaken in believing that, even if the
possibility of an evil genius imperils all of his other knowledge, he
could retain the knowledge of his own thinking. For even that thinking
would have its content only by using terms borrowed from a public
language. Hence, Descartes would have to be presupposing some
knowledge of that public world, even when supposedly retreating to the
inner comfort and security of knowing just what he is thinking. (It
should be noted that Wittgenstein himself did not generally direct his
reasoning — his Private Language argument, as it came to be called —
specifically against Descartes by name. For Wittgenstein's reasoning,
see his 1978 [1953] secs. 243-315, 348-412.)

Of course, even if the Cogito does in fact succeed, epistemologists
all-but-unite in denying that such conclusiveness would be available
for many — or perhaps any — other beliefs. Accordingly, we would still
confront an all-but-universal fallibilism, with Descartes having
provided an easy way to remember our all-but-inescapable fallibility.
In any case, it remains possible that the Cogito does not succeed, and
that instead the evil genius argument shows that no belief is ever
conclusively justified. Descartes' argument is not the only one for
such a fallibilism. But most epistemologists still refer to it
routinely and with some respect, as being a paradigm argument for the
most general form of fallibilism.

8. Implications of Fallibilism: No Knowledge?

If we were to accept that fallibilism is true, to what else would we
thereby be committed? In particular, what further philosophical views
must we hold (all else being equal) if we hold fallibilism?

Probably the most significant idea that arises, in response to that
question, is the suggestion that any fallibilist about justification
has to be a skeptic about the existence of knowledge. (There is also
the proposal that she must be a skeptic about the existence of
justification. Section 10 will discuss that proposal.) This potential
implication has made fallibilism particularly interesting to many
philosophers. Should we accept the skeptical thesis that because (as
fallibilists claim) no one is ever holding a belief infallibly, no one
ever has a belief which amounts to being knowledge? In this section
and the next, we will consider that question — first (in this section)
by examining how one might argue for the skeptical thesis, next (in
section 9) by seeing how one might argue against it.

That hypothesized skeptic is reasoning along these lines:

1. Any belief, if it is to be knowledge, needs to be conclusively justified.
2. No belief is conclusively justified. [Fallibilism tells us this.]
3. Hence, no belief is knowledge. [This follows from 1-plus-2.]

Fallibilism gives us 2; deductive logic gives us 3 (as following from
1 and 2); and in this section we are not asking whether fallibilism is
true. (We are assuming — for the sake of argument — that it is.) So,
our immediate challenge is to ask whether 1 is true. Is it a correct
thesis about knowledge? Does knowledge require infallibility (as 1
claims it does)? The rest of this section will evaluate what are
probably the two most commonly encountered arguments for the claim
that knowledge is indeed like that.
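
Before turning to those arguments, note that the skeptic's inference
from 1 and 2 to 3 is formally valid. That validity can be sketched in
Lean; the type `Belief` and the two predicate names are illustrative
choices of mine, not drawn from the text:

```lean
-- A minimal sketch of the skeptic's syllogism from this section.
-- `Belief` is an abstract type; the predicates are illustrative names.
variable (Belief : Type)
variable (Knowledge ConclusivelyJustified : Belief → Prop)

-- Premise 1: any belief, if it is to be knowledge, needs to be
--            conclusively justified.
-- Premise 2: no belief is conclusively justified (fallibilism).
-- Conclusion 3: no belief is knowledge.
example
    (h1 : ∀ b, Knowledge b → ConclusivelyJustified b)
    (h2 : ∀ b, ¬ ConclusivelyJustified b) :
    ∀ b, ¬ Knowledge b :=
  fun b hk => h2 b (h1 b hk)
```

Since the logic is unimpeachable, the philosophical work falls
entirely on evaluating premise 1, as the rest of this section does.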

(1) Impossibility. Many people say this about knowledge:

If you have knowledge of some aspect of the world, it is
impossible for you to be mistaken about that aspect. (An example: "If
you know that it's a dog, you can't be mistaken about its being one.")

We may call that the Impossibility of Mistake thesis. Its advocates
might infer, from the conjunction of it with fallibilism, that no one
ever has any knowledge. Their reasoning would be like this:

Because no one ever has conclusive justification for a belief,
mistakes are always possible within one's beliefs. Hence, no beliefs
attain the rank of knowledge. (We would just think — mistakenly — that
often knowledge is present.)

But almost all epistemologists would regard that sort of inference as
reflecting a misunderstanding of what the Impossibility of Mistake
thesis is actually saying. More specifically, they will say that there
is a misunderstanding of how the term "impossible" is being used in
that thesis. Here are two possible claims that the Impossibility of
Mistake thesis could be thought to be making:

Any instance of knowledge is — indeed, it must be — directed at
what is true. (Knowledge entails truth.)

Any instance of knowledge has as its content what, in itself,
could not possibly be false. (Knowledge entails necessary truth.)

The first of those two interpretations of the Impossibility of Mistake
thesis says that knowledge, in itself, has to be knowledge of what is
true. The second of the two possible interpretations says that
knowledge is of what, in itself, has to be true. The two claims will
be correlatively different in what they imply.
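
The difference between the two readings is a matter of modal scope. In
standard modal notation (a gloss of mine, not the text's own, with K
for "it is known that" and the box for necessity):

```latex
% Wide scope: Necessarily, Knowledge Is of What Is True
\Box\, \forall p\, (Kp \rightarrow p)

% Narrow scope: Knowledge Is of What Is Necessarily True
\forall p\, (Kp \rightarrow \Box p)
```

Only the narrow-scope reading would restrict knowledge to necessary
truths; the wide-scope reading says merely that knowing guarantees
truth, which (as the next paragraphs explain) is compatible with
fallibilism.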

Epistemologists will insist that the first possible interpretation
(which could be called the Necessarily, Knowledge Is of What Is True
thesis) is manifestly true — but that it does not join together with
fallibilism to entail skepticism. Recall (from (2) in section 2) that
fallibilism does not deny that there can be truths among our claims
and thoughts. It denies only that we are ever conclusively justified
in any specific claim or thought as to which claims or thoughts are
true. So, while the Necessarily, Knowledge Is of What Is True thesis
entails that any case of knowledge would be knowledge of a truth,
fallibilism — because it does not deny that there are truths — does
not entail that there is no knowledge.

Epistemologists will also deny that the second possible interpretation
(which may be called the Knowledge Is of What Is Necessarily True
thesis), even if it is true, entails skepticism. Recall (this time
from (3) in section 2) that fallibilism is not a thesis which denies
that knowledge could ever be of contingent truths. So, while the
Knowledge Is of What Is Necessarily True thesis entails that any case
of knowledge would be knowledge of a necessary truth, fallibilism —
because it does not, in itself, deny that there is knowledge of
contingent truths — does not entail that there is no knowledge. (But
most epistemologists, incidentally, will deny that the Knowledge Is of
What Is Necessarily True thesis is true. They believe that — if there
can be knowledge at all — there can be knowledge of contingent truths,
not only of necessary ones.)

(2) Linguistic oddity. Another way in which people are sometimes led
to deny that a wholly general fallibilism is compatible with people
ever having knowledge is by their reflecting on some supposed
linguistic infelicities. Imagine saying or thinking something like
this:

"I know that's true, even though I could be mistaken about its
being true." (An example: "I know that it's raining, even though I
could be mistaken in thinking that it is.")

That is indeed an odd way to speak or think. Let us refer to it as The
Self-Doubting Knowledge Claim. Should we infer, from that claim's
being so linguistically odd, that no instance of knowledge can allow
the possibility (corresponding to the "could" in The Self-Doubting
Knowledge Claim) of being mistaken? Would this imply the
incompatibility of fallibilism with anyone's ever having knowledge?
Does this show that, whenever one's evidence in support of a belief
does not provide a conclusive proof, the belief fails to be knowledge?

Few epistemologists will think so. They have yet to agree on what,
exactly, the oddity of a sentence like The Self-Doubting Knowledge
Claim reflects. (Very roughly: there is some oddity in that claim's
expressed mixture of confidence and caution.) But few of them believe
that the oddity — however, ultimately, it is to be understood — will
imply that knowledge cannot ever be fallible. Their usual view is that
the oddity will be found to reside only in the saying or the thinking
of any such sentence — in someone's actively using it. And this could be
so (they continue) without the sentence's also actually being false,
even when it is being used. Some sentences which clearly are
internally logically consistent — and hence which in some sense could
be true — cannot be used without a similar linguistic oddity being
manifested. Try saying, for example, "It's raining, but I don't
believe that it is." As the twentieth-century English philosopher G.
E. Moore remarked (and his observation has come to be called Moore's
Paradox), something is amiss in any utterance of that kind of
sentence. (For more on Moore's Paradox, see Sorensen 1988, ch. 1;
Baldwin 1990: 226-32.) This particular sentence — "It's raining, but I
don't believe that it is" — is manifestly odd, seemingly in a similar
way to any utterance of The Self-Doubting Knowledge Claim. Yet this
does not entail the sentence's being false. For each half of it could
well be true; and they could be true together. The fact that it is
raining is logically consistent with the speaker's not believing that
it is. (She could be quite unaware of the weather at the time.) So,
the sentence could be true within itself, no matter that it cannot
sensibly be uttered, say. That is, its content — what it reports —
could be true, even if it cannot sensibly be asserted — as a case of
reporting — in living-and-breathing speech or thought.
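
The logical point being made here, that the Moore-paradoxical content
is internally consistent even though it cannot sensibly be asserted,
can be checked formally. A minimal Lean sketch (the propositional
letters, standing in for "it's raining" and "I believe that it's
raining", are my illustrative stand-ins):

```lean
-- "It's raining, but I don't believe that it is" has the form p ∧ ¬q,
-- where q stands for "I believe that p". This form is satisfiable:
-- take p true and q false (it is raining; the speaker, unaware of the
-- weather, does not believe it). So the content can be true even
-- though it cannot sensibly be asserted.
example : ∃ (p q : Prop), p ∧ ¬ q :=
  ⟨True, False, trivial, not_false⟩
```

The satisfiability of the form is exactly why the oddity must lie in
the asserting, not in the content asserted.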

And the same is true (epistemologists will generally concur) of The
Self-Doubting Knowledge Claim, the analogous sentence about knowledge
and the possibility of being mistaken. Are they correct about that?
The next section engages with that question.

9. Implications of Fallibilism: Knowing Fallibly?

The question with which section 8 ended amounts to this: is it
possible for there to be fallible knowledge? If The Self-Doubting
Knowledge Claim could ever be true, this would be because at least
some beliefs are capable of being knowledge even when there is an
accompanying possibility of their being mistaken. Any such belief, it
seems, would thereby be both knowledge and fallible.

Many epistemologists, probably the majority, wish to accept that there
can be fallible knowledge (although they do not always call it this).
Few of them are skeptics about knowledge: almost all epistemologists
believe that everyone has much knowledge. But what do they believe
about the nature of such knowledge? When an epistemologist attributes
knowledge, what — more fully — is being attributed? In general,
epistemologists also accept that (for reasons such as those outlined
in sections 5 through 7) knowledge is rarely, if ever, based upon
infallible justification: they believe that there is little, if any,
infallible justification. Hence, most epistemologists, it seems,
accept that when people do gain knowledge, this usually, maybe always,
involves fallibility.

Epistemologists generally regard this fallibilist approach as more
likely to generate a realistic conception of knowledge, too. Their aim
is to be tolerant of the cognitive fallibilities that people have as
inquirers, while nevertheless according people knowledge (usually a
great deal of it). The knowledge would therefore be gained in spite of
the fallibility. And, significantly, it would be a kind of knowledge
which somehow reflects and incorporates the fallibility. Indeed, it
would thereby be fallible knowledge. (It would not be infallible
knowledge coexisting with fallibility existing only elsewhere in
people's thinking.) With this strategy in mind, then, epistemologists
who are fallibilists tend not to embrace skepticism.

Nor (if section 8 is right) should they do so. That section reported
(i) the two reasons most commonly thought to show that fallibility in
one's support for a belief is not good enough if the belief is to be
knowledge, along with (ii) the explanations of why (according to most
epistemologists) those reasons mentioned in (i) are not good enough to
entail their intended result. Given (ii), therefore, (i) will at least
fail to give us infallible justification for thinking that fallible
knowledge is not possible. Accordingly, perhaps such knowledge is
possible. But if it is, then what form would it take?

Almost all epistemologists will adopt this generic conception of it:

Any instance of fallible knowledge is a true belief which is at
least fallibly (and less than infallibly) justified.

(And remember that F*, in section 4, gave us some sense of what
fallible justification is.) Let us call this the Fallible Knowledge
Thesis. It is an application, to fallible knowledge in particular, of
what is commonly called the Justified-True-Belief Analysis of
Knowledge. (For an overview of that sort of analysis, see Hetherington
1996.) As stated, the Fallible Knowledge Thesis is quite general, in
that it says almost nothing about what specific forms the
justification within knowledge might take; all that it does require is
that the justification would provide only fallible support.
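
The thesis's generic shape can be restated compactly as a definition.
This is a sketch only: the predicate names are illustrative, and
"fallibly justified" abbreviates justification that leaves open the
possibility of the belief's falsity:

```lean
-- Fallible Knowledge Thesis, as an instance of the
-- Justified-True-Belief Analysis: a true belief which is at least
-- fallibly (and less than infallibly) justified.
-- `IsTrue` and `FalliblyJustified` are illustrative predicate names.
def isFallibleKnowledge {Belief : Type}
    (IsTrue FalliblyJustified : Belief → Prop) (b : Belief) : Prop :=
  IsTrue b ∧ FalliblyJustified b
```

Note that nothing in this shape constrains what form the fallible
justification takes, which is just the generality the text ascribes to
the thesis.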

Nonetheless, generic though it is, the question still arises of
whether the Fallible Knowledge Thesis is ever satisfiable, let alone
actually satisfied. And that question readily leads into this more
specific one: Can a true belief ever be knowledge without having its
truth entailed by the justification which is contributing to making
the belief knowledge? (Sometimes this talk of justification is
replaced by references to warrant, where this designates the
justification and/or anything else that is being said to be needed if
a particular true belief is to be knowledge. For that use of the term
"warrant," see Plantinga 1993.) Section 8 has disposed of some
objections to there being any fallible knowledge; and the previous
paragraph has gestured at how — via the Justified-True-Belief Analysis
— one might conceive of fallible knowledge. Nonetheless, there could
be residual resistance to accepting that there can be fallible
knowledge like that. Undoubtedly, some people will think, "There just
seems to be something wrong with allowing a belief or claim to be
knowledge when it could be mistaken."

That residual resistance is not clearly decisive, though. It could
well owe its existence to a failure to distinguish between two
significantly different kinds of question. The first asks whether a
particular belief, given the justification supporting it, is true (and
thereby fallible knowledge). The other question asks whether, given
that belief's being true, there is enough supporting justification in
order for it to be (fallible) knowledge. The former question is raised
from "within" a particular inquiry into the truth of a particular
belief. The latter question arises from "outside" that inquiry into
that belief's being true (even if this question is arising within
another inquiry, perhaps an epistemological one). There is no
epistemologically standard way of designating the relevant difference
between those kinds of question. Perhaps the following is a helpful
way to clarify that difference.

(1) The not-necessarily-epistemological question as to whether a
belief is true. Imagine trying to ascertain whether some actual or
potential belief or claim is true. You ask yourself, say, "Do I know
whether I passed that exam?" Suppose that you have good — fallibly
good — evidence in favor of your having passed the exam. (You studied
well. You concentrated hard. You felt confident. Your earlier marks in
similar exams have been good.) And now suppose that you recall the
Justified-True-Belief Analysis. You apply it to your case. What does
it tell you? It tells you just that if your actual or possible belief
(namely, the belief that you passed the exam) is true, then — given
your having fallibly good evidence supporting the belief — the belief
is or would be knowledge, albeit fallible knowledge. But does this
reasoning tell you whether the belief is knowledge? It does not. All
that you have been given is this conditional result: If your belief is
true, then (given the justification you have in support of it) the
belief is also knowledge. You have no means other than your
justification, though, of determining whether the belief is true; and
because the justification is fallible, it gives you no guarantee of
the belief's being true (and thereby of being knowledge). Moreover, if
fallibilism is true, then any justification which you might have, no
matter how extensive or detailed it is, would not save you from that
plight. Thus (given fallibilism), you are trapped in the situation of
being able to reach, at best, the following conclusion: "Because my
evidence provides fallible justification for my belief, the belief is
fallible knowledge if it is true." At which point, most probably, you
will wonder, "Is it true? That's what I still don't know. (I have no
other way of knowing it to be true.)" And so — right there and then —
you are denying that your belief is knowledge, because you are denying
that you know it to be true. The fallibility in your justification
leaves you dissatisfied, as an inquirer into the truth of a particular
belief, at the idea of allowing that it could be knowledge, even
fallible knowledge. When still inquiring into the truth of a
particular belief, it is natural for you to deny that (even if, as it
happens, the belief is true) your having fallible justification is
enough to make the belief knowledge.

(2) The epistemological question as to whether a belief is knowledge.
But the epistemologist's question (asked at the start of this section)
as to whether there can be fallible knowledge is not asked from the
sort of inquirer's perspective described in (1). The epistemologist is
not asking whether your particular belief is true (while noting the
justification you have for the belief). That is the question you are
restricted to asking, when you are proceeding as the inquirer in (1).
The epistemological question is subtly different. It does not merely
imagine a fallibly justified belief and then ask — without making any
actual or hypothetical commitment as to the belief's truth — whether
the belief is knowledge. Rather, the epistemologist's question considers
the conceptual combination of the belief plus the justification for it
plus the belief's being true — which is to say, the whole package
that, in this case, is deemed by the Justified-True-Belief Analysis to
be knowledge — before proceeding to ask whether this entirety is an
instance of knowledge. To put that observation more simply, this
epistemological question asks whether a belief which is fallibly
justified, and which is true, is (fallible) knowledge. This is the
question of whether your belief is knowledge, given (even if only for
argument's sake) that it is true. In (1), your focus was different
from that. In wondering whether you had passed the exam, you were asking
whether the belief is true: you were still leaving open the issue of
whether or not the belief is true. And, as you realized, your fallible
justification was also leaving open that question. For it left open
the possibility of the belief's falsity.
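The Justified-True-Belief Analysis invoked above has a standard textbook rendering, which may help to fix ideas (the notation below is the usual schematic one, not the article's own):

```latex
% Standard schematic form of the Justified-True-Belief Analysis:
% S knows that p if and only if (i) p is true, (ii) S believes
% that p, and (iii) S is justified in believing that p.
K_S\,p \;\leftrightarrow\; \bigl(\, p \;\wedge\; B_S\,p \;\wedge\; J_S\,p \,\bigr)
```

The epistemologist's question in (2) takes the whole right-hand side as given and asks whether the left-hand side follows when the justification in clause (iii) is merely fallible.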

Consequently, from (1), it is obvious why an inquirer might want
infallibility in her justification for a belief's truth. Infallibility
would mean her not having to leave open the question of the belief's
truth. Without infallibility, the possibility is left open by her
justification (which is her only indication of whether her belief is
true) of her belief being false — and hence not knowledge. (This is
so, even if we demand that, in order for an inquirer's belief to be
knowledge, she has to know that it is. That demand is called the
KK-thesis (with its most influential analysis and defense coming from
Hintikka 1962: ch. 5) — because one's having a piece of knowledge is
taken to require one's Knowing that one has that Knowledge. Yet even
satisfying that demand does not remove the rational doubt described in
(1). If the extra knowledge — the knowledge of the initial belief's
being knowledge — is not required to be infallible itself, then scope
for doubt will remain as to whether the initial belief really is
knowledge.) But if we can either (i) know or (ii) suppose (for the
sake of another kind of inquiry) that the belief is true, then we may
switch our perspective, so as to be asking a different question. That
is what the epistemologist is doing in (2), by adopting the latter,
(ii), of these two options. She supposes, for the sake of argument,
that the belief is true; then she can ask, "Would the belief's being
both true and fallibly justified suffice for it to be knowledge?" She
can do this without knowing at all, let alone infallibly, whether the
belief is true. (She will also not know infallibly, at least not via
this questioning, whether the belief is knowledge. Yet what else is to
be expected if fallibilism is true?)
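The KK-thesis mentioned in that parenthesis has a compact rendering in epistemic logic, following Hintikka's modal treatment (the notation is standard rather than the article's own):

```latex
% KK-thesis: knowing that p requires knowing that one knows that p.
K\varphi \rightarrow KK\varphi
% The worry raised in the text: unless the outer K is itself
% infallible, iterating K never closes off the rational doubt.
```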

It is also obvious, from (1), why an inquirer might want infallibility
in her justification, insofar as she is wondering whether to say or
claim that some actual or potential belief of hers is knowledge.
Nonetheless, this does not entail her needing such justification if
her belief is to be knowledge. Remember — from (2) in section 8 — that
whether one has a specific piece of knowledge could be quite a
different matter from whether one may properly claim to have it.
Similarly, most epistemologists will advise us not to confuse what
makes a belief knowledge with what rationally assures someone that her
belief is knowledge. For example, it is possible — according to
fallibilist epistemologists in general — for a person to have some
fallible knowledge, even if she does not know infallibly which of her
beliefs attain that status.

This section began by asking the epistemological question of whether
there can be fallible knowledge. And with our having seen — in this
section's (2) — what that question is actually asking, along with — in
this section's (1) — what it is not asking, we should end the section
by acknowledging that, in asking that epistemological question, we
need not be crediting epistemological observers with having a special
insight into whether, in general, people's beliefs are true. The
question of whether those beliefs are true is not the question being
posed by the epistemological observer. She is asking whether a
particular belief is knowledge, given (even if only for argument's
sake) that it is true and fallibly justified. She is asking this from
"above" or "outside" the various "lower level" or "inner" attempts to
know whether the given beliefs are true. The other ("lower level")
inquirers, in contrast, are asking whether their fallibly justified
beliefs are true. There is fallibility in each of those processes of
questioning; they just happen to have somewhat different
subject-matters and methods.

We should not leave a discussion of the Fallible Knowledge Thesis
without observing that, even if it is correct in its general thrust,
epistemologists have faced severe challenges in their attempts to
complete its details — to make it more precise and less generic. Over
the past forty or so years, there have been many such attempts. But
these have encountered one problem after another, mostly as
epistemologists have struggled to solve what is often called the
Gettier Problem.

A very brief word on that problem is in order here. It has become the
epistemological challenge of defining knowledge precisely, so as to
understand all actual or possible cases of knowledge — where one of
the project's guiding assumptions has been that it is possible for
instances of knowledge to involve justification which supplies only
fallible support. In other words, the project has striven to find a
precise analysis of what the Fallible Knowledge Thesis would deem to
be fallible knowledge; and, unfortunately, the Gettier Problem is
generally thought by epistemologists still to be awaiting a definitive
solution. Such a solution would determine wholly and exactly how
fallible a particular justified true belief can be, and in what
specific ways it can be fallible, without that justified true belief
failing to be knowledge. In the meantime (while awaiting that sort of
solution), epistemologists incline towards accepting the
Justified-True-Belief Analysis — represented here in the Fallible
Knowledge Thesis — as being at least approximately correct. Certainly
in practice, most epistemologists treat the analysis as being correct
enough — so that it functions well as giving us a concept of knowledge
that is adequate to whatever demands we would place upon a concept of
knowledge within most of the contexts where we need a concept of
knowledge at all. Such epistemologists take the difficulties that have
been encountered in the attempts to ascertain exactly how a fallibly
justified true belief can manage to be knowledge as being difficulties
of mere (and maybe less important) detail, not ones of insuperable and
vital principle. Those epistemologists tend to assume that eventually
the needed details will emerge, that these will be agreed upon by
epistemologists, and hence that the basic idea behind the Fallible
Knowledge Thesis will finally and definitively be vindicated. (For
more on the history of that epistemological project, see Shope 1983.)

But again, that definitive vindication is yet to be achieved. And, of
course, it will not eventuate if we should be answering "No" to the
question (discussed earlier in this section) of whether a true belief
which is less than infallibly justified is able to be knowledge. When
there is fallibility in the justification for a particular true
belief, is this fact already sufficient to prevent that belief from
being knowledge? Few epistemologists wish to believe so. What we have
found in this section is that they are at least not obviously mistaken
in that optimistic interpretation.

10. Implications of Fallibilism: No Justification?

Sometimes epistemologists believe that fallibilism opens the door to
an even more striking worry than the one discussed in section 9
(namely, the possibility of there being no knowledge, due to the
impossibility of knowledge's ever being fallible). Sometimes they
infer, from the presence of fallibility, that even justification (let
alone knowledge) is absent. That is, once fallibility enters, even
justification — all justification — departs. Consequently, those
epistemologists — once they accept that a universal fallibilism
obtains — are skeptics even about the existence of justification. (For
an example of such an approach, see Miller 1994: ch. 3.)

How would that interpretation of the impact of fallibilism be
articulated? In effect, the idea is that if evidence, say, is to
provide even good (let alone very good or excellent or perfect)
guidance as to which beliefs are true, it is not allowed to be
fallible. No justification worthy of the name is able to be merely
fallible. And from that viewpoint, of course, skepticism beckons
insofar as no one is ever capable of having any infallible
justification. If fallibility is rampant, yet infallibility is
required if evidence or the like is ever to be supplying real
justification, then no real justification is ever supplied. In short,
no beliefs are ever justified.

That is a wholly general skepticism about justification, emerging from
a wholly general fallibilism. A possible example of that form of
skepticism would be the one with which Descartes ended his Meditation
I. Cartesian evil genius skepticism would say that, because there is
always the possibility of Descartes' evil genius (in section 7)
controlling our minds, any evidence or reasoning that one ever has
could be a result just of the evil genius's hidden intrusion into
one's mind. The evil genius — by making everything within one's mind
false and misleading — could render false all of one's evidence, along
with all of one's ideas as to what is good reasoning. None of one's
evidence, and none of one's beliefs as to how to use that evidence,
would be true. However, if there were no truth anywhere in one's
thinking (with one never realizing this), then no components of one's
thinking would be truth-indicative or truth-conducive. No part of
one's thinking would ever lead one to have an accurate belief.
Continually, one would both begin and end with falsity. And there are
many epistemologists in whose estimation this would mean that no part
of one's thinking is ever really justifying some other part of one's
thinking. For justification is usually supposed to have some relevant
link to truth. And presumably there would be no such link, if every
single element in one's thinking is misleading — as would be the case
if an evil genius were at work. Is that possible, then? Moreover, is it
so dramatic a possibility that if we are forever unable to prove that
it is absent, then our minds will never contain real justification for
even some of our beliefs?

A potentially less general skepticism about justification would be a
Humean inductive skepticism (mentioned in section 6). The thinking
behind this sort of skepticism infers — from the inherent fallibility
of any inductive extrapolations that could be made from some
observations — that no such extrapolation is ever even somewhat
rational or justifying. Again, the skeptical interpretation of Humean
inductive fallibilism is that, given that all possible extrapolations
from observations are fallible, neither logic nor any other form of
reason can favor one particular extrapolation over another. The
fallibilism implies that there is fallibility within any
extrapolation: none are immune. And the would-be skeptic infers from
this that, once there is such widespread fallibility, there may as
well be a complete absence of any pretence at rationality. The
fallibility will be inescapable, even as we seek to defend the
rationality of one extrapolation over another. Why is that? Well, we
could mount such a defense only by pointing to one sort of
extrapolation's possessing a better past record of predictive success,
say. But we would be pointing to that better past record, only in
order to infer that such an extrapolation is more trustworthy on the
present occasion. And that inference would itself be an inductive
extrapolation. It, too, is therefore fallible. Accordingly, if there
was previously a need to overcome inductive fallibility (with this
need being the reason for consulting the past records of success in
the first place), then there remains such a need, even after past
records of success have been consulted. In this way, it is the
fallibility's inescapability that generates the skepticism.

Yet, as we noted earlier, most epistemologists would wish to evade or
undermine skeptical arguments such as those — arguments that seek
to convert a kind of fallibilism into a corresponding skepticism. How
might this non-skeptical maneuver be achieved? There has been a
plethora of attempts, too many to mention here. (For one survey, see
Rescher 1980.) Moreover, no consensus has developed on how to escape
skeptical arguments like these. That issue is beyond the scope of this
article.

What may usefully (even if generically) be described here, however, is
a fundamental choice as to how to interpret the force of fallibilism
within our cognitive lives. Any response to the skeptical challenges
will make that choice (even if usually implicitly and in some more
specific way). The basic choice will be between the following two
underlying pictures of what a wholly general fallibilism would tell us
about ourselves:

(A) The inescapable fallibility of one's cognitive efforts would
be like the inescapable limits — whatever, precisely, these are — upon
one's bodily muscles. These limit what one's body is capable of —
while nonetheless being part of how it achieves whatever it does
achieve. Inescapable fallibility would thus be like a background
limitation — always present, sometimes a source of frustration, but
rarely a danger. When used appropriately, muscles strengthen
themselves in accomplished yet limited ways. Would the constant
presence of fallibility be like a (fallibly) self-correcting
mechanism?

(B) Inescapable fallibility would be like a debilitating illness
which "feeds upon" itself. It would become ever more dangerous, as its
impact is compounded by repeated use. This would badly lower the
quality of one's thinking. (For a model of that process, notice how
easily instances of minor fallibility can interact so as to lead to
major fallibility. For example, a sequence in which one slightly
fallible piece of evidence after another is used as support for the
next can end up providing very weak — overly fallible — support: four
links, each probabilifying its conclusion to degree 0.8, jointly
probabilify the final conclusion only to degree 0.8 × 0.8 × 0.8 × 0.8
≈ 0.41, weaker than a coin toss.)
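The compounding behind that bracketed product can be made concrete with a small computation. This is a hypothetical sketch: the 0.8 figure and the assumption that the links are independent and equally strong are illustrative only.

```python
# Illustration of how modest fallibility compounds along an
# inferential chain, assuming each link independently
# "probabilifies" the next step to the same degree.

def chained_support(step_strength: float, steps: int) -> float:
    """Probability that every link in an inferential chain holds,
    given independent links of equal strength."""
    return step_strength ** steps

# Four links, each 80% reliable, jointly support the final
# conclusion to a degree below one half:
support = chained_support(0.8, 4)
print(f"{support:.4f}")  # prints 0.4096
```

The point of model (B) is that this erosion is not an artifact of the example: lengthening the chain only drives the product lower.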

How are we to choose between (A) and (B) — between the Limited
Muscles model of fallibilism and the Debilitating Illness model of it?

Because most epistemologists are non-skeptics, they favor (A) —
the Limited Muscles model. This is not to insist that thinking in an
(A)-influenced way is bound to succeed against skeptical arguments.
The point right now is simply that this way of thinking is one
possible goal for an epistemologist. It is the goal of finding some
means of successfully understanding and defending an instance of the
Limited Muscles model. What is described by that model would be such a
theorist's desired way to conceive, if this is possible, of the
general idea of inescapable fallibility. She will seek to conceive of
inescapable fallibility as being manageable, even useful. Hence, the
Limited Muscles model is a framework which — in extremely general
terms — she will hope allows her to understand — in more specific
terms — the nature and significance of fallibilism. Perhaps the most
influential modern example of this approach was Quine's (1969),
centered upon a famous metaphor from Neurath (1959 [1932/33], sec.
201). That metaphor portrays human cognitive efforts as akin to a
boat, afloat at sea. The boat has its own sorts of fallibility. It is
subject to stresses and cracks. And how worrying is that? Must the
boat sink whenever those weaknesses manifest themselves? No, because
that is not how boats usually function. In general, repairs can be
made. This may occur even while the boat is still at sea.
Structurally, it is strong enough to support repairs to itself, even
as it continues being used, even while making progress towards its
destination. Neurath regarded cognitive progress as being like that —
as did Quine, who further developed Neurath's model. On what Quine
called his "naturalized" conception of epistemology (a conception that
many subsequent thinkers have sought to make more detailed and to
apply more widely), human observation and reason make cognitive
progress in spite of their fallibility. They do so, even when
discovering their own fallibility — finding their own stresses and
cracks. Must they then sink, floundering in futility? No. They
continue being used, often while repairing their own stresses and
cracks — reliably correcting their own deliverances and predictions.
Section 5 asked whether science is an especially fallible method. As
was also noted, though, science provides impressive results. Indeed,
it was Quine's favored example of large-scale cognitive progress. How
can that occur? How can scientific claims — including so many striking
ones — be justified, in spite of the fallibility that remains? Maybe
science is like a ship that carries within it some skilled and
imaginative artisans (carpenters, welders, electricians, and the
like). Not only can it survive; it can become more grand and capable
when being repaired at sea. (Even so, is such cognitive progress best
described in probabilistic terms? On that possibility, implied by
Humean fallibilism, see Howson 2000.)

Naturally, in contrast to that optimistic model for thinking about
fallible justification, skeptics will prefer (B) — the Debilitating
Illness model. We have examined (in sections 6 and 7) a couple of
specific ways in which they might try to instantiate that general
model. We have also seen (in sections 8 through 10) some reasons why
those skeptics might not be right. Perhaps they overstate the force of
fallibilism — inferring too much from the facts of fallibility. In any
case, the present point is that skeptics (like non-skeptics) seek
specific arguments in pursuit of a successful articulation and defense
of an underlying picture of inescapable fallibility. Both skeptics and
non-skeptics thereby search for an understanding of fallibilism's
nature and significance. They simply reach for opposed conceptions of
what fallibilism implies about people's ability to observe and to
reason justifiably.

So, there is a substantial choice to be made; and each of us makes
it, more or less carefully and consciously, when reflecting upon these
topics. Which of those two basic interpretive directions, then, should
we follow? The intellectual implications of this difficult choice are
exhilaratingly deep.

11. References and Further Reading

Baldwin, T. G. E. Moore. London: Routledge, 1990. pp. 226-32.

On Moore's paradox.

Buckle, S. Hume's Enlightenment Tract: The Unity and Purpose of An
Enquiry Concerning Human Understanding. Oxford: Oxford University
Press, 2001. Part 2, chapter 4.

On Hume's famous skeptical reasoning in his first Enquiry.

Conee, E. and Feldman, R. Evidentialism: Essays in Epistemology.
Oxford: Oxford University Press, 2004.

A traditional (and popular) approach to understanding the
nature of epistemic justification.

Curley, E. M. Descartes against the Skeptics. Cambridge, Mass.:
Harvard University Press, 1978.

On Descartes' skeptical doubting.

Descartes, R. The Philosophical Works of Descartes, Vol. I, (eds.
and trans.) E. S. Haldane and G. R. T. Ross. Cambridge: Cambridge
University Press, 1911 [1641].

Contains both the Discourse and the Meditations. These include
both the Evil Genius argument and the Cogito.

Feldman, R. "Fallibilism and Knowing That One Knows." The
Philosophical Review 90 (1981): 266-82.

On the nature and availability of fallible knowledge.

Goldman, A. I. "What is Justified Belief?" In G. S. Pappas (ed.),
Justification and Knowledge: New Studies in Epistemology. Dordrecht:
D. Reidel, 1979.

An influential analysis of the nature of epistemic justification.

Hetherington, S. Knowledge Puzzles: An Introduction to
Epistemology. Boulder, Colo.: Westview Press, 1996.

Includes an overview of many of the commonly noticed
difficulties posed by the Gettier problem for our attaining a full
understanding of fallible knowledge.

Hetherington, S. "Knowing Fallibly." Journal of Philosophy 96
(1999): 565-87.

Describes the genus of which fallible knowledge is a species.

Hetherington, S. "Fallibilism and Knowing That One Is Not
Dreaming." Canadian Journal of Philosophy 32 (2002): 83-102.

Shows how fallibilism need not lead to skepticism about knowledge.

Hintikka, J. Knowledge and Belief: An Introduction to the Logic of
the Two Notions. Ithaca, NY: Cornell University Press, 1962. ch. 5.

On the KK-thesis — that is, on knowing that one knows.

Howson, C. Hume's Problem: Induction and the Justification of
Belief. Oxford: Oxford University Press, 2000.

A technically detailed response to Hume's fallibilist
challenge to the possibility of inductively justified belief.

Hume, D. An Enquiry Concerning Human Understanding, in Hume's
Enquiries, (ed.) L. A. Selby-Bigge, 2nd edn. Oxford: Oxford University
Press, 1902 [1748].

This includes, in section IV, the most generally cited version
of Hume's inductive fallibilism and inductive skepticism.

Kahneman, D., Slovic, P., and Tversky, A. (eds.). Judgment under
Uncertainty: Heuristics and Biases. Cambridge: Cambridge University
Press, 1982.

On empirical evidence of people's cognitive fallibilities.

Merricks, T. "More on Warrant's Entailing Truth." Philosophy and
Phenomenological Research 57 (1997): 627-31.

Argues against the possibility of there being fallible knowledge.

Miller, D. Critical Rationalism: A Restatement and Defence.
Chicago: Open Court, 1994.

Discusses many ideas (including a skepticism about epistemic
justification) that might arise if fallibilism is true.

Morton, A. A Guide through the Theory of Knowledge, 3rd edn.
Malden, Mass.: Blackwell, 2003. ch. 5.

On the basic idea, plus some possible forms, of fallibilism.

Nagel, T. The View from Nowhere. New York: Oxford University Press, 1986.

See especially chapters I and V. Discusses the interplay of
different perspectives ("inner" and "outer" ones) that a person might
seek upon herself, especially as greater objectivity is sought. (This
bears upon section 9's distinction between two possible kinds of
question that can be asked about whether a particular belief is
fallible knowledge.)

Neurath, O. "Protocol Sentences," in A. J. Ayer (ed.), Logical
Positivism. Glencoe, Ill.: The Free Press, 1959 [1932/33].

Includes the famous "boat at sea" metaphor.

Nisbett, R. and Ross, L. Human Inference: Strategies and
Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice-Hall,
1980.

On empirical evidence of people's cognitive fallibilities.

Peirce, C. S. Collected Papers, (eds.) C. Hartshorne and P. Weiss.
Cambridge, Mass.: Harvard University Press, 1931-60.

See, for example, 1.120, and 1.141 through 1.175, for some of
Peirce's originating articulation of the concept of fallibilism as
such.

Plantinga, A. Warrant: The Current Debate. New York: Oxford
University Press, 1993.

An analysis of some proposals as to what warrant might be
within (fallible) knowledge.

Quine, W. V. "Epistemology Naturalized," in Ontological Relativity
and Other Essays. New York: Columbia University Press, 1969.

A bold and prominent statement of the program of naturalized
epistemology, trying to understand fallibility as a part of, rather
than a threat to, the justified uses of observation and reason.

Reed, B. "How to Think about Fallibilism." Philosophical Studies
107 (2002): 143-57.

An attempt to define fallible knowledge.

Rescher, N. Scepticism: A Critical Reappraisal. Oxford: Blackwell, 1980.

On fallibilism and many associated skeptical issues about
knowledge and justification.

Shope, R. K. The Analysis of Knowing: A Decade of Research.
Princeton: Princeton University Press, 1983.

Presents much of the earlier history of attempts to solve the
Gettier problem — and thereby to define fallible knowledge.

Sorensen, R. A. Blindspots. Oxford: Oxford University Press, 1988. ch. 1.

A philosophical analysis of the kinds of thought or sentence
that constitute Moore's paradox.

Stove, D. C. Probability and Hume's Inductive Scepticism. Oxford:
Oxford University Press, 1973.

Explains how Hume's inductive fallibilism gives way to his
inductive skepticism.

Williams, B. Descartes: The Project of Pure Enquiry. Hassocks: The
Harvester Press, 1978.

Analysis of Descartes' skeptical doubts.

Wilson, M. D. Descartes. London: Routledge & Kegan Paul, 1978.

Includes an account of Descartes' skeptical endeavors.

Wittgenstein, L. Philosophical Investigations, (trans.) G. E. M.
Anscombe. Oxford: Blackwell, 1978 [1953]. Sections 243-315, 348-412.

Presents the private language argument (against the
possibility of anyone's being able to think in a language which only
they could understand).
