set (such as propositions known by me) bear a given relation (such as
known deductive entailment) only to other members of that epistemic
set. The principle of the closure of knowledge under known logical
entailment is that one knows everything that one knows to be logically
entailed by something else one knows. For instance, if I know grass
is green, and I know that grass is green deductively entails that
grass is green or the sky is blue, then I know that grass is green or
the sky is blue. Epistemic closure principles are employed in
philosophy in myriad ways, but some theorists reject such principles,
and they remain controversial.
Some people see closure principles as capturing the idea that we can
add to our store of knowledge by accepting propositions entailed by
what we know; others claim that this is a misunderstanding, and that
closure principles are silent as to how a piece of knowledge is, or
can be, acquired. For instance, the proposition I have a driver's
license issued by the state of North Carolina entails that North
Carolina is not a mere figment of my imagination. According to the
principle that knowledge is closed under known entailment, if I know
the former claim, and I know the entailment, I know the latter claim.
Some insist, however, that this must be distinguished from the
(possibly) false claim that I could come to know the latter on the
basis of my knowing the former, since my basis for knowing the former
involves presupposing the latter (by taking my sense experience and
memory at more or less face value, for instance).
Closure principles are employed in both skeptical and anti-skeptical
arguments. The skeptic points out that if one knows an ordinary
common sense proposition (such as that one has hands) to be true, and
knows that this proposition entails the falsity of a skeptical
hypothesis (such as that one is a handless brain in a vat, all of
whose experiences are hallucinatory), one could know the falsity of
the skeptical hypothesis, in virtue of knowledge being closed under
known entailment. Since one cannot know the falsity of the skeptical
hypothesis (or so the skeptic maintains), one also must not know the
truth of the common sense claim that one has hands. Alternatively,
the anti-skeptic might insist that we do know the truth of the common
sense proposition, and hence, in virtue of the closure principle, we
can know that the skeptical hypothesis is false. Although the closure
principle is sometimes used by anti-skeptics, some view the rejection
of closure as the key to refuting the skeptic.
1. The Closure of Knowledge Under Known Entailment
a. The Closure of Knowledge Under Entailment
A set is closed under a particular relation if all the members of the
set bear the relation only to other members of the set. The set of
true propositions is closed under entailment because true propositions
entail only other truths. Since false propositions sometimes entail
truths, the set of false propositions is not closed under entailment. Epistemic
closure principles state that members of an epistemic set (such as my
justified beliefs) are closed under a given relation (which may be a
non-epistemic relation, like entailment, or an epistemic one, such as
known entailment).
A simple closure principle is the principle that knowledge is closed
under entailment:
If a subject S knows that p, and p entails q, then S knows that q.
Less schematically, this says that if one knows one thing to be true
and the known claim logically entails a second thing, then one knows
the second thing to be true. This principle has obvious
counter-examples. A complicated theorem of logic is entailed by
anything (and hence by any proposition one knows), but one may not
realize this and may thus fail to believe (or even grasp) the theorem.
Since one must at least believe a proposition in order to know that it
is true, we see that one may fail to know something entailed by
something else that one knows. Additionally, even if a proposition is
entailed by something one knows, if one comes to believe the
proposition through some epistemically unjustified process, one will
fail to know the proposition (since one's belief of it will be
unjustified). For instance, if one knows that one will start a new job
today and then comes to believe that one will either start a new job
today or meet a handsome stranger based on the testimony of one's
astrologist, then perhaps one will fail to know the truth of the
entailed disjunction.
b. The Closure of Knowledge Under Known Entailment
It is more plausible that knowledge is closed under known entailment:
If S knows that p, and knows that p entails q, then S knows that q.
As stated, however, the principle seems vulnerable to counter-examples
similar to the ones just discussed. The subject might fail to put her
knowledge that p together with her knowledge that p entails q and thus
fail to infer q at all. She might know that she has ten fingers and
that if she has ten fingers then the number of her fingers is not
prime, but simply not bother to go on to deduce and form the belief
that her number of fingers is not prime. Alternatively, although the
subject could have come to believe q by inferring it correctly from
something else that she knows (since she is aware of the entailment),
she instead might have come to believe q through some other,
epistemically unjustified, process.
How can we capture the idea that one can add to one's store of
knowledge by recognizing and assenting to what is entailed by what one
already knows? This formulation seems suitably qualified:
If S knows that p, and comes to believe that q by correctly
deducing it from her belief that p, then S knows that q.
Less formally, if I know one thing, correctly deduce another thing
from it, and come to believe this second thing by so deducing it, then
I know the second thing to be true. This principle eliminates
counterexamples in which the subject fails to believe the entailed
claim (and thus fails to know it) or comes to believe the entailed
claim for bad reasons (and thus fails to know the claim). (Henceforth,
uses in this article of the phrase "the principle of closure of
knowledge under known entailment" should be regarded as referring to
this preferred formulation of the principle).
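The three formulations can be set out schematically. The following is my rendering in standard epistemic-logic notation, with $K$ abbreviating "S knows that" and $\vDash$ abbreviating entailment; the symbols are not the article's own:

```latex
% Closure under entailment (refuted by the counterexamples above):
(Kp \wedge (p \vDash q)) \rightarrow Kq
% Closure under known entailment (still vulnerable as stated):
(Kp \wedge K(p \vDash q)) \rightarrow Kq
% Preferred formulation, where Ded(p, q) abbreviates "S comes to
% believe q by correctly deducing it from her belief that p":
(Kp \wedge \mathrm{Ded}(p, q)) \rightarrow Kq
```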
So much is built into the antecedent of this principle that it might
now seem trivial but, as we shall see, it has been disputed on various
grounds.
c. Justification, Single-Premise and Multiple-Premise Closure
We would seem to have similar grounds for supposing that justified
belief is closed under known entailment. One is epistemically
justified in believing whatever one correctly deduces from one's
justified beliefs. This captures the idea that one way to add to one's
store of justified beliefs is to believe things entailed by one's
justified beliefs. When one reasons validly, the justification that
one has for the premises carries over to the conclusion.
The mere fact that justification is (ordinarily taken to be) one of
the necessary conditions for knowledge does not strictly entail that
justification is closed under the same operations (such as known
entailment) that knowledge is closed under. As Steven Hales (1995) has
pointed out, to argue in this manner is to commit the fallacy of
division: to infer from the fact that a whole thing has a particular
quality, that each of its components must have this quality as well.
For instance, it does not follow from the fact that the glee club is
loud that each, or even any, of the individual singers in the glee
club is loud. Knowledge might be closed under known entailment even if
justified belief is not, if all the counterexamples to the closure of
justification were examples in which the justified belief was missing
at least one of the necessary conditions for knowledge. There seems to
be no particular reason to believe that this is the case, however.
(See Brueckner 2004 for more on this point).
The closure principles discussed thus far are instances of single
premise closure. For instance, one's knowledge that a given particular
premise is true, when combined with a correct deduction from that
premise of a conclusion, seems to guarantee that one knows the
conclusion. There are also multiple premise closure principles. Here
is an example:
If S knows that p and knows that q, and S comes to believe r by
correctly deducing it from p and q, then S knows that r.
That is, if I know two things to be true and come to believe a third
thing by correctly deducing it from the first two, then I know the
third thing to be true. There is
good reason to be dubious of multiple premise closure principles of
justification, such as
If S is justified in believing that p and justified in believing
that q, and S correctly deduces r from p and q, then S is justified in
believing that r.
Lottery examples reveal the difficulty. Given that there are a million
lottery tickets and that exactly one of them must win, it is plausible
(though not obvious) that for any particular lottery ticket, I am
justified in believing that it will lose. So I am justified in
believing that ticket one will lose, that ticket two will lose, and so
forth, for every ticket. But if I know that there are a million
tickets, and I am justified in believing each of a million claims to
the effect that ticket n will lose and I can correctly deduce from
these claims that no ticket will win, then by closure I would be
justified in concluding that no ticket will win, which by hypothesis
is false. Justified belief is fallible, in that one can be justified
in believing something even if there is a chance that one is mistaken;
conjoin enough of the right sort of justified but fallible beliefs and
the resulting conjunction will be unlikely to be true, and thus
unjustified.
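The arithmetic behind the lottery case can be made explicit. A minimal sketch (my own illustration; the figures come from the example above):

```python
# Sketch of the lottery numbers (my illustration, not from the text).
tickets = 1_000_000

# Each individual claim "ticket n will lose" is extremely probable:
p_ticket_n_loses = 1 - 1 / tickets
print(f"{p_ticket_n_loses:.6f}")      # 0.999999

# But by stipulation exactly one ticket must win, so the deduced
# conjunction "no ticket will win" is certainly false:
p_no_ticket_wins = 0.0
print(p_no_ticket_wins)               # 0.0
```

Each premise clears any reasonable threshold for justification, yet their conjunction has probability zero, which is why multiple-premise closure for justification looks doubtful.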
If knowledge, like justified belief, is fallible (say, only 99.9%
certainty is required), then multiple premise closure principles for
knowledge will fail as well. One could be sufficiently certain for
knowledge about each of a thousand claims ("I will not die today"; "I
will not die tomorrow"; …; "I will not die exactly 569 days from
today"; etc.), but not sufficiently certain of the conjunction of
these claims ("I will not die on any of the next thousand days") in
order to know it, even though it is jointly entailed by those thousand
known claims (and thus true). The fallibility of knowledge is far more
controversial than the fallibility of justified belief, however.
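On the fallibilist reading sketched above (assuming, purely for illustration, a 99.9% threshold and treating the thousand claims as probabilistically independent), the conjunction's probability collapses:

```python
# My arithmetic, not the article's: each claim individually clears a
# 99.9% threshold, but their conjunction falls far short of it.
p_each_day = 0.999     # confidence in each "I will not die on day n"
days = 1000

# Probability of the conjunction, assuming independence:
p_conjunction = p_each_day ** days
print(round(p_conjunction, 4))        # 0.3677, well below 0.999
```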
Similarly, closure might be thought to hold for different types of
knowledge, such as a priori knowledge (i.e., knowledge not acquired
through sense experience, to oversimplify a bit). If one knows a
priori that p, and knows a priori that p entails q, then one knows a
priori that q. Intuitively, it seems that if one knows the premises of
an argument a priori and is able to validly deduce a conclusion from
those premises, one would know the conclusion a priori as well. This
last point is on weaker ground, however, as discussed in Section 5b.
2. Philosophical Uses of the Closure Principle
The closure principle, now qualified to handle the straightforward
counterexamples, has been employed in skeptical and anti-skeptical
arguments, in support of a dogmatic refusal to pay attention to evidence
that counts against what one knows, to generate a paradox about
self-knowledge, and for many other philosophical ends. These uses are
described in brief in this section, and in greater detail in later
sections.
The skeptic may argue as follows:
1. I do not know that I am not a handless, artificially stimulated
brain in a vat.
2. I do know that I have hands entails I am not a handless,
artificially stimulated, brain in a vat.
3. If I know one thing, and I know that it entails a second thing,
then I also know the second thing. (Closure)
4. Thus, I do not know that I have hands. (From 1, 2 and 3: if I
knew I had hands, then by 2 and 3 I would know that I am not a brain
in a vat, contradicting 1).
If one really knew the ordinary common sense claim to be true, one
could deduce the falsity of the skeptical claim from it and come to
know that the skeptical claim is false (by closure). The fact that one
cannot know that the skeptical claim is false (as per the first
premise) demonstrates that one does not in fact know that the common
sense proposition is true either. (See also Contemporary Skepticism).
But one person's modus tollens (the inference from if p then q and
not-q to the conclusion not-p) is another person's modus ponens (the
inference from if p then q and p to the conclusion q), as we can see
from an anti-skeptical argument of the sort associated with G.E.
Moore. (See Moore 1959).
1. I know that I have hands.
2. I know that I have hands entails I am not a handless,
artificially stimulated, brain in a vat.
3. If I know one thing, and I know that it entails a second thing,
then I also know the second thing. (Closure)
4. Thus, I know that I am not a handless, artificially stimulated
brain in a vat.
From the fact that one knows that she has hands and this is
incompatible with a skeptical hypothesis under which her hands are
illusory, one can infer, and thus come to know (if closure is
correct), the falsity of the skeptical hypothesis.
The closure principle can be used even in defense of a dogmatic
rejection of any recalcitrant evidence that counts against something
that one takes oneself to know. The argument runs as follows (adapted
from Harman 1973):
1. I know my car is parked in Lot A. (Assume)
2. I know that if my car is parked in Lot A, and there is evidence
that my car is not parked in Lot A (say, testimony that the car has
been towed), then the evidence is misleading. (Analytic, since
evidence against a truth must be misleading)
3. Thus, I know that any evidence that my car is not parked in Lot
A is misleading. (Closure)
4. I know that there is evidence that my car is not parked in Lot A. (Assume)
5. Thus, I know that this evidence (testimony that my car was
towed) is misleading. (Closure)
6. If a piece of evidence is known by me to be misleading, then I
ought to disregard it. (Analytic)
7. Thus, I ought to disregard any evidence that my car is not
parked in Lot A. (From 5 and 6)
This result seems paradoxical, however, as most would claim that it is
epistemically irresponsible to ignore all the evidence against what
one takes oneself to know, simply because it is evidence against what
one takes oneself to know. It is plausible (though hardly obvious)
that one takes oneself to know each thing that one believes
(considered individually). If this is conjoined with the argument
above, it entails that one ought to ignore any evidence against what
one believes. This seems to be an even more ill-considered policy.
The closure principle also figures prominently in a paradox about
self-knowledge and knowledge of the external world. It is now widely
accepted that some thought contents are individuated externally. That
is, there are some thought contents that one could not have unless one
was in an environment or linguistic community that is a certain way.
On this view, one could not think the thought that water is wet were
one not in an environment with water, or at least with some causal
connection to water. Given content externalism, it seems we may argue
as follows (the argument is due to McKinsey 1991):
1. I know that I have mental property M (say, the thought that
water is wet). (Assume privileged access to one's own thoughts)
2. I know that if I have mental property M (the thought that water
is wet), then I meet external conditions E (say, living in an
environment containing water). (Externalism with respect to content)
3. If I know one thing, and I know that it entails a second thing,
then I know the second thing. (The principle of the closure of
knowledge under known entailment).
4. Thus, I know that I meet external conditions E (namely, that I
live in environs containing water). (From 1, 2 and 3)
The conclusion follows from an application of the closure principle,
but what makes this paradoxical is that it appears that the knowledge
that is attributed in the premises depends on reflection alone
(introspection plus a priori reasoning), whereas the knowledge
attributed in the conclusion is empirical. If the premises are
correct, and closure holds, I can know an empirical fact by reflection
alone (since I know it on the basis of premises that can be known by
reflection alone). Something seems to have gone wrong and it is
unclear which premise, if any, is the culprit.
Closure principles figure in another philosophical puzzle about
knowledge of "ordinary propositions", those we ordinarily take
ourselves to know, and "lottery propositions," those that, although
extremely likely, we do not ordinarily take ourselves to know. Suppose
that one is struggling to get by on a pensioner's income. It seems
plausible to say that one knows one will not be able to afford a
mansion on the French Riviera this year. However, that one will not be
able to afford the mansion this year entails that one will not win the
lottery. By the closure principle, since one knows that one will not
be able to afford the mansion, and knows that this entails that one
will not win the lottery, one must know that one will not win the
lottery. However, very few are inclined to accept that one knows one
will not win the lottery. After all, there's a chance one could win.
3. Externalist Accounts of Knowledge and the Rejection of Closure
a. Epistemic Externalism and Internalism
To determine whether someone is epistemically justified in believing
something, one must adopt a particular point of view. One may
consider the point of view of the agent who holds the belief or of
someone who possesses all the relevant information (which may be
unavailable to the agent). To oversimplify, those who consider only
the subject's perspective when evaluating the subject's epistemic
justification are epistemic internalists, and those who adopt the
point of view of one with all the relevant information are epistemic
externalists. An account of epistemic justification is internalist if
it requires that all the elements necessary for an agent's belief to
be epistemically justified are cognitively accessible to the agent;
that is, these elements (say, evidence or reasons) must be internal to
the agent's perspective. Externalist theories of justification, on the
other hand, allow that some of the elements necessary for epistemic
justification (such as a belief's being produced by a process that
makes it objectively likely to be true) may be cognitively
inaccessible to the agent and external to the agent's perspective.
There are so many varieties of internalism and externalism that
further generalization is perilous. Considering the theories'
respective treatments of the problem of induction illustrates the
basic difference between them. Hume famously argued that although we
rely on inductive inferences, we have access to no non-question
begging justification for doing so, as our only ground for thinking
that induction will continue to be reliable is that it always has been
reliable. This is an inductive justification of the belief that
induction is epistemically justified. If Hume is right, then a typical
internalist will concede that beliefs based on inductive reasoning are
not epistemically justified. An externalist, however, might insist
that such beliefs are justified, provided that inductive reasoning as
a matter of fact is a process that reliably produces mostly true
beliefs, whether the agent who reasons inductively has access to that
fact or not. On the other hand, an epistemic internalist might rate
the beliefs of a brain in a vat or a victim of Cartesian evil demon
deception as epistemically justified, provided that they were formed
in a way that seems reasonable from the point of view of the agent
(the brain in a vat), such as through the careful consideration of
evidence (evidence, albeit, that is misleading). The epistemic
externalist, however, likely would rate such an agent's beliefs as
unjustified, on the basis of evidence not accessible to the agent,
such as that the belief-forming processes she relies on make her
beliefs extremely likely to be false.
For the most part, internalist accounts of knowledge are those that
appeal to an internalist conception of epistemic justification and
externalist accounts of knowledge employ an externalist conception of
justification. (Alternatively, one may be an internalist about
justification and an externalist about knowledge, by rejecting the
view that epistemic justification is one of the requirements for
knowledge.) Perhaps the greatest challenge to closure principles for
knowledge comes from externalist theories of knowledge, notably those
of Robert Nozick and Fred Dretske.
b. Nozick's Tracking Account of Knowledge and the Failure of Closure
It strikes many that some version of the closure principle must be
true. The idea that no version of the principle is true is, according
to one noted epistemologist, "one of the least plausible ideas to come
down the philosophical pike in recent years." (Feldman 1995)
Nevertheless, philosophers have argued against the epistemic closure
principle on many different grounds. One serious challenge to closure
arose from those who proposed the "tracking" analysis of knowledge
(notably Nozick 1981). According to the tracking theory, to know that
p is to track the truth of p. That is, one's true belief that p is
knowledge if and only if the following two conditions hold: if p were
not the case, one would not believe that p, and if p were the case,
one would believe that p. For one's belief that p to be knowledge,
one's belief must be sensitive to the truth or falsity of p; that
sensitivity is captured by the two subjunctive conditions above. One
knows that Albany is the capital of New York only if one would not
believe it if it were false, and would believe it if it were true.
(See also Robert Nozick's epistemology).
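Writing $Bp$ for "one believes that p" and $\Box\!\rightarrow$ for the subjunctive conditional, the tracking conditions can be stated compactly (a common rendering of Nozick's conditions, not his own notation):

```latex
% One's true belief that p is knowledge iff:
\neg p \mathrel{\Box\!\rightarrow} \neg Bp
  \quad \text{(if $p$ were false, one would not believe $p$)}
p \mathrel{\Box\!\rightarrow} Bp
  \quad \text{(if $p$ were true, one would believe $p$)}
```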
This is an externalist theory of knowledge because whether or not an
agent satisfies the subjunctive conditions for knowledge may not be
cognitively accessible to the agent. To evaluate an agent's belief,
with respect to whether it meets those conditions, it may be necessary
to adopt the point of view of someone with information not accessible
to the agent.
Let's illustrate this with an example similar to Nozick's own (1981,
207). Let p be the belief that one is sitting in a chair in Jerusalem.
Let q be the belief that one's brain is not floating in a tank on
Alpha Centauri, being artificially stimulated so as to make one
believe one is sitting in a chair in Jerusalem. Suppose one has a true
belief that p. In the "closest" counterfactual situations (to employ
the terminology of one account of truth-conditions for subjunctives)
in which p is false (say, one is standing in Jerusalem, or one is
sitting in Tel Aviv), one will not believe p. In close counterfactual
situations in which one is sitting in Jerusalem, one does believe that
p. One's belief of p tracks the truth of p and thus counts as
knowledge.
Suppose, on the other hand, that one has a true belief that q. If
one's belief that q were false, however (and one really was in this
predicament on Alpha Centauri), one would still believe (falsely) that
one was not on Alpha Centauri (q). One's belief that q, while actually
true, does not track the truth of q, since one would hold the belief
even if q were false. Hence, the belief that q does not count as
knowledge.
How does this relate to the closure of knowledge? The proposition that
one is sitting in Jerusalem (p) entails that one's brain is not
floating in a tank on Alpha Centauri, being stimulated so as to make
one think that one is sitting in Jerusalem (q). We may suppose that
one can correctly deduce q from p. Even so, since one's belief that p
tracks the truth of p and counts as knowledge and one's belief that q
does not do so, knowledge fails to be closed under known entailment.
One may know that p, and know that p entails q (and come to believe
the latter by correctly deducing it from the former), and yet fail to
know that q.
Nozick's account has at least two virtues. One is that the tracking
analysis of knowledge is plausible. The other is that the rejection of
closure allows us to reconcile the following two claims, both of which
seem plausible but had seemed incompatible: (1) we do know many common
sense propositions, such as that I have hands, and (2) we do not know
that skeptical hypotheses, such as that I am a handless, artificially
stimulated brain in a vat, are false. One desideratum of a theory of
knowledge is that it refutes skepticism while accounting for the
plausibility and persuasiveness of the skeptic's case against common
sense knowledge claims. Both the skeptic and the Moorean anti-skeptic
come up short here. The skeptic must deny our common sense knowledge
claims and the Moorean must maintain that we can know the falsity of
skeptical hypotheses. As long as we accept the closure principle,
whether we are skeptics or anti-skeptics, we cannot maintain both that
we know common sense propositions and that we do not know that the
skeptical hypotheses are false, since we know that the common sense
propositions entail the falsity of the skeptical propositions.
Knowledge of the truth of the common sense claims would, if knowledge
is closed under known entailment, guarantee our knowledge that
skeptical hypotheses are false. Citing our failure to know that
skeptical hypotheses are false, the skeptic applies modus tollens and
infers that we must not know the common sense propositions. The
rejection of closure blocks this move by the skeptic.
This is not to say that there are not plausible counterexamples to the
tracking account of knowledge. I may know my mother is not the
assassin since she was with me when the assassination took place. But
counterfactually, if she were the assassin, I would still believe she
was not, since after all I couldn't believe such a thing of my mother.
My belief that my mother is not the assassin fails to track the truth,
since I would have believed it even if it were false, but it seems
quite plausible that I do know she's not the assassin, as my evidence
for her innocence is quite overwhelming – my mother cannot be in two
places at once. Tracking accounts like Nozick's, which do not make
reference to the reasons the agent has for the belief in question,
seem vulnerable to such counterexamples.
c. Dretske's Externalist Account of Knowledge and Closure Failure
Dretske's account of knowledge is as follows: one's true belief that p
is knowledge if and only if (i) one's belief that p is based on a
reason R, and (ii) R would not hold if p were false.
Less formally, we may put this as follows: one knows a given claim to
be true only if one has a reason to believe that it is true, and one
would not have this reason to believe it if it were not true. (See
Dretske 1971). This is an externalist account because whether an agent
meets conditions (i) and (ii) above may be inaccessible to the agent.
One could believe a claim on the basis of a particular reason without
being able to explain one's reliance on that reason, and without
knowing whether one would still have the reason if the claim were
false. For instance, one might believe that one's toes are curled on
the basis of proprioceptive evidence (evidence that one would not have
if one's toes were not curled), without one having any idea what
proprioception is, what sort of evidence one has for the claim that
one's toes are curled, or whether one would have such evidence even if
one's toes were uncurled.
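In the same subjunctive shorthand used above for Nozick (my rendering, not Dretske's), the conclusive-reasons condition is:

```latex
% One knows that p on the basis of reason R iff p is true,
% one's belief that p is based on R, and:
\neg p \mathrel{\Box\!\rightarrow} \neg R
  \quad \text{($R$ would not hold if $p$ were false)}
```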
Let's illustrate Dretske's account with his famous zebra example
(Dretske 1970). Suppose one is in front of the zebra display at the
zoo. One believes that one is seeing zebras on the basis of perceptual
evidence. Furthermore, in the closest possible worlds in which one is
not seeing zebras (where the display is of camels or tigers), one
would not have that perceptual evidence. Consequently, one knows that
one is now seeing zebras, on the basis of the perceptual evidence one
has. Consider, however, the belief that one is not now seeing
mules cleverly disguised by zoo staff to resemble zebras. Whatever
one's reason for believing this claim (say, that it is just very
unlikely that the zoo would deceive people in that fashion), one would
still have this reason even if the belief were false (and one was
seeing mules cleverly disguised to look like zebras). Hence, one would
not know that one is not now seeing mules cleverly disguised to
resemble zebras.
As with Nozick's account, this provides a counterexample to the
closure of knowledge. One can know that one is now seeing zebras, one
can correctly deduce from this that one is not now seeing mules
cleverly disguised to resemble zebras, and yet fail to know that one
is not now seeing mules cleverly disguised to resemble zebras.
Furthermore, Dretske's account better handles the counterexample to
Nozick's theory. One believes (truly) that one's mother is not the
assassin, on the grounds that one was with one's mother at the time
the assassination happened (and that one's mother cannot be in two places
at once), and one would not have this reason to think her innocent if
she were indeed the assassin. Thus, one knows that one's mother is not
the assassin, since the evidence is absolutely conclusive, despite the
fact that if one's mother were the assassin, one would still believe
that she wasn't, on the basis of a different, bad reason.
Even Dretske's account is plausibly vulnerable to counterexample.
Suppose that one believes correctly at noon on Tuesday that Jones is
chair of one's department, on the basis of the typical sort of
evidence (say, recollection of Jones being installed in the position,
the department's website listing Jones as chair, and so forth).
Suppose that at five minutes past noon on Tuesday, Jones is suddenly
struck dead by a bolt of lightning (and is consequently no longer
chair). Did one know at noon, five minutes prior to the death, that
Jones was the chair? Since one would have had that same set of reasons
to believe at noon that Jones was chair even in the closest possible
worlds in which he was not chair at noon (that is, worlds in which
he'd been struck dead by lightning five minutes before noon), on
Dretske's account one does not know at noon that Jones is the chair.
Those who find this
verdict implausible (that is, those who think one does know on the
basis of the typical evidence that Jones is the chair, right up until
the moment that Jones suddenly is struck dead and stops being the
chair), may find Dretske's account of knowledge wanting. (The example
is adapted from Brueckner and Fiocco 2002).
A further justification Dretske offers for denying closure is that there
are other sentential operators that are not closed under known
entailment and behave in many respects like the knowledge operator.
(See Dretske 1970). Dretske defines a sentential operator O to be
fully penetrating when O(p) is closed under known entailment. That is,
O is fully penetrating if and only if O(p) entails O(q) whenever p is
known to entail q. "It is true that" is a fully penetrating operator,
since, if p is
known to entail q, "it is true that p" must entail "it is true that
q". "It is surprising that" is non-penetrating; although it is
surprising that tomatoes are growing on the apple tree, it is not
surprising that something is growing on the apple tree. Some operators
are semi-penetrating. An operator is semi-penetrating when it
penetrates only to a certain subset of a given proposition's
entailments.
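Dretske's classification of operators can be summarized schematically (my rendering, using the notation introduced above):

```latex
\begin{align*}
&\text{Fully penetrating:} && O(p) \rightarrow O(q)
  \text{ for every } q \text{ that } p \text{ is known to entail}\\
&\text{Semi-penetrating:} && O(p) \rightarrow O(q)
  \text{ only for } q \text{ in a contextually relevant range of entailments}
\end{align*}
```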
For instance, "R is an explanatory reason for" seems to be a
semi-penetrating operator. Within a range of cases, if p is known to
entail q, then R is an explanatory reason for p entails R is an
explanatory reason for q. A reason that explains why Bill and Harold
are invited to every party is necessarily a reason why Harold is
invited to every party. Similarly, "knows that" seems to penetrate
through similar entailments; if one knows that Bill and Harold are
invited to every party, then one knows that Harold is invited to every
party.
However, "R is an explanatory reason for my painting the walls green"
need not entail "R is an explanatory reason for my painting the
walls." Depending on the context, a reason that explains why I painted
my walls green may be a reason why I did something entailed by my
painting the walls green, such as my not painting the walls red, but
may not be a reason why I did something else entailed by my painting
the walls green, such as my not wallpapering the walls green. The
emphasis is crucial. A reason to paint the walls green (with stress on
"green") is a reason not to paint them red, but may not be a reason to
paint rather than wallpaper. A reason to paint the walls green (with
stress on "walls") may be a reason not to paint the floor green, but
it might be neutral as to which color the walls receive.
Consideration of ordinary demands for reasons shows that emphasis, or
other contextual factors, determines a certain range of reasons to be
relevant and a certain range irrelevant. The same reason will not
suffice to explain each of the following: "I bought tomatoes"
(stressing "I"), "I bought tomatoes" (stressing "bought"), and "I
bought tomatoes" (stressing "tomatoes"), even though these three
sentences entail and are entailed by exactly the same claims, since
they are logically equivalent. Dretske says that no fact is an island
and that various contextual factors will determine, for each operator,
its relevant alternatives (i.e. the negations of the consequents to
which the operator penetrates). (See also Contextualism in
Epistemology, Chapter 3, on Dretske and the denial of closure).
d. "Abominable Conjunctions"
On the other hand, some philosophers view the closure principle as so
obviously true that, rather than reject it to accommodate a given
theory of knowledge, they would reject the account of knowledge in
order to keep closure. Dretske's account of knowledge has been much
discussed in the philosophical literature. One infelicitous-seeming
consequence of rejecting closure in order to preserve his account is
that one could truly say, "I know that that animal is a
zebra and I know that zebras are not mules, but I don't know that that
animal is not a cleverly disguised mule." Or, "I know I have hands,
and I know that if I have hands I am not handless, but I don't know
that I am not a handless brain in a vat." Worse yet, "I know it is not
a mule, but I don't know it's not a cleverly disguised mule." These
claims ("abominable conjunctions," according to DeRose 1995) sound at
best paradoxical and at worst absurd. This seems to point to the
extreme plausibility of some form or another of the closure principle.
Dretske (2005a, 17-18) agrees that such statements sound absurd, but
maintains that they are true. They may violate conventional
conversational expectations and they may be met with incomprehension,
but they are not self-contradictory. "Empty" and "flat" are often
taken to be absolute concepts (since to be empty is to not contain
anything at all and to be flat is to have no bumps), but also
context-relative, in that whether a particular item counts as a thing
or a bump depends on the context. It sounds a bit strange to say that
the warehouse is empty, but has lots of dust, gas molecules, and empty
crates in it. The utterance may violate conversational rules, but the
utterance might, despite all that, be true, if the concepts of
emptiness and flatness are as described. So too with the abominable
conjunctions if the attendant conception of knowledge is correct.
Philosophers may always appeal to Gricean conversational implicatures
to blunt the objection that their view entails absurd claims. Truth
and conversational propriety are not one and the same. (Paul Grice is
the philosopher most closely associated with the view that
communication is guided by various conversational maxims and that some
utterances are conversationally inappropriate, even if true, because
they invite misunderstanding. For instance, the utterance "Mary
insulted her boss and she was fired," is true even if Mary did not
insult her boss until after she was fired, but it would be an
inappropriate remark in most contexts, since the listener naturally
would conclude that the insult preceded the dismissal. For more on
this, see Grice 1989).
John Hawthorne (2005: 30-31) makes two points in reply. First, he
says, it is unclear what sort of Gricean mechanism could make it true
but conversationally inappropriate to utter "S knew that p and
correctly deduced q from p, but did not know that q." Second, an
appeal of this sort can at best explain why we do not utter certain
true propositions, but not why we actually believe their negations.
Even if it is true that one's wife is his best friend, it would be
inappropriate for him to introduce her to someone as his best friend.
But the conversational mechanism at play here could hardly be an
explanation for why he believed that his wife was not his best friend
(even though she was). Why, if the denial of closure is true but
conversationally infelicitous, do so many not only not deny closure in
conversations but in fact believe that the closure principle is true?
One might reply that many people, even philosophers, are apt in some
situations to mistake what is conversationally appropriate for what is
true (as with conditional claims that have false antecedents), so an
explanation of why a true claim violates conversational norms might
well explain why people believe the negation of the claim.
e. Alternative Anti-Skeptical Strategies Need Not Reject Closure
There are alternative strategies for refuting skepticism that seem to
have many of the virtues of the tracking account of knowledge, but do
not entail the falsity of closure principles. Contextualism, for
example, says that knowledge attributions are sensitive to context, in
that a subject S might know a proposition p relative to one context,
but simultaneously fail to know that p relative to another context.
The contextual factors to which knowledge attributions are taken to be
sensitive include things like whether a particular doubt has been
raised or acknowledged and the importance of the belief being correct.
In an ordinary context, where skeptical scenarios have not been
raised, the standards for knowledge are quite low, but, in contexts in
which skeptical doubts have been raised, such as an epistemology
class, standards for knowledge have been raised to levels that
typically cannot be met. One might know relative to the everyday
context that she has hands, but fail to know this relative to the
skeptic's context, because a skeptical scenario has been raised and
she cannot rule it out.
Or a true belief with a certain level of justification might count as
knowledge as long as it is not terribly important that the belief be
correct, but would no longer be knowledge if the stakes were raised.
One might know that the bank will be open on Saturday after confirming
that the bank has Saturday hours, even if one has not checked whether
the bank has changed its hours in the past two weeks, as long as no
great harm will befall one if it turns out one is wrong. But if
financial ruin will befall one were a check not deposited before
Monday, then one's justification might need to be stronger before it
would be correct to say that one knows the bank is open Saturday.
The contextualist then can reconcile the intuitions that it is
sometimes correct to attribute to someone knowledge of everyday common
sense propositions, despite her inability to rule out skeptical
propositions, and that we are sometimes correct in refusing to
attribute knowledge of the falsity of a skeptical scenario when the
subject is unable to rule out such scenarios. But the contextualist
can do this while accepting at least some version of closure. The
contextualist says that epistemic closure holds within an epistemic
context, but fails inter-contextually. For instance, in the everyday,
low epistemic standards context, one knows that one has hands and
anything that one can correctly deduce from this claim, such as that
one is not a handless being deceived into thinking that one has hands.
In the context with much higher epistemic standards, one knows neither
that one is not a handless, artificially stimulated brain in a vat,
nor (by an application of the closure of knowledge under known
entailment) that one has hands. Closure will fail only when it extends
across contexts. For instance, if one were to cite one's knowledge
that one has hands (in the ordinary context) as grounds for saying in
the heightened context that one knows that the brain in a vat
hypothesis is false (as the Moorean might), one would illegitimately
apply the closure principle. The skeptic's citing one's failure to
know the falsity of the skeptical hypothesis (in the heightened
context) as entailing that one does not know the common sense
proposition (in the ordinary context) would be a similar misuse of the
closure principle.
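The contextualist treatment can be summarized schematically, writing
K_c for "knows relative to context c" (the subscripted notation is a
gloss for illustration, not standard contextualist machinery):

```latex
% Closure holds within a single context c:
\big( K_c(p) \wedge K_c(p \rightarrow q) \big) \rightarrow K_c(q)

% But knowledge relative to a low-standards context c does not
% license a knowledge claim relative to a high-standards context c':
\big( K_c(p) \wedge K_c(p \rightarrow q) \big) \not\Rightarrow K_{c'}(q)
```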
If a theory of knowledge is independently plausible and can answer the
skeptic without denying closure, then, everything else being equal, we
ought to be reluctant to reject closure just so that we can accept the
tracking account of knowledge. Contextualism, of course, is plagued
with problems of its own. One such problem is as follows: since
whether one knows a claim or not depends on how stringent the
epistemic standards are in the context and the standards can be raised
by a particular doubt occurring to someone in the context,
contextualism seems to imply that it is easier to know things if one
spends time with the stupid or incurious or if one is stupid or
incurious.
The plausibility of the denial of closure may well depend not only on
whether it is a way to avoid skepticism, but on whether it is the only
way to do so. (Dretske does insist that the only plausible way to
refute skepticism is by denying closure. See his 2005a and 2005b for a
defense of this claim, trenchant criticisms of the contextualist
theory, and responses to criticisms of the tracking theory.)
f. Some Skeptical Arguments do not Employ Closure
One of the strengths claimed for the tracking account of knowledge is
that it blocks the standard skeptical argument, since it involves the
rejection of closure. Not all skeptical arguments employ closure
principles, however, so it is unclear how much anti-skeptical value
would accrue from denying closure. Underdetermination arguments might
be the best skeptical arguments and they do not depend (at least
explicitly) on closure.
Underdetermination is a relation that holds between two or more
theories, when the theories are incompatible, but empirically
equivalent. Underdetermination skeptical arguments rely crucially on
the premise that if two theories are incompatible but compatible with
all the available (and perhaps possible) data, we cannot know that one
is true and the other false. Compare, for example, the thesis that I
have hands, which I perceive through sense perception, and the thesis
that I am a handless brain in a vat, artificially stimulated so as to
have misleading sense perceptions. These theses are incompatible, but
they are empirically equivalent. Whichever thesis were true, I would
have the same sort of experiences. Suppose we adopt the following
principle: if two incompatible theses both entail (or predict) the
same observational data, then that observational data does not support
(or justify belief of) one of the theses over the other. With this
principle and the premise that the two theses are incompatible but
observationally equivalent, we can deduce that our apparent perception
of our hands does not justify us in believing that we have hands.
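The structure of the argument can be sketched as follows (the
symbolization is mine; h_1 and h_2 are the rival hypotheses, d the
shared observational data, and J_d(h) abbreviates "d justifies belief
in h"):

```latex
% Premise 1: the hypotheses are incompatible but both entail d
\neg(h_1 \wedge h_2), \qquad h_1 \rightarrow d, \qquad h_2 \rightarrow d

% Premise 2 (underdetermination principle): data entailed equally by
% incompatible hypotheses supports neither over the other
\big( (h_1 \rightarrow d) \wedge (h_2 \rightarrow d)
  \wedge \neg(h_1 \wedge h_2) \big) \rightarrow \neg J_d(h_1)

% Conclusion: the data (apparent perception of hands) does not
% justify believing h_1 (that one has hands)
\therefore\; \neg J_d(h_1)
```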
The argument is greatly oversimplified, but the outline of the
skeptical argument from underdetermination now ought to be clear. The
argument does not explicitly employ any closure premise, so the
rejection of closure would seem not to undermine the argument in any
straightforward way. One could always argue that the appeal of the
argument from underdetermination implicitly relies on the closure
principle or that the argument from underdetermination is
objectionable on other grounds. Skeptical arguments from
underdetermination, however, seem as plausible as other skeptical
arguments and their plausibility seems not to depend on the
plausibility of any of the closure principles.
Infinite regress arguments for skepticism also do not
straightforwardly appeal to closure. A regress argument that no belief
is epistemically justified (and hence that no belief counts as
knowledge) runs as follows. We assume that all justification is
inferential. That is, every justified belief is justified by appeal to
some other justified belief. The basis for this claim might be the
nature of argumentation. One is justified in believing a conclusion if
one is justified in believing the premises that support the
conclusion. If the conclusion is one of the premises, then the
argument is question-begging, or circular, and not rationally
persuasive. But if every justified belief can be justified only by
inferring it from some further justified belief and there cannot be an
infinite regress of justified beliefs, then it must be that no beliefs
are justified. (A foundationalist about justification, on the other
hand, while agreeing that an infinite regress of justified beliefs is
impossible, insists that there are justified beliefs, and hence that
some beliefs are justified non-inferentially, or in other words, that
some justified beliefs are basic or foundational). The claim that no
justified belief is self-justifying does not entail any closure
principle of justification or knowledge, so the argument seems to be
independent of closure and thus not vulnerable to arguments against
closure principles. (See also Ancient Skepticism).
The proponent of the tracking account of knowledge need not answer all
forms of the skeptical argument with the same tools, so even if some
skeptical arguments do not depend on the closure principle, the
tracking analysis might provide the resources for countering the
skeptical arguments from underdetermination or regress.
4. Dogmatism and the Rejection of Closure
At least one philosopher (Audi 1988, 76-8; 1991, 77-84) has claimed
that the argument for dogmatism, adapted from Harman (see section
2 of this article), is a reductio ad absurdum of the epistemic closure
principle. If closure allows one to infer, and thus know, that any
evidence against something one knows must be misleading and may be
ignored, then closure must be rejected.
Audi's example is of a man who adds up a series of numbers and thereby
knows the sum of the numbers. But the man's wife (whom he considers to
be a better mathematician) says that he has added the numbers
incorrectly and gotten the wrong sum. If the man knows that the sum is
n, and knows that his wife says the sum is not n, then by closure he
knows that his wife is wrong. (This is so, as "the sum is n and my
wife says the sum is not n" entails that "my wife is wrong"; one knows
the former claim and knows it entails the latter, so one knows the
latter). Since he knows his wife is wrong, there is no need to
recalculate the sum. (Similar examples appear in Dretske 1970 and
Thalberg 1974). If one believes something only when one takes oneself
to know it, as is plausible, then by this reasoning one has reason to
dismiss any evidence against something that one believes.
Denying the closure principle to avoid the odd dogmatic conclusion has
some initial appeal, but there are alternative solutions that do not
require us to reject such a compelling principle. And, as Feldman says
(1995, 493), there is a general reason not to resolve the paradox by
denying closure. To say, "Yes, I know that p is true, and that p
entails q, but I draw the line at q," seems irrational. To refuse to
accept what you know to be the consequences of your beliefs, he says,
is to be "patently unreasonable." Not only is it infelicitous to deny
closure, but the dogmatist argument may be blocked without doing so.
For instance, one could take the dogmatism argument to be a reductio
ad absurdum of the anti-skeptical position. This is the tack taken by
Peter Unger (1975). If we deny that one could know that p (say, that
the sum of the numbers is n), then even if we accept closure, we have
no reason to suppose that one could know that all evidence against p
was misleading.
Alternatively, Roy Sorensen (1988) argues that given that one
knows that p, the conditional "If E is evidence against p, then E is
misleading" is a junk conditional, in that although it may be known to
be true, this knowledge cannot be expanded under modus ponens. That is
to say, if "if p then q" is a junk conditional, the conditional can be
known to be true, but it could not be the case that simultaneously the
conditional is known and that knowledge of the antecedent p would
justify one in believing the consequent q. Some conditionals are known
to be true on the basis of the extreme unlikelihood of the antecedent,
but are such that if one acquired evidence that supports the
antecedent, one would not be justified in inferring the consequent
because the probability of the conditional varies inversely with the
probability of the antecedent. That is, if one were to learn that
the antecedent of the conditional was true, one would no longer have
reason to accept (and would no longer know) the conditional. "If this
is a Cuban cigar, then I'm a monkey's uncle!" is an example of such a
conditional. This conditional can be known to be true, in virtue of
the antecedent being known to be false, but if one were to find
evidence that this is indeed a Cuban cigar, one should not infer that
he is a monkey's uncle. Rather, one should conclude that perhaps one
did not know the conditional to be true after all, since one has
evidence that its antecedent was true and its consequent false. In
short, if a conditional is a junk conditional one cannot come to know
the consequent q in virtue of one's knowing the antecedent p and the
conditional if p then q, because one's knowledge of the conditional
depends on the falsity of the antecedent.
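The point can be put schematically (K abbreviates "it is known that";
the formalization is an interpretive sketch of Sorensen's idea, not
his own notation):

```latex
% Closure requires that the conditional and its antecedent be
% known simultaneously:
\big( K(p) \wedge K(p \rightarrow q) \big) \rightarrow K(q)

% For a junk conditional, knowledge of the conditional rests on the
% antecedent's not being known (indeed, on its evident falsity):
K(p \rightarrow q) \rightarrow \neg K(p)

% Hence the conjunction K(p) \wedge K(p \rightarrow q) is never
% satisfied, and closure is respected vacuously rather than violated.
```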
Given that one knows that r (say, that one's car is in parking lot A),
one knows that the conditional "if there is any evidence against r,
however strong, then it must be misleading" is true. Part of one's
basis for knowing that r might be that one has reason to believe that
there is no strong evidence against r. But if one were to learn of
strong evidence against r, such as testimony that one's car had been
towed, one ought, at least in some cases, to consider the possibility
that one does not in fact know that r, rather than simply inferring
that the testimony is misleading. Learning the truth of the antecedent
– that there is strong evidence against r – may undermine the
justification for believing the conditional itself, thus making the
conditional resistant to modus ponens. Knowledge of the conditional
depends on one's knowing that the antecedent is false. Finding
evidence in favor of the antecedent – even if in fact it is misleading
– may weaken one's justification for the conditional, such that one no
longer knows the conditional to be true.
This blocking of the dogmatist argument does not involve denying
closure, though. The reason the modus ponens inference fails to go
through is that the conditional is a "junk" conditional; one can
know the conditional to be true only if one does not know the
antecedent to be true, and the closure principle applies only if one
simultaneously knows both the conditional and its antecedent to be
true.
Another explanation that does not require the denial of closure is due
to Michael Veber (2004). He says that even if the dogmatist
argument is sound, the principle "If a piece of evidence E is known by
S to be misleading, S ought to disregard it," ought not to be endorsed
on grounds of human fallibility. We are frequently enough wrong in
taking ourselves to know what we in fact do not know that following
such a principle would lead one to disregard evidence that is not
misleading. There is nothing wrong with the principle, provided it is
correctly applied; but due to the difficulty or impossibility of
correctly applying it, adopting such a policy is contraindicated.
5. The McKinsey Paradox, Closure, and Transmission Failure
a. The McKinsey Paradox
Michael McKinsey (1991) discovered a paradox about content externalism
that has prompted some reconsideration of how knowledge is transmitted
through deductive reasoning.
Content externalism (or anti-individualism) is, to greatly
oversimplify, the thesis that we are only able to have thoughts with
certain contents because we inhabit environments of certain sorts.
(Putnam 1975 and Burge 1979 are the most notable defenses of this
view). Molecule-for-molecule duplicates could differ in their contents
due to differences in their environments. According to the
externalist, my twin on Twin Earth might be an exact duplicate of me,
but if Twin Earth contains a different but similar light metal used to
make baseball bats, cans, and so forth instead of aluminum, then even
if the denizens of Twin Earth call this metal "aluminum," their
thoughts are not thoughts about aluminum. This view is a repudiation
of the Cartesian view of the mental, according to which the contents
of our thoughts are what they are independent of the surrounding
world.
Externalism has been defended and criticized on many different
grounds, but the debate about externalism has pivoted largely on its
implications for the thesis that we have privileged access to the
contents of our own thoughts. How does one know that she is now
thinking that some cans are made from aluminum, rather than the
thought that some cans are made from twaluminum (as we may call it),
which is what she would be thinking if she lived on Twin Earth?
Incompatibilists about externalism and privileged access point out
that the two thoughts are introspectively indiscriminable if
externalism is true and argue that one could only know which of these
thoughts one is now thinking through empirical investigation of one's
environment.
Compatibilists about externalism and self-knowledge often argue that
if a subject has a mental state with a particular content (say, a
belief that some cans are made of aluminum) in virtue of that subject
bearing a certain relation to an external state of affairs (say,
aluminum, rather than twaluminum, being present in one's environs),
then any mental state the subject has about that particular mental
state of his, like his belief that he believes some cans are made of
aluminum, will also stand in a similar relation to the same external
state of affairs (aluminum, rather than twaluminum, being present).
Hence, this second-order mental state (i.e. a mental state about a
mental state) will involve the same content as the first-order belief
(say, that some cans are made of aluminum). In short, one will believe
that he believes cans are made of aluminum only if one in fact does
believe that cans are made of aluminum, since both of these states
bear a causal relation to aluminum, rather than twaluminum. (See Burge
1988 and Heil 1988). Whatever makes it the case that S thinks that p
(instead of q) will also make it the case that S thinks I am thinking
that p (instead of I am thinking that q). Coupled with a reliabilist
theory of knowledge, these second-order beliefs count as knowledge
since they cannot go wrong and the thesis of privileged access is
reconciled with externalism.
Enter McKinsey's Paradox. We assume that we know content externalism
to be true and that it is compatible with a suitably robust thesis of
privileged access to thought contents. We may now reason as follows:
1. I know that I am in mental state M (say, the state of believing
that water is wet). (Privileged Access)
2. I know that if I am in mental state M, then I meet external
conditions E (say, living in an environment that contains water).
(Content Externalism, known through philosophical reflection)
3. If I know one thing and I know that it entails a second thing,
then I know the second thing. (Closure of knowledge under known
entailment)
4. Thus, I know that I meet external conditions E. (From 1-3)
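The argument can be symbolized as follows, writing K_a(p) for "it is
known a priori (in the broad sense including introspection and
reflection) that p" (the subscripted notation is my gloss; the closure
premise is rendered in an a-priori-preserving form, since the
paradoxical force of the conclusion depends on E's being knowable a
priori):

```latex
\begin{align*}
&1.\; K_a(M) && \text{(privileged access)}\\
&2.\; K_a(M \rightarrow E) && \text{(content externalism)}\\
&3.\; \big( K_a(p) \wedge K_a(p \rightarrow q) \big)
      \rightarrow K_a(q) && \text{(a priori closure)}\\
&\therefore\; K_a(E) && \text{(from 1--3)}
\end{align*}
```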
The knowledge attributed in the premises is a priori in the broad
sense that includes knowledge gotten through introspection and/or
philosophical reflection. That knowledge is not gained via empirical
investigation of the external world. The conclusion follows by an
application of the closure principle. What is paradoxical is that,
given closure, it seems that one can know the truth of an empirical
claim about the external world (say, that one's environment contains
water or that it contains aluminum rather than twaluminum) simply by
inferring it from truths known by reflection or introspection. This
argument bolsters the incompatibilist's case: since it is only by
investigation of the world that one can know that one meets a
particular set of external conditions and since the premises
(including closure) entail that this fact can be known on the basis of
knowledge not dependent on investigation of the world, either the
privileged access premise or the externalist thesis must be false
(provided that the closure principle is correct).
b. Davies, Wright, and the Closure/Transmission Distinction
There are many responses to this argument. Some reject externalism,
some (like McKinsey) deny privileged access, and some compatibilists
(Brueckner 1992) argue that even if externalism is known to be true,
nothing as specific as the second premise of the argument could be
known a priori. But perhaps the most influential attempt to solve the
paradox is due to Martin Davies (1998) and Crispin Wright (2000). They
argue that even though arguments like McKinsey's are valid and their
premises are known to be true, this knowledge is not transmitted
across the entailment to the conclusion. At first blush, it seems like
Davies and Wright are rejecting closure, which is certainly one way to
deal with the paradox. Davies and Wright accept closure, though, and
only reject a related but stronger epistemological principle that says
that knowledge is transmitted over known entailment.
Davies and Wright are distinguishing between the closure of knowledge
under known entailment and what they take to be a common misreading of
it. The closure principle says that if one knows that p and knows that
p entails q, then one knows that q, but the principle is silent on
what one's basis or justification for q is and does not claim that the
basis for q is the knowledge that p and that p entails q. The
principle of the transmission of knowledge under known entailment,
however, states that if one knows that p, and knows that p entails q,
then one knows q on that basis – what enables one to know that p and
that p entails q also enables one to know that q. Davies and Wright
accept the closure principle but deny the transmission principle,
arguing that it fails when the inference from p to q is, although
valid, not cogent. Here cogency is understood as an argument's aptness
for producing rational conviction.
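The two principles can be contrasted schematically (K_b(p) abbreviates
"one knows p on basis b"; the notation is an illustrative gloss, not
Davies's or Wright's own):

```latex
% Closure: silent about the basis on which q is known
\big( K(p) \wedge K(p \rightarrow q) \big) \rightarrow K(q)

% Transmission (stronger): whatever basis b yields knowledge of the
% premise and the entailment also yields knowledge of the conclusion
\big( K_b(p) \wedge K_b(p \rightarrow q) \big) \rightarrow K_b(q)
```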
One way an argument could be valid but fail to be cogent is that the
justification for the premises presupposes the truth of the
conclusion. If I reason from the premise that I have a driver's license
issued by the state of North Carolina (based on visual inspection of
my license and memory of having obtained it at the North Carolina
Department of Motor Vehicles) to the conclusion that there exists an
external world, including North Carolina, outside my mind, it is
plausible that my justification for the premise (taking sense
experience and memory at face value) presupposes the truth of the
conclusion. If this is so, then it seems that the premise could not be
my basis for knowing the conclusion. Anyone in doubt about the
conclusion would not accept the premise, so although the premise
entails the conclusion, the premise could not provide the basis for
rational conviction that the conclusion is true. Such an argument is
valid, but not cogent. It would not be a counterexample to closure,
for anyone who knows the premise and the entailment also must know the
conclusion, but it is a counterexample to the transmission principle,
since the conclusion would not be known on the basis of the knowledge
of the premise.
According to Davies and Wright, the McKinsey argument is valid but not
cogent because knowledge of the conclusion is presupposed in one's
supposed introspective knowledge of the premises. Thus, it is a
counterexample to transmission, but poses no threat to closure. The
non-empirical access to the externally individuated thought contents
is conditional on the assumption that certain external conditions
obtain (such as that one's environs include aluminum rather than
twaluminum), which can only be confirmed empirically. Thus one may not
reason from the non-empirical knowledge claimed in the premises to
non-empirical knowledge of an empirical truth that enjoys
presuppositional status with regard to the premises. That one has a
thought about water may entail that one bears a causal relation to
water in one's environment (if externalism is correct) and one may
know the former and the entailment only if one knows the latter, but
one may not cogently reason from the premise to the conclusion, since
the inference begs the question. Anyone who doubts the conclusion of
the McKinsey argument in the first place would not (or at least should
not — the presuppositions of our premises are not always recognized as
such) be moved to accept the premises that entail it.
Consider then the following principle about a priori knowledge:
(APK) If a subject knows something a priori and correctly deduces
(a priori) from it a second thing, then the subject knows a priori the
second claim.
We can describe this principle in two equivalent ways. It is the
principle of closure of a priori knowledge under correct a priori
deduction and, alternatively, it is a specific instance of the
principle of transmission of knowledge under known entailment, since
it claims that the a priori basis for knowledge of the premise
transmits to the conclusion, allowing it to be known a priori as well.
If Davies and Wright are correct, the principle is false because
counterexamples may be found in deductions that are valid but not
cogent.
Davies and Wright apply this distinction between transmission and
closure to Moore's anti-skeptical argument as well. Although it is
true that the negation of the brain-in-a-vat hypothesis is entailed by
an ordinary proposition, such as that I have hands, the existence of
the external world is presupposed in the justification for that
premise and, therefore, may not be justifiably inferred from that
premise. Moore's argument is not cogent, so it is a counterexample to
transmission, which we have reason to reject anyhow, and not a
counterexample to closure (or so Davies and Wright argue).
This is plausibly another sort of conditional that is not expandable
by modus ponens. Unlike the junk conditionals, which cannot be
expanded because the conditional can be known to be true only when the
antecedent of the conditional is not known to be true, conditionals in
which the justification for the antecedent presupposes justification
for the consequent – we may call them conditionals of presupposition –
cannot be expanded because the relevant modus ponens inference would
not be cogent. The inference would be question-begging.
The distinction that Davies and Wright argue for also applies to
closure principles for justified belief. If they are correct, then
justified belief could be closed under known entailment even if
justification is not necessarily transmitted across known entailment.
The counterexamples to the transmission principle for knowledge would
also function as counterexamples to the transmission principle for
justified belief.
Some have argued that the Davies-Wright line of argument fails to
solve the McKinsey paradox. Whether they are right is beyond the scope
of this entry. But the distinction Davies and Wright have drawn
between transmission and closure is an important one. That if one
knows that p and has validly deduced q from p, one must know that q,
tells us nothing about one's basis for q. Although quite often it can
and will, in some instances knowledge of p cannot provide the basis
for knowledge of q, even though p entails q, because the justification
for p presupposes q. One knows that q (on some independent basis), so
there is no counterexample to closure, but q will not be known on the
basis of p, so the transmission principle is false.
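The two principles can be put schematically. The following is only an
illustrative sketch in informal epistemic-logic notation: K reads "one
knows that", and the subscript marking the basis on which something is
known is an expository convention, not standard notation in the
literature.

```latex
% Closure: if one knows p (on some basis b) and knows the
% entailment, then q is known on some basis or other -- b'
% need not be b.
\[
\big( K_b\,p \;\wedge\; K(p \rightarrow q) \big)
  \;\rightarrow\; \exists b'\; K_{b'}\,q
\]

% Transmission: the stronger claim that the original basis b,
% together with the deduction itself, yields knowledge of q.
\[
\big( K_b\,p \;\wedge\; K(p \rightarrow q) \big)
  \;\rightarrow\; K_b\,q
\]
% A conditional of presupposition falsifies transmission (the
% deduction is not cogent, since the justification for p
% presupposes q) without falsifying closure, since q may still
% be known on an independent basis.
```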
Clarifying the closure principle as a principle about the distribution
of knowledge across known entailment, rather than as a principle about
the transmission or acquisition of knowledge, divorces the closure
principle, to some extent, from the initial intuitive support for it,
which is the idea that we can add to our store of knowledge (or
justified belief) by accepting what we know to be entailed by
propositions we know (or justifiably believe). On this understanding
of closure, knowledge and justified belief are distributed across
known entailment even when drawing the inference in question could not
add to one's store of knowledge or justified belief.
6. Ordinary Propositions, Lottery Propositions, and Closure
The closure principle also figures in a paradox about our knowledge of
"ordinary propositions" and "lottery propositions." Ordinary
propositions are those that we ordinarily suppose ourselves to know.
Lottery propositions are those with a high likelihood of being true,
but which we are ordinarily disinclined to say that we know. Suppose
that one lives on a fixed income and struggles to make ends meet. It
seems that one knows one will not be able to afford a mansion on the
French Riviera this year. One's not being able to afford the mansion
this year entails that one will not win the big lottery this year. By
the closure principle, since one knows that one will not be able to
afford the mansion and one knows that one's not being able to afford
the mansion entails that one will not win the lottery, one must know
that one will not win the lottery. Most, however, are disinclined to
say that one could know that one will not win the lottery. There's
always a chance, after all (provided that one buys a ticket).
This phenomenon is widespread. Ordinarily, one who keeps up with
politics could be said to know that Dick Cheney is the U.S.
Vice-President. That Cheney is the Vice-President entails that Cheney
did not die of a heart attack thirty seconds ago. But it seems that
one does not know that Cheney did not die of a heart attack in the
last thirty seconds. How could one know such a thing? (The coining of
the term "lottery proposition" and the discovery that this phenomenon
is widespread are due to Jonathan Vogel.)
The apparently inconsistent triad is (i) one knows the ordinary
proposition, (ii) one fails to know the lottery proposition, and (iii)
closure. One may eliminate the inconsistency by denying closure on the
sort of grounds that Dretske and Nozick cite. Plausibly, one's belief
of so-called ordinary propositions tracks the truth, while one's
belief of lottery propositions does not. If Cheney were not
Vice-President, one would not believe he was, but had Cheney died in
the past thirty seconds, one still would believe he was
Vice-President.
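The triad can be set out schematically. This is a sketch only: o
stands for the ordinary proposition, l for the lottery proposition it
entails, and K reads "one knows that".

```latex
\[
\text{(i)}\;\; Ko \qquad
\text{(ii)}\;\; \neg Kl \qquad
\text{(iii)}\;\; \big( Ko \wedge K(o \rightarrow l) \big) \rightarrow Kl
\]
% Given K(o -> l), which is uncontroversial in these cases,
% (i) and (iii) together yield Kl, contradicting (ii). Each
% response to the paradox denies one member: Dretske and
% Nozick deny (iii), the skeptic denies (i), and the
% anti-skeptic denies (ii).
```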
One might bite the skeptical bullet and insist that one really does
not know that Cheney is Vice-President. One of a more anti-skeptical
bent might maintain that one can really know the lottery propositions,
such as that Cheney did not die in the last thirty seconds. Such a
resolution has considerable costs, but denying closure is not among
them.
Alternatively, one might argue for a contextualist handling of the
problem that does not require the denial of closure or biting the
skeptical or anti-skeptical bullet.
7. References and Further Reading
References
Audi, Robert (1988), Belief, Justification and Knowledge, Belmont:
Wadsworth. Argues against closure to avoid dogmatic conclusion.
Audi, Robert (1991), "Justification, Deductive Closure and Reasons to
Believe," Dialogue, 30: 77-84. Argues against closure to avoid
dogmatic conclusion.
Brueckner, Anthony (1992), "What an Anti-Individualist Knows A
Priori," Analysis 52: 111-118. Solution to the McKinsey paradox that
does not deny closure.
Brueckner, Anthony (2004), "Strategies for Refuting Closure," Analysis
64: 333-335. Reply to Warfield 2004 and Hales 1995.
Brueckner, Anthony; Fiocco, M. Oreste (2002), "Williamson's
Anti-Luminosity Argument," Philosophical Studies, 110: 285-293.
Contains putative counterexample to Dretskean account of knowledge.
Burge, Tyler (1979), "Individualism and the Mental," Midwest Studies
in Philosophy, 4: 73-122. Seminal defense of content externalism (or
anti-individualism).
Burge, Tyler (1988), "Individualism and Self-Knowledge," The Journal
of Philosophy, 85: 649-663. Influential reconciliation of content
externalism and the privileged access theses.
Davies, Martin (1998), "Externalism, Architecturalism, and Epistemic
Warrant," in C. MacDonald, B. Smith and C. J. G. Wright (eds.),
321-361. Argues that McKinsey paradox is a counterexample to
transmission, not closure.
Dretske, Fred (1970), "Epistemic Operators," The Journal of
Philosophy, 67: 1007-1023. Seminal paper arguing against the closure
of knowledge.
Dretske, Fred (1971), "Conclusive Reasons," Australasian Journal of
Philosophy, 49: 1-22. Contains Dretske's account of knowledge.
Dretske, Fred (2005a), "The Case against Closure," in Steup and Sosa
(eds.), 13-26. Argues that denying closure is the only way to avoid
skepticism.
Dretske, Fred (2005b), "Reply to Hawthorne," in Steup and Sosa (eds.),
43-46. Reply to Hawthorne 2005.
Feldman, Richard (1995), "In Defence of Closure," The Philosophical
Quarterly, 45: 487-494. Defends closure against Audi's arguments (Audi
1988, 1991).
Grice, Paul (1989), Studies in the Way of Words, Cambridge, MA:
Harvard University Press. Classic treatment of pragmatic/semantic
distinction, and conversational maxims and implicatures. Relevant to
discussion of the tracking theory of knowledge's "abominable
conjunctions."
Gunderson, Keith (ed.) (1975), Language, Mind and Knowledge, Minnesota
Studies in the Philosophy of Science, volume VII, Minneapolis:
University of Minnesota Press. Contains seminal Putnam 1975 article.
Hales, Steven (1995), "Epistemic Closure Principles," The Southern
Journal of Philosophy 33: 185-201. Produces counterexamples to many
different formulations of the closure principle, but points out that
one cannot refute closure for knowledge by showing that some necessary
condition for knowledge fails to be closed.
Harman, Gilbert (1973), Thought, Princeton: Princeton University
Press. Employs closure principle in formulating dogmatic argument.
Hawthorne, John (2004), Knowledge and Lotteries, Oxford: Clarendon
Press. Argues for quasi-contextualist solution to problem of lottery
propositions, and defends closure.
Hawthorne, John (2005), "The Case for Closure," in Steup and Sosa
(eds.), 26-43. Defends closure against Dretske's 2005a arguments.
Heil, John (1988), "Privileged Access," Mind 97: 238-251. Influential
reconciliation of content externalism and privileged access theses.
MacDonald, Cynthia; Smith, Barry; Wright, Crispin (eds.) (1998), Knowing Our
Own Minds: Essays on Self-Knowledge, Oxford: Oxford University Press.
Contains the Davies 1998 article.
McKinsey, Michael (1991), "Anti-Individualism and Privileged Access,"
Analysis 51: 9-16. Formulation of the McKinsey paradox.
Moore, G.E. (1959), Philosophical Papers, London: George Allen and
Unwin, Ltd. Contains seminal anti-skeptical essays, such as "Proof of
an External World," and "A Defence of Common Sense."
Nozick, Robert (1981), Philosophical Explanations, Cambridge: Harvard
University Press. Influential tracking account of knowledge and
consequent denial of closure.
Putnam, Hilary (1975), "The Meaning of 'Meaning'," in K. Gunderson
(ed.), 131-193. Seminal work defending content externalism.
Roth, Michael (ed.) (1990), Doubting: Contemporary Perspectives on
Skepticism, Dordrecht: Kluwer. Contains Vogel 1990.
Sorensen, Roy (1988), "Dogmatism, Junk Knowledge and Conditionals,"
The Philosophical Quarterly, 38: 433-454. Solves dogmatism puzzle
without denying closure.
Steup, Matthias, and Sosa, Ernest, (eds.) (2005), Contemporary Debates
in Epistemology, Malden MA: Blackwell Publishing. Contains
Dretske-Hawthorne exchange on closure.
Thalberg, Irving (1974), "Is Justification Transmissible Through
Deduction?" Philosophical Studies 25: 347-356. Argues for
counterexample to closure in dogmatism examples.
Unger, Peter (1975), Ignorance: A Case for Scepticism, Oxford: Oxford
University Press. Retains closure but offers skeptical resolution of
the dogmatism puzzle.
Veber, Michael (2004), "What do you do with Misleading Evidence?" The
Philosophical Quarterly 54: 557-569. Reply to Sorensen (1988) and
alternative solution to dogmatism puzzle.
Vogel, Jonathan (1990), "Are There Counterexamples to the Closure
Principle?" in M. Roth (ed.). Influential discussion of closure and
lottery propositions.
Wright, Crispin (2000), "Cogency and Question-Begging: Some
Reflections on McKinsey's Paradox and Putnam's Proof," Philosophical
Issues 10: 140-163. On the distinction between closure and
transmission, and McKinsey's paradox.
Further Reading
Brueckner, Anthony (1985), "Transmission for Knowledge not
Established," The Philosophical Quarterly 35: 193-95. Reply to Forbes
1984.
Brueckner, Anthony (2000), "Klein on Closure and Skepticism,"
Philosophical Studies 98: 139-151. Reply to Klein 1995.
DeRose, Keith (1995), "Solving the Skeptical Problem," Philosophical
Review 104: 1-52. Influential defense of contextualist epistemology.
Forbes, Graeme (1984), "Nozick on Scepticism," The Philosophical
Quarterly 34: 43-52. Argues that Nozick's denial of closure cannot
adequately handle cases of inferential knowledge.
Goldman, Alvin (1976), "Discrimination and Perceptual Knowledge,"
Journal of Philosophy 73: 771-791. Defends reliabilist account of
knowledge that denies closure, and contains a helpful discussion of
the notion of a relevant alternative.
Klein, Peter (1981), Certainty: A Refutation of Skepticism,
Minneapolis: University of Minnesota Press. Argues that defense of
knowledge closure assumes internalism about justification, so the
skeptic who uses the principle begs the question against the
externalist anti-skeptic.
Klein, Peter (1995), "Skepticism and Closure: Why the Evil Genius
Argument Fails," Philosophical Topics 23: 213-236. Offers a defense of
closure for justification, which, whether the defense succeeds or
fails, he says refutes the skeptic.
Luper-Foy, Steven (1987), "The Causal Indicator Analysis of
Knowledge," Philosophy and Phenomenological Research 47: 563-587.
Argues for a tracking account of knowledge that retains closure.
Pritchard, Duncan (2002), "McKinsey Paradoxes, Radical Scepticism, and
the Transmission of Knowledge Across Known Entailments," Synthese 130:
279-302. Reply to Davies and Wright on transmission and the McKinsey
paradox.
Salmon, Nathan (1989), "Illogical Belief," Philosophical Perspectives
3: 243-285. Argues that his Millian account of names and belief
produces counterexamples to closure principles of justification and
knowledge.
Silins, Nicholas (2005), "Transmission Failure Failure," Philosophical
Studies 126: 71-102. Argues against the Davies-Wright line on
transmission failure.
Sosa, Ernest (1999), "How to Defeat Opposition to Moore,"
Philosophical Perspectives 13: 141-152. Adjustment of the tracking
account of knowledge that allows it to sustain closure.
Stine, Gail (1971), "Dretske on Knowing the Logical Consequences,"
Journal of Philosophy 68: 296-299. Reply to Dretske 1970.
Warfield, Ted (2004), "When Epistemic Closure Does and Does not Fail:
a Lesson from the History of Epistemology," Analysis 64: 35-41. Points
out that one cannot refute closure for knowledge by showing that some
necessary condition for knowledge fails to be closed.