psychologists, and other students of human nature. Philosophers of
mind and action have worked towards developing an account of
self-deception and, in so doing, an explanation of its possibility.
They have asked questions concerning the origin and structure of
self-deception: How is self-deception possible? Do self-deceivers hold
contradictory beliefs? And do they intentionally bring about their
self-deception? While these questions have received a great deal of
attention from philosophers, they certainly do not exhaust the topic's
conceptual intrigue. Self-deception gives rise to numerous
important ethical questions as well—questions concerning the moral
status, autonomy, and well-being of the self-deceiver.
Many worries concerning self-deception stem from the self-deceiver's
distorted view of the world and of himself or herself. Some
philosophers believe that the self-deceiver's warped perception of
things may enable or encourage him or her to act in immoral ways.
Other philosophers, such as Immanuel Kant, fear that the "ill of
untruthfulness" involved in cases of self-deception may spread
throughout the self-deceiver's life and interpersonal relationships.
These concerns about truth and perception point to further questions
regarding the autonomy of the self-deceiver. Can a self-deceiver be
fully autonomous while lacking important information about the world?
Is the possession of true beliefs a necessary condition for autonomous
decisions and action? This article will consider these and other
issues concerning the ethics of self-deception.
1. What Is Self-Deception?
a. Conceptual Challenges
There is a vast literature on the nature and possibility of
self-deception. And given the state of the debate, it seems unlikely
that philosophers will soon agree upon one account of self-deception.
This may be due, in part, to the fact that we ordinarily use the term,
"self-deception", in a broad and flexible way. But it is also the case
that our various experiences with self-deception shape our thoughts
about the paradigmatic self-deceiver. We can view much of the work on
the nature of self-deception as a response to its apparently
paradoxical nature. If self-deception is structurally similar to
interpersonal deception, then it would seem that the self-deceiver
must A) intentionally bring about the self-deception, and B) hold a
pair of contradictory beliefs. Theorists who accept this model claim
that deception is, by definition, an intentional phenomenon; that is,
one person cannot deceive another without intending to do so. They
also maintain that deception always involves contradictory beliefs;
that is, a deceiver believes that p and brings it about that the
deceived believes that not-p. And since the self-deceiver plays the
roles of both the deceiver and the deceived, he must believe both that p
and that not-p. Suppose, for example, that William is self-deceived
about his talent as a writer and believes that he will be the world's
next Marcel Proust. If this is true, then William must hold
contradictory beliefs regarding his talent; that is, he must believe
both that he will be the world's next Proust, and that he will not be
the world's next Proust. Moreover, as per condition A, it must be the
case that he intentionally brings it about that he holds the former
(desirable) belief. But it is not obvious that a single person can
satisfy both of these conditions. Each of these conditions generates a
"puzzle" or "paradox" when applied to cases of self-deception.
Condition A, which gives rise to the "dynamic" puzzle, is problematic
because it seems unlikely that a person could deceive himself while
being fully aware of his intention to do so; for awareness of the
self-deceptive intention would interfere with the success of his
project (Mele 2001, p. 8). And condition B, which gives rise to the
"static" puzzle (pp. 6-7), would be difficult to satisfy because it is
often thought that believing that p rules out believing that not-p as
well (see Goldstick 1989). Even if one thinks that it is possible for
a person to hold contradictory beliefs, one might still be reluctant
to accept that this can happen when the beliefs in question are
obvious contradictories, as they are thought to be in cases of
self-deception. Indeed, theorists who accept this model generally
maintain that it is the very recognition that p that motivates a
person to produce in himself the belief that not-p. What then should
we conclude about the nature and possibility of self-deception?
b. Divided Mind Accounts
Some philosophers respond to these puzzles by denying that strict or
literal self-deception is possible (see Haight 1980). Other
philosophers, such as Donald Davidson (1986, 1998) and David Pears
(1984, 1985), have developed sophisticated accounts of self-deception
that embrace conditions A and B, but avoid—or so they claim—the two
corresponding puzzles. Both Davidson and Pears have introduced
divisions in the mind of the self-deceiver in order to keep
incompatible mental states apart, and thus preserve internal
coherence. Pears, at times, seems to be willing to attribute agency
(at least in some incipient form) to a part or sub-system that results
from such divisions (see Pears 1984). But Davidson firmly denies that
these divisions result in there being multiple agents, or "autonomous
territories", in the mind of the self-deceiver. Instead, he asks us to
suppose that the self-deceiver's mind is "not wholly integrated," and
is, or resembles, "a brain suffering from a perhaps self-inflicted
lobotomy" (1998, p. 8). On Davidson's model, it is possible for a
self-deceiver to hold contradictory beliefs as long as the two beliefs
are held apart from each other. We need to distinguish between
"believing contradictory propositions and believing a contradiction,
between believing that p and believing that not-p on the one hand, and
believing that [p and not-p] on the other" (p. 5). If incompatible
beliefs can be held apart in the human mind, then we can coherently
describe cases of self-deception that satisfy conditions A and B.
c. Deflationary Accounts
Alfred Mele has rejected the two conditions for literal
self-deception, and has developed a "deflationary" account of
self-deception (Mele 2001, p. 4). His account of self-deception is
based heavily upon empirical research regarding hypothesis testing and
biased thinking and believing. He tries to show that ordinary cases of
self-deception can be explained by looking at the biasing effect that
our desires and emotions have upon our beliefs (pp. 25-49). A person's
desiring that p can make it easier for her to believe that p by
influencing the way that she gathers and interprets evidence
relevant to the truth of p. The ordinary self-deceiver does not do
anything intentionally to bring it about that she is self-deceived.
Rather, her motivational economy can cause her to be self-deceived
automatically, as it were, and without her intervention. One of the
ways that a person's desires can shape the way that she forms beliefs
is through what Mele calls "positive misinterpretation". Positive
misinterpretation occurs when one's desiring that p leads him "to
interpret as supporting p data that we would easily recognize to count
against p in the desire's absence" (p. 26). Mele illustrates how this
can happen through his example of the unrequited love that a student,
Sid, feels for his classmate, Roz. Sid is fond of Roz and wants it to
be true that she feels the same way about him. Sid's desire for Roz's
love may cause him to "interpret her refusing to date him and her
reminding him that she has a steady boyfriend as an effort on her part
to "play hard to get" in order to encourage Sid to continue to pursue
her and prove that his love for her approximates hers for him" (p.
26). Positive misinterpretation is just one piece of Mele's careful
empirical study of the nature and aetiology of self-deception.
Annette Barnes (1997) and Ariela Lazar (1999) have also developed
accounts of self-deception that reject conditions A and B. Lazar's
account emphasizes the influence that desires, emotions, and fantasy
have upon the formation of our beliefs. Barnes examines the way that
"anxious" desires affect what we believe, and cause us to become
self-deceived. Barnes, unlike Mele, argues that the desires at work in
cases of self-deception must be "anxious" ones. A person has an
"anxious" desire that q when "the person (1) is uncertain whether q or
not-q and (2) desires that q" (p. 39). For Barnes, self-deceptive
beliefs are functional, and serve to reduce the self-deceiver's
anxiety (p. 76).
Some theorists might worry that, in dispensing with conditions A and B,
deflationary accounts do away with anything
worthy of the name "self-deception". On this view, what Mele et al.
succeed in describing is best understood as wishful thinking or a kind
of motivated believing (see Bach 2002). They seem to fail to account
for self-deception, which is a conceptually distinct phenomenon that
is described by conditions A and B (or conditions closely resembling
conditions A and B). José Luis Bermúdez (2000) and William J. Talbott
(1995), who both defend "intentionalist" accounts of self-deception
(that is, accounts that accept condition A but reject condition B),
have individually argued that deflationary (and thus,
"anti-intentionalist") accounts cannot explain why self-deceivers are
selective in their self-deception. Why is it that an individual can be
self-deceived about his artistic talent, say, but not about the
fidelity of his spouse? Bermúdez refers to this as the "selectivity
problem" (p. 317). Mele is confident that his analysis and application
of the "FTL model" for lay hypothesis testing (which combines the
results of James Friedrich 1993; and Akiva Liberman, and Yaacov Trope
1996), can provide us with an answer to this question (Mele 2001, pp.
31-46). According to the FTL model, desires and corresponding "error
costs" influence the way that we test for truth. When the cost of
falsely believing that p is true is low, and the cost of falsely
believing that p is false is high, it will take less evidence to
convince one that p is true than it will to convince one that p is
false (pp. 31-37). It follows from this analysis that individuals may
test hypotheses differently due to variations in their motivational
states (pp. 36-37). By way of example, Mele explains that
[f]or the parents who fervently hope that their son has been
wrongly accused of treason, the cost of rejecting the true hypothesis
that he is innocent (considerable emotional distress) may be much
higher than the cost of accepting the false hypothesis that he is
innocent. For their son's staff of intelligence agents in the CIA,
however, the cost of accepting the false hypothesis that he is
innocent (considerable personal risk) may be much greater than the
cost of rejecting the true hypothesis that he is innocent—even if they
would like it to be true that he is innocent. (pp. 36-7)
On Mele's view, we can make sense of the different responses that
parents and CIA agents would have to the same hypothesis without
introducing talk of intentions; for differences in motivation give
rise to differences in error costs and, in turn, beliefs. Still,
Mele's critics may remain sceptical about the ability of the FTL model to
deal with the selectivity problem in its full generality. Can error
costs alone determine when a person will, or will not, become
self-deceived? Unimpressed by Mele's treatment of the problem,
Bermúdez insists that "[i]t is simply not the case that, whenever my
motivational set is such as to lower the acceptance threshold of a
particular hypothesis, I will end up self-deceivingly accepting the
hypothesis" (p. 318). Clearly, there is still a great deal of
disagreement concerning the intentionality of self-deception, and of
motivationally biased belief more generally.
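For readers who want a more concrete sense of the error-cost mechanism at issue in this dispute, the short sketch below may help. It is purely illustrative: the threshold rule and the numerical costs are assumptions introduced here for exposition, not part of Mele's presentation or of the Friedrich and Trope-Liberman models themselves. It simply shows how asymmetric error costs could lower or raise the amount of evidence an agent demands before accepting the hypothesis that the son is innocent, reproducing the contrast between the parents and the CIA agents.

    # Illustrative sketch only (Python): the threshold rule and cost figures
    # are expository assumptions, not the FTL model's own formalism.

    def evidence_needed_to_accept(cost_false_acceptance: float,
                                  cost_false_rejection: float) -> float:
        """Evidence strength (0 to 1) required before accepting a hypothesis.

        When wrongly accepting the hypothesis would be costlier than wrongly
        rejecting it, the threshold rises; when wrongly rejecting it would be
        costlier, the threshold falls.
        """
        total = cost_false_acceptance + cost_false_rejection
        return 0.5 + 0.5 * (cost_false_acceptance - cost_false_rejection) / total

    # The parents: rejecting the true hypothesis that their son is innocent
    # carries heavy emotional cost, so weak evidence suffices for acceptance.
    print(evidence_needed_to_accept(cost_false_acceptance=1.0,
                                    cost_false_rejection=9.0))   # ~0.1

    # The CIA colleagues: accepting the false hypothesis that he is innocent
    # carries heavy personal risk, so much stronger evidence is demanded.
    print(evidence_needed_to_accept(cost_false_acceptance=9.0,
                                    cost_false_rejection=1.0))   # ~0.9

On this toy rule, the very same evidence can fall above one agent's acceptance threshold and below another's, which is the pattern Mele appeals to; whether such a mechanism settles the selectivity problem is, as noted above, precisely what Bermúdez disputes.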
d. Other Approaches
There are numerous intermediate and alternative accounts of
self-deception in the literature. Jean-Paul Sartre is well known for
his existential treatment of self-deception, or bad faith (mauvaise
foi), and the human condition that inspires it. The person who is
guilty of bad faith bases his decisions and actions upon an "error";
he mistakenly denies his freedom and ability to invent himself (1948,
pp. 50-15). Consider Sartre's provocative and well-known description
of a woman who halfheartedly, and in bad faith, "accepts" the advances
of a certain male companion. Sartre tells us that the woman is aware
of her companion's romantic interest in her. However, she is at the
same time undecided about her own feelings for him, and so neither
accepts nor rejects his advances wholeheartedly. She enjoys the
anxious uncertainty of the moment, and tries to maintain it through
her ambivalent response to his attempted seduction of her (1956, p.
55). Suddenly, though, the woman's companion reaches for her hand, and
with this gesture "risks" forcing her to commit herself one way or
another (p. 56):
To leave the hand there is to consent in herself to flirt, to engage
herself. To withdraw it is to break the troubled and unstable harmony
which gives the hour its charm. The aim is to postpone the moment of
decision as long as possible. We know what happens next; the young
woman leaves her hand there, but she does not notice that she is
leaving it. She does not notice because it happens by chance that she
is at this moment all intellect. She draws her companion up to the
most lofty regions of sentimental reflection; she speaks of Life, of
her life, she shows herself in her essential aspect—a personality, a
consciousness. And during this time the divorce of the body from the
soul is accomplished; the hand rests inert between the warm hands of
her companion—neither consenting nor resisting—a thing. (pp. 55-56)
Sartre charges the woman in this example with bad faith because she
fails to acknowledge and take full responsibility for her situation
and freedom. Instead of committing herself to one choice or the other
(that is, flirting or not flirting), she attempts to avoid both
choices through a deliberate but feigned separation of the mental and
the physical.
Herbert Fingarette, influenced by Sartre's existential approach, has
developed a theory of self-deception that is couched in what he calls
the "volition-action" family of terms. According to Fingarette, we can
make progress towards understanding self-deception if we replace the
old "cognitive-perception" terminology with his new "volition-action"
family of terms (2000, p. 33). Whereas the cognitive-perception family
of terms emphasizes belief and knowledge, the volition-action family
of terms highlights the dynamic and semi-voluntary nature of
consciousness. Crucial to Fingarette's active or dynamic conception of
consciousness is the idea that a person can become explicitly aware of
something by "spelling it out" to himself. When a person does this, he
directs his attention towards the thing in question and makes himself
fully and explicitly conscious of it (p. 38). Fingarette describes the
self-deceiver as a person who cannot (or will not) spell out an
"engagement" to himself (p. 46). He is unable, or unwilling, to do
this because the engagement in question challenges his conception of
himself. He cannot "avow" this threatening feature of himself or the
world, and so actively prevents himself from doing so. Moreover, the
success of his project demands that he avoid spelling out that he is
not spelling out a particular engagement in the world. In this way,
the self-deceiver adopts a strategy or policy that is "self-covering"
(p. 47).
Fingarette offers a plausible and insightful account of the motivation
behind typical cases of self-deception. But some may interpret his
shift in terminology as an evasion of the central issues that need to
be discussed. Fingarette describes the self-deceiver as one who adopts
a policy that is self-covering. But how is the self-deceiver able to
adhere to this policy without noticing, or even suspecting, that it is
his policy? Will he not find himself in the grip of the dynamic puzzle
of self-deception? And what, on Fingarette's model, should we make of
the self-deceiver's doxastic state? Does the self-deceiver hold only
desirable beliefs about himself and his engagement in the world? Or is
he confused about what he believes because he is engaged in the world
in a way that he cannot avow? Fingarette seems to think that his new
way of framing the problem avoids these questions altogether. But
those who are not immediately sympathetic to Fingarette's shift in
terminology may find his account lacking in detail and clarity on
these "key" points.
Also of interest here is Ronald de Sousa's treatment of self-deceptive
emotions. De Sousa has considered the possibility that we can be
self-deceived not only about our beliefs, but about our emotions as
well. In explaining one source of self-deception, de Sousa examines
the way that various social ideologies influence the emotions—or the
quality of the emotions—that we experience (1987, p. 334). In
explaining how self-deceptive emotions are possible, de Sousa looks at
the way that stereotypes shape the emotions that we experience. For
example, according to certain gender stereotypes,
[a]n angry man is a manly man, but an angry woman is a "fury" or a
"bitch." This is necessarily reflected in the quality of the emotion
itself: a man will experience an episode of anger characteristically
as indignation. A woman will feel it as something less moralistic,
guilt-laden frustration, perhaps, or sadness. Insofar as the
conception of gender stereotypes that underlies these differences is
purely conventional mystification, the emotions that embody them are
paradigms of self-deceptive ones. (p. 334)
De Sousa adds that we cannot account for the emotions in question on
the basis of socialization, or external social forces alone.
Individuals whose emotions embrace these stereotypes are not simply
socialized; they are self-deceived. And they are self-deceived,
according to de Sousa, because they have internalized these
stereotypes, and have allowed them to affect the character of what
they feel (p. 336). To this extent, they are complicit and deeply
involved in the modeling of their own emotions. Fortunately, we have
some hope of freeing ourselves from gender stereotypes and other
social mythologies through what de Sousa describes as
"consciousness-raising". By engaging in a process of critical review
and redescription, we can challenge our assumptions and our view of
the situation that is contributing to our emotive response (pp.
337-338).
Now, how a theorist approaches the ethics of self-deception will depend
upon the view of self-deception that he accepts. As we begin to
explore the ethical dimension of self-deception, it is important to
keep in mind that there is no single account of self-deception that
has acquired universal acceptance among philosophers. At times, these
points of disagreement will have a profound impact upon the way that
we evaluate self-deception. This will become particularly clear (in
Section 5) when we consider whether or not a self-deceiver is ever
responsible for his self-deception.
2. Conscience and Moral Reflection
Self-deception is clearly a sin against Socrates' maxim, "know
thyself". And many people find self-deception objectionable precisely
because of the knowledge that it prevents a self-deceiver from
achieving. As history has amply demonstrated, ignorance—no matter what
its source—can lead to morally horrendous consequences. Aristotle, for
instance, believed that temporary ignorance, a state akin to
drunkenness, made it possible for the akrates to act against his best
moral judgment (1999, 1147a, 10-20). Some scholars might interpret
this ignorance as a convenient instance of self-deception that enables
the akrates to succumb to temptation. One problem with this reading of
Aristotle is that it is not explicitly supported by the relevant
texts. But in addition to this, self-deception is generally thought to
be a lasting, and not temporary, state. A fleeting spell of ignorance
that surfaced and then quickly passed would probably not be best
described as self-deception. If my moral judgment in support of
vegetarianism is suddenly overcome by an intense craving for a grisly
piece of steak, I may be distracted and temporarily ignorant, but
probably not self-deceived in my impaired state of mind. Sometimes,
though, a person's ignorance endures and shapes the way that he
perceives himself and his situation. When this happens, we may have
grounds for thinking that the person in question is self-deceived.
Bishop Joseph Butler regarded self-deception as a serious threat to
morality, and treated it as a problem in its own right in his sermons
on the topic. Butler was particularly concerned about the influence
that self-deception has upon the conscience of an individual. Butler
believed that the purpose of a human being's conscience is to direct
him in matters of right and wrong. A human being's conscience is a
"light within" that—when not darkened by self-deceit—guides a person's
moral deliberations and actions. According to Butler, self-deception
interferes with the conscience's ability to direct an individual's
moral thinking and action. And this, in turn, makes it possible for an
individual to act in any number of malicious or wicked ways without
having any awareness of his moral shortcomings (1958, p. 158). Butler
warns that self-partiality, which is at the root of self-deception,
"will carry a man almost any lengths of wickedness, in the way of
oppression, hard usage of others, and even to plain injustice; without
his having, from what appears, any real sense at all of it" (p. 156).
Butler's condemnation of self-deception is severe, in part, because of
the gravity of the consequences that self-deception can bring about.
The self-deceiver's "ignorance" makes it possible for him to act in
ways that he would not choose to, were he aware of his true motives or
actions. And thus, self-deception is wrong because the acts that it
makes possible are wrong or morally unacceptable. Morality demands
that we reason and act in response to an accurate view of the world.
Self-deception, in obscuring our view, destroys morality and corrupts
"the whole moral character in its principle" (p. 158).
Adam Smith shared Butler's concern about the "blinding" effect of
self-deception, and its ability to interfere with our moral judgment.
According to Smith, it is our capacity for self-deception that allows
us to think well of ourselves, and to cast our gaze away from a less
than perfect moral history (2000, p. 222). In this way, we can
preserve a desirable but inaccurate conception of our character. Smith
observes that
[i]t is so disagreeable to think ill of ourselves, that we often
purposely turn away our view from those circumstances which might
render that judgment unfavourable. He is a bold surgeon, they say,
whose hand does not tremble when he performs an operation upon his own
person; and he is often equally bold who does not hesitate to pull off
the mysterious veil of self-delusion which covers from his view the
deformities of his own conduct. (pp. 222-223)
Self-deception, for Smith, is an impediment to self-knowledge and
moral understanding. If a person does not clearly perceive his
character, and its manifestations in action, then he is less able to
act morally, and to make amends for previous acts of injustice.
Self-deception can also interfere with a person's ability to progress
morally, and to reform or refine his character. Both Butler and Smith
recognized that even the most patient and careful moral reflection is
wholly useless when it responds to a view of things that has been
distorted by self-deception.
One worry that we might have about this evaluation of self-deception
concerns its apparent neglect of instances of self-deception that do
not concern moral issues. We are not always self-deceived about our
immoral actions or motives. It is quite common for people to be
self-deceived about their intelligence, physical appearance, artistic
talent, and other personal attributes or abilities. And it is arguably
the case that self-deception about these qualities often gives rise to
positive or desirable consequences; that is, it may bring it about
that the individuals in question are healthier, happier, and more
productive in their lives than they otherwise would be (see Brown and
Dutton 1995, and Taylor 1989). Mike Martin, in discussing Butler's
treatment of self-deception, has voiced this concern. On Martin's
view, self-deception does not always lead to negative or immoral
consequences, but when it does we should be critical of it. His
"Derivative-Wrong Principle" captures this insight: "Self-deception
often leads to, threatens to lead to, or supports immorality, and when
it does it is wrong in proportion to the immorality involved" (1986,
p. 39). For Martin, self-deception is not always wrong in virtue of
its consequences. But in evaluating the wrongfulness of any particular
case of self-deception, we need to consider its consequences and the
actions that it makes possible.
A second worry that we might have with the Butler-Smith evaluation of
self-deception stems from the fact that we are not always
self-deceived in the positive direction. We are often self-deceived in
thinking that the world, or some part of it, is worse than it really
is. Donald Davidson, in commenting on such cases, claims that if
pessimists are individuals who believe that the world is worse than it
really is, then they may all be self-deceived (1986, p. 87). But if
pessimists have a more realistic view of things than the rest of us,
as the research on depressive realism suggests, then we may want to
resist this conclusion (see Dobson and Franche 1989). It may turn out
to be the case that pessimists are the only ones who are not deeply
mistaken about the world and their role in it. These possibilities
certainly need to be considered when weighing the advantages and
disadvantages of habitual or episodic self-deception.
3. Truth and Credulity
Thus far we have examined the way that self-deception can interfere
with a person's moral reasoning. But what should we say about the
effect that self-deception has upon our general reasoning, that is,
our reasoning about non-moral issues? Might we have reason to extend
Butler's concern about self-deception to other forms of reasoning? W.
K. Clifford, in "The Ethics of Belief" (1886), provided an affirmative
answer to this question, and argued very passionately against any form
of self-deception. Clifford believed that we have a moral duty to form
our beliefs in response to all of the available evidence. It is
therefore wrong on his view to believe something because it is
desirable, comfortable, or convenient. Clifford supports this position
by way of example. He asks his reader to imagine a shipowner who
carelessly sends a dilapidated ship to sail. The shipowner is fully
aware of the ship's condition, but deliberately stifles his doubts,
and brings himself to believe the opposite. As a result of his
negligence, the ship, along with all of the passengers upon it, sinks
in mid-ocean (p. 79). According to Clifford, the shipowner should be
held responsible for the deaths of the passengers; for, as Clifford
puts it, "he had no right to believe on such evidence as was before
him" (p. 70). Clifford adds that even if the ship had successfully
made its way to shore, the shipowner's moral status would be the same,
"he would only have been not found out" (p. 71). Believing upon
insufficient evidence is always morally wrong, regardless of the
consequences. And given that self-deception involves believing upon
insufficient evidence, the same can be said of it: it is always
morally wrong, regardless of its consequences.
Clifford was especially concerned about the effect that believing
based upon insufficient evidence would have upon an individual's (and
society's) ability to test for truth. He thought that believing based
upon insufficient evidence would make human beings credulous, or ready
to believe. A lack of reverence for the truth not only spreads
throughout the life of a single individual—from moment to moment, as
it were—it also spreads from one individual to another. In this way,
humanity may find itself surrounded by a thick cloud of falsity and
illusion (pp. 76-77). Philosophers have been critical of Clifford's
ethics of belief for a variety of reasons. Some have argued that there
can be no ethics of belief because beliefs, unlike actions, are not
under our direct control (see Price 1954), and others have worried
that Clifford's requirements for belief are mistaken or unduly strict
(see James 1999, and van Inwagen 1996). In discussing Clifford's
specific thoughts on self-deception, Mike Martin has argued, contra
Clifford, that not all cases of self-deception (or believing on
insufficient evidence) lead to credulity, or a general disregard for
truth. Indeed, many cases of self-deception seem to be isolated and
relatively harmless (1986, pp. 39-41).
Immanuel Kant also expressed grave concern about the corrosive effect
that self-deception has upon belief and our ability to test for truth.
He refers to falsity as "a rotten spot," and warns that "the ill of
untruthfulness" has a tendency to spread from one individual to
another (1996, p. 183). Although a person may deceive himself or
another for what seems to be a good cause, all deception should be
avoided because it is "a crime of a human being against his own
person" (p. 183). When a person deceives himself or another he uses
himself as a mere means, or "speaking machine" (p. 183). In so doing,
he fails to use his ability to speak for its natural purpose, that is,
the communication of truth (pp. 183-184). Kant's categorical treatment
of all forms of deception is the outgrowth of his particular version
of deontologism. And his especially harsh criticism of internal lies
has its source in his views about the moral importance of acting from
duty. For Kant, a person only acts morally when he acts from duty, or
out of respect for the moral law. While we can never be certain that
we have succeeded in acting from duty, we have an obligation to strive
for this goal (p. 191). Through self-cognition, a person can examine
his motives and possibly become aware of internal threats to acting
morally. (Given that Kant believed that our introspection is fallible,
this qualification is in order.) When he succeeds in his
introspection, he will be in a better position to act morally from
respect for the moral law. Self-deception is particularly problematic
for Kant because it allows a person to disguise his motives and act
under the guise of moral purity. A self-deceiver can comfort himself
with his actions and with what he sees in the external world, and thus
avoid the morally crucial thoughts and questions about the motives for
these actions.
Kant's limited remarks on self-deception are in many ways peculiar to
his moral philosophy. But there is still a great deal that we can take
away from his insights. Whether or not one is a Kantian,
self-understanding seems to be something that is of value to most
people, and to most (if not all) moral theories. Anyone who engages in
moral reasoning will have to be concerned, if not suspicious, about
the accuracy of the beliefs or motives that guide the process. Even
consequentialists must concern themselves with the possibility that,
as a result of self-deception, they may miscalculate the foreseeable
consequences of their actions. John Stuart Mill (1910), for example,
admitted that self-deception might interfere with a person's ability
to correctly apply the utilitarian standard of morality. However, he
believed that self-deception, and the corresponding misapplication of
a moral standard, presents a problem for all moral theories. In
responding to this concern, Mill asks:
But is utility the only creed which is able to furnish us with
excuses for evil doing, and means of cheating our own conscience? They
are afforded in abundance by all doctrines which recognise as a fact
in morals the existence of conflicting considerations; which all
doctrines do, that have been believed by sane persons. It is not the
fault of any creed, but of the complicated nature of human affairs,
that rules of conduct cannot be so framed as to require no exceptions,
and that hardly any kind of action can safely be laid down as either
always obligatory or always condemnable. There is no ethical creed
which does not temper the rigidity of its laws, by giving a certain
latitude, under the moral responsibility of the agent, for
accommodation to peculiarities of circumstances; and under every
creed, at the opening thus made, self-deception and dishonest
casuistry get in. (p. 23)
As Mill observes here, self-deception can interfere with the
application of any standard of morality. For any standard that exists,
no matter how rigid or precise, there is always the possibility that
it will be misapplied as a result of self-deception. What we can
conclude from this, according to Mill, is that the cause of the
misapplication is not the standard itself, but the complexity of human
affairs and our great capacity for self-deception.
4. Autonomous Belief and Action
As we have seen thus far, self-deception (for better or worse) can
interfere with an individual's reasoning in a number of ways. Kant,
Butler, and (to a lesser extent) Mill are particularly worried about
the influence that self-deception can have upon our moral reasoning.
Some philosophers have suggested that by interfering with our
reasoning, self-deception can decrease a person's autonomy, where
autonomy is understood (roughly) as rational self-governance. Marcia
Baron considers the possibility that self-deception diminishes a
person's autonomy by causing him to "operate with inadequate
information," or a "warped view of the circumstances" (1988, p. 436).
When one is self-deceived about important matters, one may suffer from
a serious loss of control. The ability to make an autonomous decision
requires that a person have a certain amount of information regarding
the world and available options in it. If I lack information about the
world, then I may be unable to develop and act on a plan that is
appropriate to it (that is, the world), or to some feature of it. It
has been argued, however, that a person who is self-deceived may not
always be less autonomous on-balance than he otherwise would be. As
Julie Kirsch has pointed out in evaluating the effect of
self-deception upon a person's autonomy, we may need to be sensitive
to the self-deceiver's values, and to the history of the case in
question. Was the self-deception intentionally brought about? Did it
serve to reduce a crippling spell of anxiety? And does the
self-deceiver care more about his own self-esteem or "happiness" than
about truth, or the "real world"? If a person engages in deliberate
self-deception with his own interest in view, we may interpret his
action as an expression of autonomy, and not necessarily as an
impediment to it (2005, pp. 417-426). After all, while many of us do
value truth over comfort, this preference seems not to be one that is
shared by all individuals. Indeed, even truth-loving, tough-minded
philosophers and scientists would probably rather be without certain
pieces of information, such as the unsavory details surrounding their
certain and inevitable deaths.
In examining the connection between self-deception and autonomy, we
may also want to consider the extent or frequency of the
self-deception. Clifford, as we have seen, believed that habitual
self-deception could make a person credulous. Might it also (or in so
doing) make him less autonomous? Baron warns that it might, and takes
this to be one of the most troubling consequences of self-deception.
She claims that self-deception gradually undermines a person's agency
by corroding his "belief-forming processes" (1988, p. 438). This may
be true of habitual self-deception, but as we have already seen, not
all self-deception is habitual. Self-deception can be isolated or
limited to particular areas of concern. Baron's analysis might seem
more plausible, however, if we are willing to accept that
self-deception is not always easy to control or oversee. Some
theorists of self-deception suggest that the easiest or most effective
way to deceive yourself is to do so with your metaphorical "eyes"
closed, and to forfeit all control. Self-deception, on such a model,
would be difficult (or impossible) to navigate because it relies upon
processes that are necessarily blind and independent. As Amelie Rorty
observes,
[c]omplex psychological activities best function at a precritical
and prereflective automatic or autonomic level. The utility of many of
our presumptively self-deceptive responses—like those moved by fear
and trust, for example—depends on their being relatively
undiscriminating, operating at a deeply entrenched habitual
precritical level. (1996, p. 85)
If the success of a strategy depends upon its not being monitored,
then the strategy and its reach may be difficult to control. In this
way, a single case of self-deception may soon lead to others. This is
why Rorty concludes that "[t]he danger of self-deception lies not so
much in the irrationality of the occasion, but in the ramified
consequences of the habits it develops, its obduracy, and its tendency
to generalize" (p. 85). A single case of self-deception may seem prima
facie to be innocuous and under one's control. However, a look at its
less immediate or long-term consequences may cause us to reject this
initial evaluation as shortsighted and incomplete. In this way,
self-deception may be analogous to smoking cigarettes or drinking
alcohol. There may be nothing disastrous about smoking a cigarette or
enjoying the occasional gin and tonic among friends. However, if one
develops—or even begins to develop—the habit of smoking or drinking
gin and tonics, then one might very well be on the way to developing
an autonomy-debilitating addiction.
5. Responsibility
Whether or to what extent we should hold a self-deceiver responsible
for his self-deception will depend upon the view of self-deception
that we accept. As indicated in Sections 1 and 2, there is a great
deal of disagreement about whether self-deception is (sometimes or
always) intentional. Theorists who think that self-deception is
intentional will have grounds for holding self-deceivers responsible
for their self-deception. If becoming self-deceived is an action, or
something that one does, then a self-deceiver may be responsible for
bringing this about (that is, he will be just as responsible for
bringing this about as he would be for anything else that he does). To be sure, if the
theorist does not think that we are responsible for anything that we
do (say, because he is a hard determinist), then he will of course
think the same of the self-deceiver. Matters become more complicated
when the theorist in question (like Davidson 1986, 1998, and Pears
1984) also views the self-deceiver as divided, or composed of parts or
sub-agents. How, then, should he evaluate the self-deceiver? Should he
hold "part" of the self-deceiver, that is, the deceiving "part",
responsible? And view the other "part", that is, the deceived, as the
passive and helpless victim of the former?
Those who do not think that self-deception is intentional may be
reluctant to hold the self-deceiver responsible for his
self-deception. Such theorists may view self-deception as something
that happens to the self-deceiver; for, the self-deceiver does not
actively do anything in order to bring it about that he is
self-deceived. Still, even on this view, we might think that the
self-deceiver has some degree of control over what happens to him.
Although self-deception is not something that a person does, or
actively brings about, it is something that he can guard against and
try to avoid. If this is true, then we might be justified in holding
the self-deceiver responsible for the negligence that contributed to
his state of mind. But there are some who will be reluctant to
attribute even this weak form of responsibility to the self-deceiver.
Neil Levy, who describes self-deception as "a kind of mistake," argues
that we need to "drop the presumption" that self-deceivers are
responsible for their states of mind (2004, p. 310). Levy maintains
that we are often unable to prevent ourselves from becoming
self-deceived because we fail to recognize that we might be at risk.
In many cases, our failure to perceive warning signs will itself be a
function of our motivationally biased states of mind. If I have doubts
about a particular belief that I hold, then I might have reason to
exercise a form of control against my thoughtless acceptance of it.
However, if I am sufficiently deluded about the truth of my belief due
to the force of my desires, then I may hold it without even a hint of
suspicion or doubt. And thus, there will be nothing to prompt me to
implement a strategy of self-control. If this is true, then it would
be inappropriate for others to hold me responsible for my
self-deception (pp. 305-310).
6. Conclusions
The philosophers that we have considered all express serious concerns
about the effects that self-deception can have upon our moral lives.
Butler, Smith, Clifford, and Kant have shown that our moral reasoning
is only effective when it responds to the actual state of the world.
And even when our moral reasoning is effective, self-deception enables
us to hide our true motivation from ourselves, or that which prompts
and guides our reasoning in the first place. But, as we have seen,
self-deception is not limited to our desires, motives, and moral
deliberations: we can deceive ourselves about the state of the world,
the people in it, and even our own personality and bodily flaws.
Self-deception, when practiced regularly, can serve as a kind of
global anesthetic that numbs us to the maladies of life. Most
philosophers accept that severe and widespread self-deception is
harmful and can lead to disastrous results. There is, however,
comparatively less agreement about the wrongfulness of mild and
localized cases of self-deception that simply boost a person's ego, or
add a touch of romance to an otherwise cold and loveless world. While
some philosophers view such cases as harmless and even necessary,
others view them as dangerous and destructive to human well-being and
autonomy.
7. References and Further Reading
* Aristotle, Nicomachean Ethics. Translated by Martin Ostwald
(Upper Saddle River: Prentice Hall, 1999).
* Bach, Kent. "Self-Deception Unmasked." Philosophical Psychology
15.2 (2002), pp. 203-206.
* Baron, Marcia. "What Is Wrong with Self-Deception?" In
Perspectives on Self-Deception. Edited by Brian P. McLaughlin and
Amélie Oksenberg Rorty (Berkeley: University of California Press,
1988).
* Barnes, Annette. Seeing through Self-Deception (Cambridge:
Cambridge University Press, 1997).
* Bermúdez, José Luis. "Self-Deception, Intentions, and
Contradictory Beliefs." Analysis 60.4 (October 2000), pp. 309-319.
* Brown, J., and K. Dutton. "Truth and Consequences: The Costs and
Benefits of Accurate Self-knowledge." Personality and Social
Psychology Bulletin 21 (1995), pp. 1288-1296.
* Butler, Joseph D. C. L. Fifteen Sermons Preached at the Rolls
Chapel and A Dissertation upon the Nature of Virtue. Edited by Rev.
W.R. Matthews (London: G. Bell & Sons LTD, 1958).
* Clifford, William Kingdon. "The Ethics of Belief." In Lectures
and Essays. Edited by Leslie Stephen and Frederick Pollock (London:
Macmillan and Co., 1886).
* Davidson, Donald. "Who Is Fooled?" In Self-Deception and
Paradoxes of Rationality. Edited by J.P. Dupuy (Stanford: CSLI
Publications, 1998).
* Davidson, Donald. "Deception and Division." In The Multiple
Self. Edited by Jon Elster (Cambridge: Cambridge University Press,
1986).
* de Sousa, Ronald. The Rationality of Emotion (Cambridge: The MIT
Press, 1987).
* Dobson, K. and Franche, R. L. "A Conceptual and Empirical Review
of the Depressive Realism Hypothesis." Canadian Journal of Behavioural
Science 21 (1989), pp. 419-433.
* Fingarette, Herbert. Self-Deception (Berkeley: University of
California Press, 2000).
* Friedrich, J. "Primary Error Detection and Minimization (PEDMIN)
Strategies in Social Cognition: A Reinterpretation of Confirmation
Bias Phenomena." Psychological Review 100 (1993), pp. 298-319.
* Goldstick, Daniel. "When Inconsistent Belief Is Logically
Impossible." Logique & Analyse 125- 126 (1989), pp. 139-142.
* Haight, Mary. A Study of Self-Deception (Sussex: The Harvester
Press, 1980).
* James, William. "The Will to Believe." In Reason and
Responsibility: Some Basic Problems of Philosophy, 10th Edition.
Edited by Joel Feinberg and Russ Shafer-Landau (Belmont: Wadsworth
Publishing Company, 1999).
* Kant, Immanuel. The Metaphysics of Morals, Cambridge Texts in
the History of Philosophy. Translated and edited by Mary Gregor
(Cambridge: Cambridge University Press, 1996).
* Kirsch, Julie. "What's So Great about Reality?" Canadian Journal
of Philosophy, 3 (September 2005), pp. 407-427.
* Lazar, Ariela. "Deceiving Oneself Or Self-Deceived? On the
Formation of Beliefs 'Under the Influence.'" Mind 108 (April 1999),
pp. 265-290.
* Levy, Neil. "Self-Deception and Moral Responsibility." Ratio, 3
(September 2004), pp. 294-311.
* Martin, Mike. Self-Deception and Morality (Lawrence: University
Press of Kansas, 1986).
* Mele, Alfred. Self-Deception Unmasked (Princeton: Princeton
University Press, 2001).
* Mill, John Stuart. Utilitarianism (London: J. M. Dent & Sons LTD, 1910).
* Pears, David. Motivated Irrationality (Oxford: Clarendon Press, 1984).
* Pears, David. "The Goals and Strategies of Self-Deception." In
The Multiple Self. Edited by Jon Elster (Cambridge: Cambridge
University Press, 1985).
* Price, H. H. "Belief and Will." Proceedings of the Aristotelian
Society, Supplementary Volume, 28 (1954), pp. 1-26.
* Rorty, Amelie Oksenberg. "User-Friendly Self-Deception: A
Traveler's Manual." In Self and Deception: A Cross-Cultural
Philosophical Enquiry. Edited by Roger T. Ames and Wimal Dissanayake
(Albany: State University of New York Press, 1996).
* Sartre, Jean-Paul. Existentialism and Humanism. Translated by
Philip Mairet (London: Methuen, 1948).
* Sartre, Jean-Paul. Being and Nothingness: A Phenomenological
Essay on Ontology. Translated by Hazel E. Barnes (New York: Washington
Square Press, 1956).
* Smith, Adam. The Theory of Moral Sentiments (Amherst: Prometheus
Books, 2000).
* Talbott, William J. "Intentional Self-Deception in a Single
Coherent Self." Philosophy and Phenomenological Research 55 (March
1995), pp. 27-74.
* Taylor, Shelley E. Positive Illusions: Creative Self-Deception
and the Healthy Mind (New York: Basic Books, 1989).
* Trope, Yaacov, and Akiva Liberman. "Social Hypothesis Testing:
Cognitive and Motivational Mechanisms." In Social Psychology: Handbook
of Basic Principles. Edited by E. Higgins and A. Kruglanski (New York:
Guilford Press, 1996).
* van Inwagen, Peter. "It Is Wrong, Everywhere, Always, and for
Anyone, to Believe Anything upon Insufficient Evidence." In Faith,
Freedom, and Rationality: Philosophy of Religion Today. Edited by Jeff
Jordan and Daniel Howard-Snyder (London: Rowman & Littlefield, 1996).