scope of animal mentality, as well as to our commonsense understanding
and scientific knowledge of animal minds. Two general sets of problems
have played a prominent role in defining the field and will take
center stage in the discussion below: (i) the problems of animal
thought and reason, and (ii) the problems of animal consciousness.
The article begins by examining three historically influential views
on animal thought and reason. The first is David Hume's analogical
argument for the existence of thought and reason in animals. The
second is René Descartes' two arguments against animal thought and
reason. And the third is Donald Davidson's three arguments against
ascribing thought and reason to animals.
Next, the article examines contemporary philosophical views on the
nature and limits of animal reason by Jonathan Bennett, José Bermúdez,
and John Searle, as well as four prominent arguments for the existence
of animal thought and reason: (i) the argument from the intentional
systems theory by Daniel Dennett, (ii) the argument from common-sense
functionalism by Jerry Fodor, Peter Carruthers, and Stephen Stich,
(iii) the argument from biological naturalism by John Searle, and (iv)
the argument from science by Colin Allen and Marc Bekoff, and José
Bermúdez.
The article then turns to the important debate over animal
consciousness. Three theories of consciousness—the inner-sense theory,
the higher-order thought theory, and the first-order theory—are
examined in relation to what they have to say about the possibility
and existence of animal consciousness.
The article ends with a brief description of other important issues
within the field, such as the nature and existence of animal emotions
and propositional knowledge, the status of Lloyd Morgan's canon and
other methodological principles of simplicity used in the science of
animal minds, the nature and status of anthropomorphism employed by
scientists and lay folk, and the history of the philosophy of animal
minds. The field has had a long and distinguished history and has of
late seen a revival.
1. The Problems of Animal Thought and Reason
Given what we know or can safely assume to be true of their behaviors
and brains, can animals have thought and reason? The answer depends in
large measure on what one takes thought and reason to be, as well as
what animals one is considering. Philosophers have held various views
about the nature and possession conditions of thought and reason and,
as a result, have offered various arguments for and against thought
and reason in animals. Below are the most influential of such
arguments.
a. Hume's Argument for Animal Thought and Reason
David Hume (1711-1776) famously proclaimed that "no truth appears to
me more evident, than that beasts are endow'd with thought and reason
as well as men" (1739/1978, p. 176). The type of thought that Hume had
in mind here was belief, which he defined as a "lively idea" or
"image" caused by (or associated with) a prior sensory experience
(1739/1978, p. 94). Reason Hume defined as a mere disposition or
instinct to form associations among such ideas on the basis of past
experience. In the section of A Treatise of Human Nature entitled, "Of
the Reason of Animals," Hume argued by analogy that since animals
behave in ways that closely resemble the behaviors of human beings
that we know to be caused by associations among ideas, animals also
behave as a result of forming similar associations among ideas in
their minds. Given Hume's definitions of "thought" and "reason," he
took this analogical argument to give "incontestable" proof that
animals have thought and reason.
A well-known problem with Hume's argument is the fact that "belief"
does not appear to be definable in terms of vivid ideas presented to
consciousness. Beliefs have propositional content, whereas ideas, as
Hume understood them, do not (or need not). To have a belief or
thought about some object (e.g., the color red) always involves
representing some fact or proposition about it (e.g., that red is the
color of blood), but one can entertain an image of something (e.g.,
the color red) without representing any fact or proposition about it.
Also, beliefs aim at the truth: they represent states of affairs as
being the case, whereas ideas, even vivid ideas, do not. Upon looking
down a railway track, for instance, one could close one's eyes and
entertain a vivid idea of the tracks as they appeared a moment ago
(that is, as converging in the distance) without thereby believing
that the tracks actually converge. And it is further argued, insofar
as "belief" fails to be definable in terms of vivid ideas presented to
consciousness, "reason" fails to be definable in terms of a
disposition to form associations among such ideas; for whatever else
reason might be, so the argument goes, it is surely a relation among
beliefs. Finally, and independently of Hume's definitions of "belief"
and "reason," there is a serious question about how incontestable his
analogical proof is, since similar types of behaviors can often be
caused by very different types of processes. Toy robotic dogs,
computers, and even radios behave in ways that are similar to the ways
that human beings behave when we have vivid ideas presented to our
consciousness, but few would take this fact alone as incontestable
proof that these objects act as a result of vivid ideas presented to
their consciousness (Searle 1994).
b. Descartes' Two Arguments Against Animal Thought and Reason
Equally as famous as Hume's declaration that animals have thought and
reason is René Descartes' (1596-1650) declaration that they do not.
"[A]fter the error of those who deny God, " Descartes wrote, "there is
none that leads weak minds further from the straight path of virtue
than that of imagining that the souls of beasts are of the same nature
as our own" (1637/1988, p. 46). Descartes gave two independent
arguments for his denial of animal thought and reason, which have come
to be called his language-test argument and his action-test argument,
respectively (Radner & Radner 1989).
i. The Language-Test Argument
Not surprisingly, Descartes meant something different from Hume by
"thought." In the context of denying it of animals, Descartes appears
to take the term to stand for occurrent thought—that is, thoughts that
one entertains, brings to mind, or is suddenly struck by (Malcolm
1973). Normal adult human beings, of course, express their occurrent
thoughts through their declarative speech; and declarative speech and
occurrent thoughts share some important features. Both, for example,
have propositional content, both are stimulus independent (that is,
thoughts can occur to one, and declarative speech can be produced,
quite independently of what is going on in one's immediate perceptual
environment), and both are action independent (that is, thoughts can
occur to one, and declarative speech can be produced, that are quite
irrelevant to one's current actions or needs). In light of these
commonalities, it is understandable why Descartes took declarative
speech to be "the only certain sign of thought hidden in a body"
(1649/1970, pp. 244-245).
In addition to taking speech to be thought's only certain sign,
Descartes argued that the absence of speech in animals could only be
explained in terms of animals lacking thought. Descartes was well
aware that animals produce calls, cries, songs, and various gestures
that function to express their "passions," but, he argued, they never
produce anything like declarative speech in which they "use words, or
put together other signs, as we do in order to declare our thoughts to
others" (1637/1988, p. 45). This fact, Descartes reasoned, could not
be explained in terms of animals lacking the necessary speech organs,
since, he argued, speech organs are not required, as evidenced by the
fact that humans born "deaf" or "dumb" typically invent signs to
engage in declarative speech, and some animals (e.g., magpies and
parrots) who have the requisite speech organs never produce
declarative speech; nor could it be explained as a result of speech
requiring a great deal of intelligence, since even the most "stupid"
and "insane" human beings are capable of it; and neither could it be
explained, as it is in the case of human infants who are incapable of
speech but nevertheless possess thought, in terms of animals failing
to develop far enough ontogenetically, since "animals never grow up
enough for any certain sign of thought to be detected in them"
(1649/1970, p. 251). Rather, Descartes concluded, the best explanation
for the absence of speech in animals is the absence of what speech
expresses—thought. There are various places in his writings where
Descartes appears to go on from this conclusion to maintain that since
all modes of thinking and consciousness depend upon the existence of
thought, animals are devoid of all forms of thinking and consciousness
and are nothing but mindless machines or automata. It should be noted,
however, that not every commentator has accepted this interpretation
(see Cottingham 1978).
Various responses have been given to Descartes' language-test
argument. Malcolm (1973), for example, argued that dispositional
thinking is not dependent upon occurrent thought, as Descartes seemed
to suppose, and is clearly possessed by many animals. The fact that
Fido cannot entertain the thought, the cat is in the tree, Malcolm
argued, is not a reason to doubt that he thinks that the cat is in the
tree. Others (Hauser et al. 2002), following Noam Chomsky, have argued
that the best explanation for the absence of speech in animals is
not the absence of occurrent thought but the absence of the capacity
for recursion (that is, the ability to produce and understand a
potentially infinite number of expressions from a finite array of
expressions). And others (Pepperberg 1999; Savage-Rumbaugh et al.
1998; Tetzlaff & Rey 2009) have argued that, contrary to Descartes and
Chomsky, some animals, such as grey parrots, chimpanzees, and
honeybees, possess the capacity to put together various signs in order
to express their thoughts. Finally, it has been argued that there are
behaviors other than declarative speech, such as insight learning,
that can reasonably be taken as evidence of occurrent thought in
animals (see Köhler 1925; Heinrich 2000).
ii. The Action-Test Argument
Whereas Descartes' principal aim in his language-test argument was to
prove that animals lack thought, his principal aim in his action-test
argument is to prove that animals lack reason. By "reason," Descartes
meant "a universal instrument which can be used in all kinds of
situations" (1637/1988, p. 44). For Descartes, to act through reason
is to act on general principles that can be applied to an open-ended
number of different circumstances. Descartes acknowledged that animals
sometimes act in accordance with such general rules of reason (e.g., as
when the kingfisher is said to act in accordance with Snell's Law when
it dives into a pond to catch a fish (see Boden 1984)), but he argued
that this does not show that they act for these reasons, since animals
show no evidence of transferring this knowledge of the general
principles under which their behaviors fall to an open-ended number of
novel circumstances.
Some researchers and philosophers have accepted Descartes' definition
of "reason" but have argued that some animals do show the capacity to
transfer their general knowledge to a wide (or wide enough) range of
novel situations. For example, honey bees that were trained to fly
down a corridor that had the same (or different) color as the entry
room into which they had initially flown automatically transferred
this knowledge to the novel stimulus dimension of smell: those that
were trained to choose the corridor with the same color, flew down the
corridor with the same smell as in the entry room; and those that were
trained to choose the corridor with a different color, flew down the
corridor with a different smell from that in the entry room. It is difficult
to resist interpreting the bees' performance here, as the researchers
do, in terms of their grasping and then transferring the general rule,
"pick the same/different feature" (Giurfa et al. 2001). Other
researchers and philosophers, however, have objected to Descartes'
definition of "reason." They argue that reason is not, as Descartes
conceived it, a universal instrument but is more like a Swiss army
knife in which there is a collection of various specialized capacities
dedicated to solving problems in particular domains (Hauser 2000;
Carruthers 2006). On this view of intelligence, sometimes called the
massive modularity thesis, subjects have various distinct mechanisms,
or modules, in their brains for solving problems in different domains
(e.g., a module for solving navigation problems, a module for solving
problems in the physical environment, a module for solving social
problems within a group, and so on). It is not to be expected on this
theory of intelligence that an animal capable of solving problems in
one domain, such as exclusion problems for food, should be capable of
solving similar problems in a variety of other domains, such as
exclusion problems for predators, mates, and offspring. Therefore, on
the massive modularity thesis, the fact that "many animals show more
skill than we do in some of their actions, yet the same animals show
none at all in many others" is not evidence, as Descartes saw it
(1637/1988, p. 45), that animals lack intelligence and reason but that
their intelligence and reason are domain specific.
c. Davidson's Arguments Against Animal Thought and Reason
No 20th century philosopher is better known for his denial of animal
thought and reason than Donald Davidson (1917-2003). In a series of
articles (1984, 1985, 1997), Davidson put forward three distinct but
related arguments against animal thought and reason: the
intensionality test, the argument from holism, and his main argument.
Although Davidson's arguments are not much discussed these days (for
exceptions, see Beisecker 2001; Glock 2000; Fellows 2000), they were
quite influential in shaping the direction of the contemporary debate
in philosophy on animal thought and reason and continue to pose a
challenging skeptical position on this topic, which makes them
deserving of close examination.
i. The Intensionality Test
The intensionality test rests on the assumption that the contents of
beliefs (and thought in general) are finer grained than the states of
affairs they are about. The belief that Benjamin Franklin was the
inventor of bifocals, for example, is not the same as the belief that
the first postmaster general of the US was the inventor of bifocals,
even though both beliefs are about the same state of affairs. This
fine-grained nature of belief content is reflected in the sentences we
use to ascribe them. Thus, the sentence, "Sam believes that Benjamin
Franklin was the inventor of bifocals," may be true while the
sentence, "Sam believes that the first postmaster general of the US
was the inventor of bifocals," may be false. Belief ascriptions that
have this semantic feature—that is, their truth value may be affected
by the substitution of co-referring expressions within their
"that"-clauses—are called intensional (or semantically opaque). The
reason that is typically given for why belief ascriptions are
intensional is that their purpose is to describe the way the subject
thinks or conceives of some object or state of affairs. Belief
ascriptions with this purpose are called de dicto ascriptions, as
opposed to de re ascriptions (see below).
Our de dicto belief ascriptions to animals are unjustified, Davidson
argued, since for any plausible de dicto belief ascription that we
make there are countless others and no principled way of deciding which
is the correct way of describing how the animal thinks. Take, for
instance, the claim that Fido believes that the cat is in the tree. It
seems that one could just as well have said that Fido believes that
the small furry object is in the tree, or that the small furry object
is in the tallest object in the yard, and so on. And yet there does
not appear to be any objective fact of the matter that would determine
the correct translation into our language of the way Fido thinks about
the cat and the tree. Davidson concludes that "unless there is
behaviour that can be interpreted as speech, the evidence will not be
adequate to justify the fine distinctions we are used to making in
attribution of thought" (1984, p. 164).
Some philosophers (Searle 1994; McGinn 1982) have interpreted
Davidson's argument here as aiming to prove that animals cannot have
thought on the basis of a verificationist principle which holds that
if we cannot determinately verify what a creature thinks, then it
cannot think. Such philosophers reject this principle on the grounds
that absence of proof of what is thought is not thereby proof of the
absence of thought. But Davidson himself states that he is not
appealing to such a principle in his argument (1985, p. 476), and
neither does he say that he takes the intensionality test to prove
that animals cannot have thought. Rather, he takes the argument to
undermine our intuitive confidence in our ascriptions of de dicto
beliefs to animals.
However, even on this interpretation of the intensionality test,
objections have been raised. Some philosophers (Armstrong 1973; Allen
& Bekoff 1997; Bermúdez 2003a, 2003b) have argued that, contrary to
Davidson's claim, there is a principled way of deciding among the
alternative de dicto belief ascriptions to animals—by scientifically
studying their discriminatory behaviors under various conditions and
by stipulating the meanings of the terms used in our de dicto
ascriptions so that they do not attribute more than what is necessary
to capture the way the animal thinks. Although at present we may not
be completely entitled to any one of the many de dicto belief
ascriptions to animals, according to this view, there is no reason to
think that we could not come to be so entitled through future
empirical research on animal behavior and by the stipulation of the
meanings of the terms used in our belief ascriptions. Also, it is
important to mention that Bermúdez (2003a; 2003b) has developed a
fairly well worked out theory of how to make de dicto ascriptions to
animals that takes the practice of making such attributions to be a
form of success semantics—"the idea that true beliefs are functions
from desires to action that cause thinkers to behave in the ways that
will satisfy their desires" (2003a, p. 65). (See Fodor 2003 for a
criticism of Bermúdez's success semantic approach.)
In addition, David Armstrong (1973) has objected that the
intensionality test merely undermines our justification of de dicto
belief ascriptions to animals, not de re belief ascriptions, since the
latter do not aim to describe how the animal thinks but simply to
identify the state of affairs the animal's thought is about.
Furthermore, Armstrong argues that it is in fact de re belief
ascriptions, not de dicto belief ascriptions, that we ordinarily use
to describe animal beliefs. When we say that Fido believes that the
cat is up the tree, for example, our intention is simply to pick out
the state of affairs that Fido's belief is about, while remaining
neutral with respect to how Fido thinks about it. Roughly, what we are
saying, according to Armstrong, is that Fido believes a proposition of
the form Rab, where "R" is Fido's relational concept that picks out
the same two-place relation as our term "up," "a" is Fido's concept
that refers to the same class of animals as our word "cat," and "b" is
Fido's concept that refers to the same class of objects as our word
"tree."
ii. The Argument from Holism
One thing that Armstrong's objection assumes is that we are at present
justified in saying what objects, properties, or states of affairs in
the world an animal's belief is about. Davidson's second argument, the
argument from holism, aims to challenge this assumption. Davidson
endorses a holistic principle regarding how the referents or extension
of beliefs are determined. According to this principle, "[b]efore some
object in, or aspect of, the world can become part of the subject
matter of a belief (true or false) there must be endless true beliefs
about the subject matter" (1984, p. 168). Applying this principle to
the case of animals, Davidson argues that in order for us to be
entitled to fix the extension of an animal's belief, we must suppose
that the animal has an endless stock of other beliefs. So, according
to Davidson, to be entitled to say that Fido has a belief about a cat,
we must assume that Fido has a large stock of other beliefs about cats
and related things, such as that cats are three-dimensional objects
that persist through various changes, that they are animals, that
animals are living organisms, that cats can move freely about their
environment, and so on. There is no fixed list of beliefs about cats
and related items that Fido needs to possess in order to have a belief
about cats, Davidson maintains, but unless Fido has a very large stock
of such general beliefs, we will not be entitled to say that he has a
belief about a cat as opposed to something else, such as undetached
cat parts, or the surface of a cat, or a cat appearance, or a stage in
the history of a cat. But in the absence of speech, Davidson claims,
"there could [not] be adequate grounds for attributing the general
beliefs needed for making sense of any thought" (Davidson 1985, p.
475). The upshot is that we are not, and never will be, justified even
in our de re ascriptions of beliefs to animals.
One chief weakness with Davidson's argument here is that it rests
upon a radical form of holism that would appear to deny that any two
human beings could have beliefs about the same things, since no two
human beings ever share all (or very nearly all) the same general
background beliefs on some subject. This has been taken by some
philosophers as a reductio of the theory (Fodor and Lepore 1992).
iii. Davidson's Main Argument
Davidson's main argument against animal thought consists of the
following two steps:
First, I argue that in order to have a belief, it is necessary to have
the concept of belief.
Second, I argue that in order to have the concept of belief one must
have language.
(1985, p. 478)
Davidson concludes from these steps that since animals do not
understand or speak a language, they cannot have beliefs. Davidson
goes on to defend the centrality of belief, which holds that no
creature can have thought or reason of any form without possessing
beliefs, and concludes that animals are incapable of any form of
thought or reason.
Davidson supports the first step of his main argument by pointing out
what he sees as a logical connection between the possession of belief
and the capacity for being surprised, and between the capacity for
being surprised and possessing the concept belief. The idea, roughly,
is that for any (empirical) proposition p, if one believes that p,
then one should be surprised to discover that p is not the case, but
to be surprised that p is not the case involves believing that one's
former belief that p was false, which, in turn, requires one to have
the concept belief (as well as the concept falsity). (See Moser (1983)
for a rendition of Davidson's argument that avoids Davidson's appeal
to surprise.)
Davidson's defense of the second step of his main argument is
sketchier and more speculative. The general idea, however, appears to
be as follows. If one has the concept belief and is thereby able to
comprehend that one has beliefs, then one must also be able to
comprehend that one's beliefs are sometimes true and sometimes false,
since beliefs are, by their nature, states capable of being true or
false. However, to comprehend that one's beliefs are true or false is
to comprehend that they succeed or fail to depict the objective facts.
But the only way for a creature to grasp the idea of a world of objective
facts, Davidson speculates, is through its ability to triangulate—that
is, through its ability to compare its own beliefs with those of
others. Therefore, Davidson argues, since triangulation necessarily
involves the capacity of ascribing beliefs to others and this
capacity, according to the intensionality test and the argument from
holism (see sections 1c.i and 1c.ii. above), requires language,
possessing the concept belief requires the possession of language.
A number of commentators on Davidson's main argument have raised
objections to his defense of its first step—that having beliefs
requires having the concept belief. Carruthers (2008), Tye (1997) and
Searle (1996), for example, all argue that having beliefs does not
require having the concept belief. These philosophers agree that
beliefs, by their nature, are states that are revisable in light of
supporting or countervailing evidence presented to the senses but
maintain that this process of belief revision does not require the
creature to be aware of the process or to have the concept belief.
Carruthers (2008) offers the most specific defense of this claim by
developing an account of surprise that does not involve higher-order
beliefs, as Davidson maintains. According to Carruthers' account,
being surprised simply involves a mechanism that is sensitive to
conflicts between the contents of one's beliefs—that is, conflicts
with what one believes, not conflicts with the fact that one believes
such contents. On this model, being surprised that there is no coin in
one's pocket involves having a mechanism in one's head that takes as
its input the content that there is a coin in one's pocket (not the
fact that one believes this content) and the content that there is no
coin in one's pocket (again, not the fact that one believes this
content) and produces as its output a suite of reactions, such as
releasing chemicals into the bloodstream that heighten alertness,
widening the eyes, and orienting towards and attending to the
perceived state of affairs one took as evidence that there is no coin
in one's pocket. It is one's awareness of these changes, Carruthers
argues, not one's awareness that one's former belief was false, as
Davidson maintains, that constitutes being surprised.
Compared with the commentary on the first step of his main argument,
there is little critical commentary in print on Davidson's defense of
the second step of his main argument. However, Lurz (1998) has raised
the following objection. He argues that the intensionality test and
the argument from holism at most show that belief attributions to
nonlinguistic animals are unjustified but not that they are
impossible. The fact that we routinely attribute beliefs to
nonlinguistic animals shows that such attributions are quite possible.
But, Lurz argues, if we can attribute beliefs to nonlinguistic animals
on the basis of their nonlinguistic behavior, then there is no reason
to think (at least, none provided by the intensionality test and the
argument from holism) that a nonlinguistic animal could not in
principle attribute beliefs to other nonlinguistic animals on the same
basis. Of course, if the intensionality test and argument from holism
are sound, such belief attributions would be unjustified, but this
alone is irrelevant to whether it is possible for nonlinguistic
animals to attribute beliefs to others and thereby engage in
triangulation; for triangulation requires the capacity for belief
attribution, not the capacity for justified belief attribution.
Therefore, Lurz argues, if triangulation is possible without language,
then Davidson has failed to prove that having the concept belief
requires language. Furthermore, if some animals actually are capable
of attributing beliefs to others, as some researchers (Premack &
Woodruff 1978; Menzel 1974; Tschudin 2001) have suggested that
chimpanzees and dolphins may be (though such claims are considered
highly controversial at present), then even if triangulation is a
requirement for having beliefs, as Davidson maintains, it may turn out
that some animals (e.g., chimpanzees and dolphins) actually have
beliefs, contrary to what Davidson's main argument concludes.
d. Contemporary Philosophical Arguments on Animal Reason
Although the vast majority of contemporary philosophers do not go as
far as Descartes and Davidson in denying reason to animals completely,
a number of them have argued for important limits on animal
rationality. The arguments here are numerous and complex; so only an
outline of the more influential ones is provided.
In Rationality (1964/1989), Jonathan Bennett argued that since it is
impossible for animals without language to express universal beliefs
(e.g., All As are Bs) and past-tensed beliefs (e.g., A was F)
separately, they cannot possess either type of belief, on the grounds
that what cannot be manifested separately in behavior cannot exist as
distinct and separate states in the mind. A consequence of this
argument is that animals cannot think or reason about matters beyond
their own particular and immediate circumstances. In Linguistic
Behaviour (1976), Bennett went further and argued that animals cannot
draw logical inferences from their beliefs, on the grounds that if
they did, they would do so for every belief that they possessed, which
is absurd. According to this argument, Fido may believe that the cat
is in the tree, as well as believe that there is an animal in the tree,
but he cannot come to have the latter belief as a result of inferring it
from the former.
More recently, José Bermúdez (2003a) has argued that the ability to
think about thoughts (what Bermúdez calls "intentional ascent")
requires the ability to think about words in one's natural language
(what Bermúdez calls "semantic ascent"), and that since animals cannot
do the latter, they cannot do the former. Bermúdez's argument that
intentional ascent requires semantic ascent is, roughly, that thinking
about thought involves the ability to "'to hold a thought in mind' in
such a way that can only be done if the thought is linguistically
vehicled" via a natural language sentence that one understands (p. ix).
The idea is that the only way for a creature to grasp and think about
a thought (that is, an abstract proposition) is by its saying,
writing, or bringing to mind a concrete sentence that expresses the
thought in question. Bermúdez goes on to argue that the ability to
think about thoughts (propositions) is involved in a wide variety of
types of reasoning, from thinking about and reasoning with
truth-functional, temporal, modal, and quantified propositions, to
thinking and reasoning about one's own and others' propositional
attitudes (e.g., beliefs and desires). Bermúdez concludes that since
animals do not think about words or sentences in a natural language,
their thinking and reasoning are restricted to observable states of
affairs in their environment. However, see Lurz (2007) for critical
comment on Bermúdez's argument here.
Finally, John Searle (1994) has argued that since animals lack certain
linguistic abilities, they cannot think or reason about institutional
facts (e.g., facts about money or marriages), facts about the distant
past (e.g., facts about matters before their birth), logically complex
facts (e.g., subjunctive facts or facts that involve mixed
quantifiers), or facts that can only be represented via some symbolic
system (e.g., facts pertaining to the days of the week). In addition,
and more interestingly, Searle (2001) has argued that since animals
cannot perform certain speech acts such as asserting, they cannot have
desire-independent reasons for action. According to this argument,
animals act only for the sake of satisfying some non-rationally
assessable desire (e.g., the satisfaction of hunger) and never out of
a sense of commitment. Consequently, if acts of courage, fidelity,
loyalty, and parental commitment involve desire-independent reasons
for action, as they arguably do, then on Searle's argument here, no
animal is or can be courageous, faithful, loyal, or a committed
parent.
e. Contemporary Philosophical Arguments for Animal Thought and Reason
There are four types of arguments in contemporary philosophy for
animal thought and reason. The first is the argument from the
intentional systems theory championed by Daniel Dennett (1987, 1995,
1997). The second is the argument from common-sense functionalism
championed by (among others) Jerry Fodor (1987), Stephen Stich (1979)
and Peter Carruthers (2004). The third is the argument from biological
naturalism, championed by John Searle (1994). And the fourth is the
argument from science championed by (among others) Allen and Bekoff
(1997) and Bermúdez (2003a).
i. The Intentional Systems Theory Argument
The intentional systems theory consists of two general ideas. The
first is that our concepts of intentional states, such as our concepts
belief, desire, and perception, are theoretical concepts whose
identity and existence are determined by a common-sense psychological
theory or folk-psychology. Folk psychology is a set of general
principles that state that subjects, on the assumption that they are
rational, tend to believe what they perceive, tend to draw obvious
logical inferences from their beliefs, and tend to act so as to satisfy
their desires given what they believe. In many cases, we apply our
folk psychology to animals to predict and make sense of their
behaviors. When we do, we view animals as intentional systems and take
up what Dennett (1987) calls the intentional stance toward them. The
second important idea of the intentional systems theory is its
instrumentalist interpretation of folk psychology. On the
instrumentalist interpretation, what it is for a creature to have
intentional states is for its behaviors to be well predicted and
explained by the principles of folk psychology. Nothing more is
required. There need not be anything inside the creature's brain or
body, for instance, that corresponds to or has structural or
functional features similar to the intentional state concepts employed
in our folk psychology. Our intentional state concepts, on the
instrumentalist reading, do not aim to refer to real, concrete
internal states of subjects but to abstract entities that are merely
useful constructs for predicting and explaining various behaviors
(much like centers of gravity used in mechanics). Therefore, according
to the intentional systems theory argument, the fact that much of
animal behavior is usefully predicted and explained from the
intentional stance makes animals genuine thinkers and reasoners.
There are two general types of objections raised against the
intentional systems theory argument. First, some have argued (Searle
1983) that our intentional state concepts are not theoretical
concepts, since intentional states are experienced and, hence, our
concepts of them are independent of our having any theory about them.
Second, some (Braddon-Mitchell & Jackson 2007) have objected to the
intentional systems theory's commitment to instrumentalism, arguing
that on such an interpretation of folk psychology, even lowly
thermostats, laptop computers, and Blockheaded robots have beliefs and
desires, since it is useful to predict and explain behaviors of such
objects from the intentional stance.
ii. The Argument from Common-Sense Functionalism
Similar to the intentional systems theory, common-sense functionalism
holds that our intentional state concepts are theoretical concepts
that belong to and are determined by our folk psychology. Unlike the
intentional systems theory, however, common-sense functionalism takes
a realist interpretation of folk psychology. (In addition, many
common-sense functionalists reject the rationality assumption that the
intentional systems theory places on folk psychology (Fodor 1987,
1991).) On the realist interpretation, for a subject to have
intentional states is for the subject to have in his brain a variety
of discrete internal states that play the causal roles and have the
internal structures that our intentional state concepts describe.
According to this view, if Fido believes that the cat is up the tree,
then he has in his brain an individual state, s, that plays the causal
role that beliefs play according to our folk psychology, and s has an
internal structure similar to the "that"-clause used to specify its
content—that is, s has the structure Rxy where "R" represents the
two-place relation up, "x" represents the cat, and "y" represents the
tree. Since the internal state s is seen as having an internal
structure similar to the sentence "the cat is up the tree,"
common-sense functionalism is often taken to support the view that
thinking involves an internal language or language of thought (Fodor
1975). It is then argued that since animal behavior is successfully
predicted and explained by our folk psychology, there are defeasible
grounds for supposing that animals actually have such internal states
in their heads (Fodor 1987; Stich 1979; Carruthers 2004).
Two problems are typically raised regarding the argument from
common-sense functionalism. Some (Stalnaker 1999) have objected that
if, as common-sense functionalism claims, our ascriptions of
intentional states to animals commit us to thinking that the animals
have in their heads states that have the same representational
structure as the "that"-clauses we use to specify their contents, then
intentional ascriptions to animals (and to ourselves) would be a far
more speculative practice than it actually is. The objection here does
not deny that animals actually have such representational structures
in their heads; it simply denies that this is what we are saying or
thinking when we ascribe intentional states to them. Others (Camp,
2009) accept the common-sense functionalist account of intentional
state concepts but have argued, on the basis of Evans's (1982)
generality constraint principle, that few animals have the sorts of
structured representational states in their heads that folk psychology
describes them as having. If Fido's thoughts are structured in the way
that common-sense functionalism claims, the objection runs, then, if
Fido is able to think that he is chasing a cat, he must also be
capable of thinking that a cat is chasing him; but, it is argued, this
may be a thought that is completely unthinkable by Fido. However, see
Carruthers (2009) and Tetzlaff and Rey (2009) for important objections
to this type of argument.
iii. The Argument from Biological Naturalism
Biological naturalism is the theory, championed by John Searle (1983,
1992), that holds that our concepts of intentional states are concepts
of experienced subjective states. The concept belief, for example, is
the concept of an experienced, conscious state that has truth
conditions and a mind-to-world direction of fit; whereas, our concept
desire is the concept of an experienced, conscious state that has
satisfaction conditions and a world-to-mind direction of fit.
Intentional states, according to this theory, are irreducibly
subjective states that are caused by low-level biochemical states of
the brain in virtue of their causal structures, not in virtue of their
functional or causal roles, or, if they have such, their
representational structures. According to biological naturalism, if
Fido believes that the cat is in the tree, then he has in his brain a
low-level biochemical state, s, that, in virtue of its unique causal
structure, causes Fido to have a subjective experience that has a
mind-to-world direction of fit and is true if and only if the cat is
in the tree.
Searle argues that there are two main reasons why we find it
irresistible to suppose that animals have intentional states, as
biological naturalism conceives them. First, many animals have
perceptual organs (e.g., eyes, ears, mouths, and skin) that we see as
similar to our own and which, we assume, operate according to similar
physiological principles. Since we know in our own case that the
stimulation of our perceptual organs leads to certain physiological
processes which cause us to have certain perceptual experiences, we
reason, from the principle of similar cause-similar effect, that the
stimulation of perceptual organs in animals leads to similar
physiological processes which cause them to have similar perceptual
experiences. The behavior of animals, Searle repeatedly stresses, is
by itself irrelevant to why we think animals have perceptual
experiences; it is only relevant if we take the behavior to be caused
by the stimulation of perceptual organs and underlying physiological
processes relevantly similar to our own. This argument, of course,
would only account for why we think that animals have perceptual
experiences, not why we think that they have beliefs, desires, and
other intentional states that are only distantly related to the
stimulation of sensory organs. So Searle adds that the second reason
we find it irresistible that animals have intentional states is that
we cannot make sense of their behaviors otherwise. To make sense of
why Fido is still barking up the tree when the cat is long out of
sight, for example, we must suppose that Fido continues to want to
catch the cat and continues to think that the cat is up the tree.
There are two main problems with Searle's argument for animal thought
and reason. First, according to biological naturalism, animals have
intentional states solely in virtue of their having brain states that
are relevantly similar in causal structure to those in human beings
which cause us to have intentional states. But this raises the
question: how are we to determine whether the brain states of animals
are relevantly similar to our own? They will not be exactly similar,
since animal brains and human brains are different. Suppose, for
example, scientists discover that a certain type of electro-chemical
process (XYZ) in human brains is necessary and sufficient for
intentional states in us, and that an electro-chemical process (PDQ)
similar to XYZ occurs in animal brains. Is PDQ similar enough to XYZ
to produce intentional states in animals? Well, suppose PDQ produces
behaviors in animals that are similar to those that XYZ produces in
humans. Would that show that PDQ is enough like XYZ to produce
intentional states in animals? No, says Searle, for unless those
behaviors are produced by relevantly similar physiological processes
they are simply irrelevant to whether the animal has intentional
states. But that is precisely what we are trying to determine here, of
course. It would appear that the only way to determine whether PDQ is
similar enough to XYZ, on biological naturalism, is if we humans could
temporarily exchange our brains for those of animals and see whether
PDQ produces intentional states in us. This, of course, is impossible.
And so it would appear that the question of whether animals have
intentional states is, on biological naturalism, unknowable in
principle.
Finally, Searle's explanation for why we find it irresistible to
ascribe perceptual experiences to animals seems questionable in some
cases. If Searle's explanation were correct, then most ordinary
individuals should not find it at all compelling, for example, to
ascribe auditory experiences (that is, hearing) to birds, or tactile
experiences (that is, feelings of pressures, pain, or temperature) to
fish or armadillos, since most ordinary individuals do not see
anything on birds' heads that looks like ears or on the outer surface
of fish or armadillos that looks like skin.
iv. The Argument from Science
Why should we believe that colds are caused by viruses and not by
drastic changes in weather, as many folk believed (and some still do)? A
reasonable answer is that our best scientific theory of the causes of
colds is in terms of viruses, commonsense notwithstanding. Sometimes,
of course, science and commonsense agree, and when they do,
commonsense can be said to be vindicated by science. In either case,
it is science that ultimately determines what should (and should not)
be believed. This type of argument, sometimes called the argument from
science, has been used to justify the claim that animals have thought,
reason, consciousness, and other folk-psychological states of mind
(see Allen & Bekoff 1997; Bermúdez 2003a). In the past thirty years or
so, due in large measure to the demise of radical behaviorism and the
birth of cognitivism in psychology, as well as to the influential
writings of ethologist Donald Griffin (1976, 1984, 2001), scientists
from various fields have found it increasingly useful to propose,
test, and ultimately accept hypotheses about the causes of animal
behavior in explicitly folk-psychological terms. It is quite common
these days to see scientific articles on whether, for example, animals
have conscious experiences such as pain, seeing and (even) joy
(Griffin & Speck 2004; Panksepp & Burgdorf 2003), on whether scrub
jays have desires and beliefs and can recollect their pasts (Clayton
et al. 2006), on whether primates understand that other animals know,
see, and hear (Hare et al. 2000; Hare et al. 2001; Santos et al. 2006),
on whether primates make judgments about their own states of knowledge
and ignorance (Hampton et al. 2004; Smith et al. 2003), and so on.
According to the argument, since scientists are finding it useful to
test and accept hypotheses about animal behavior in folk-psychological
terms, we are justified in believing that animals have such states of
mind.
Not everyone has found the argument from science here convincing,
however. The chief concern is whether explanations of animal behavior
in folk-psychological terms are, as the argument assumes,
scientifically respectable (see Kennedy 1992). There are two features
of scientific explanations of animal behavior that appear to count
against their being so. First, scientific explanations of animal
behavior are causal explanations in terms of concrete internal states
of the animal, but on some models of folk-psychology, such as
Dennett's intentional systems theory (see 1.e.i. above),
folk-psychological explanations are not causal explanations, nor do
they imply anything about the internal states of the animal. Second,
scientific explanations of animal behavior are objective in that there
is typically a general agreement among researchers in the field on
what would count in favor of or against the explanation; however, it
has been argued that since the only generally agreed upon indicators
of consciousness are verbal reports of the subject, explanations of
animal behavior in terms of consciousness are unscientific (see
Clayton et al. 2006, p. 206).
One standard type of reply to these objections has been to adopt a
common-sense functionalist model of folk-psychology (see 1.e.ii above)
which interprets folk-psychological explanations as imputing causally
efficacious internal states while denying that these explanations
imply anything about the consciousness of the internal states. (This
seems to be the approach that Clayton et al. (2006) take in their
explanation of the behaviors of scrub jays in terms of "episodic-like"
memories, which are episodic memories minus consciousness.) This, of
course, raises the vexing issue of whether our folk-psychological
concepts, such as belief, desire, intention, seeing, etc. imply
consciousness (see Carruthers 2005; Lurz 2002a; Searle 1992; Stich
1979). Others have responded to the above objections by developing
non-subjective measures for consciousness that could be applied to
animals (and humans) incapable of verbal reports (Dretske 2006). And
still others have proposed objective measures of consciousness in
animals by appealing to the communicative signals of animals as
non-verbal reports of the presence of conscious experiences (Griffin
1976, 1984, 2001).
2. The Problems of Animal Consciousness
It is generally accepted that most (if not all) types of mental states
can be either conscious or unconscious, and that unconscious mental
states can have effects on behavior that are not altogether dissimilar
from those of their conscious counterparts. It is quite common, for
example, for one to have a belief (e.g., that one's keys are in one's
jacket pocket) and a desire (e.g., to locate one's keys) that are
responsible for some behavior (e.g., reaching into one's jacket pocket
as one approaches one's apartment) even though at the time of the
behavior (and beforehand) one's mind is preoccupied with matters
completely unrelated to one's belief or desire. Similarly, scientists
have shown through various masking experiments and the like that our
behaviors are often influenced by stimuli that are perceived below the
level of consciousness (Marcel 1983). Also some philosophers have
argued that even pains and other bodily sensations can be unconscious,
such as when one continues to limp from a pain in one's leg though at
the time one is preoccupied with other matters and is not attending to
the pain (Tye 1995).
Given this distinction between conscious and unconscious mental
states, the question arises whether the mental states of animals are
or can be conscious. It should be noted that this question not only
has theoretical import but moral and practical import, as well. For
arguably the fact that conscious pains and experiences feel a certain
way to their subjects makes them morally relevant conditions, and it
is, therefore, of moral and practical concern to determine whether the
mental states of animals are conscious (Carruthers 1992). Of course,
as with the question of animal thought and reason, the answer to this
question depends in large part on what one takes consciousness to be.
There are two general philosophical approaches to
consciousness—typically referred to as first-order and higher-order
theories—that have played a prominent role in the debate over the
status of animal consciousness. These two approaches and their
relevance to the question of conscious states in animals are described
below.
a. Higher-Order Theories of Consciousness
Higher-order theories of consciousness start with the common
assumption that conscious mental states are states of which one is
higher-order aware, and unconscious mental states are states of which
one is not higher-order aware. The theories diverge, however, over
what is involved in being higher-order aware of one's mental states.
i. Inner-Sense Theories
Inner-sense theories take a subject's higher-order awareness to be a
type of perceptual awareness, akin to seeing, that is directed
inwardly toward the mind as opposed to outwardly toward the world
(Lycan 1996; Armstrong 1997). Since higher-order awareness is a
species of perceptual awareness, on this view, it is not usually taken
to require the capacity for higher-order thought or the possession of
mental-state concepts. A subject need not be able to think that he is
in pain or have the concepts I or pain, for example, in order for him
to be higher-order aware of his pain. On the inner-sense theory, then,
the mental states of animals will be conscious just in case the
animals are higher-order aware of those states by means of an inner
perception.
Some inner-sense theorists have argued that since higher-order
awareness does not require higher-order thought or the possession of
mental-state concepts, it is quite consistent with what we know about
animal behavior and brains that many animals may have such an
awareness of their own mental states. Furthermore, there are recent
studies in comparative psychology (Smith et al. 2003; Hampton et al.
2004) that suggest that monkeys, apes and dolphins actually have the
capacity to be higher-order aware of their own states of certainty,
memory, and knowledge. However, the results of these studies have not
gone unchallenged (see Carruthers 2008).
The chief problem with inner-sense theories, however, is not so much
their account of animal consciousness but their account of
higher-order awareness. Some (Rosenthal 1986; Shoemaker 1996) have
argued against a perceptual account of higher-order awareness on the
grounds that (i) there is no dedicated perceptual organ in the brain
for such a perception as there is for external perception; (ii) there
is no distinct phenomenology associated with higher-order awareness as
there is for all other types of perceptual modalities; and (iii) it is
impossible to reposition oneself in relation to one's mental states so
as to get a better perception of them as one can do in the case of
perception of external objects. Others (Lurz 2003) have
objected that the inner-sense theory cannot explain how
concept-involving mental states, such as beliefs and desires, can be
conscious, since to be aware of such states would require being aware
of their conceptual contents, which cannot be done by way of a
perceptual awareness that is not itself concept-involving.
ii. Higher-Order Thought Theories
Problems such as these have led a number of higher-order theorists
(Rosenthal 1986; Carruthers 2000) to embrace some version or other of
the higher-order thought theory. According to this theory, a mental
state is conscious just in case one has (or is disposed to have) the
higher-order thought that one is in such a mental state. Animals will
have conscious mental states, on this theory, if and only if they
are capable of higher-order thoughts about themselves as having mental
states. The question of animal consciousness, then, becomes the
question of whether animals are capable of such higher-order thought.
A number of philosophers have argued that animals are incapable of
such thought. Some have argued that since higher-order thoughts
require the possession of the first-person I-concept, it is unlikely
that animals are capable of having them. The selves of animals, the
argument runs, are selves that experience numerous mental states at
any one moment in time and that persist through various changes to
their mental states. Thus, if an animal possesses the I-concept, it
must be capable of understanding itself as such an entity—that is, it
must be capable of thinking not only, I am currently in pain, for
example, but I am currently in pain, am seeing, am hearing, am
smelling, as well as be capable of thinking I was in such-and-such
mental states but am not now. However, such thoughts appear to involve
the mental equivalent of pronominal reference and past-tensed
thoughts, both of which, it is argued, are impossible without language
(see Quine 1995; Bermúdez 2003a; Bennett 1964, 1966, 1988).
Various objections have been raised against this argument on behalf of
the higher-order theory and animal consciousness. Gennaro (2004, 2009)
argues that the I-concept involved in higher-order thoughts need
be no more sophisticated than the concept this particular body or the
concept experiencer of mental states, and that the results of various
self-recognition studies with apes, dolphins and elephants, as well as
the results of a number of episodic memory tests with scrub jays,
suggest that many animals possess such minimal I-concepts (Parker et
al. 1994; Clayton et al., 2003). Lurz (1999) goes further and argues
that insofar as higher-order thoughts confer consciousness on mental
states, they need not involve any I-concept at all. The idea here is
that just as one can be aware that it is raining, where the "it" here
is not used to express one's concept of a thing or a subject—for there
is no thing or subject that is raining—an animal can be aware that it
hurts or thinks that p, where the "it" here does not express a concept
of a thing or a subject that is thought to possess pain or to think
that p. Animals, on this view, are thought to conceive of their mental
states as we conceive of rain and snow—that is, as subject-less
features placed at a time (see Strawson (1959) and Proust (2009) for
similar arguments).
The most common argument against animals possessing higher-order
thought, however, is that such thoughts require linguistic
capabilities and mental-state concepts that animals do not possess.
Dennett (1991), for example, argues that the ability to say what
mental state one is in is the very basis of one's having the
higher-order thought that one is in such a mental state, and not the
other way round. To think otherwise, Dennett argues, is to commit
oneself to an objectionable Cartesian theater view of the mind.
According to Dennett's argument, since animals are incapable of saying
what they are feeling or thinking, they are incapable of thinking that
they are feeling or thinking. In reply, Carruthers (1996) has argued
that there is a way of understanding higher-order thoughts that is not
tied to linguistic expression of any kind or committed to a Cartesian
theater view of the mind.
In a somewhat similar vein to Dennett's, Davidson (1984,
1985) and Bermúdez (2003a) argue, although on different grounds, that
since animals are incapable of speaking and interpreting a natural
language, they cannot possess mental-state concepts for propositional
attitudes and, therefore, cannot have higher-order thoughts about
their own or others' propositional attitudes (see sections 1c and
1d.iii above). This alone, of course, is not sufficient to prove that
animals are incapable of higher-order thoughts about non-propositional
mental states, such as bodily sensations and perceptual experiences.
However, some have gone further and argued that animals are incapable
of possessing any type of mental-state concept and, therefore, any
type of higher-order thought. The argument for this view generally
consists of the following two main premises: (1) if animals possess
mental-state concepts, then they must have the capacity to apply these
concepts to themselves as well as to other animals; and (2) animals
have been shown to perform poorly in some important experiments
designed to test whether they can apply mental-state concepts to other
animals.
Premise (1) of this argument is sometimes supported (Seager 2004) by
an appeal to Evans's generality constraint (see section 1e.ii above);
roughly, the argument runs, if an animal can think, for example, I am
in pain, and can think of another animal that, for example, he walks,
then the animal in question must be capable of thinking of another
animal, he is in pain, as well as be capable of thinking of himself, I
walk. Others, however, have supported premise (1) on evolutionary
grounds, arguing that animals would not have evolved the capacity to
think with mental-state concepts unless their doing so was of some
selective advantage, and the only selective advantage of thinking with
mental-state concepts is its use in anticipating and manipulating
other animals' behaviors (Humphrey 1976). Premise (2) of this argument
has been supported mainly by the results of a series of experiments
conducted by Povinelli and colleagues (see Povinelli & Vonk 2004)
which appear to show that chimpanzees are incapable of discriminating
between seeing and not seeing in other subjects.
Various objections have been raised against such defenses of premises
(1) and (2). Gennaro (2009), for example, has argued against the
defense of premise (1) based on Evans's generality constraint. Others
have argued that, contrary to the evolutionary defense given for
premise (1), the principal selective advantage of thinking with
mental-state concepts is its use in recognizing and correcting errors
in one's own thinking, and that the results of various meta-cognition
studies have shown that various animals are capable of reflecting upon
and improving their pattern of thinking (Smith et al., 2003).
(However, see Carruthers (2008) for a critique of such higher-order
interpretations of these studies.) And with respect to premise (2),
others have argued that, contrary to Povinelli's interpretation,
chimpanzees fail such discrimination tasks not because they are unable
to attribute mental states to others but because the experimental
tasks are unnatural and confusing for the animals, and that when the
experimental tasks are more suitable and natural, such as those used
in competitive paradigms (Hare et al. 2000; Hare et al. 2001; Santos
et al. 2006), the animals show signs of mental-state attribution.
However, see Penn and Povinelli (2007) for challenges to the supposed
successes of mental-state attributions by animals in these new
experimental protocols and for suggestions on how to improve
experimental methods used in testing mental-state attributions in
animals.
b. First-Order Theories
According to first-order theories, conscious mental states are those
that make one conscious of things or facts in the external environment
(Evans 1982; Tye 1995; Dretske 1995). Mental states are not conscious
because one is higher-order aware of them but because the states
themselves make one aware of the external world. Unconscious mental
states, therefore, are mental states that fail to make one conscious
of things or facts in the environment—although, they may have various
effects on one's behavior. Furthermore, mental states that make
subjects conscious of things or facts in the environment do so,
according to first-order theories, in virtue of their affecting, or
being poised to affect, subjects' belief-forming systems. So, for
example, one's current perception of the computer screen is conscious,
on such theories, because it causes, or is poised to cause, one to
believe that there is a computer screen before one; whereas, those
perceptual states that are involved in subliminal perception, for
instance, are not conscious because they do not cause, nor are poised
to cause, subjects to form beliefs about the environment.
First-order theorists argue (Tye 1997; Dretske 1995) that many
varieties of animals, from fish to bees to chimpanzees, form beliefs
about their environment based upon their perceptual states and
bodily sensations and, therefore, enjoy conscious perceptual states
and bodily sensations. Additional virtues of first-order theories, it
is argued, are the fact that they offer a more parsimonious account of
consciousness than higher-order theories, since they do not require
higher-order awareness for consciousness, and that they provide a more
plausible account of animal consciousness than higher-order theories,
since they ascribe consciousness to animals that we intuitively
believe to possess conscious perceptual states (e.g., bats and mice)
but do not intuitively believe to possess higher-order awareness.
It has been argued (Lurz 2004, 2006), however, that first-order
theories are at their best when explaining the consciousness of
perceptual states and bodily sensations but have difficulty
explaining the consciousness of beliefs and desires. Most first-order
theorists have responded to this problem by endorsing a higher-order
thought theory of consciousness for such mental states (Tye 1997;
Dretske 2000, p. 188). On such a hybrid view, beliefs and desires are
conscious in virtue of the subject's having higher-order thoughts
about them, while
perceptual states and bodily sensations are conscious in virtue of
their being poised to make an impact on one's belief-forming system.
This hybrid view faces two important problems, however. First, on such
a view, few, if any, animals would be capable of conscious beliefs and
desires, since it seems implausible, for various reasons, to suppose
that many animals are capable of higher-order thoughts about their own
beliefs and desires. And yet it has been argued (Lurz 2002b) that
there are intuitively compelling grounds for thinking that many animals
are capable of conscious beliefs and desires, since their behaviors
are quite often predictable and explainable in terms of the concepts
belief and desire of our folk psychology, which is a set of laws about
the causal properties and interactions of conscious beliefs and desires
(or, at the very least, a set of laws about the causal properties and
interactions of beliefs and desires that are apt to be conscious
(Stich 1978)). However, see Carruthers (2005) for a reply to this
argument.
The second problem for the hybrid view is that on its most plausible
rendition it would ascribe consciousness to the same limited class of
animals as higher-order thought theory and, thereby, provide no more
of an intuitively plausible account of animal consciousness than its
main competitor. For it seems intuitively plausible to suppose that a
perceptual state or bodily sensation will be conscious only if it
affects, or is poised to affect, a subject's conscious belief-forming
system. If it were discovered, for example, that the perceptual states
involved in subliminal perception (or blindsight) caused subjects to
form unconscious beliefs about the environment, no one but the most
committed first-order theorist would conclude from this alone that
these perceptual states were, after all, conscious. But if perceptual
states and bodily sensations are conscious only insofar as they affect
(or are poised to affect) a subject's conscious belief-forming system,
and conscious beliefs, on the hybrid view, require higher-order
thought, then to possess conscious perceptions and bodily sensations,
an animal would have to be, as higher-order thought theories maintain,
capable of higher-order thought. What appears to be needed here in order
to save first-order theories from this problem is a first-order
account of conscious beliefs and desires. See Lurz (2006) for a sketch
of such an account.
3. Other Issues
There are many other important issues in the philosophy of animal
minds in addition to those directly related to the nature and scope of
animal thought, reason, and consciousness. Due to considerations of
length, however, only a brief list of such issues with reference to a
few relevant and important sources is provided.
The nature and extent of animal emotions has been, and continues to
be, an important issue in the philosophy of animal minds (see Nussbaum
2001; Roberts 1996, 2009; Griffiths 1997), as has the nature and
extent of propositional knowledge in animals (see Kornblith 2002).
Philosophers have also been particularly interested in the
philosophical foundations and the methodological principles, such as
Lloyd Morgan's canon, employed in the various sciences that study
animal cognition and consciousness (see Bekoff et al. 2002; Allen and
Bekoff 1997; Fitzpatrick 2007, 2009; Sober 1998, 2001a, 2001b, 2005).
Philosophers have also been interested in the nature and justification
of the practice of anthropomorphism by scientists and lay folk
(Mitchell et al. 1997; Bekoff & Jamieson 1996; Daston & Mitman 2005).
And finally, there is a rich history of philosophical thought on
animal minds dating back to the earliest stages of philosophy, and
there has been, and continues to be, philosophical interest in
issues related to the history of the philosophy of animal minds
(see Sorabji 1993; Wilson 1995; DeGrazia 1994).
4. References and Further Reading
a. References
Allen, C. & Bekoff, M. (1997). Species of Mind. Cambridge, MA: MIT Press.
Armstrong, D. (1973). Belief, Truth and Knowledge. Cambridge:
Cambridge University Press.
Armstrong, D. (1997). What Is Consciousness? In N. Block, O. Flanagan
& G. Güzledere (Eds.) The Nature of Consciousness: Philosophical
Debates. Cambridge, Mass.: MIT Press.
Beisecker, D. (2002). Some More Thoughts About Thought and Talk:
Davidson and Fellows on Animal Belief. Philosophy 77: 115-124.
Bekoff, M. & Jamieson, D. (1996). Readings in Animal Cognition.
Cambridge, Mass.: MIT Press.
Bekoff, M., Allen, C. & Burghardt, G. (2002). The Cognitive Animal:
Empirical and Theoretical Perspectives on Animal Cognition. Cambridge,
Mass.: MIT Press.
Bennett, J. (1964/1989). Rationality: An Essay Towards an Analysis.
Indianapolis: Hackett.
Bennett, J. (1966). Kant's Analytic. Cambridge: Cambridge University Press.
Bennett, J. (1976). Linguistic Behaviour. Indianapolis: Hackett.
Bennett, J. (1988). Thoughtful Brutes. Proceedings and Addresses of
the American Philosophical Association 62: 197-210.
Bermúdez, J. L. (2003a). Thinking Without Words. Oxford: Oxford
University Press.
Bermúdez, J. L. (2003b). Ascribing Thoughts to Non-linguistic
Creatures. Facta Philosophica 5: 313-334.
Boden, M. A. (1984). Animal Perception from an Artificial Intelligence
Viewpoint. In C. Hookway (Ed.) Minds, Machines and Evolution.
Cambridge: Cambridge University Press.
Braddon-Mitchell, D. & Jackson, F. (2007). Philosophy of Mind and
Cognition: An Introduction (2nd edition). Oxford: Blackwell
Publishing.
Camp, E. (2009). Putting Thoughts to Work: Concepts, Stimulus
Independence, and the Generality Constraint. Philosophy and
Phenomenological Research, 78 (2): 275-311.
Carruthers, P. (1992). The Animal Issue: Moral Theory in Practice.
Cambridge: Cambridge University Press.
Carruthers, P. (1996). Language, Thought and Consciousness. Cambridge:
Cambridge University Press.
Carruthers, P. (2000). Phenomenal Consciousness: A Naturalistic Theory.
Cambridge: Cambridge University Press.
Carruthers, P. (2004). On Being Simple Minded. American Philosophical
Quarterly 41: 205-220.
Carruthers, P. (2005). Why the Question of Animal Consciousness Might
Not Matter Very Much. Philosophical Psychology 17: 83-102.
Carruthers, P. (2006). The Architecture of the Mind. Oxford: Oxford
University Press.
Carruthers, P. (2008). Meta-Cognition in Animals: A Skeptical Look.
Mind and Language 23: 58-89.
Carruthers, P. (2009). Invertebrate concepts confront the Generality
Constraint (and win). In R. Lurz (Ed.) Philosophy of Animal Minds.
Cambridge: Cambridge University Press.
Cottingham, J. (1978). A Brute to the Brutes?: Descartes' Treatment of
Animals. Philosophy 53: 551-561.
Clayton, N., Bussey, T. & Dickinson, A. (2003). Can Animals Recall the
Past and Plan for the Future? Nature Reviews Neuroscience 4:
685-691.
Clayton, N., Emery, N. & Dickinson, A. (2006). The Rationality of
Animal Memory: Complex Caching Strategies of Western Scrub Jays. In S.
Hurley and M. Nudds (Eds.) Rational Animals? Oxford: Oxford University
Press.
Daston, L. & Mitman, G. (2005). Thinking With Animals: New
Perspectives on Anthropomorphism. New York: Columbia University Press.
Davidson, D. (1984). Thought and Talk. In Inquiries into Truth and
Interpretation (pp. 155-179). Oxford: Clarendon Press.
Davidson, D. (1985). Rational Animals. In E. Lepore & B. McLaughlin
(Eds.) Actions and Events: Perspectives on the Philosophy of Donald
Davidson. New York: Basil Blackwell.
Davidson, D. (1997). The Emergence of Thought. Erkenntnis 51: 7-17.
DeGrazia, D. (1994). Wittgenstein and the Mental Life of Animals.
History of Philosophy Quarterly 11: 121-137.
Dennett, D. (1987) The Intentional Stance. Cambridge, MA: MIT Press.
Dennett, D. (1991). Consciousness Explained. Boston: Little, Brown and Company.
Dennett, D. (1995). Do Animals Have Beliefs? In H. Roitblat (Ed.)
Comparative Approaches to Cognitive Science. Cambridge, MA: MIT Press.
Dennett, D. (1997). Kinds of Minds: Towards an Understanding of
Consciousness. New York: Basic Books (Science Masters Series).
Descartes, R. (1637/1988). Discourse on the Method. In Cottingham,
Stoothoff, and Murdoch (Trans.) Descartes: Selected Philosophical
Writings. Cambridge: Cambridge University Press.
Descartes, R. (1649/1970). Letter to Henry More (5 February 1649). In
A. Kenny (Trans.) Philosophical Letters. Oxford: Clarendon Press.
Dretske, F. (1988). Explaining Behavior: Reasons in a World of Causes.
Cambridge, Mass.: MIT Press.
Dretske, F. (1995) Naturalizing the Mind. Cambridge, MA: MIT Press.
Dretske, F. (2000). Perception, Knowledge and Belief. Cambridge:
Cambridge University Press.
Dretske, F. (2006). Perception without Awareness. In T. Gendler & J.
Hawthorne (Eds.) Perceptual Experience. Oxford: Oxford University
Press.
Dummett, M. (1993). The Origins of Analytic Philosophy. London: Duckworth.
Evans, G. (1982). The Varieties of Reference. Oxford: Clarendon Press.
Fellows, R. (2000). Animal Belief. Philosophy 75: 587-598.
Fitzpatrick, S. (2007). Doing Away with Morgan's Canon. Mind and Language.
Fitzpatrick, S. (2009). Simplicity and Methodology in Animal
Psychology: A Case Study. In R. Lurz (Ed.) The Philosophy of Animal
Minds. Cambridge: Cambridge University Press.
Fodor, J. (1975). The Language of Thought. New York: Thomas Y. Crowell.
Fodor, J. (1987). Psychosemantics: The Problem of Meaning in
Philosophy of Mind. Cambridge, Mass.: MIT Press.
Fodor, J. (1991). A Theory of Content and Other Essays. Cambridge,
Mass.: MIT Press.
Fodor, J. & Lepore, E. (1992). Holism: A Shopper's Guide. Oxford: Blackwell.
Fodor, J. (2003). More Peanuts. London Review of Books 25.
Gennaro, R. (2004). Higher-order thoughts, animal consciousness, and
misrepresentation: A reply to Carruthers and Levine. In R. Gennaro
(Ed.) Higher-Order Theories of Consciousness. Amsterdam &
Philadelphia: John Benjamins.
Gennaro, R. (2009). Animals, Consciousness, and I-thoughts. In R. Lurz
(Ed.) The Philosophy of Animal Minds. Cambridge: Cambridge University
Press.
Giurfa, M., Zhang, S., Jenett, A., Menzel, R. & Srinivasan, M. (2001).
The Concept of "Sameness" and "Difference" in an Insect. Nature 410:
930-933.
Glock, H. (2000). Animals, Thoughts and Concepts. Synthese 123: 35-64.
Griffin, D. (1976). The Question of Animal Awareness. New York:
Rockefeller University Press.
Griffin, D. (1984). Animal Thinking. Cambridge, MA: Harvard University Press.
Griffin, D. (2001). Animal Minds: Beyond Cognition to Consciousness.
Chicago: Chicago University Press.
Griffin, D. & Speck, G. (2007). New Evidence of Animal Consciousness.
Animal Cognition 7: 5-18.
Griffiths, P. (1997). What Emotions Really Are: The Problem of
Psychological Categories. Chicago: University of Chicago Press.
Hare, B., Call, J., Agnetta, B. & Tomasello, M. (2000). Chimpanzees
Know What Conspecifics Do and Do Not See. Animal Behaviour 59: 771-785.
Hare, B., Call, J. & Tomasello, M. (2001). Do Chimpanzees Know What
Conspecifics Know? Animal Behaviour 61: 139-151.
Hauser, M. (2000). Wild Minds. New York: Henry Holt and Co.
Hauser, M., Chomsky, N. & Fitch, W. T. (2002). The Faculty of
Language: What Is It, Who Has It, and How Did It Evolve? Science 298:
1569-1579.
Hampton, R., Zivin, A., & Murray, E. (2004). Rhesus Monkeys (Macaca
mulatta) Discriminate Between Knowing and not Knowing and Collect
Information as Needed Before Acting. Animal Cognition, 7, 239-246.
Heinrich, B. (2000). Testing Insight in Ravens. In C. Heyes & L. Huber
(Eds.) The Evolution of Cognition. Cambridge, Mass: MIT Press.
Hume, D. (1739/1978). A Treatise of Human Nature. Edited by P. H.
Nidditch, 2nd Ed. Oxford: Oxford University Press.
Humphrey, N. (1976). The Social Function of Intellect. In P. P. G.
Bateson & R. A. Hinde (Eds.) Growing Points in Ethology. Cambridge:
Cambridge University Press.
Hurley, S. & Nudds, M. (2006). Rational Animals? Oxford: Oxford
University Press.
Kennedy, J. (1992). The New Anthropomorphism. Cambridge: Cambridge
University Press.
Köhler, W. (1925). The Mentality of Apes. London: Routledge and Kegan Paul.
Kornblith, H. (2002). Knowledge and its Place in Nature. Oxford:
Oxford University Press.
Lurz, R. (1998). Animal Minds: The Possibility of Second-Order Beliefs
in Non-Linguistic Animals. (Doctoral dissertation, Temple University,
1998). Dissertation Abstracts International, 59, no. 03A.
Lurz, R. (1999). Animal Consciousness. Journal of Philosophical
Research 24: 149-168.
Lurz, R. (2002a). Reducing Consciousness by Making it HOT: A
Commentary on Carruthers' Phenomenal Consciousness. Psyche 8.
Lurz, R. (2002b). Neither HOT nor COLD: An Alternative Account of
Consciousness. Psyche 9.
Lurz, R. (2003). Advancing the Debate Between HOT and FO Theories of
Consciousness. Journal of Philosophical Research 28: 23-44.
Lurz, R. (2004). Either FOR or HOR: A False Dichotomy. In R. Gennaro
(Ed.) Higher-Order Theories of Consciousness (pp. 226-254). Amsterdam:
John Benjamins.
Lurz, R. (2006). Conscious Beliefs and Desires: A Same-Order Approach.
In U. Kriegel and K. Williford (Eds.) Self-Representational Approaches
to Consciousness. Cambridge, Mass.: MIT Press.
Lurz, R. (2007). In Defense of Wordless Thoughts about Thoughts. Mind
and Language 22: 270-296.
Lurz, R. (2009). The Philosophy of Animal Minds: New Essays on Animal
Thought and Consciousness. Cambridge: Cambridge University Press.
Lycan, W. (1996). Consciousness and Experience. Cambridge, Mass.: MIT Press.
Malcolm, N. (1977). Thoughtless Brutes. In Thought and Knowledge.
Ithaca: Cornell University Press.
Marcel, A. (1983). Conscious and Unconscious Perception. Cognitive
Psychology 15: 197-237.
Menzel, E. (1974). A Group of Chimpanzees in a 1-Acre Field:
Leadership and Communication. In A. M. Shrier & F. Stollnitz (Eds.)
Behaviour of Nonhuman Primates. New York: Academic Press.
McGinn, C. (1982). The Character of Mind. Oxford: Oxford University Press.
Mitchell, R., Thompson, N. & Miles, H. (1997). Anthropomorphism,
Anecdotes, and Animals. New York: SUNY Press.
Moser, P. (1983). Rationality without Surprise: Davidson on Rational
Belief. Dialectica 37: 221-226.
Nussbaum, M. (2001). Upheavals of Thought: The Intelligence of
Emotions. Cambridge: Cambridge University Press.
Panksepp, J. & Burgdorf, J. (2003). Laughing Rats and the
Evolutionary Antecedents of Human Joy? Physiology and Behavior
79: 533-547.
Parker, S. T., Mitchell, R. & Boccia, M. (1994). Self-Awareness in
Animals and Humans: Developmental Perspectives. Cambridge: Cambridge
University Press.
Penn, D. & Povinelli, D. (2007). On the Lack of Evidence that
Non-Human Animals Possess Anything Remotely Resembling a "Theory of
Mind." Philosophical Transactions of the Royal Society B 362: 731-744.
Pepperberg, I. (1999). The Alex Studies: Cognitive and Communicative
Abilities of Grey Parrots. Cambridge, MA: Harvard University Press.
Povinelli, D. & Vonk, J. (2004). We Don't Need a Microscope to Explore
the Chimpanzee's Mind. Mind and Language 19: 1-28. Reprinted in Hurley
and Nudds (Eds.) Rational Animals? 2006. Oxford: Oxford University
Press.
Premack, D. & Woodruff, G. (1978). Does the Chimpanzee have a Theory
of Mind? Behavioral and Brain Sciences 1: 515-526.
Proust, J. (2009). Metacognitive states in non-human animals: a
defense. In R. Lurz (Ed.) The Philosophy of Animal Minds. Cambridge:
Cambridge University Press.
Quine, W. V. O. (1995). From Stimulus to Science. Cambridge, Mass.:
Harvard University Press.
Radner, D. and Radner, M. (1989). Animal Consciousness. Buffalo, NY:
Prometheus Books.
Roberts, R. (1996). Propositions and Animal Emotion. Philosophy 71:147-156.
Roberts, R. (2009). The Sophistication of Non-Human Emotion. In R.
Lurz (Ed.) The Philosophy of Animal Minds. Cambridge: Cambridge
University Press.
Rosenthal, D. (1986). Two Concepts of Consciousness. Philosophical
Studies 49: 329-359.
Santos, L., Nissen, A. & Ferrugia, J. (2006). Rhesus Monkeys, Macaca
mulatta, Know What Others Can and Cannot Hear. Animal Behaviour
71: 1175-1181.
Savage-Rumbaugh, S., Shanker, S. and Taylor, T. (1998). Apes,
Language, and the Human Mind. Oxford: Oxford University Press.
Seager, W. (2004). A Cold Look at HOT Theory. In R. Gennaro (Ed.)
Higher-Order Theories of Consciousness. Amsterdam: John Benjamins.
Searle, J. (1983). Intentionality. Cambridge: Cambridge University Press.
Searle, J. (1992). The Rediscovery of Mind. Cambridge, Mass.: MIT Press.
Searle, J. (1994). Animal Minds. Midwest Studies in Philosophy 19: 206-219.
Searle, J. (2001). Rationality in Action. Cambridge, Mass.: MIT Press.
Shoemaker, S. (1996). Self-Knowledge and "Inner Sense." Lecture I: The
Object Perception Model. In The First-Person Perspective and Other
Essays. Cambridge: Cambridge University Press.
Stich, S. (1978). Belief and Subdoxastic States. Philosophy of Science
45: 499-518.
Stich, S. (1979). Do Animals Have Beliefs? Australasian Journal of
Philosophy 57: 15-28.
Stalnaker, R. (1999). Mental Content and Linguistic Form. In Context
and Content. Oxford: Oxford University Press.
Smith, J., Shields, W., & Washburn, D. (2003). The Comparative
Psychology of Uncertainty Monitoring and Meta-Cognition. Behavioral
and Brain Sciences 26, 317-373.
Sober, E. (1998). Morgan's Canon. In C. Allen and D. Cummins (Eds.)
The Evolution of Mind. Oxford: Oxford University Press.
Sober, E. (2001a). Simplicity. In W.H. Newton-Smith (Ed.), Companion
to the Philosophy of Science. Oxford: Blackwell Publishing.
Sober, E. (2001b). The Principle of Conservatism in Cognitive
Ethology. In D. Walsh (Ed.) Naturalism, Evolution, and Mind. Cambridge:
Cambridge University Press.
Sober, E. (2005). Comparative Psychology Meets Evolutionary Biology:
Morgan's Canon and Cladistic Parsimony. In L. Daston & G. Mitman
(Eds.) Thinking With Animals: New Perspectives on Anthropomorphism.
New York. Columbia University Press.
Sorabji, R. (1993). Animal Minds and Human Morals: The Origins of the
Western Debate. Ithaca: Cornell University Press.
Tetzlaff, M. & Rey, G. (2009). Systematicity in Honeybee Navigation.
In R. Lurz (Ed.) Philosophy of Animal Minds. Cambridge: Cambridge
University Press.
Tschudin, A. J-P. C. (2001). "Mindreading" Mammals? Attribution of
Belief Tasks with Dolphins. Animal Welfare 10: 119-127.
Tye, M. (1995). Ten Problems of Consciousness. Cambridge, Mass.: MIT Press.
Tye, M. (1997). The Problem of Simple Minds: Is There Anything it is
Like to be a Honey Bee? Philosophical Studies 88: 289-317.
Wilson, M. D. (1995). Animal Ideas. Proceedings and Addresses of the
American Philosophical Association 69:7-25.
b. Suggested Further Readings
Recent Volumes of New Essays in the Philosophy of Animal Minds
Lurz, R. (2009). The Philosophy of Animal Minds: New Essays on Animal
Thought and Consciousness. Cambridge : Cambridge University Press.
Hurley, S. & Nudds, M. (2006). Rational Animals? Oxford: Oxford
University Press.
Articles and Books on Contemporary Issues in Philosophy of Mind
Akins, K. A. (1993) A Bat Without Qualities. In M. Davies and G.
Humphreys (Eds.) Consciousness. Oxford: Blackwell.
Allen, C. and Hauser, M. (1991). Concept Attribution in Non-Human
Animals: Theoretical and Methodological Problems of Ascribing Complex
Mental Processes. Philosophy of Science 58: 221-240.
Allen, C. (1995). Intentionality: Natural and Artificial. In H.
Roitblat and J.-A. Meyer (Eds.) Comparative Approaches to Cognitive
Science. Cambridge, MA: MIT Press.
Allen, C. (1999). Animal Concepts Revisited: The Use of Self-Monitoring
as An Empirical Approach. Erkenntnis 51: 33-40.
Allen, C. (2004). Animal Pain. Noûs 38: 617-643.
Allen, C. (2004). Animal Consciousness. Stanford Encyclopedia of Philosophy.
Bennett, J. (1990). How to Read Minds in Behaviour: A Suggestion from
a Philosopher. In A. Whiten's (Ed.) The Emergence of Mindreading.
Oxford: Blackwell.
Bennett, J. (1996). How is Cognitive Ethology Possible? In C. Ristau
(Ed.) Cognitive Ethology: The Minds of Other Animals. New Jersey:
Lawrence Erlbaum Associates.
Bermúdez, J. (2009). Mindreading in the Animal Kingdom? In R. Lurz
(Ed.) The Philosophy of Animal Minds. Cambridge: Cambridge University
Press.
Bishop, J. (1980). More Thought on Thought and Talk. Mind 89:1-16.
Browne, D. (2004) "Do Dolphins Know Their Own Minds?" Biology &
Philosophy 19: 633-653.
Carruthers, P. (1989). Brute Experience. The Journal of Philosophy 86: 258-269.
Carruthers, P. (1998). Animal Subjectivity. Psyche 4.
Cherniak, C. (1986). Minimal Rationality. Cambridge, MA: MIT Press.
Dennett, D. (1983). Intentional Systems in Cognitive Ethology: The
"Panglossian Paradigm" Defended. Behavioral and Brain Sciences
6: 343-390.
Dennett, D. (1995). Animal Consciousness: What Matters and Why. Social
Research 62: 691-711.
DeGrazia, D. (2009). Self-Awareness in Animals. In R. Lurz (Ed.) The
Philosophy of Animal Minds. Cambridge: Cambridge University Press.
Dixon, B. (2001). Animal Emotions. Ethics and the Environment 6.2: 22-30.
Dreckmann, F. (1999). Animal Beliefs and Their Contents. Erkenntnis 51:93-111.
Dretske, F. (1999). Machines, Plants and Animals: The Origins of
Agency. Erkenntnis 51: 19-31.
Dummett, M. (1993). Language and Communication. In The Seas of
Language. Oxford: Clarendon Press.
Fodor, J. (1986). Why Paramecia Don't Have Mental Representations.
Midwest Studies in Philosophy 10: 3-23.
Graham, G. (1993). Belief in Animals. In Philosophy of Mind: An
Introduction. Oxford: Blackwell.
Griffiths, P. E. (2003). Basic Emotions, Complex Emotions,
Machiavellian Emotions. In Philosophy and the Emotions A. Hatzimoysis
(Ed.), Cambridge, CUP: 39-67.
Griffiths, P. and Scarantino, A. (in press). Emotions in the Wild: The
Situated Perspective on Emotion. In P. Robbins and M. Aydede (Eds.)
Cambridge Handbook of Situated Cognition. Cambridge: Cambridge
University Press.
Heil, J. (1982). Speechless Brutes. Philosophy and Phenomenological
Research 42: 400-406.
Heil, J. (1992). Talk and Thought. In The Nature of True Minds.
Cambridge: Cambridge University Press.
Jamieson, D. and Bekoff, M. (1992) Carruthers on Nonconscious
Experience. Analysis 52: 23-28.
Jamieson, D. (1998). Science, Knowledge, and Animal Minds.
Proceedings of the Aristotelian Society 98: 79-102.
Jamieson, D. (2009). What Do Animals Think? In R. Lurz (Ed.) The
Philosophy of Animal Minds. Cambridge: Cambridge University Press.
Kornblith, H. (1999). Knowledge in Humans and Other Animals.
Philosophical Perspectives 13: 327-346.
Marcus, R. B. (1995). The Anti-Naturalism of Some Language Centered
Accounts of Belief. Dialectica 49: 113-129.
McAninch, A., Goodrich, G. & Allen, C. (2009). Animal Communication
and Neo-Expressivism. In R. Lurz (Ed.) The Philosophy of Animal
Minds. Cambridge: Cambridge University Press.
McGinn, C. (1995). Animal Minds, Animal Morality. Journal of Social
Research 62: 731-747.
Millikan, R. G. (1997). Varieties of Purposive Behavior. In R.
Mitchell, N. Thompson, and H. L. Miles (Eds.) Anthropomorphism,
Anecdotes, and Animals. New York: State University of New York Press.
Nagel, T. (1974). What is it Like to be a Bat? Philosophical Review 83: 435-450.
Papineau, D. (2001). The Evolution of Means-End Reasoning. Philosophy
49: 145-178.
Proust, J. (1999). Mind, Space and Objectivity in Non-Human Animals.
Erkenntnis 51: 41-58.
Proust, J. (2000). L'animal intentionnel. Terrain 34:23-36.
Proust, J. (2000). Can Non-Human Primates Read Minds? Philosophical
Topics 27:203-232.
Putnam, H. (1992). Intentionality and Lower Animals. In Renewing
Philosophy. Cambridge, MA: Harvard University Press.
Radner, D. (1993) Direct Action and Animal Communication. Ratio 6: 135-154.
Radner, D. (1994) Heterophenomenology: Learning About the Birds and
the Bees. Journal of Philosophy 91: 389-403.
Radner, D. (1999). Mind and function in animal communication.
Erkenntnis 51: 129-144.
Rescorla, M. (2009). Chrysippus's Dog as a Case Study in
Non-Linguistic Cognition. In R. Lurz (Ed.) The Philosophy of Animal
Minds. Cambridge: Cambridge University Press.
Ridge, M. (2001). Taking Solipsism Seriously: Nonhuman Animals and
Meta-Cognitive Theories of Consciousness. Philosophical Studies 103:
315-340.
Rollin, B. E. (1989) The Unheeded Cry: Animal Consciousness, Animal
Pain and Science. Oxford: Oxford University Press.
Rowlands, M. (2002). Do Animals Have Minds? In Animals Like Us. New York: Verso.
Routley, R. (1981). Alleged Problems in Attributing Beliefs and
Intentionality to Animals. Inquiry, 24, 385-417.
Saidel, E. (2002). Animal Minds, Human Minds. In M. Bekoff, C. Allen,
and G. M. Burghardt The Cognitive Animal. Cambridge, MA: MIT Press.
Saidel, E. (2009). Attributing Mental Representations to Animals. In
R. Lurz (Ed.) The Philosophy of Animal Minds. Cambridge: Cambridge
University Press.
Smith, P. (1982). On Animal Beliefs. Southern Journal of Philosophy 20, 503-512.
Stephan, A. (1999). Are Animals Capable of Concepts? Erkenntnis 51:79-92.
Sterelny, K. (1995). Basic Minds. Philosophical Perspectives 9: 251-270.
Sterelny, K. (2000). Primate Worlds. In C. Heyes and L. Huber (Eds.)
Evolution and Cognition. Cambridge, MA: MIT Press.
Stich, S. (1983). Animal Beliefs. In From Folk Psychology to Cognitive
Science. Cambridge, MA: MIT Press.
Ward, A. (1988). Davidson on Attributions of Beliefs to Animals.
Philosophia 18: 97-106.
Historical Works on Animal Minds
Arnold, D. G. (1995). Hume on the Moral Difference Between Humans and
Other Animals. History of Philosophy Quarterly 12: 303-316.
Beauchamp, T. L. (1999). Hume on the Nonhuman Animal. Journal of
Medicine and Philosophy 24: 322-335.
Boyle, D. (2003). Hume on Animal Reason. Hume Studies 29: 3-28.
Churchill, J. (1989). If a Lion Could Talk. Philosophical
Investigations 12: 308-324.
Fuller, B. A. G. (1949). The Messes Animals Make in Metaphysics. The
Journal of Philosophy 46: 829-838.
Frongia, G. (1995). Wittgenstein and the Diversity of Animals. Monist
78: 534-552.
Gordon, D. M. (1992). Wittgenstein and Ant-Watching. Biology and
Philosophy 7: 13-25.
Harrison, P. (1992). Descartes on Animals. Philosophical Quarterly 42: 219-227.
Seidler, M. J. (1977). Hume and the Animals. Southern Journal of
Philosophy 15: 361-372.
Squadrito, K. (1980). Descartes, Locke and the Soul of Animals.
Philosophy Research Archives 6.
Squadrito, K. (1991). Thoughtful Brutes: The Ascription of Mental
Predicates to Animals in Locke's Essay. Dialogos 91: 63-73.
Sorabji, R. (1992). Animal Minds. Southern Journal of Philosophy 31: 1-18.
Tranoy, K. (1959). Hume on Morals, Animals, and Men. Journal of
Philosophy 56: 94-192.