
\title {Social Cognition \\ Lecture 05}
 
\maketitle
 

Lecture 05:

Social Cognition

\def \ititle {Lecture 05}
\def \isubtitle {Social Cognition}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
‘In saying that an individual has a theory of mind, we mean that the individual [can ascribe] mental states’
\citep[p.\ 515]{premack_does_1978}

Premack & Woodruff, 1978 p. 515

What sort of evidence might cause researchers to accept that a nonhuman has a theory of mind? Last week we reviewed a variety of findings ...
There is a lot of evidence for theory of mind in nonhuman animals ...
\begin{itemize} \item Apes : anticipatory gaze depends on protagonists’ false belief \citep{krupenye:2016_great} \item Apes, goats : food avoidance differs depending on competitors’ (mis)information \citep{Hare:2001ph,kaminski:2006_goats} \item Apes : avoid being seen or making sounds when taking food \citep{melis:2006_chimpanzees} \item Apes : will exploit facts about what others can see in mirrors or through screens \citep{karg:2015_chimpanzees,lurz:2018_chimpanzees} \item Corvids : caching differs depending on what others can, or have, seen \citep{Clayton:2007fh,bugnyar:2016_ravens} \item Dogs : responses to requests depend on what requester can see \citep{kaminski:2009_domestic} \item Ring-tailed lemurs, common marmosets : food avoidance depends on competitors’ line of sight \citep{burkart:2007_understanding,sandel:2011_evidence} \end{itemize}

Apes : anticipatory gaze depends on protagonists’ false belief (Krupenye et al, 2016)

Apes, goats : food avoidance differs depending on competitors’ (mis)information (Hare et al, 2001; Kaminski et al, 2006)

Apes : avoid being seen or making sounds when taking food (Melis et al, 2006)

Apes : will exploit facts about what others can see in mirrors or through screens (Karg et al, 2015; Lurz et al, 2018)

Corvids : caching differs depending on what others can, or have, seen (Clayton et al, 2007; Bugnyar et al, 2016)

Dogs : responses to requests depend on what requester can see (Kaminski et al, 2009)

Ring-tailed lemurs, common marmosets : food avoidance depends on competitors’ line of sight (Sandel et al, 2011; Burkart & Heschl, 2007)

Given the extensive body of evidence, can we conclude that Theory of Mind is present in nonhuman animals? Things are not quite so simple ...
‘In saying that an individual has a theory of mind, we mean that the individual [can ascribe] mental states’
\citep[p.\ 515]{premack_does_1978}

Premack & Woodruff, 1978 p. 515

So : nonhumans have theories of mind ?

tracking vs representing mental states

What is observed: that nonhumans track others’ mental states.

Tracking mental states does not require representing them.

So:

How can we draw conclusions about what nonhumans represent?

This is a real problem, an interesting problem, and one that we can solve. It’s interesting because solving it will require us to think about what is involved in having a theory of mind in the first place, and that’s where the philosophy comes in.
But before we get there, I want to consider how this problem has been developed in theoretical and philosophical discussions over the last 25 years ...

behaviour-reading demon -> we can’t

If your objection involves a

behaviour-reading demon

(or any kind of demon),

then it is probably

a merely sceptical issue

rather than a scientific objection.

 

Three Responses to the Logical Problem

 
\section{Three Responses to the Logical Problem}

The logical problem

‘since mental state attribution in [nonhuman] animals will (if extant) be based on observable features of other agents’ behaviors and environment ... every mindreading hypothesis has ... a complementary behavior-reading hypothesis.

‘Such a hypothesis proposes that the animal relies upon certain behavioral/environmental cues to predict another agent’s behavior

[... the behaviour which], on the mindreading hypothesis, the animal is hypothesized to use as its observable grounds for attributing the mental state in question.’
\citep[p.~26]{lurz:2011_mindreading}; also \citep[p.~453]{lurz:2011_how}

Lurz (2011, 26)

Do any nonhuman animals ever represent others’ mental states?

1. Representing others’ mental states depends on making a transition from behaviour to mental state.

2. For any hypothesis about mindreading there is a ‘complementary hypothesis’ about behaviour reading.

3. The two hypotheses generate the same predictions.

4. No experiment can distinguish between them.

\begin{enumerate} \item It is not a logical problem at all, but one that should be resolved by better experimental methods. Therefore, we lack evidence for nonhuman mindreading (except maybe from ‘goggles’ and ‘mirror’ experiments) \item It is a merely logical problem (so a form of sceptical hypothesis). Therefore, we already have evidence for nonhuman mindreading \item It is an illusory problem, caused by a theoretical mistake. Therefore, we’re thinking about the issue in the wrong way \end{enumerate}

1. It is not a logical problem at all, but one that should be resolved by better experimental methods.

- we currently lack evidence for nonhuman mindreading
(except maybe from ‘goggles’ and ‘mirror’ experiments)

2. It is a merely logical problem (so a form of sceptical hypothesis).

- we already have evidence for nonhuman mindreading

3. It is an illusory problem, caused by a theoretical mistake.

- we’re thinking about the issue in the wrong way

‘Comparative psychologists test for mindreading in non-human animals by determining whether they detect the presence and absence of particular cognitive states in a wide variety of circumstances.

They eliminate potential confounding variables by ensuring that there is no one observable state to which subjects might be responding’ \citep[p.~487]{halina:2015_there}.

Halina, 2015 p. 487

If we agree with Halina that the ‘Logical Problem’ is a logical problem, should we also accept her claim?
I don’t think we should. Researchers have so far focussed on a sceptical problem rather than a hypothesis, but I think this is a mistake.

The ‘Logical Problem’ is a sceptical problem

the evidence supports mindreading?

Apes : anticipatory gaze depends on protagonists’ false belief (Krupenye et al, 2016)

Apes, goats : food avoidance differs depending on competitors’ (mis)information (Hare et al, 2001; Kaminski et al, 2006)

Apes : avoid being seen or making sounds when taking food (Melis et al, 2006)

Apes : will exploit facts about what others can see in mirrors or through screens (Karg et al, 2015; Lurz et al, 2018)

Corvids : caching differs depending on what others can, or have, seen (Clayton et al, 2007; Bugnyar et al, 2016)

Dogs : responses to requests depend on what requester can see (Kaminski et al, 2009)

Ring-tailed lemurs, common marmosets : food avoidance depends on competitors’ line of sight (Sandel et al, 2011; Burkart & Heschl, 2007)

Should we really take all the evidence at face value?
For example, what about dogs?
Two versions of this objection: (a) cross-species: How confident are we that ringtailed lemurs are doing what chimpanzees are doing?
(b) within an individual: In humans and other animals, tracking mental states likely involves many different processes, including plenty that rely on simple cues.

‘Comparative psychologists test for mindreading in non-human animals by determining whether they detect the presence and absence of particular cognitive states in a wide variety of circumstances.

Halina, 2015 p. 487

You might say, it depends on having a wide variety of evidence, which we lack in the case of dogs, lemurs and marmosets. This is true. But why shouldn’t it be possible to track mental states in a wide variety of circumstances without actually representing them?

Requirement:

We can distinguish,

both within an individual

and between individuals,

mindreading which involves representing mental states

from

mindreading which does not.

Requirement 1: We need a theory that allows us to distinguish mental state tracking underpinned by mindreading from other forms of mental state tracking within an individual.
Halina’s positive proposal (as I’m interpreting it) fails to meet this requirement. (We agree on the negative part of the proposal : the logical problem is a sceptical issue, not an experimental challenge.)

1. It is not a logical problem at all, but one that should be resolved by better experimental methods.

- we currently lack evidence for nonhuman mindreading
(except maybe from ‘goggles’ and ‘mirror’ experiments)

2. It is a merely logical problem (so a form of sceptical hypothesis).

- we already have evidence for nonhuman mindreading

3. It is an illusory problem, caused by a theoretical mistake.

- we’re thinking about the issue in the wrong way

‘Nonhumans represent mental states’ is not a hypothesis

... or at least not one that generates readily testable predictions.

‘chimpanzees understand … intentions … perception and knowledge’

\citep[p.~191]{Call:2008di}

‘chimpanzees probably do not understand others in terms of a fully human-like belief–desire psychology’

Call & Tomasello (2008, 191)

‘the core theoretical problem in ... animal mindreading is that ... the conception of mindreading that dominates the field ... is too underspecified to allow effective communication among researchers’

‘the core theoretical problem in contemporary research on animal mindreading is that ... the conception of mindreading that dominates the field ... is too underspecified to allow effective communication among researchers, and reliable identification of evolutionary precursors of human mindreading through observation and experiment.’
\citep[p.~321]{heyes:2014_animal}

Heyes (2015, 321)

What does Heyes mean?
What might anchor our understanding of knowledge? There seem to me to be two options.
(1) epistemology, which is not at all about how anyone thinks of knowledge (you know this because epistemologists don’t draw on research on how ordinary people ordinarily think about knowledge).
(2) theorising about adult humans’ mindreading abilities
The only computational theory we have is Davidson’s theory of radical interpretation.

‘Nonhumans represent behaviours only’
is also not a hypothesis

‘an intelligent chimpanzee could simply use the behavioural abstraction […]: ‘Joe was present and oriented; he will probably go after the food. Mary was not present; she probably won’t.’’

\citep{Povinelli:2003bg}

Povinelli and Vonk (2003)

What’s that?

‘because behavioural strategies are so unconstrained ... it is very difficult indeed, perhaps impossible, to design experiments that could show that animals are mindreading rather than behaviour reading.’

‘because behavioural strategies are so unconstrained ... it is very difficult indeed, perhaps impossible, to design experiments that could show that animals are mindreading rather than behaviour reading.’
\citep[p.~322]{heyes:2014_animal}

Heyes (2015, 322)

1. It is not a logical problem at all, but one that should be resolved by better experimental methods.

- we currently lack evidence for nonhuman mindreading
(except maybe from ‘goggles’ and ‘mirror’ experiments)

2. It is a merely logical problem (so a form of sceptical hypothesis).

- we already have evidence for nonhuman mindreading

3. It is an illusory problem, caused by a theoretical mistake.

- we’re thinking about the issue in the wrong way

Do you understand all three options? Which of these three do you think is correct?
Recall that this was the issue which led us to the ‘logical problem’

tracking vs representing mental states

What is observed: that nonhumans track others’ mental states.

Tracking mental states does not require representing them.

So:

How can we draw conclusions about what nonhumans represent?

What else have we learnt about the ‘Logical Problem’?

What does the ‘Logical Problem’ show?

Lurz, Krachun et al: The ‘Logical Problem’ can be overcome with better experimental design. Its existence shows that these are the wrong experiments whereas those are the right ones.

Halina, Heyes (?), et al: The ‘Logical Problem’ is a logical problem. Its existence shows that being able to track mental states does not logically entail being able to represent mental states.

I’m not here arguing for one side or the other, merely pointing out that there are two sides. But in what follows I will assume that the ‘Logical Problem’ is a logical problem.

The standard question:

Do nonhuman animals represent mental states or only behaviours?

Obstacle:

The ‘logical problem’ (Lurz 2011)

What could make others’ behaviours intelligible to nonhuman animals?

-- the teleological stance

What could make others’ mental states intelligible to nonhuman animals?

-- minimal theory of mind

Logical Problem: Summary

1. statement of what the problem is

2. three responses to the Logical Problem

My proposal :

3. it is a logical problem, not an experimental issue ...

4. ... but recognising this leaves open the question of what nonhumans represent.

 

A Different Approach

 
\section{A Different Approach}
Earlier I mentioned this way of setting up the debate. If you think about the requirements, I think you can see that this is not a productive way to shape the debate.

What do infants, chimps and scrub-jays reason about, or represent, that enables them, within limits, to track others’ perceptions, knowledge, beliefs and other propositional attitudes?

The standard debate

-- Is it mental states?

-- Or only behaviours?

What models of minds and actions, and of behaviours,

and what kinds of processes,

underpin mental state tracking in different animals?

In answering this question, we need to satisfy three requirements ...

Requirement 3: Predict Dissociations

Let me explain by pointing to some cases in which chimpanzees fail to track others’ beliefs according to one measure but may succeed according to another ...

‘the present evidence may constitute an implicit understanding of belief’

\citep[p.~113]{krupenye:2016_great}

Krupenye et al, 2016 p. 113

Krachun et al, 2009 figure 2 (part)

Chimpanzee anticipatory looking indicates tracking others’ false beliefs.
“Looking. Our second measure was whether participants looked at least once at the container the competitor was not reaching for during the couple of seconds it took E to slide the platform towards them.” “Note that proportions are based only on trials in which participants chose the same container as the competitor [ie: incorrect trials], and for apes in version A the measure was face rather than eye orientation. Bars show standard error. * p < .05.”
Contrast performance on a competitive object choice task. Ropey looking time measure, but nice to have looking time and action measures for a single scenario.
[NB: This quote is from another paper.]
This seems reasonable: just as there are dissociations among different measures of mindreading in adults, and developmental dissociations, so it is plausible that there will turn out to be dissociations concerning the tasks that adult humans and adult nonhumans can pass.

Requirement 3: Predict Dissociations

Which tasks should chimps and jays pass and fail?

This requires a theory of processes

Requirement 2: Models

The second question concerns how various individuals (or systems within them) model minds and actions. Let me explain with an illustration ...

‘chimpanzees understand … intentions … perception and knowledge,’

‘chimpanzees probably do not understand others in terms of a fully human-like belief–desire psychology’

Call & Tomasello, 2008, p. 191

‘chimpanzees understand … intentions … perception and knowledge,’ but ‘chimpanzees probably do not understand others in terms of a fully human-like belief–desire psychology’ \citet[p.~191]{Call:2008di}.
After claiming that ‘chimpanzees understand … intentions … perception and knowledge,’ \citet{Call:2008di} qualify their claim by adding that ‘chimpanzees probably do not understand others in terms of a fully human-like belief–desire psychology’ (p.~191).
This is true. The emergence in human development of the most sophisticated abilities to represent mental states probably depends on rich social interactions involving conversation about the mental \citep{Slaughter:1996fv, peterson:2003_opening, moeller:2006_relations}, on linguistic abilities \citep{milligan:2007_language,kovacs:2009_early}, (\citet[p.~760]{moeller:2006_relations}: ‘Our results provide support for the concept that access to conversations about the mind is important for deaf children’s ToM development, in that there was a significant relationship between maternal talk about mental states and deaf children’s performance on verbal ToM tasks.’) and on capacities to attend to, hold in mind and inhibit things \citep{benson:2013_individual, devine:2014_relations}. These are all scarce or absent in chimpanzees and other nonhumans. So it seems unlikely that the ways humans at their most reflective represent mental states will match the ways nonhumans represent mental states. Reflecting on how adult humans talk about mental states is no way to understand how others represent them. But then what could enable us to understand how nonhuman animals represent mental states?

Requirement 2: Models

How do chimps or jays variously model minds and actions?

‘Nonhumans represent mental states’ is not a hypothesis

... or at least not one that generates readily testable predictions.

‘the core theoretical problem in contemporary research on animal mindreading is that the bar—the conception of mindreading that dominates the field—is too low, or more specifically, that it is too underspecified to allow effective communication among researchers, and reliable identification of evolutionary precursors of human mindreading through observation and experiment’

\citep[p.~318]{heyes:2014_animal}

Heyes, 2015, p. 318

Requirement 2: Models

Requirement 1: Diversity in Strategies

Requirement:

We can distinguish,

both within an individual

and between individuals,

mindreading which involves representing mental states

from

mindreading which does not.

Three Requirements

1. Diversity in Strategies

2. Models

3. Predict Dissociations

What models of minds and actions, and of behaviours,

and what kinds of processes,

underpin mental state tracking in different animals?

Way forward:

1. Construct a theory of behaviour reading

2. Construct a theory of mindreading

This would be the logical place to start, but like the BBC I want to educate, inform *and* entertain so ...
 

Minimal Theory of Mind

 
\section{Minimal Theory of Mind}

What models of minds and actions

underpin mental state tracking in different animals?

Krupenye et al, 2017 (movie 1)

Anything unclear?

Objections?

the

dogma

of mindreading

To make progress in understanding how minds might be intelligible to chimpanzees, we need to reject a dogma. The dogma is that there is one model of the mental and mindreading involves the use of that model. Or, more carefully (to accommodate Wellman et al), the dogma is that there is either just one model or else a family of models where one of the models, the best and most sophisticated model, contains all of the states that are contained in any of the models.
[Is this way of putting it clearer? : the mental states included in each model are a subset of the mental states included in the best, most sophisticated model. (The idea is that there is a model containing all the states in the union of the sets of states contained in each model.)]
Lots of researchers’ views and arguments depend on this dogma. But I think you can see that the dogma is not something we should take for granted by drawing a parallel between mindreading and physical cognition.

Fact:

Minimal theory of mind specifies a model of minds and actions,

one which could in principle explain how chimps, jays or other animals track mental states.

Conjecture:

Nonhuman mindreading processes are characterised by a minimal model of minds and actions.

My question ....

What models of minds and actions underpin mental state tracking in chimpanzees, scrub jays and other animals?

The standard question

Does the chimpanzee have a theory of mind?

I.e. we know that the chimpanzee has theory of mind abilities; but does exercising these abilities involve representing mental states?
Minimal theory of mind characterises a mindreading system. Does this system explain chimpanzee theory of mind abilities?

Could a system characterised by minimal theory of mind explain chimpanzee theory of mind abilities?

- yes

But does it?

conclusion

In conclusion, ...

The ‘Logical Problem’ defined

Three responses to the ‘Logical Problem’ examined

How can we draw conclusions about what nonhumans represent from evidence about tracking?

Proposal : identify models of minds and actions, and of behaviours, which could underpin mental state tracking in different animals.

The Dogma of Mindreading rejected

Minimal Theory of Mind constructed