
\title {Social Cognition \\ Lecture 04}
 
\maketitle
 


\def \ititle {Lecture 04}
\def \isubtitle {Social Cognition}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}
 
\section{Mindreading Intro}
My topic is mindreading, the process of tracking another’s mental states.
Let me start with a canonical illustration of mindreading in action.

Which box will Maxi look in?

Mindreading:

Maxi wants his chocolate.

Maxi believes his chocolate is in the blue box.

Therefore: Maxi will look in the blue box.

Not mindreading:

Chocolate is good.

Maxi’s chocolate is in the red box.

Therefore: Maxi will look in the red box.

So where the nonmindreader uses facts to generate predictions about actions, the mindreader attributes beliefs.
In many cases you will get the same predictions whether you are mindreading or merely tracking facts. But in cases like this, where there are false beliefs, the predictions come apart. \textbf{This means that we can detect mindreading by measuring action predictions.}
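To make the contrast concrete, here is a minimal sketch (not from the lecture; all names are hypothetical) of the two prediction strategies, showing that they only diverge when the belief is false.

```python
# Toy sketch of the two prediction strategies in the Maxi scenario.
# Names and structure are illustrative, not from the lecture.

def predict_by_facts(actual_location):
    # Non-mindreader: predict the action from the facts alone.
    return actual_location

def predict_by_belief(believed_location):
    # Mindreader: predict the action from Maxi's (possibly false) belief.
    return believed_location

actual = "red box"      # where the chocolate really is
believed = "blue box"   # where Maxi believes it is

# With a true belief (believed == actual) the two predictions coincide;
# with a false belief they come apart, so measuring which box an
# observer expects Maxi to search can reveal which strategy is in play.
print(predict_by_facts(actual))     # red box
print(predict_by_belief(believed))  # blue box
```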
In what follows I’m going to focus on belief.
The existence of mindreading processes for tracking beliefs raises many questions:
\begin{enumerate}
\item Why is belief-tracking in adults sometimes but not always automatic? (or automatic to varying degrees)
\item How and why is automatic belief-tracking limited in ways that nonautomatic belief-tracking is not?
\item How and why are inhibition, attention and working memory involved in belief-tracking?
\item Why is there an age at which children pass some false belief tasks but systematically fail others?
\item What feature or features distinguish the tasks these children fail from those they pass?
\end{enumerate}
I think there is more than one kind of mindreading process which tracks beliefs. That’s why my title is ...

Krupenye et al, 2016

NB: mindreading defined as *tracking* mental states.

So far:

1. mindreading defined [preliminary]

2. how action predictions can indicate mindreading

3. a nonverbal mindreading test: how anticipatory looking can indicate action prediction

 

\section{Some Evidence}

Hare et al (2001, figure 1)

In this experiment by Brian Hare and colleagues, a subordinate chimpanzee makes predictions about a dominant chimpanzee’s ability to retrieve food. They found that the subordinate’s predictions take into account whether the dominant’s view was blocked while the food was placed. This could be explained by the Third Principle. For the subordinate to predict that the dominant will not be able to recover the food, it is sufficient to think: because the dominant did not encounter the food, she will not be able to retrieve it.
‘In informed trials dominant individuals witnessed the experimenter hiding food behind one of the occluders whereas in uninformed trials they could not see the baiting procedure. In misinformed trials, dominants witnessed the experimenter hiding food behind one of the occluders, and once the dominant’s visual access was blocked, the experimenter switched the food from its original location to the other occluder’ \citep{Hare:2001ph}.

Hare et al (2001, figure 2b)

‘Mean percentage ±SE of ... (b) trials in which subordinate subjects chose not to approach as a function of whether the dominant competitor was informed, uninformed, or misinformed about the location of the food in experiment 1.’
We need to talk about how to evaluate, and write about, scientific research. Consider how the results of Experiment 1 are reported:

‘subordinate subjects retrieved a significantly larger percentage of food when dominants lacked accurate information about the location of food

‘(Wilcoxon test: Uninformed versus Control Uninformed: T=36, N=8, P<0.01; Misinformed versus Control Misinformed: T=36, N=8, P<0.01)’

\citep[p.~143]{Hare:2001ph}


Clayton et al, 2007 figure 11

Clayton et al, 2007 figure 12

‘the jays were much more likely to re-cache if they had been observed by a conspecific while they were caching than when they had cached in private. By re-caching items that the observer had seen them cache, the cachers significantly reduce the chance of cache theft, as observers would be unable to rely on memory to facilitate accurate cache theft’ \citep[p.~516]{Clayton:2007fh}.

Bugnyar et al, 2016 figure 1

First show that window open vs window closed affects caching behaviour. Then familiarise ravens with the peephole, letting them use it while humans cache and letting them ‘steal’ the human-cached food. Now test them by allowing them to cache when there’s apparently (from the sound, which is actually a recording) a raven on the other side.

Bugnyar et al, 2016 figure 2a

Conclusion 1: ‘Peephole designs can allow researchers to overcome the confound of gaze cues’ \citep{bugnyar:2016_ravens}.
‘ravens can transfer knowledge from their own experience in a novel context---using peepholes to look into an adjacent room---to a caching situation in which they can hear but not see a conspecific in that room’ \citep{bugnyar:2016_ravens}.

Bugnyar et al, 2016 figure 1

 

\section{Tracking vs Representing Mental States}
Researchers sometimes use the term ‘theory of mind’.
‘In saying that an individual has a theory of mind, we mean that the individual [can ascribe] mental states’
\citep[p.\ 515]{premack_does_1978}


New term. Is this the same thing as mindreading?
Mindreading: tracking others’ mental states

Which action an ape predicts another will perform

depends to some extent on

what the other sees, knows or believes.

theory of mind abilities

vs

theory of mind cognition

A \textit{theory of mind ability} \label{df:tom_ability} is an ability that exists in part because exercising it brings benefits obtaining which depends on exploiting or influencing facts about others’ mental states.
Not all theory of mind abilities depend on theory of mind cognition. For example, preening others may be worthwhile in part because it influences their attitudes towards oneself and thereby strengthens social bonds \citep{Clayton:2007ob}. Where this is so, preening is a theory of mind ability. It doesn’t follow, of course, that preening involves theory of mind cognition.
One might be driven to preen others without understanding that preening is worthwhile because it influences others’ attitudes.

Tracking mental states does not imply representing them.

Mindreading is the ability to track others’ mental states

Theory of mind ability

Mindreading is the ability to represent others’ mental states

Theory of mind cognition

A \textit{theory of mind ability} is an ability that exists in part because exercising it brings benefits obtaining which depends on exploiting or influencing facts about others’ mental states.
This is evidence for mindreading = tracking; it is at best indirectly evidence for mindreading = representing.

Krupenye et al, 2016


Theory of mind abilities are widespread


18-month-old human infants point to inform \citep{Liszkowski:2006ec}, and predict actions based on false beliefs \citep{Onishi:2005hm,Southgate:2007js}.
Scrub-jays selectively re-cache their food in ways that deprive competitors of knowledge of its location \citep{Clayton:2007fh}.
Chimpanzees conceal their approach from a competitor’s view \citep{Hare:2006ih}, and act in ways that are optimal given what another has seen \citep{Hare:2001ph}.
The distinction between theory of mind abilities and cognition might help in describing some data. Scrub jays’ complex caching and infants’ imperative and declarative pointing are two behaviours which clearly manifest theory of mind abilities. It is a further question whether they also involve theory of mind cognition.
Eventually, we want to understand, What explains these abilities? But first, let’s see some evidence.


 

\section{The Question, version 0.1}
Many animals including scrub jays \citep{Clayton:2007fh}, ravens \citep{bugnyar:2016_ravens}, goats \citep{kaminski:2006_goats}, dogs \citep{kaminski:2009_domestic}, ringtailed lemurs \citep{sandel:2011_evidence}, monkeys \citep{burkart:2007_understanding, hattori:2009_tufted} and chimpanzees \citep{melis:2006_chimpanzees,karg:2015_chimpanzees} reliably vary their actions in ways that are appropriate given facts about another’s mental states. What could underpin such abilities to track others’ mental states?

tracking vs representing mental states

Contrast \emph{representing} a mental state with \emph{tracking} one.
For you to \emph{track} someone’s mental state (such as a belief that there is food behind that rock) is for there to be a process in you which nonaccidentally depends in some way on whether she has that mental state.
Representing mental states is one way, but not the only way, of tracking them. In principle it is possible to track mental states without representing them. For example, it is possible, within limits, to track what another visually represents by representing her line of sight only. More sophisticated illustrations of how you could in principle track mental states without representing them abound \citep[e.g.][pp.~571ff]{buckner:2014_semantic}. What many experiments actually measure is whether certain subjects can track mental states: the question is whether changes in what another sees, believes or desires are reflected in subjects’ choices of route, caching behaviours, or anticipatory looking (say). It is surely possible to infer what is represented by observing what is tracked. But such inferences are never merely logical.
Example: toxicity / smell

What do
human infants, nonhuman great ape adults or adult scrub-jays
reason about, or represent,
that enables them,
within limits,
to track others’ perceptions, knowledge, beliefs and other propositional attitudes?

The standard debate

-- Is it mental states?

What could make others’ mental states intelligible (or identifiable) to a chimpanzee, infant or scrub-jay?

-- Or only behaviours?

What could make others’ behaviours intelligible (or identifiable) to a chimpanzee, infant or scrub-jay?

 

\section{The Behaviour Reading Demon}

Hare et al (2001, figure 1)

Recall this experiment. Could we explain success without appeal to the idea that chimpanzees can ascribe knowledge and ignorance?

‘an intelligent chimpanzee could simply use the behavioural abstraction […]: ‘Joe was present and oriented; he will probably go after the food. Mary was not present; she probably won’t.’’

\citep{Povinelli:2003bg}


What’s that?
\begin{quote} For any food (x) and agent (y), if any of the following do not hold: (i) the agent (y) was present when the food (x) was placed, (ii) the agent (y) was oriented to the food (x) when it was placed, and: (iii) the agent (y) can go after the food (x) then probably not: (iv) the agent (y) will go after the food (x). Also, if all of (i)–(iii) do hold, then probably (iv). \end{quote}

For any food (x) and agent (y), if any of the following do not hold:

(i) the agent (y) was present when the food (x) was placed,

(ii) the agent (y) was oriented to the food (x) when it was placed,

and:

(iii) the agent (y) can go after the food (x)

then probably not:

(iv) the agent (y) will go after the food (x).

This doesn’t work as it stands because in the ‘misinformed’ condition the dominant was oriented to the food when it was first placed (just not when it was subsequently moved).
The right way to formulate this is tricky: should we say last placed? Or should we say that when the food was last placed at its current location, the dominant was oriented to the food there?
Predicting a non-behaviour is particularly challenging.
There are some conditions which could trump this; for example, if the food is in a place where food has recently been found, or if the food is particularly smelly or noisy, or if a trail of grapes leads to the food ...
So is the hypothesis that chimps ignore all such further factors, or do we think that this claim should be elaborated?

Also, if all of (i)–(iii) do hold, then probably (iv).
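One way to see the difficulty is to encode the behavioural abstraction directly as a rule. The following is a hedged sketch (the encoding and names are my assumptions, not Povinelli and Vonk’s), keyed, as the objection above notes, to the first placement of the food.

```python
# Hypothetical encoding of Povinelli & Vonk's behavioural abstraction.
# Conditions (i)-(iii) are read off the FIRST placement of the food,
# which is what makes the 'misinformed' condition a counterexample.

def rule_predicts_pursuit(present, oriented, can_go_after):
    # (iv) the agent will probably go after the food iff (i)-(iii) all hold.
    return present and oriented and can_go_after

# Misinformed condition: the dominant was present and oriented when the
# food was first placed (it was only moved later, out of her view), so a
# rule keyed to the initial placement predicts pursuit of the original
# location -- yet subordinates behave as if the dominant will NOT find it.
print(rule_predicts_pursuit(True, True, True))   # True

# Uninformed condition: the dominant never saw the baiting.
print(rule_predicts_pursuit(False, True, True))  # False
```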

‘Don't go after food if a dominant who is present has oriented towards it’

\citep[p.~735]{Penn:2007ey}


By shifting from a prediction of the dominant’s behaviour to a restriction on how the subordinate behaves, a lot of the problems vanish. But this is clearly not an interesting proposal: to see why, we need to think about scientific methodology (predicting vs retrodicting the order in which people entered the room).

The ‘Logical Problem’ ...

Maybe it’s not necessary that we actually construct detailed accounts of how each task is solved using behavioural rules. Following Povinelli and colleagues, Lurz has argued that the sorts of experiments we have been considering are in principle incapable of eliminating a hypothesis about behaviour reading ...

‘since mental state attribution in [nonhuman] animals will (if extant) be based on observable features of other agents’ behaviors and environment ... every mindreading hypothesis has ... a complementary behavior-reading hypothesis.

Such a hypothesis proposes that the animal relies upon certain behavioral/environmental cues to predict [... the behaviour which], on the mindreading hypothesis, the animal is hypothesized to use as its observable grounds for attributing the mental state in question.’
\citep[p.~26]{lurz:2011_mindreading}; also \citep[p.~453]{lurz:2011_how}



 

\section{Nonhuman Mindreading: The Logical Problem}

The logical problem


1. Ascriptions of mental states ultimately depend on information about behaviours

- information triggering ascription

- predictions derived from ascription

2. In principle, anything you can predict by ascribing a mental state you could have predicted just by tracking behaviours.

Is there an experimental solution to the ‘Logical Problem’?

Lurz and Krachun (2011): yes ...

‘Behavior-reading animals can appeal only to ... reality-based, mind-independent facts, such as facts about agents’ past behavior or their current line of gaze to objects in the environment.

‘Mindreading animals, in contrast, can appeal to the subjective ways environmental objects perceptually appear to agents to predict their behavior.’

\citep[p.~469]{lurz:2011_how}
Experimental implementations: e.g. \citep{karg:2015_goggles}


E.g. seeing an insect or a piece of fruit as an insect or piece of fruit

Lurz & Krachun 2011, figure 1

There’s a competitor, two grapes and some size-distorting glass ... can Subject predict that Competitor will choose not the largest grape but the one which appears largest?
‘The objective here is to test whether chimpanzees are capable of anticipating (as evidenced by their looking behavior) the actions of a human competitor by understanding how an illusory stimulus (grape) looks to the human.’ \citep{lurz:2011_how}
‘the competitor is absent when the grapes are placed inside the containers (as in the pretest stage), he does not observe the change in apparent size of the grapes as they are placed inside the containers, as the chimpanzee does. After the grapes are placed, the competitor enters the room, sits at the table, looks at each grape through the sides of the containers’ \citep[p.~476]{lurz:2011_how}
‘dependent measure is anticipatory looking’. In this case, the subjects (chimps) know which grape is which and how they appear. Also a Krachun-like competitive version (in this case, the chimps have to infer which is larger from the failed reach of the human competitor)...
Also a Krachun-like reaching measure: ‘After the competitor has acted in one of these two ways toward the container with the large grape, the experimenter places opaque coverings completely over each container and moves the platform toward the chimpanzee. Once the platform is in place before the chimpanzee, the chimpanzee is allowed to choose one of the two containers (by pointing to it through a finger hole). Since the containers are covered, the chimpanzee cannot make its choice by seeing the grape or grapes inside the containers. Rather, to make its preferred choice (which we assume will be the remaining small grape in competitor-first success trials and the large grape in competitor-first failure trials) the chimpanzee must attend to the selection process of the competitor.’ \citep[p.~475]{lurz:2011_how}
The Lurz and Krachun proposal looks really good because it identifies a gap in the argument for the logical problem ...

The logical problem

1. Ascriptions of mental states ultimately depend on information about behaviours and objects

- information triggering ascription

- predictions derived from ascription

- ways objects appear

2. In principle, anything you can predict by ascribing a mental state you could have predicted just by tracking behaviours.

Progress ... but does this enable us to solve the logical problem?


But: objects have appearance properties, and provide affordances, which are independent of any particular mind.

Camouflage ain’t mindreading.
Return to the question; this is for discussion ...

Do goggles solve the ‘Logical Problem’?

‘“self-informed” belief induction variables [... are those] that, if the participant is capable of mentalizing, he or she knows only through extrapolation from her own experience to be indicative of what an agent can or cannot see and, therefore, does or does not believe’

\citep[p.~139]{heyes:2014_submentalizing}


Karg et al, 2015 figure 4 (Experiment 2)

Experiment 2: chimp attempts to steal from human, who will remove food if she sees the chimp approach. One cover is transparent, the other opaque. But actually when they are down, both look opaque from the chimp’s point of view.

Karg et al, 2015 figure 5 (Experiment 2)

(This is just a photo of the apparatus, with covers up (top) and down (bottom).)


‘Mean percentage of trials with the choice of the opaque box across the three conditions. In the transparent condition, one lid was opaque, the other transparent. In the screen condition, one lid was a screen, the other opaque. The control was like the transparent condition, but without the experimenter's presence at the time of choice. Error bars indicate 95% CI. The horizontal line indicates chance level (50%). *P < 0.05; **P < 0.01’ \citep{karg:2015_goggles}.

I think: what you ‘project’ from your own experience could be merely knowledge of affordances (in Bugnyar et al) and appearances (in Karg et al). There’s no special reason to think that what you project is mental states rather than a better understanding of objects and their properties.

conclusion

In conclusion, ...

Mindreading defined (twice: tracking vs representing)

Evidence for nonhuman mindreading

A question (to be revisited and revised)

The ‘Logical Problem’

Is the ‘Logical Problem’ a logical problem?

What can we conclude so far?

The standard question:

Do nonhuman animals represent mental states or only behaviours?

Obstacle:

The ‘logical problem’ (Lurz 2011)

What could make others’ behaviours intelligible to nonhuman animals?

-- the teleological stance

What could make others’ mental states intelligible to nonhuman animals?

-- minimal theory of mind