
\title {Social Cognition \\ Lecture 06}
 
\maketitle
 

Lecture 06:

Social Cognition

\def \ititle {Lecture 06}
\def \isubtitle {Social Cognition}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}
 
\iemail %
\end{center}

abilities to track mental states are widespread among animals

tracking vs representing mental states

What is observed: that nonhumans track others’ mental states.

Tracking mental states does not require representing them.

So:

How can we draw conclusions about what nonhumans represent?

This is a real problem, an interesting problem, and one that we can solve. It’s interesting because solving it will require us to think about what is involved in having a theory of mind in the first place, and that’s where the philosophy comes in.
What I’m suggesting is a way to achieve what Hare et al themselves suggest we should be doing ...

‘we should be focused not on the yes–no question (do chimpanzees have a theory of mind?), but rather on a whole panoply of more nuanced questions concerning precisely what chimpanzees do and do not know about the psychological functioning of others’

\citep[p.~149]{Hare:2001ph}

Hare et al (2001, 149)

What models of minds and actions, and of behaviours,

and what kinds of processes,

underpin mental state tracking in different animals?

Way forward:

1. Construct a theory of behaviour reading

2. Construct a theory of mindreading

This is what we are in the process of doing ...
Last week I introduced minimal theory of mind. This is a theory which specifies a model.
My question ....

What models of minds and actions underpin mental state tracking in chimpanzees, scrub jays and other animals?

The standard question

Does the chimpanzee have a theory of mind?

I.e. we know that the chimpanzee has theory of mind abilities; but does exercising these abilities involve representing mental states?
Minimal theory of mind characterises a mindreading system. Does this system explain chimpanzee theory of mind abilities?

Could a system characterised by minimal theory of mind explain chimpanzee theory of mind abilities?

- yes

But does it?

 

Signature Limits (Part I)

 
\section{Signature Limits (Part I)}
Automatic belief-tracking in adults and belief-tracking in infants are both subject to signature limits associated with minimal theory of mind (\citealp{wang:2015_limits,Low:2012_identity,low:2014_quack,mozuraitis:2015_privileged}; contrast \citealp{scott:2015_infants}).
\begin{center}
\includegraphics[width=0.25\textwidth]{fig/signature_limits_table.png}
\end{center}
\begin{center}
\includegraphics[width=0.3\textwidth]{fig/low_2012_fig.png}
\end{center}

signature limits generate predictions

Hypothesis:

Some belief-tracking in chimpanzees (say) relies on minimal models of the mental.

Hypothesis:

Human infants’ belief-tracking abilities rely on minimal models of the mental.

Prediction:

Some chimpanzee belief-tracking is subject to the signature limits of minimal models.

Prediction:

Infants’ belief-tracking is subject to the signature limits of minimal models.

There is some evidence that this prediction is correct. Jason Low and his colleagues set out to test it. They have now published three different papers showing such limits; and Hannes Rakoczy and others have more work in progress on this. Collapsing several experiments using different approaches, the basic pattern of their findings is this ...
Take non-automatic responses first; in this case, communicative responses. When you do a false-belief-identity task, you see the pattern you also find for false-belief-locations tasks. But things look different when you measure automatic responses ...
The automatic responses all show the signature limit of minimal models of the mental. This is evidence for the hypothesis that some automatic belief-tracking systems rely on minimal models of the mental.
I also hear that quite a few scientists have pilot data that speaks against this signature limit.
One particular task for future research will be to examine whether other automatic responses to scenarios involving false beliefs about identity, such as response times and movement trajectories, are also subject to this signature limit.
Just say that you can do this with other stimuli and paradigms, and we have done this with infants and would like to do it with adults.
These findings complicate the picture: is helping driven by automatic processes only? If not, why do we predict that the signature limit of minimal theory of mind is found in this case too?

signature limits generate predictions

Hypothesis:

Some belief-tracking in chimpanzees (say) relies on minimal models of the mental.

Hypothesis:

Human infants’ belief-tracking abilities rely on minimal models of the mental.

Prediction:

Some chimpanzee belief-tracking is subject to the signature limits of minimal models.

Prediction:

Infants’ belief-tracking is subject to the signature limits of minimal models.

Look at the three-year-olds. What might make us think that three-year-olds’ responses are a consequence of the same system that underpins chimpanzees’ belief-tracking? One compelling consideration would be that three-year-olds’ responses manifest the same signature limit as chimpanzees’.

reidentifying systems:

same signature limit -> same system

My question ....

What models of minds and actions underpin mental state tracking in chimpanzees, scrub jays and other animals?

The standard question

Does the chimpanzee have a theory of mind?

I.e. we know that the chimpanzee has theory of mind abilities; but does exercising these abilities involve representing mental states?
Minimal theory of mind characterises a mindreading system. Does this system explain chimpanzee theory of mind abilities?

Could a system characterised by minimal theory of mind explain chimpanzee theory of mind abilities?

- yes

But does it?

What models of minds and actions, and of behaviours,

and what kinds of processes,

underpin mental state tracking in different animals?

Way forward:

1. Construct a theory of behaviour reading

2. Construct a theory of mindreading

This is what we are in the process of doing ... ... actually a theory of mindreading requires quite a bit more than what we have so far. And we’ll come back to that ...
... but first, let’s consider how we might develop a theory of behaviour reading.
 

The Teleological Stance

 
\section{The Teleological Stance}

What makes behaviour intelligible to others?

Want to (a) predict the future, and (b) know the likely effects of actions on our environment.

Krupenye et al, 2016

What do you see? Joint displacements.

Krupenye et al, 2016

[push up]

Could behaviour reading be simply a matter of tracking joint displacements, bodily configurations, and their sensory effects?

In one sense, yes: these are the ultimate inputs for behaviour reading. In another sense, no. Why not? Because ...

We can identify actions from these stimuli

without ascribing any mental states

in such a way as to enable us to make useful predictions.

Predictions: e.g. push-up -> thirsty; quite strong

This depends on categorising actions

in ways that abstract from joint displacements,

bodily configurations and their sensory effects.

Dennett: intentional stance / design stance

There is something we need that is clearly missing from Dennett: neither stance gives us an option for making sense of what we are doing in reading these behaviours.

What makes behaviour intelligible to others?

Or as we can now put the question, What is the computational theory of behaviour reading?
We are interested in this question for three reasons. First, behaviour reading just is part of social cognition.
Second, having a handle on this question will be critical when it comes to thinking about nonhuman social cognition and the question of whether nonhumans, such as primates or corvids, can represent others’ mental states.
Third, the objections to Davidson’s account of radical interpretation seem to stem from the fact that it starts and ends with linguistic expressions of changes in attitudes towards whole sentences. Davidson’s account of radical interpretation doesn’t consider simple object-directed actions like reaching for a mug or catching a ball, and it doesn’t consider nonlinguistic communicative activities like pointing; nor does it consider expressions of emotion like some smiles and grimaces. By focusing on behaviour we are searching for a more primitive basis for radical interpretation.

Criterion of intelligibility ... goals

In order to ask this question, we need to know what you could know about someone’s behaviour that would make it intelligible to you. (For comparison, when we asked what makes minds intelligible to others, our tacit assumption was that knowing facts about someone’s beliefs, desires, experiences, pains and emotions would make her mind intelligible.)
In these lectures I will rely on a simple answer: if you know to which goal or goals some behaviour is directed, that behaviour is intelligible to you.
I suspect this answer is a simplification, but I’m not aware of a more sophisticated answer that makes much difference to what follows.

goal != intention

What is the relation between a purposive action and the outcome or outcomes to which it is directed?

[Diagram: candidate outcomes of an action (light, smoke, open, pour, tilt, soak, scare, freak out, fill) linked to an intention or motor representation (or ???), which specifies an outcome and coordinates activities]
As this illustrates, some actions are purposive in the sense that, among all their actual and possible consequences, there are outcomes to which they are directed.
In such cases we can say that the actions are clearly purposive.
Concerning any such actions, we can ask What is the relation between a purposive action and the outcome or outcomes to which it is directed?
The standard answer to this question involves intention.
An intention (1) specifies an outcome,
(2) coordinates the one or several activities which comprise the action;
and (3) coordinates these activities in a way that would normally facilitate the outcome’s occurrence.
What binds particular component actions together into larger purposive actions? It is the fact that these actions are all parts of plans involving a single intention. What singles out an actual or possible outcome as one to which the component actions are collectively directed? It is the fact that this outcome is represented by the intention.
So the intention is what binds component actions together into purposive actions and links the action taken as a whole to the outcomes to which they are directed.
But is intention the only thing that can link actions to outcomes? I will suggest that motor representations can likewise perform this role.
Some ants harvest plant hair and fungus in order to build traps to capture large insects; once captured, many worker ants sting the large insects, transport them and carve them up \citep{Dejean:2005vb}.
We can think of the ants’ behaviour as goal-directed without also thinking of it as involving intention.

goal != mental state

An account of pure goal ascription is an account of how you could in principle infer facts about the goals to which actions are directed from facts about joint displacements, bodily configurations and their effects (e.g. sounds). Such an account is a computational theory of pure goal ascription.

pure goal ascription

Infer The Goals from The Evidence

The Goals: facts about which goals particular actions are directed to ...

The Evidence: facts about events and states of affairs that could be known without knowing which goals any particular actions are directed to, or any facts about particular mental states ...

‘an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality’

\citep[p.~255]{Csibra:1998cx}

Csibra & Gergely (1998, 255)

A goal is an outcome to which an action is directed. A goal-state is a representation of the outcome in virtue of which the action is directed to that outcome. So an intention is a goal state. By contrast, a goal is not a mental state at all. In order for this to be about *pure* goal ascription, we need to ignore Csibra and Gergely’s odd choice of terminology.

1. action a is directed to some goal;

2. actions of a’s type are normally means of realising outcomes of G’s type;

3. no available alternative action is a significantly better* means of realising outcome G;

4. the occurrence of outcome G is desirable;

5. there is no other outcome, G′, the occurrence of which would be at least comparably desirable and where (2) and (3) both hold of G′ and a

Therefore:

6. G is a goal to which action a is directed.

We start with the assumption that we know the event is an action.
Why normally? Because of the ‘seen as’.
What does it mean to say that one means is better than another? There are different respects in which one action can be better than another as a means of realising some outcome; for example, one action can require less effort than another, or one action can be a more reliable way to bring the outcome about than another.
An action of type $a'$ is a \emph{better} means of realising outcome $G$ in a given situation than an action of type $a$ if, for instance, actions of type $a'$ normally involve less effort than actions of type $a$ in situations with the salient features of this situation and everything else is equal; or if, for example, actions of type $a'$ are normally more likely to realise outcome $G$ than actions of type $a$ in situations with the salient features of this situation and everything else is equal.
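To see the structure of the schema more explicitly, here is a minimal sketch in Python. It is an illustration only: the helper functions normally_realises, cost and desirability, and the MARGIN threshold, are hypothetical placeholders for the substantive knowledge of means, effort and desirability that the schema presupposes.
\begin{verbatim}
MARGIN = 0.1  # threshold for 'significantly better' (clause 3); an assumption

def ascribe_goals(action, outcomes, alternatives,
                  normally_realises, cost, desirability):
    """Return the outcomes G satisfying clauses (2)-(5) for `action`."""
    survivors = []
    for G in outcomes:
        if not normally_realises(action, G):        # clause (2)
            continue
        if any(cost(alt, G) + MARGIN < cost(action, G)
               for alt in alternatives):            # clause (3)
            continue
        if desirability(G) <= 0:                    # clause (4)
            continue
        survivors.append(G)
    if not survivors:
        return []
    # clause (5), simplified: keep only the maximally desirable survivors
    best = max(desirability(G) for G in survivors)
    return [G for G in survivors if desirability(G) >= best]
\end{verbatim}
Note that nothing in the sketch mentions beliefs, desires or intentions: its inputs are facts about action types, outcomes, costs and desirability, which is what makes the ascription pure.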
Any objections?
I have an objection. Consider a case in which I perform an action directed to the outcome of pouring some hot tea into a mug. Could this pattern of inference imply that this outcome is a goal of my action? Only if it also implies that moving my elbow is a goal of my action as well. And pouring some liquid. And moving air in a certain way. And ...
How can we avoid this objection?
Doesn’t this conflict with the aim of explaining *pure* behaviour reading? Not if desirable is understood as something objective. [explain]
Now we are almost done, I think.
We just need to add a clause ensuring that the goal in question is maximally desirable; this is an attempt to reduce overgeneration of goals.
OK, I think this is reasonably true to the quote. So we’ve understood the claim. But is it true?

pure goal ascription = no mental state ascriptions needed

Why else is this significant? Partly because pure goal ascription is an important part of social cognition, so it is good to taste some success in giving a computational theory after our recent less than successful attempts.
Also important is the additional structure we have.

Davidson & Dennett

from:

joint displacements, bodily configurations and their effects

to:

propositional attitudes (belief, desire, ...)

Csibra & Gergely

from:

joint displacements, bodily configurations and their effects

to:

goal-directed actions

to:

propositional attitudes (belief, desire, ...)

Why is adding an extra step significant? Not only because it provides the beginnings of a missing component that Dennett’s and Davidson’s theories each require. But, more importantly, because it suggests that we can broaden the evidential basis for radical interpretation.
If I am performing an action directed to stripping a nettle, then it is likely that I have beliefs, desires and intentions about the nettle and this outcome. So plausibly facts about which goals someone’s actions are directed to can constrain facts about which objects her beliefs, desires and intentions are about. This may help Davidson with the problem of indeterminacy.
Recall that, on Davidson’s account, the evidence for radical interpretation was changes in attitudes towards the truth of particular sentences. Maybe the evidence should include goal-directed actions, including those which occur in nonlinguistic and noncommunicative contexts.

What models of minds and actions, and of behaviours,

and what kinds of processes,

underpin mental state tracking in different animals?

Way forward:

1. Construct a theory of behaviour reading

2. Construct a theory of mindreading

This is what we’ve just started. Let’s see what kind of progress we have made ...

Hare et al (2001, figure 1)

The Teleological Stance will not enable us to predict success on this experiment, nor on many others. This is for a very simple reason: the teleological stance involves inferring an outcome based on observed bodily movements plus the principle that these are a best available means to an end. When you change the information available to the dominant, you change the subordinate’s action predictions. But you do not change what the Teleological Stance predicts. After all, the Teleological Stance is insensitive to facts about what information is available to the agent.

What models of minds and actions, and of behaviours,

and what kinds of processes,

underpin mental state tracking in different animals?

Way forward:

1. Construct a theory of behaviour reading

2. Construct a theory of mindreading

This is what we’ve just started. We’ve still got a long way to go, clearly. But I want to flip back to mindreading, and to focus for a moment on adults. (We will eventually return to nonhuman mindreading.)

conclusion

In conclusion, ...

1. There is a better* question to ask about nonhuman mindreading.

*better than the standard question

2. The conjecture that minimal theory of mind characterises a nonhuman’s model of minds and actions can be tested using signature limits.

3. The Teleological Stance provides a computational description of pure goal tracking.
(And so provides a first step towards a theory of behaviour reading.)

4. Mindreading in humans is sometimes but not always automatic.

 

Radical Interpretation Reprise

 
\section{Radical Interpretation Reprise}

The domain: what is a theory of social cognition a theory of?

Social cognition:

cognition of
others’ actions and mental states
in relation to social functioning.

Our goal for this course: construct a theory of social cognition. But what questions does such a theory aim to answer?

The question: Radical Interpretation*

How in principle could someone infer facts about actions and mental states from non-mental evidence?

What is the relation between an account of radical interpretation* and a theory of social cognition?

A theory of radical interpretation* is supposed to provide a computational description of social cognition.

I’ve told you what this is, but I’ll remind you in a moment.

radical interpretation*

Infer The Mind from The Evidence

The Mind: facts about actions, desires, beliefs, emotions, perspectives ...

The Evidence: facts about events and states of affairs that could be known without knowing what any particular individual believes, desires, intends, ...

Theories of radical interpretation*:

The Intentional Stance (Dennett)

Davidson’s Theory

The Teleological Stance & Your-Goal-Is-My-Goal

Minimal Theory of Mind

also an implicit theory associated with perception of emotion

‘the intentional stance ...

‘first you decide to treat the object whose behavior is to be predicted as a rational agent;

‘then you figure out what beliefs that agent ought to have, given its place in the world and its purpose.

‘Then you figure out what desires it ought to have, on the same considerations,

‘and finally you predict that this rational agent will act to further its goals in the light of its beliefs’

Dennett (1987, 17)

\citet[p.~22ff]{Marr:1982kx} distinguishes:
\begin{itemize}
\item computational description---What is the thing for and how does it achieve this?
\item representations and algorithms---How are the inputs and outputs represented, and how is the transformation accomplished?
\item hardware implementation---How are the representations and algorithms physically realised?
\end{itemize}
One possibility is to appeal to David Marr’s famous three-fold distinction between levels of description of a system: the computational theory, the representations and algorithms, and the hardware implementation.
This is easy to understand in simple cases. To illustrate, consider a GPS locator. It receives information from four satellites and tells you where on Earth the device is.
There are three ways in which we can characterise this device.

1. computational description

First, we can explain how in theory it is possible to infer the device’s location from it receives from satellites. This involves a bit of maths: given time signals from four different satellites, you can work out what time it is and how far you are away from each of the satellites. Then, if you know where the satellites are and what shape the Earth is, you can work out where on Earth you are.
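In outline (this is the standard textbook formulation, not something given in the lecture): writing $(x_i, y_i, z_i)$ for the known position of satellite $i$, $\rho_i$ for the measured pseudorange (the apparent signal travel time multiplied by the speed of light $c$), $(x, y, z)$ for the unknown receiver position, and $\Delta t$ for the unknown receiver clock error, the receiver must solve
\[
\rho_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2} + c\,\Delta t, \qquad i = 1, \dots, 4.
\]
Four satellites give four equations in the four unknowns $x$, $y$, $z$ and $\Delta t$, which is why four satellites suffice to fix both where you are and what time it is.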

-- What is the thing for and how does it achieve this?

The computational description tells us what the GPS locator does and what it is for. It also establishes the theoretical possibility of a GPS locator.
But merely having the computational description does not enable you to build a GPS locator, nor to understand how a particular GPS locator works. For that you also need to identify representations and algorithms ...

2. representations and algorithms

At the level of representations and algorithms we specify how the GPS receiver represents the information it receives from the satellites (for example, it might in principle be a number, a vector or a time). We also specify the algorithm the device uses to compute the time and its location. The algorithm will be different from the computational theory: it is a procedure for discovering time and location. The algorithm may involve all kinds of shortcuts and approximations. And, unlike the computational theory, constraints on time, memory and other limited resources will be evident.
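For concreteness, here is a toy algorithm at this level: a Gauss-Newton iteration that recovers position and clock bias from four or more pseudoranges. This is an assumption-laden sketch for illustration, not how production receivers work (they typically add filtering and many corrections).
\begin{verbatim}
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def locate(sat_positions, pseudoranges, iterations=10):
    """Estimate receiver position (m) and clock bias (s) from n >= 4
    satellites. sat_positions: (n, 3) array; pseudoranges: (n,) array."""
    x = np.zeros(4)  # unknowns [x, y, z, c*dt]; crude initial guess
    for _ in range(iterations):
        diffs = x[:3] - sat_positions            # (n, 3) satellite offsets
        dists = np.linalg.norm(diffs, axis=1)    # geometric ranges
        residuals = pseudoranges - (dists + x[3])
        # Jacobian of predicted pseudoranges w.r.t. [x, y, z, c*dt]
        J = np.hstack([diffs / dists[:, None], np.ones((len(dists), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x[:3], x[3] / C
\end{verbatim}
The sketch makes the text’s point vivid: the algorithm involves choices (an initial guess, a fixed number of iterations, a least-squares shortcut) about which the computational description is entirely silent.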
So an account of the representations and algorithms tells us ...

-- How are the inputs and outputs represented, and how is the transformation accomplished?

3. hardware implementation

The final thing we need to understand the GPS locator is a description of the hardware in which the algorithm is implemented. It’s only here that we discover whether the device is a narrowly mechanical device, using cogs, say, or an electronic device, or some new kind of biological entity.

-- How are the representations and algorithms physically realised?

The hardware implementation tells us how the representations and algorithms are physically realised.

Marr (1982, 22ff)

How is this relevant to my question? My question was, What is the relation between an account of radical interpretation* and a theory of social cognition?
I suggest that an account of radical interpretation* is supposed to provide a computational description of social cognition; it tells us what social cognition is for and how, in the most abstract sense, it is possible.

The Intentional Stance
can be (mis)interpreted as an attempt to provide
a computational description of social cognition.

‘the intentional stance ...

‘first you decide to treat the object whose behavior is to be predicted as a rational agent;

‘then you figure out what beliefs that agent ought to have, given its place in the world and its purpose.

‘Then you figure out what desires it ought to have, on the same considerations,

‘and finally you predict that this rational agent will act to further its goals in the light of its beliefs’

Dennett (1987, 17)

1. Humans can distinguish each other’s desires in ways unrelated to their purpose and place in the world.

BUT

2. The Intentional Stance provides no way to do this.

3. The Intentional Stance does not provide a correct computational description of human social cognition.

Objection 1

The Intentional Stance provides
no way to identify
false beliefs, ‘incorrect’ desires or failures of rationality.

Objection 2

The Intentional Stance provides
no adequate way to
distinguish me from you.

1. Humans can sometimes identify false beliefs, ‘incorrect’ desires or failures of rationality.

BUT

2. The Intentional Stance provides no way to do this.

3. The Intentional Stance does not provide a correct computational description of human social cognition.

Objection 1

The Intentional Stance provides
no way to identify
false beliefs, ‘incorrect’ desires or failures of rationality.

Objection 2

The Intentional Stance provides
no adequate way to
distinguish me from you.

Theories of radical interpretation*:

The Intentional Stance (Dennett)

Davidson’s Theory

The Teleological Stance & Your-Goal-Is-My-Goal

Minimal Theory of Mind

also an implicit theory associated with perception of emotion

‘an action can be explained by a goal state if, and only if, it is seen as the most justifiable action towards that goal state that is available within the constraints of reality’

\citep[p.~255]{Csibra:1998cx}

Csibra & Gergely (1998, 255)

1. action a is directed to some goal;

2. actions of a’s type are normally means of realising outcomes of G’s type;

3. no available alternative action is a significantly better* means of realising outcome G;

4. the occurrence of outcome G is desirable;

5. there is no other outcome, G′, the occurrence of which would be at least comparably desirable and where (2) and (3) both hold of G′ and a

Therefore:

6. G is a goal to which action a is directed.

Objection 1

The Intentional Stance provides
no way to identify
false beliefs, ‘incorrect’ desires or failures of rationality.

Objection 2

The Intentional Stance provides
no adequate way to
distinguish me from you.

The Teleological Stance does not face objection 2 (no way to distinguish you from me), at least not at first pass.
What about objection 1? This is tricky. By construction the Teleological Stance provides no way to identify false beliefs, ‘incorrect’ desires or failures of rationality.
But is this an objection ... ?

1. Humans can sometimes identify false beliefs, ‘incorrect’ desires or failures of rationality.

BUT

2. The Teleological Stance provides no way to do this.

3. The Teleological Stance does not provide a correct computational description of human social cognition.

The Teleological Stance isn’t trying to be a comprehensive theory of human social cognition. The idea, rather, is that human social cognition has many parts and one of them is a process of pure goal ascription and the Teleological Stance provides a computational description for that.

Dennett, Davidson:

We need a single theory covering all social cognition.

A better approach:

Social cognition involves a cluster of disparate abilities.

These include pure goal ascription.

There is no such thing as a theory of social cognition.

Instead we need a theory for each disparate ability.

Theories of radical interpretation*:

The Intentional Stance (Dennett)

Davidson’s Theory

The Teleological Stance & Your-Goal-Is-My-Goal

Minimal Theory of Mind

also an implicit theory associated with perception of emotion

What do we perceptually experience of others’ mental states?

Evidence:

Humans have categorical perception of expressions of emotion.

Question:

Are expressions of emotion facial configurations?

Observation:

Facial configurations are not diagnostic of emotions (Aviezer et al)

Theory:

The objects of categorical perception are actions directed to the goals of expressing particular emotions (Butterfill, 2015).

Categorical perception of expressions of emotions

 

level: specification
computational description: The Teleological Stance
representations and algorithms: ... are broadly perceptual

Theories of radical interpretation*:

The Intentional Stance (Dennett)

Davidson’s Theory

The Teleological Stance & Your-Goal-Is-My-Goal

Minimal Theory of Mind

also an implicit theory associated with perception of emotion

Btw, minimal theory of mind is a description of a model of mind at the level of a computational theory; it is completely agnostic about representations and algorithms.
Minimal theory of mind overcomes both objections ... at a cost (it’s limited).

Objection 1

The Intentional Stance provides
no way to identify
false beliefs, ‘incorrect’ desires or failures of rationality.

Objection 2

The Intentional Stance provides
no adequate way to
distinguish me from you.

1. Humans can sometimes identify false beliefs involving mistakes about numerical identity.

BUT

2. Minimal Theory of Mind provides no way to do this.

3. Minimal Theory of Mind does not provide a correct computational description of human social cognition.

the first dogma of mindreading

The dogma of mindreading: any individual has at most one model of minds and actions at any one point in time.
The first dogma of mindreading: there is just one mindreading process ... so we need just one true theory of radical interpretation
dual-process recap
So what does a Dual Process Theory of Mindreading claim? The core claim is just this:

Dual Process Theory of Mindreading (core part)

Two (or more) mindreading processes are distinct:
the conditions which influence whether they occur,
and which outputs they generate,
do not completely overlap.

\textbf{You might say, this is a schematic claim, one totally lacking substance.} You’d be right: and that’s exactly the point.
A key feature of this Dual Process Theory of Mindreading is its \textbf{theoretical modesty}: it involves no a priori commitments concerning the particular characteristics of the processes.

the first dogma of mindreading

The dogma of mindreading: any individual has at most one model of minds and actions at any one point in time.
The first dogma of mindreading: there is just one mindreading process ... so we need just one true theory of radical interpretation

Theories of radical interpretation*:

The Intentional Stance (Dennett)

Davidson’s Theory

The Teleological Stance & Your-Goal-Is-My-Goal

Minimal Theory of Mind

also an implicit theory associated with perception of emotion

The domain: what is a theory of social cognition a theory of?

Social cognition:

cognition of
others’ actions and mental states
in relation to social functioning.

Our goal for this course: construct a theory of social cognition. But what questions does such a theory aim to answer?

The question: Radical Interpretation*

How in principle could someone infer facts about actions and mental states from non-mental evidence?

What is the relation between an account of radical interpretation* and a theory of social cognition?

A theory of radical interpretation* is supposed to provide a computational description of social cognition.