
\title {Social Cognition \\ Lecture 03}

\maketitle

# Social Cognition

\def \ititle {Lecture 03}
\def \isubtitle {Social Cognition}
\begin{center}
{\Large
\textbf{\ititle}: \isubtitle
}

\iemail %
\end{center}

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

\citet[p.\ 573]{mcneill:2012_embodiment}: ‘We sometimes {see} aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’
What is the evidence for this claim? Is research on categorical perception of expressions of emotion relevant?
What disciplines talk of seeing here? This is a hard question to answer in a fully satisfying way, but I suggested we take a short cut ...

How do you know about it? (‘it’ = this pen; ‘it’ = this joy)

perceive indicator, infer its presence

- vs -

perceive it

challenge

Evidence? Categorical Perception!

But is evidence really needed?
I think there’s an obvious reason why we shouldn’t expect philosophers or theoreticians to have substantive ways of answering questions about what can be perceived, nor more generally about how minds work. Their theories are based almost entirely on informal observation, guesswork (‘intuition’) and elegance.
They’re not answerable to evidence, but only to ordinary thinking about minds and actions.

informal observation, guesswork (‘intuition’) and elegance

An obvious problem here is that our ordinary, commonsense, folk thinking about minds serves a number of purposes. It has normative and regulatory functions. For commonsense thinking about minds and actions, accuracy is far from being the only goal.

Smith: Red Tomato

A dramatic illustration of this is Smith’s use of ‘Red Tomato’, which he contrasts with ‘Happy Sylvia’. Most philosophers seem to agree with Smith that perceiving redness is a basic case of perceptually experiencing how things look. For example ...

‘If someone with normal color vision looks at a tomato in good light, the tomato will appear to have a distinctive property—a property that strawberries and cherries also appear to have, and which we call ‘red’ in English’

(Byrne & Hilbert 2003, p. 4)

It is a ‘subject-determining platitude’ that ‘“red” denotes the property of an object putatively presented in visual experience when that object looks red’

(Jackson 1996, pp. 199–200)

I won’t go into it here, but there is excellent evidence that we do not actually perceptually experience categorical colour properties like redness.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

challenge

Evidence? Categorical Perception!

Earlier I asked, what evidence could bear on the perceptual hypothesis?
As we saw, part of the evidence relevant to the perceptual hypothesis is evidence that humans discriminate stimuli according to the expressions of emotion they involve. That is, humans have categorical perception of expressions of emotion.
What does it mean to say that there is categorical perception of expressions of emotion? To a first approximation (we will have to be much clearer later), it means that there are broadly perceptual processes which categorise stimuli according to which emotions they express.
Assume that we as theorists have a system which allows us to categorise static pictures of faces and other stimuli according to which emotion we think they are expressing: some faces are happy, others fearful, and so on. From five months of age, or possibly much earlier \citep{field:1982_discrimination}, through to adulthood, humans are better at distinguishing faces when they differ with respect to these categories than when they do not \citep{Etcoff:1992zd,Gelder:1997bf,Bornstein:2003vq,Kotsoni:2001ph,cheal:2011_categorical,hoonhorst:2011_categoricala}.

Cheal and Rutherford, 2011 figure 1

You can do this with happy-sad and other emotions too.

Cheal and Rutherford, 2011 figure 4

Here are some results from adults. The top graph shows labelling. The lower graph shows the probability that adults, when presented with two faces, would say ‘feels different’. (The task was to sort people into two houses, so that people who felt the same all went into one house.)

But could it be merely an effect of unvoiced labelling?

‘facial expressions of emotions are perceived categorically regardless of whether the viewer has lexical categories that distinguish between the perceptual categories.’\citep[p.~1482]{sauter:2011_categorical}

Sauter et al, 2011 p. 1482

Sauter et al, 2011, figure 1

Sauter et al, 2011, figure 4

Task: 2AFC. Subjects: Germans and Yucatec Maya, who have no word for disgust.

But could it be merely an effect of extraneous visual differences?

This objection has been addressed by Sato and Yoshikawa

Sato and Yoshikawa, 2009 figure 1A

Sato and Yoshikawa used a contrast between normal expressions of emotion and ‘anti’ expressions. But what exactly are these ‘anti’ expressions?

‘we reversed the direction of the facial features of the emotional expressions but retained the general configuration. For example, if the angry expression had V-shaped eyebrows and the neutral expression had horizontal eyebrows, our computer manipulation generated faces with eyebrows shaped in an upside-down V shape.’

\citep[p.~371]{sato:2009_detection}

Sato and Yoshikawa, 2009 p. 371

Sato and Yoshikawa reasoned that if the effects taken to be evidence were artefacts of extraneous visual differences, then much the same effects would occur for the ‘anti’ expressions of emotion. To test this, they used a visual search paradigm ...

Sato and Yoshikawa, 2009 figure 1B

They used a visual detection paradigm and measured how quickly participants could detect the odd face.
I don’t find this super convincing. How can we relate detection speed to categorical perception? We might if we had pop-out effects. So what I would be looking for here is an interaction between expressions and anti-expressions, not merely that one is faster to detect. But I think the approach is good; it would be useful to use the same ‘anti’ stimuli in other experiments, for example in testing for preattentive effects \citep{vuilleumier:2001_emotional}.

Sato and Yoshikawa, 2009, figure 2A

‘the normal angry and happy expressions were detected faster than were the respective anti-expressions.’

‘detection of an emotional expression was superior even when the effects of stimulus visual characteristics were controlled.’

\citep[p.~378]{sato:2009_detection}
This is not to say that we have conclusive evidence for categorical perception of facial expressions of emotion.% \footnote{% One issue which has been dealt with in the case of categorical perception of colour is prototype effects (responses are influenced by distance from a prototype). As far as I know, this is yet to be considered explicitly in the case of categorical perception of expressions of emotion (but I could be missing something). }

Or ...

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

challenge

Evidence? Categorical Perception!

So much for categorical perception (some people doubted the evidence). I hope the evidence at least convinces you that it exists. But what does the evidence from studies of categorical perception tell us about the truth or falsity of the perceptual hypothesis?
Just here it is natural to take a sceptical line and claim ...
Counter argument:

1. The objects of categorical perception, ‘expressions of emotion’, are facial configurations.

2. Facial configurations are not emotions.

so ...

3. The things we perceive in virtue of categorical perception are not emotions.

Some have tried to resist the conclusion by arguing that expressions are parts of emotions. This raises some complex issues that I won’t delve into here, since Will McNeill has already given an excellent argument on this topic in a paper mentioned on your handout.
Can the argument be blocked by claiming that expressions are parts of emotions? See \citet{mcneill:2012_embodiment}.
Instead I want to consider the first step of the argument. Is this true? Actually we haven’t considered this carefully at all. For a start, we are talking about expressions of emotion but we haven’t yet paused to consider what they are. More pressingly, we have to ask: what are the objects of categorical perception? That is, what are the perceptual processes supposed to categorise?

## Aviezer’s Puzzle about Categorical Perception

What are the perceptual processes supposed to categorise?

standard view: fixed expressions linked to emotional categories

Aviezer et al (2012, figure 2A3)

Are the things categorised by perceptual processes facial configurations? This view faces a problem. There is evidence that the same facial configuration can express intense joy or intense anguish depending on the posture of the body it is attached to, and, relatedly, that humans cannot accurately determine emotions from spontaneously occurring (as opposed to acted-out) facial configurations \citep{motley:1988_facial,aviezer:2008_angry,aviezer:2012_body}. These and other findings, while not decisive, cast doubt on the view that categories of emotion are associated with categories of facial configurations \citep{hassin:2013_inherently}.

Aviezer et al’s puzzle

Given that facial configurations are not diagnostic of emotion, why are they categorised by perceptual processes?

This evidence makes the findings we have reviewed on categorical perception puzzling. Given that the facial configurations are not diagnostic of emotion, why are they categorised by perceptual processes?% \footnote{ Compare \citet[p.\ 1228]{aviezer:2012_body}: ‘although the faces are inherently ambiguous, viewers experience illusory affect and erroneously report perceiving diagnostic affective valence in the face.’ } This question appears unanswerable as long as we retain the assumption---for which, after all, no argument was given---that the things categorical perception is supposed to categorise are facial configurations. But do they?

‘[A]lthough the faces are inherently ambiguous, viewers experience illusory affect and erroneously report perceiving diagnostic affective valence in the face’

Aviezer et al (2012, 1228)

... maybe they aren’t.

But if we reject this assumption, what is the alternative?
So now you see why I have some doubts about the first premise of this argument. That is not yet to say that it is wrong. It’s just that we should be open about what the objects of categorical perception are.

1. The objects of categorical perception, ‘expressions of emotion’, are facial configurations.

2. Facial configurations are not emotions.

so ...

3. The things we perceive in virtue of categorical perception are not emotions.

## Categorical Perception of Speech

\section{Categorical Perception of Speech}

Consider the following representation of twelve sounds. Each sound differs from its neighbours by the same amount as any other sound, at least when difference is measured by frequency. Most people would not be able to discriminate two adjacent sounds ...
except for two special cases (one around -3 to -1 and one around +1 to +3) where the discrimination is easier; here people hear the sound change from da to ga or from ga to ba.
So although these sounds are, from an acoustic point of view, no more different than ...
... these sounds, the second pair of sounds are easy to discriminate.
This pattern of heightened discrimination is the defining characteristic of categorical perception as it is usually operationally defined. Small changes to stimuli can make large differences to perception, large changes to stimuli can make small differences to perception, and the stimuli can be ordered and sorted into categories such that discriminating nearby pairs of stimuli on either side of a category boundary is dramatically easier than discriminating pairs from within a category.

What determines where the category boundaries fall?

It is perhaps tempting to think that categorical perception of speech is just a matter of categorising sounds. But this is not straightforward.
For one thing, the existence of these category boundaries is specific to speech perception as opposed to auditory perception generally. When special tricks are used to make subjects perceive a stimulus first as speech and then as non-speech, the locations of boundaries differ between the two types of perception \citep[p.~20--1]{Liberman:1985bn}.
phonetic context
coarticulation
The location of the category boundaries changes depending on contextual factors such as the speaker’s dialect or the rate at which the speaker talks; both factors dramatically affect which sounds are produced. This means that between two different contexts, different stimuli may result in the same perceptions and the same stimulus may result in different perceptions.
So which features of the stimuli best predict category membership?
Liberman and Mattingly argue that, in the case of speech, category boundaries typically correspond to differences between intended phonic gestures. The existence of category boundaries and their correspondence to intended phonic gestures needs explaining.
Following Liberman and Mattingly, we can explain this by postulating a module for speech perception. Anything which is potentially speech (including both auditory and visual stimuli) is passed to the module which attempts to interpret it as speech. It does this by attempting to replicate stimuli by issuing the same gestures that are also used for producing speech (this is the ‘motor’ in ‘motor theory’). Where a replication is possible, the stimuli are perceived as speech, further auditory or visual processing is partially suppressed, and the module identifies the stimuli as composed of the gestures that were used in the successful replication. Accordingly we can say that the stimuli are perceived as a sequence of phonic gestures.
One line of response to this argument involves attempting to show that the category boundaries correspond to some acoustic property of speech at least as well as to intended phonic gestures. If such a correspondence were found, it might be possible to give an explanation better than Liberman and Mattingly’s for the existence of categories corresponding to intended phonic gestures. Reasons for doubting any such explanation exists include the constancy effects already mentioned and also coarticulation, the fact that phonic gestures overlap (this is what makes talking fast possible).
In outline Liberman and Mattingly’s argument for the claim that the objects of speech perception are intended phonic gestures has this form:

(1) There are category boundaries … .

(2) … which correspond to phonic gestures.

What is a phonic gesture?
In speaking we produce an overlapping sequence of articulatory gestures, which are motor actions involving coordinated movements of the lips, tongue, velum and larynx. These gestures are the units in terms of which we plan utterances (Browman and Goldstein 1992; Goldstein, Fowler, et al. 2003).

(3) Facts (1) and (2) stand in need of explanation.

(4) The best explanation of (1) and (2) involves the claim that the objects of speech perception are phonic gestures.

This illustrates how we might establish claims about the objects of perception.
But why accept that the best explanation of (1) and (2) involves this claim? Part of the reason concerns relations between speech production and speech perception ...

‘word listening produces a phoneme specific activation of speech motor centres’ \citep{Fadiga:2002kl}

‘Phonemes that require in production a strong activation of tongue muscles, automatically produce, when heard, an activation of the listener's motor centres controlling tongue muscles.’ \citep{Fadiga:2002kl}


Good, but this stops short of showing that the motor activations actually facilitate speech recognition ...

D'Ausilio et al (2009, figure 1)

‘Double TMS pulses were applied just prior to stimuli presentation to selectively prime the cortical activity specifically in the lip (LipM1) or tongue (TongueM1) area’ \citep[p.~381]{dausilio:2009_motor}
‘We hypothesized that focal stimulation would facilitate the perception of the concordant phonemes ([d] and [t] with TMS to TongueM1), but that there would be inhibition of perception of the discordant items ([b] and [p] in this case). Behavioral effects were measured via reaction times (RTs) and error rates.’ \citep[p.~382]{dausilio:2009_motor}

D'Ausilio et al (2009, figure 1)

‘Effect of TMS on RTs show a double dissociation between stimulation site (TongueM1 and LipM1) and discrimination performance between class of stimuli (dental and labial). The y axis represents the amount of RT change induced by the TMS stimulation. Bars depict SEM. Asterisks indicate significance (p < 0.05) at the post-hoc (Newman-Keuls) comparison.’ \citep{dausilio:2009_motor}

(1) There are category boundaries … .

(2) … which correspond to phonic gestures.

(3) Facts (1) and (2) stand in need of explanation.

(4) The best explanation of (1) and (2) involves the claim that the objects of speech perception are phonic gestures.

## The Objects of Categorical Perception

\section{The Objects of Categorical Perception}


phonic gesture

expression of emotion

Compare expressing an emotion by, say, smiling or frowning, with articulating a phoneme.
Variations due to coarticulation, rate of speech, dialect and many other factors mean that isolated acoustic signals are not generally diagnostic of phonemes: in different contexts, the same acoustic signal might be a consequence of the articulation of any of several phonemes.

- isolated acoustic signals not diagnostic

So here there is a parallel between speech and emotion. Much as isolated facial expressions are not diagnostic of emotions (as we saw a moment ago), isolated acoustic signals are plausibly not diagnostic of phonetic articulations.
This is why Aviezer et al’s puzzle arises.

- isolated facial expressions not diagnostic

Both have a communicative function (on expressions of emotion, see for example \citealp{blair:2003_facial,sato:2007_spontaneous}) and both are categorically perceived, but the phonetic case has been more extensively investigated.

- communicative function

- communicative function ???

cultural variation

How emotions are expressed facially varies between cultures \citep{jack:2012_facial}.

is partially explained by historical heterogeneity

‘cultural differences in expressive behavior are determined by historical heterogeneity, or the extent to which a country’s present-day population descended from migration from numerous vs few source countries over a period of 500 y[ears]’ \citep{rychlowska:2015_heterogeneity}

and perhaps driven by communicative needs

‘people from historically heterogeneous cultures [as measured by the number of countries in which ancestors of members of the present population lived in the last 500 years] produce facial expressions of emotion that are recognized more accurately than expressions produced by people from homogeneous cultures.’ \citep{wood:2016_heterogeneity}
What do I want to conclude from this? That it is plausible to think of expressions of emotion as having a communicative function in much the same sense that phonic gestures do.

phonic gesture

expression of emotion

Compare expressing an emotion by, say, smiling or frowning, with articulating a phoneme.

- isolated acoustic signals not diagnostic

- isolated facial expressions not diagnostic

Both have a communicative function (on expressions of emotion, see for example \citealp{blair:2003_facial,sato:2007_spontaneous}) and both are categorically perceived, but the phonetic case has been more extensively investigated.

- communicative function

- communicative function ???

Why then are isolated acoustic signals---which rarely even occur outside the lab---categorised by perceptual or motor processes at all? To answer this question we first need a rough idea of what it is to articulate a phoneme. Articulating a phoneme involves making coordinated movements of the lips, tongue, velum and larynx. How these should move depends in complex ways on numerous factors including phonetic context \citep{Browman:1992da,Goldstein:2003bn}. In preparing for such movements, it is plausible that the articulation of a particular phoneme is an outcome represented motorically, where this motor representation coordinates the movements and normally does so in such a way as to increase the probability that the outcome represented will occur.

- complex coordinated, goal-directed movements

This implies that the articulation of a particular phoneme, although probably not an intentional action, is a goal-directed action whose goal is the articulation of that phoneme.
(On the link between motor representation and goal-directed action, see \citealp{butterfill:2012_intention}.)
Now some hold that the things categorised in categorical perception of speech are not sounds or movements (say) but rather these outcomes---the very outcomes in terms of which speech actions are represented motorically (\citealp{Liberman:2000gr}; see also \citealp{Browman:1992da}).% \footnote{ Note that this claim does not entail commitment to other components of the motor theory of speech perception. } % On this view, categorical perception of speech is a process which takes as input the bodily and acoustic effects of speech actions and attempts to identify which outcomes the actions are directed to bringing about, that is, which phonemes the speaker is attempting to articulate. That isolated acoustic signals can engage this process and thereby trigger categorical perception is merely a side-effect, albeit one with useful methodological consequences.

- complex coordinated, goal-directed movements

We can think of expressions of emotion as goal-directed in the same sense that articulations of phonemes are. They are actions whose goal is the expression of a particular emotional episode.
This may initially strike you as implausible given that such expressions of emotion can be spontaneous, unintentional and involuntary. But note that expressing an emotion by, say, smiling or frowning, whether intentionally or not, involves making coordinated movements of multiple muscles where exactly what should move and how can depend in complex ways on contextual factors. That such an expression of emotion is a goal-directed action follows just from its involving motor expertise and being coordinated around an outcome (the goal) in virtue of that outcome being represented motorically.% \footnote{ To increase the plausibility of the conjecture under consideration, we should allow that some categorically perceived expressions of emotion are not goal-directed actions but events grounded by two or more goal-directed actions. For ease of exposition I shall ignore this complication. }
Recognising that some expressions of emotion are goal-directed actions in this sense makes it possible to explain what distinguishes a genuine expression of emotion of this sort, a smile say, from something unexpressive like the exhalation of wind which might in principle resemble the smile kinematically. Like any goal-directed actions, genuine expressions of emotion of this sort are distinguished from their kinematically similar doppelgänger in being directed to outcomes by virtue of the coordinating role of motor representations and processes.

another comparison

categorical perception of speech

categorical perception of expressions of emotion

the objects of categorical perception are not acoustic signals,

the objects of categorical perception are not facial configurations

they are actions directed to the goal of performing a particular phonic gesture

they are actions directed to the goal of expressing a particular emotion

Aviezer et al’s puzzle

Given that facial configurations are not diagnostic of emotion, why are they categorised by perceptual processes?

Facial configurations are not what perceptual processes are supposed to categorise; instead they are supposed to categorise actions directed to the goal of expressing a particular emotion.

Recall the earlier sceptical line ...

1. The objects of categorical perception, ‘expressions of emotion’, are facial configurations.

2. Facial configurations are not emotions.

so ...

3. The things we perceive in virtue of categorical perception are not emotions.

I’ve rejected the letter of the first premise. But does this matter for the claim that we perceptually experience emotions?
If we are categorically perceiving an action directed to the expression of a particular emotion, then there is a (perhaps quite weak) sense in which categorical perception puts us in touch with the emotion.
So I first asked, what evidence could bear on the perceptual hypothesis? I answered that part of the evidence is from studies of categorical perception of facial expressions of emotion.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

\citet[p.\ 573]{mcneill:2012_embodiment}: ‘We sometimes {see} aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

challenge 2

Evidence? Categorical Perception!

Which model of the emotions?

Part of the evidence relevant to the perceptual hypothesis is evidence that humans can categorically perceive expressions of emotion.
Then my next question was: what does the evidence from studies of categorical perception tell us about the truth or falsity of the perceptual hypothesis?
Now I think we can answer this question. Or rather, we know that the answer depends on what the objects of categorical perception are.
If the objects of categorical perception are merely facial expressions, then I think the evidence does not support the claim that we see aspects of each others’ mental lives.
But if, as I’ve conjectured, the objects of categorical perception are actions directed to the expression of particular emotions, then I think the evidence does provide modest support for the claim that we perceive aspects of each others’ mental lives.
This claim will need qualifying in various ways.
First, the evidence suggests that categorical perception of expressions of emotion, like categorical perception of speech, is not tied to a single modality in any very robust way.
Second, I suggest elsewhere that categorical perception isn’t exactly like perception as philosophers standardly understand it. In particular, its phenomenology is to be understood in terms of \textbf{PHENOMENAL EXPECTATIONS} rather than by analogy with perception of shapes or textures.
But I want to continue explaining the view we have arrived at ...

## How could the objects of categorical perception be actions?

\section{How could the objects of categorical perception be actions?}

The wild conjecture under consideration is that the things categorical perception is supposed to categorise, the ‘expressions of emotion’, are actions of a certain type, and these are categorised by which outcomes they are directed to.

What are the perceptual processes supposed to categorise?

Actions whose goals are to express certain emotions.

- The perceptual processes categorise events (not e.g. facial configurations).

Let me explain the increasingly bold commitments involved in accepting this conjecture.
First, the things categorised in categorical perception of expressions of emotion are events rather than configurations or anything static. (Note that this is consistent with the fact that static stimuli can trigger categorical perception; after all, static stimuli can also trigger motor representations of things like grasping \citep{borghi:2007_are}.)

- These events are not mere physiological reactions.

Second, these events are not mere physiological reactions (as we might intuitively take blushing to be) but things like frowning and smiling, whose performance involves motor expertise.\footnote{ To emphasise, one consequence of this is that not everything which might intuitively be labelled as an expression of emotion is relevant to understanding what is categorised by perceptual processes. For example, in the right context a blush may signal emotion without requiring motor expertise. }

- These events are perceptually categorised by the outcomes to which they are directed.

Third, these events are perceptually categorised by the outcomes to which they are directed. That is, outcomes represented motorically in performing these actions are things by which these events are categorised in categorical perception.
Should we accept the wild conjecture? It goes well beyond the available evidence and currently lacks any reputable endorsement. In fact, we lack direct evidence for even the first of the increasingly bold commitments just mentioned (namely, the claim that the things categorically perceived are events). A further problem is that we know relatively little about the actions which, according to the wild conjecture, are the things categorical perception is supposed to categorise (\citealp[p.\ 47]{scherer:2013_understanding}; see also \citealp{scherer:2007_are} and \citealp{fernandez-dols:2013_advances}). However, the wild conjecture is less wild than the only published responses to the problems that motivate it.% % (which, admittedly, are wilder than an acre of snakes).% \footnote{ See \citet[p.\ 15]{motley:1988_facial}: ‘particular emotions simply cannot be identified from psychophysiological responses’; and \citet[p.\ 289]{barrett:2011_context}: ‘scientists have created an artifact’. } And, as I shall now explain, several considerations make the wild conjecture seem at least worth testing.
Consider again the procedure used in testing for categorical perception. Each experiment begins with a system for categorising the stimuli (expressions). This initial system is either specified by the experimenters or, in some cases, by having the participants first divide stimuli into categories using verbal labels or occasionally using non-verbal decisions. The experiment then seeks to measure whether this initial system of categories predicts patterns in discrimination. But what determines which category each stimulus is assigned to in the initial system of categories? You might guess that it is a matter of how likely people think it is that each stimulus---a particular facial configuration, say---would be associated with a particular emotion. In fact this is wrong. Instead, each stimulus is categorised in the initial system according to how suitable people think such an expression would be to express a given emotion: this is true whether the stimuli are facial \citep{horstmann:2002_facial} or vocal \citep{laukka:2011_exploring} expressions of emotion (see also \citealp[pp.\ 98--9]{parkinson:2013_contextualizing}). To repeat, in explicitly assigning an expression to a category of emotion, people are not making a judgement about the probability of someone with that expression having that emotion: they are making a judgement about which category of emotion the expression is most suited to expressing. Why is this relevant to understanding what perceptual processes categorise? The most straightforward way of interpreting the experiments on categorical perception is to suppose that they are testing whether perceptual processes categorise stimuli in the same ways as the initial system of categories does. But we have just seen that the initial system categorises stimuli according to the emotions they would be best suited to expressing. 
So on the most straightforward interpretation, the experiments on categorical perception of expressions of emotion are testing whether there are perceptual processes whose function is to categorise actions of a certain type by the outcomes to which they are directed. Thus the wild conjecture is needed for the most straightforward interpretation of these experiments. This doesn't make it true, but it does make it worth testing.
So far I have focussed on evidence for categorical perception from experiments using faces as stimuli. However, there is also evidence that perceptual processes categorise vocal and facial expressions alike (\citealp{grandjean:2005_voices,laukka:2005_categorical}; see also \citealp{jaywant:2012_categorical}). We also know that judgements about which emotion an observed person is expressing in a photograph can depend on the posture of the whole body and not only the face \citep{aviezer:2012_body}, and that various contextual factors can affect how even rapidly occurring perceptual processes discriminate expressions of emotion \citep{righart:2008_rapid}. There is even indirect evidence that categorical perception may concern whole bodies rather than just faces or voices \citep{aviezer:2008_angry,aviezer:2011_automaticity}. In short, categorical perception of expressions of emotion plausibly resembles categorical perception of speech in being a multimodal phenomenon which concerns the whole body and is affected by several types of contextual feature. This is consistent with the wild conjecture we are considering. The conjecture generates the further prediction that the effects of context on categorical perception of expressions of emotion will resemble the myriad effects of context on categorical perception of speech, so that `every potential cue ... is an actual cue' (\citealp[p.\ 11]{Liberman:1985bn}; for evidence of context effects in categorical perception of speech see, for example, \citealp{Repp:1987xo}; \citealp[pp.\ 72--5]{Nygaard:1995po}; \citealp[p.\ 44]{Jusczyk:1997lz}).

?

How could the objects of categorical perception be actions directed to the goals of expressing particular emotions?

Just here there is a further problem. It’s not just that, as I mentioned yesterday, categorical perception involves processes that identify expressions of emotion within fractions of a second (under 200ms) ...
... it’s also that the goals in this case involve emotions. Since emotions aren’t represented motorically, how could goals directed at expressing them be represented motorically?

disgust: first-person experience vs third-person observation

-- common activation

‘observation of others’ disgust activated neuronal substrates within AI [the insula] that were selectively activated by the exposure to the disgusting odorants’ (Rizzolatti and Sinigaglia, in preparation).

-- common impairment

\citet{adolphs:2003_dissociable} reported that ‘Patient (B.) had bilateral damage to the insula: he was able to retrieve knowledge about all basic emotions except disgust when viewing facial expressions or hearing descriptions of emotionally related actions. Strikingly, when the experimenter acted out behaviors typically associated with disgust (e.g. spitting out of food), the patient remained indifferent or even indicated that the food was ‘delicious’’ (Rizzolatti and Sinigaglia, in preparation).

Wood et al’s Sensorimotor Theory*

*modified by me: they make it specific to the face; although faces are clearly special, I think this may be a mistake for reasons given in the last lecture.

1. Observer covertly recreates expression

‘Observers of a facial expression of emotion automatically recreate it (covertly, partially, or completely)’
\citep[p.~229]{wood:2016_fashioninga}

2. Recreation activates associated emotion-related processes

‘[S]imulating another's facial expression can fully or partially activate the associated emotion system in the brain of the perceiver.’

\citep[p.~231]{wood:2016_fashioninga}

3. This activation ‘is the basis from which accurate facial expression recognition is achieved’

\citep[p.~231]{wood:2016_fashioninga}

-- TMS to motor and somatosensory areas slows recognition of changes in facial expressions

‘inhibiting the right primary motor (M1) and somatosensory (S1) cortices of female participants with TMS reduced spontaneous facial mimicry and delayed the perception of changes in facial expressions’ \citep{wood:2016_fashioninga}

Wood et al (2016, pp. 229, 231)

How could the objects of categorical perception be actions directed to the goals of expressing particular emotions?

Have we solved the problem?

conclusion

In conclusion, ...
So I first asked, what evidence could bear on the perceptual hypothesis? I answered that part of the evidence is from studies of categorical perception of facial expressions of emotion.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

\citet[p.\ 573]{mcneill:2012_embodiment}: ‘We sometimes {see} aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

challenge

Evidence? Categorical Perception!

Part of the evidence relevant to the perceptual hypothesis is evidence that humans can categorically perceive expressions of emotion.
Then my next question was, What does the evidence from studies of categorical perception tell us about the truth or falsity of the perceptual hypothesis?
Recall the earlier sceptical line ...

1. The objects of categorical perception, ‘expressions of emotion’, are facial configurations.

2. Facial configurations are not emotions.

so ...

3. The things we perceive in virtue of categorical perception are not emotions.

I’ve rejected the letter of the first premise. But does this matter for the claim that we perceptually experience emotions?
If we are categorically perceiving an action directed to the expression of a particular emotion, then there is a (perhaps quite weak) sense in which categorical perception puts us in touch with the emotion.

‘We sometimes see aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

McNeill (2012, p. 573)

\citet[p.\ 573]{mcneill:2012_embodiment}: ‘We sometimes {see} aspects of each others’ mental lives, and thereby come to have non-inferential knowledge of them.’

challenge 2

Evidence? Categorical Perception!

Which model of the emotions?

Part of the evidence relevant to the perceptual hypothesis is evidence that humans can categorically perceive expressions of emotion.
Then my next question was, What does the evidence from studies of categorical perception tell us about the truth or falsity of the perceptual hypothesis?
Now I think we can answer this question. Or rather, we know that the answer depends on what the objects of categorical perception are.
If the objects of categorical perception are merely facial expressions, then I think the evidence does not support the claim that we see aspects of each others’ mental lives.
But if, as I’ve conjectured, the objects of categorical perception are actions directed to the expression of particular emotions, then I think the evidence does provide modest support for the claim that we perceive aspects of each others’ mental lives.
This claim will need qualifying in various ways.
First, the evidence suggests that categorical perception of expressions of emotion, like categorical perception of speech, is not tied to a single modality in any very robust way.
Second, I suggest elsewhere that categorical perception isn’t exactly like perception as philosophers standardly understand it. In particular, its phenomenology is to be understood in terms of \textbf{PHENOMENAL EXPECTATIONS} rather than by analogy with perception of shapes or textures.
But let me put these aside because I want to mention a second challenge.