Nonhuman Mindreading: The Logical Problem

The logical problem

‘since mental state attribution in [nonhuman] animals will (if extant) be based on observable features of other agents’ behaviors and environment ... every mindreading hypothesis has ... a complementary behavior-reading hypothesis.

‘Such a hypothesis proposes that the animal relies upon certain behavioral/environmental cues to predict another agent’s behavior

[... the behaviour which], on the mindreading hypothesis, the animal is hypothesized to use as its observable grounds for attributing the mental state in question.’
\citep[p.~26]{lurz:2011_mindreading}; also \citep[p.~453]{lurz:2011_how}

Lurz (2011, 26)

1. Ascriptions of mental states ultimately depend on information about behaviours

- information triggering ascription

- predictions derived from ascription

2. In principle, anything you can predict by ascribing a mental state you could have predicted just by tracking behaviours.

Is there an experimental solution to the ‘Logical Problem’?

Lurz and Krachun (2011): yes ...

‘Behavior-reading animals can appeal only to ... reality-based, mind-independent facts, such as facts about agents’ past behavior or their current line of gaze to objects in the environment.

‘Mindreading animals, in contrast, can appeal to the subjective ways environmental objects perceptually appear to agents to predict their behavior.’

\citep[p.~469]{lurz:2011_how}
Experimental implementations: e.g. \citep{karg:2015_goggles}

Lurz and Krachun (2011, p. 469)

E.g. seeing an insect or a piece of fruit as an insect or piece of fruit

Lurz & Krachun 2011, figure 1

There’s a competitor, two grapes and some size-distorting glass ... can the Subject predict that the Competitor will choose not the largest grape but the one which appears largest?
‘The objective here is to test whether chimpanzees are capable of anticipating (as evidenced by their looking behavior) the actions of a human competitor by understanding how an illusory stimulus (grape) looks to the human.’ \citep{lurz:2011_how}
‘the competitor is absent when the grapes are placed inside the containers (as in the pretest stage), he does not observe the change in apparent size of the grapes as they are placed inside the containers, as the chimpanzee does. After the grapes are placed, the competitor enters the room, sits at the table, looks at each grape through the sides of the containers’ \citep[p.~476]{lurz:2011_how}
The ‘dependent measure is anticipatory looking’. In this case, the subjects (chimpanzees) know which grape is which and how each appears. There is also a Krachun-like competitive version (in which the chimpanzees have to infer which grape is larger from the failed reach of the human competitor) ...
Also a Krachun-like reaching measure: ‘After the competitor has acted in one of these two ways toward the container with the large grape, the experimenter places opaque coverings completely over each container and moves the platform toward the chimpanzee. Once the platform is in place before the chimpanzee, the chimpanzee is allowed to choose one of the two containers (by pointing to it through a finger hole). Since the containers are covered, the chimpanzee cannot make its choice by seeing the grape or grapes inside the containers. Rather, to make its preferred choice (which we assume will be the remaining small grape in competitor-first success trials and the large grape in competitor-first failure trials) the chimpanzee must attend to the selection process of the competitor.’ \citep[p.~475]{lurz:2011_how}
The Lurz and Krachun proposal looks really good because it identifies a gap in the argument for the logical problem ...

The logical problem

1. Ascriptions of mental states ultimately depend on information about behaviours and objects

- information triggering ascription

- predictions derived from ascription

- ways objects appear

2. In principle, anything you can predict by ascribing a mental state you could have predicted just by tracking behaviours.

Progress ... but does this enable us to solve the logical problem?

‘Behavior-reading animals can appeal only to ... reality-based, mind-independent facts, such as facts about agents’ past behavior or their current line of gaze to objects in the environment.

‘Mindreading animals, in contrast, can appeal to the subjective ways environmental objects perceptually appear to agents to predict their behavior.’

Lurz and Krachun (2011, p. 469)

E.g. seeing an insect or a piece of fruit as an insect or piece of fruit

But: objects have appearance properties, and provide affordances, which are independent of any particular mind.

Camouflage ain’t mindreading.
Return to the question; this is for discussion ...

Do goggles solve the ‘Logical Problem’?

‘“self-informed” belief induction variables [... are those] that, if the participant is capable of mentalizing, he or she knows only through extrapolation from her own experience to be indicative of what an agent can or cannot see and, therefore, does or does not believe’

\citep[p.~139]{heyes:2014_submentalizing}

Heyes, 2014 p. 139

Karg et al, 2015 figure 4 (Experiment 2)

Experiment 2: the chimpanzee attempts to steal food from a human, who will remove the food if she sees the chimp approaching. One cover is transparent, the other opaque. But when the covers are down, both look opaque from the chimpanzee’s point of view.

Karg et al, 2015 figure 5 (Experiment 2)

(This is just a photo of the apparatus, with covers up (top) and down (bottom).)

Karg et al, 2015 figure 5 (Experiment 2)

‘Mean percentage of trials with the choice of the opaque box across the three conditions. In the transparent condition, one lid was opaque, the other transparent. In the screen condition, one lid was a screen, the other opaque. The control was like the transparent condition, but without the experimenter's presence at the time of choice. Error bars indicate 95% CI. The horizontal line indicates chance level (50%). *P < 0.05; **P < 0.01’ \citep{karg:2015_goggles}.
Return to the question; this is for discussion ...

Do goggles solve the ‘Logical Problem’?

‘“self-informed” belief induction variables [... are those] that, if the participant is capable of mentalizing, he or she knows only through extrapolation from her own experience to be indicative of what an agent can or cannot see and, therefore, does or does not believe’

\citep[p.~139]{heyes:2014_submentalizing}

Heyes, 2014 p. 139

I think: what you ‘project’ from your own experience could be merely knowledge of affordances (in Bugnyar et al.) and appearances (in Karg et al.). There is no special reason to think that what you project are mental states rather than a better understanding of objects and their properties.