LIP6 · CNRS · Sorbonne Université · Tremplin Carnot Interfaces

Colloquium d’Informatique de Sorbonne Université
Patrick Haggard, University College London

Wednesday 25 November 2020 18:00
Sorbonne University - Faculté des Sciences

Responsibility for intelligent machines: a cognitive approach

Patrick Haggard

Patrick Haggard leads the "Action and Body" research group at the Institute of Cognitive Neuroscience, University College London. During 2020, he holds the Jean d'Alembert visiting professorship at IEA-Paris/Paris-Saclay. His core research interests lie in the sensory and motor bases of human cognition. He has published several articles on the bases of voluntary action, agency and responsibility in the human brain. He has a specific interest in the technological ethics of human action, and has published on the ethics of VR.

Abstract

Much is written about "responsible AI" - but what does responsibility mean in this context? This talk begins by considering the cognitive basis of human responsibility, in order to inform comparisons between human and artificial agents. Human agents make a mental link between their intended action and the outcome of that action. I will show that this mental link underpins the everyday experiences of sense of agency and responsibility - experiences which algorithmic systems currently lack. Human agency has two important features that make (most) humans safe agents to interact with. First, human agents can step back from a current goal when circumstances make that goal no longer appropriate. Many artificial agents still rely on a human override to perform this stepping-back function. Second, while human actions have low explainability (we often don't know why we do what we do), they can have high fixability (we often change what we do, given appropriate learning signals). Discussions about the explainability of AI should be replaced by discussions of fixability. Finally, I will consider the social dimension of human and machine action. The human senses of agency and responsibility are carefully trained by society, through reinforcement and cultural learning in early childhood experiences that we do not generally remember. The public sphere is increasingly inhabited and shaped by artificial agents. I will consider what cognitive attributes AIs will need in order for us to cohabit with them, rather than merely use them or avoid them.

Other information

Contact: Antoine Miné
