BEGIN:VCALENDAR
CALSCALE:GREGORIAN
VERSION:2.0
X-WR-TIMEZONE:Europe/Paris
METHOD:PUBLISH
PRODID:-//LIP6//www.lip6.fr//FR
X-WR-CALNAME;VALUE=TEXT:Séminaire LIP6
X-LIC-LOCATION:Europe/Paris
BEGIN:VTIMEZONE
TZID:Europe/Paris
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
DTSTART:19810329T020000
TZNAME:CEST
TZOFFSETTO:+0200
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
DTSTART:19961027T030000
TZNAME:CET
TZOFFSETTO:+0100
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
SUMMARY:Balancing fidelity and interpretability in XAI for global mod
 el understanding
ORGANIZER;CN=Christophe Marsala:MAILTO:Christophe.Marsala@lip6.fr
ATTENDEE;CN=Caroline MAZINI RODRIGUES (COMPACT team -- IRISA/CNRS);CU
 TYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:Christophe.Marsala@lip6.fr
DESCRIPTION:<br><b>Abstract</b>: As neural networks grow larger and
  more complex\, they achieve strong performance but also introduce
  significant risks. For example\, a facial attribute recognition
  model may rely on biased correlations related to gender or race
  rather than on meaningful visual features\, leading to unfair or
  misleading decisions despite high accuracy. Explainable Artificial
  Intelligence (XAI) aims to make these black-box models more
  understandable. In particular\, post-hoc explanation methods
  identify which features influence a model’s predictions without
  changing the model itself. However\, highly faithful explanations
  are often difficult for humans to interpret\, while simpler
  explanations may not accurately reflect the model’s reasoning. In
  this work\, we study how to balance fidelity and interpretability
  to improve explanations for model auditing and debugging.<br>
  <b>Short bio</b>: Caroline Mazini Rodrigues is a postdoctoral
  researcher at the Institut de Recherche en Informatique et
  Systèmes Aléatoires (IRISA)\, where she develops low-complexity
  algorithms for video compression networks\, aiming for simplicity\,
  interpretability\, and frugality in AI models. She earned her PhD
  from Université Gustave Eiffel\, conducting research at the
  Laboratoire d’Informatique Gaspard-Monge (LIGM) and the
  Laboratoire de Recherche de l’EPITA (LRE) on explainable
  Artificial Intelligence (XAI)\, with a particular focus on
  understanding the reasoning processes of deep neural networks. Her
  research interests include making AI more transparent\, frugal\,
  and human-interpretable\, while exploring the cognitive aspects of
  machine learning and its connections to human reasoning and
  learning.<br><b>Website</b>: https://carolmazini.github.io/
  <i>A Zoom connection link will be posted on the
  <a href="https://lfi.lip6.fr/seminaires/">LFI seminars page</a>
  on the day of the seminar.</i> This seminar is organized jointly
  with the French Chapter of the IEEE Computational Intelligence
  Society.
DTSTAMP:20260512T184144Z
DTSTART;TZID=Europe/Paris:20260528T140000
DURATION:PT2H
URL;VALUE=URI:https://www.lip6.fr/liens/organise-fiche.php?ident=O1239
UID:LIP6/SEM/O1239
LOCATION:Campus Pierre et Marie Curie\, salle Jacques Pitrat (25-26/105)
GEO:48.847047;2.354619
END:VEVENT
END:VCALENDAR
