[Liste-proml] Cfp: Transparency and Interpretability in Sequential Models @ ICGI'18

Francois Coste francois.coste at inria.fr
Wed 28 Mar 12:08:17 CEST 2018


Hello,

Below is a call for an original session at ICGI'18, broadly open to anyone 
interested in learning over sequences and in model interpretability...
Best regards,
François

[Please accept our apologies if you receive multiple copies of this Call 
for Papers (CFP)]
The ICGI Steering Committee is calling for proposals on the broad topic 
of "Transparency and Interpretability in Sequential Models".


====================
Submission Deadline
====================

May 15, 2018


====================
Call for Proposals
====================

We are requesting position papers on how sequential models should be 
evaluated and/or designed for transparency. Proposals should address the 
questions of how to produce an explanation for an individual prediction 
and how to evaluate the quality of such an explanation. Proposals must 
clearly describe the context for the proposed approach, including a 
description of the type of models to which the proposal applies. We 
welcome both proposals that address the interpretability of black-box models 
and proposals tailored to a particular family of models. We also 
welcome proposals addressing interpretability in the context of specific 
applications involving sequential data, including natural language 
processing, biology and software engineering.

====================
Context
====================

The widespread adoption of ML and AI technologies raises ethical, 
technical and regulatory issues around fairness, transparency and 
accountability. Tackling these issues will require a community-wide 
effort, ranging from the development of new mathematical and algorithmic 
tools to an understanding, by academic and industry researchers, of the 
regulatory and ethical aspects of each of these concerns.

A particular topic of growing interest is the capacity to hold 
data-driven algorithms accountable for their decisions. For example, the 
upcoming EU General Data Protection Regulation (GDPR) requires companies 
to be fair and transparent about their use of personal data [4]. This has 
spurred the interest of the research community [3] not only in showing 
examples of unfair treatment by existing algorithms [1], but also in 
developing sound measures to evaluate whether an algorithm is fair [2,5] 
and techniques to embed fairness as a constraint in machine learning 
algorithms.

Recently proposed methods for producing explanations of decisions made by 
machine learning models focus on models for fixed-size data and are in 
general not applicable to models involving sequential data. Interpreting 
sequential models is inherently harder because of the non-locality 
introduced by the memory and recurrence properties of such models.


====================
Practical Details
====================

Submissions (max. 6 pages plus references, in JMLR format) should be 
made to the “Transparency and Interpretability” track of ICGI 
(https://easychair.org/conferences/?conf=icgi2018) before May 15, 2018. 
Accepted proposals will be presented at a special session during 
ICGI 2018 (http://icgi2018.pwr.edu.pl/, Wroclaw, Poland; Sept 5-7).

The ICGI Steering Committee intends the special session to spur the 
development of a future competition around interpretable sequence 
models. Selected authors of papers presented during the session will be 
invited to organize such a competition, for which sponsorship is 
currently being discussed.



====================
Programme Committee
====================

Borja Balle Pigem - Amazon Research
Leonor Becerra-Bonache - Jean Monnet University
François Coste - INRIA Rennes
Rémi Eyraud - LIF Marseille
Matthias Gallé - Naver Labs Europe
Jeffrey Heinz - Stony Brook University
Olgierd Unold - Wroclaw University of Technology
Menno van Zaanen - Tilburg University
Sicco Verwer - Delft University of Technology
Ryo Yoshinaka - Kyoto University



====================
References
====================



[1] For examples, see for instance https://fairmlclass.github.io/

[2] Sorelle A. Friedler, Carlos Scheidegger, and Suresh 
Venkatasubramanian. "On the (im)possibility of fairness." arXiv preprint 
arXiv:1609.07236 (2016).

[3] Workshops (https://www.fatml.org, 
http://home.earthlink.net/~dwaha/research/meetings/ijcai17-xai/), as 
well as a long list of smaller events and discussions at ML conferences 
(https://www.oii.ox.ac.uk/blog/workshops-on-artificial-intelligence-ethics-and-the-law-what-challenges-what-opportunities/, 
https://nips.cc/Conferences/2017/Schedule?showEvent=8734, 
https://nips.cc/Conferences/2017/Schedule?showEvent=8744)


[4] Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. "Why a 
right to explanation of automated decision-making does not exist in the 
general data protection regulation." International Data Privacy Law 7, 
no. 2 (2017): 76-99.

[5] Kleinberg, Jon, Sendhil Mullainathan, and Manish Raghavan. "Inherent 
trade-offs in the fair determination of risk scores." arXiv preprint 
arXiv:1609.05807 (2016).




More information about the Liste-proml mailing list