NLP & Interpretability Day

As European regulations on AI take effect, explainability and interpretability are becoming key issues. In natural language and speech processing and information retrieval, they help us understand how models work internally, identify biases and limitations, and design new, more reliable architectures.

This day organised by the GDR TAL invites you to share your work, experiences and perspectives on these issues.

We encourage exploratory contributions, feedback and work in progress. Presentation proposals should take the form of a one-page abstract in text format, following the structure below, and be submitted via this website. Submissions may be in French or English. Please specify and justify your preferred presentation format (oral, poster, demonstration).

Contributions must focus on Natural Language Processing and may address the following topics:

  • Interpretability for bias detection: Study how explainability methods can highlight implicit biases in data or models and contribute to fairer and more responsible systems.
  • Interpretability and applications: Explore the use of explainability approaches in sensitive fields such as medicine, education, or the social sciences, and discuss their practical and societal impact.
  • Speech, multimodality and interpretability: Specific challenges related to explaining models for speech recognition and generation, as well as for multimodal systems combining text, image, sound or video.
  • Interpretability by design: New approaches where explainability is integrated from the design stage of a model, rather than added a posteriori.
  • Interpretability and model design: Approaches where interpretability insights are used to design new models and architectures.