Explainable Machine Learning (XAI)

 

In person  

Explainable and transparent methods for interpreting the outputs and inner workings of AI/ML models are highly relevant in high-risk domains where regulation and clarity are core goals. The growing need for explainability in applied data science has spurred a whole field of research into methodologies that make machine learning models traceable, trustworthy and interpretable. This course covers the main ideas of model explainability and interpretability in machine learning, under the wider umbrella of Explainable AI (XAI). Core topics include models that are inherently explainable by design and suited to transparent outputs, post hoc approaches to explaining complex models (and recent advances in this area), and demonstrative use cases and examples featuring LIME and Shapley values (SHAP). The course combines taught content with practical demonstration sessions. 
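As a small taste of the SHAP side of the course, Shapley values can be computed exactly for a tiny model by averaging each feature's marginal contribution over every coalition of the other features. The sketch below is from scratch (it does not use the `shap` library), and the toy model, baseline and instance values are illustrative assumptions, not course material:

```python
from itertools import combinations
from math import factorial

# Toy "model": the a*c interaction means attributions are not
# simply the individual coefficients.
def model(x):
    return 2 * x["a"] + x["b"] + x["a"] * x["c"]

# Baseline represents "feature absent" (e.g. an average input).
baseline = {"a": 0, "b": 0, "c": 0}
instance = {"a": 1, "b": 2, "c": 3}
features = ["a", "b", "c"]

def shapley(feature):
    """Exact Shapley value: weighted average of the marginal
    contribution of `feature` over all coalitions of the others."""
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            present = set(subset) | {feature}
            with_f = {f: instance[f] if f in present else baseline[f]
                      for f in features}
            without_f = {f: instance[f] if f in subset else baseline[f]
                         for f in features}
            total += weight * (model(with_f) - model(without_f))
    return total

phi = {f: shapley(f) for f in features}
print(phi)  # the interaction a*c is split equally between a and c

# Efficiency property: attributions sum to the model's deviation
# from the baseline prediction.
print(sum(phi.values()), model(instance) - model(baseline))
```

Enumerating all coalitions is only feasible for a handful of features; the SHAP library covered in the course uses model-specific approximations to scale this idea up.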

 

This course will be taught by Somya Iqbal. 

 

After taking part in this event, you may decide that you need some further help in applying what you have learnt to your research. If so, you can book a Data Surgery meeting with one of our training fellows. 

More details about Data Surgeries. 

Those who have registered to take part will receive an email with full details on how to get ready for this course. 

If you’re new to this training event format, or to CDCS training events in general, read more on what to expect from CDCS training. Here you will also find details of our cancellation and no-show policy, which applies to this event. 

  

Level  

This workshop requires the following prior knowledge:   

  • Foundational knowledge of common machine learning models and related tasks
  • Basic statistical knowledge
  • Familiarity with code notebooks such as Jupyter notebooks/Google Colab 

 

Suggested Pre-reading 

  • Rudin, C. (2019). Stop explaining black box machine learning models for high-stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38. 

 

Learning Outcomes 

  • An understanding of how explainability in a model can be defined and demonstrated
  • An exploration of relevant explainable models and associated methods
  • In-depth understanding of contexts where explainability is relevant and why it is an important area of research
  • Understanding which tools and methodologies can be used to add explainable features when applying machine learning algorithms

 

Skills  

  • An ability to break down and interpret explainable features
  • Working with models in an applied setting
  • Programmatic workflows for explainable machine learning 

 


 

 

 

Room 4.35, Edinburgh Futures Institute

This room is on Level 4, on the north-east side of the building.

When you enter via the Level 2 East entrance on Middle Meadow Walk, the room will be on Level 4, straight ahead.

When you enter via the Level 2 North entrance on Lauriston Place, underneath the clock tower, the room will be on Level 4, to your left.

When you enter via the Level 0 South entrance on Porters Walk (opposite Tribe Yoga), the room will be on Level 4, to your right.

You might be interested in

  • Digital Method of the Month: Text Analysis
  • Foundations of Machine Learning
  • Getting Started with Regression in R
  • Getting Started with Bayesian Statistics
  • Building Personal and Project Websites
  • Comparing Sentiment Analysis Models in R
  • Processing Geographical Data in QGIS