An unverified black-box model is a path to failure. Opaqueness leads to distrust. Distrust leads to neglect. Neglect leads to rejection.
DrWhy.AI is a collection of tools for Explainable AI (XAI). It is based on shared principles and a simple grammar for the exploration, explanation and examination of predictive models. The main idea is to use model-agnostic post hoc explanations to visualize the complex behaviour of black-box models.
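To make "model-agnostic post hoc explanation" concrete, here is a minimal sketch of one such technique, permutation feature importance, written from scratch with numpy. This is a generic illustration, not DALEX's actual API; the function and variable names are invented for this example, and the "model" is a trivial stand-in for any black-box predictor.

```python
import numpy as np

def permutation_importance(predict, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic importance: drop in the metric when a feature is shuffled.

    Works for any black box, since it only calls predict(X)."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            # break the link between feature j and the target
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores.append(metric(y, predict(Xp)))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy setup: the target depends only on feature 0.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)  # stand-in black-box model
accuracy = lambda y, p: np.mean(y == p)

imp = permutation_importance(predict, X, y, accuracy)
# feature 0 gets a large importance; features 1 and 2 stay near zero
```

Because the procedure only needs predictions, the same code explains a gradient-boosted ensemble or a neural network just as well, which is what makes it "model-agnostic".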
In the talk, I will provide background on why XAI has become an integral part of the model development process. I will then briefly showcase several tools implementing interfaces that enhance the model explanation process in Python and R, focusing mainly on DALEX (>650 stars on GitHub).
Listeners will have a chance to get familiar with the basics of model explanation and bias detection through a practical use case.
Hubert Baniecki is a Data Science student at the Faculty of Mathematics and Information Science, Warsaw University of Technology. He works as a Research Software Engineer at MI2 DataLab (a research group led by Przemyslaw Biecek), developing tools for Explainable AI and contributing to the open-source community (R & Python packages). His research covers ML interpretability, adversarial attacks and interactive model exploration.