We focus on methods for generating interpretable and explainable models, such as locally explainable models and rule learning. Interpretability has gained increasing attention in recent years, as data science methods and models are used to an ever larger extent in both industry and society as a whole. It can be addressed at the model level, i.e., by providing a description of the whole model to the human, or at the instance level, i.e., by explaining the reasons and motivation behind each individual decision. Many aspects of a model relate to its interpretability, such as model stability and size, dimensionality reduction, and visualization.
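To make the model-level versus instance-level distinction concrete, the sketch below contrasts a global feature-importance summary of a random forest with a local LIME explanation of a single prediction. This is a minimal illustration only, assuming scikit-learn and the lime Python package are installed; the data set and model are placeholders, not taken from the publications listed below.

    # Minimal sketch: global (model-level) vs. local (instance-level)
    # interpretability, assuming scikit-learn and the `lime` package.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    # Model-level view: feature importances summarize the whole forest.
    print(dict(zip(data.feature_names, model.feature_importances_.round(3))))

    # Instance-level view: LIME explains one prediction by fitting a
    # simple surrogate model around the chosen instance.
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )
    explanation = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=4
    )
    print(explanation.as_list())  # (feature condition, weight) pairs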
Measuring the Burden of (Un)fairness Using Counterfactuals
In Machine Learning and Principles and Practice of Knowledge Discovery in Databases: International Workshops of ECML PKDD 2022, Grenoble, France, September 19–23, 2022, Proceedings, Part I, 2023
Post-Hoc Explainability for Time Series Classification: Toward a Signal Processing Perspective
IEEE Signal Processing Magazine, 2022
Evaluating Local Interpretable Model-Agnostic Explanations on Clinical Machine Learning Classification Models
In International Symposium on Computer-Based Medical Systems (CBMS), 2020
Example-Based Feature Tweaking Using Random Forests
In International Conference on Information Reuse and Integration for Data Science (IRI), 2019
Explainable Predictions of Adverse Drug Events from Electronic Health Records Via Oracle Coaching
In International Conference on Data Mining Workshops (ICDMW), 2018