Details
The aix360 project's explainability subsystem provides a suite of tools for generating and evaluating explanations of machine learning models. Input data passes through specialized data handlers, which feed a diverse set of explainability algorithms. These algorithms adhere to common base classes and interfaces and produce various forms of explanations, whose quality and characteristics are then assessed by a dedicated metrics component. This architecture promotes modularity, extensibility, and a clear separation of concerns, making it straightforward to develop and integrate new explainability techniques.
aix360.algorithms.rule_induction.ripper.ripper.RipperExplainer
Implements the RIPPER (Repeated Incremental Pruning to Produce Error Reduction) rule-induction algorithm, which learns an ordered set of if-then rules from labeled tabular data; the rule set serves both for prediction and as a directly interpretable explanation.
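A minimal usage sketch, following the call pattern of the AIX360 rule-induction tutorial; the toy data and the target_label argument are illustrative, so treat the exact signatures as assumptions:

    import pandas as pd
    from aix360.algorithms.rule_induction.ripper import RipperExplainer

    # Toy tabular data: RIPPER consumes a pandas DataFrame and Series.
    x = pd.DataFrame({"age":   [22, 35, 47, 52, 29, 41, 58, 33],
                      "hours": [20, 40, 60, 45, 30, 55, 50, 25]})
    y = pd.Series(["no", "no", "yes", "yes", "no", "yes", "yes", "no"], name="label")

    rip = RipperExplainer()
    rip.fit(x, y, target_label="yes")  # induce rules for the "yes" class
    print(rip.predict(x))              # predictions from the learned rule list
    print(str(rip.explain()))          # rule set describing the target class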
aix360.algorithms.glance.counterfactual_tree.counterfactual_tree.CounterfactualTree
Implements the counterfactual-tree variant of the GLANCE framework for global counterfactual explanations: it partitions the input population with a decision tree and computes a small set of counterfactual actions per partition.
aix360.algorithms.imd.imd.IMDExplainer
Implements the IMD (Interpretable Model Differencing) algorithm, which explains where and how two classifiers disagree by fitting a joint surrogate tree over their predictions and extracting rules that characterize the regions of disagreement.
aix360.algorithms.nncontrastive.nncontrastive.NearestNeighborContrastiveExplainer
Implements the Nearest Neighbor Contrastive explainer, an exemplar-based method that explains a prediction by retrieving contrastive instances from the training data via nearest-neighbor search, optionally in a learned auto-encoder embedding space.
aix360.algorithms.rbm.boolean_rule_cg.BooleanRuleCG
Implements BRCG (Boolean Rules via Column Generation), a directly interpretable classifier that learns compact Boolean decision rules in disjunctive or conjunctive normal form. Note that the rbm package name refers to rule-based models, not Restricted Boltzmann Machines.
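A minimal sketch of the usual pipeline, following the BRCG tutorial's pattern of binarizing features first and wrapping BooleanRuleCG in the BRCGExplainer interface; the hyperparameter values here are illustrative:

    import pandas as pd
    from sklearn.datasets import load_breast_cancer
    from aix360.algorithms.rbm import BRCGExplainer, BooleanRuleCG, FeatureBinarizer

    data = load_breast_cancer()
    X = pd.DataFrame(data.data, columns=data.feature_names)
    y = pd.Series(data.target)

    # BRCG consumes binarized features; FeatureBinarizer thresholds each column.
    fb = FeatureBinarizer(negations=True)
    X_bin = fb.fit_transform(X)

    # CNF=False learns a DNF rule set (an OR of ANDs of threshold conditions).
    model = BRCGExplainer(BooleanRuleCG(lambda0=1e-3, lambda1=1e-3, CNF=False))
    model.fit(X_bin, y)
    print(model.explain()["rules"])   # human-readable rule clauses
    print(model.predict(X_bin)[:10])  # predictions from the learned rules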
aix360.algorithms.dipvae.dipvae.DIPVAEExplainer
Implements the DIP-VAE (Disentangled Inferred Prior Variational Autoencoder) algorithm, which learns a disentangled latent representation of the data; the learned latent dimensions serve as data-level explanations.
aix360.algorithms.ecertify.ecertify.CertifyExplanation
Implements the explanation-certification (ecertify) algorithm, which, given an example and an explanation of its prediction, probabilistically certifies a region around the example within which the explanation's fidelity remains above a chosen threshold.
aix360.algorithms.contrastive.CEM.CEMExplainer
Implements the Contrastive Explanations Method (CEM), which explains a classifier's prediction through pertinent positives (minimal features that preserve the prediction) and pertinent negatives (minimal changes that would alter it).
aix360.algorithms.ted.TED_Cartesian.TED_CartesianExplainer
Implements the TED (Teaching Explanations for Decisions) framework, which trains on data annotated with both labels and explanations, encoding the two via a Cartesian product so that the model predicts a label together with an explanation for each new instance.
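A minimal sketch, assuming the fit(X, Y, E) and predict_explain API shown in the TED tutorial; the toy explanation ids would index into a user-maintained list of explanation strings:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from aix360.algorithms.ted.TED_Cartesian import TED_CartesianExplainer

    # Each training row carries a class label Y and an explanation id E.
    X = np.array([[0, 1], [0, 0], [1, 1], [1, 0], [0, 0], [1, 1]])
    Y = np.array([0, 0, 1, 1, 0, 1])
    E = np.array([0, 1, 2, 2, 1, 2])

    ted = TED_CartesianExplainer(RandomForestClassifier(random_state=0))
    ted.fit(X, Y, E)  # internally trains on the Cartesian (label, explanation) encoding

    # For a new instance, TED returns both a predicted label and an explanation id.
    y_hat, e_hat = ted.predict_explain(X[:1])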
aix360.algorithms.tslime.tslime.TSLimeExplainer
Implements TSLime, a time-series adaptation of LIME that perturbs segments of the input series and fits a local linear surrogate model to explain a time-series model's output.
Explainability Metrics
Component responsible for evaluating the quality of generated explanations, for example via faithfulness and monotonicity metrics.
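For instance, aix360.metrics provides faithfulness_metric and monotonicity_metric for feature-importance explanations. A minimal sketch, using a logistic model's own coefficients as a stand-in explanation:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from aix360.metrics import faithfulness_metric, monotonicity_metric

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    x = X[0]                    # instance whose explanation is evaluated
    coefs = model.coef_[y[0]]   # per-feature importances (stand-in explanation)
    base = X.mean(axis=0)       # baseline values used to "remove" a feature

    # Faithfulness: correlation between the importances and the drop in predicted
    # probability when each feature is replaced by its baseline value.
    print(faithfulness_metric(model, x, coefs, base))
    print(monotonicity_metric(model, x, coefs, base))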
Data Handlers/Adapters
Component responsible for processing and adapting input datasets for explainability algorithms.
Base Classes/Interfaces
Common base classes and interfaces for explainability algorithms, ensuring a standardized API.
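A minimal sketch of how a new algorithm plugs into this API, assuming the LocalBBExplainer base class in aix360.algorithms.lbbe; the leave-one-out scoring below is purely illustrative, not an AIX360 algorithm:

    import numpy as np
    from aix360.algorithms.lbbe import LocalBBExplainer

    class LeaveOneOutExplainer(LocalBBExplainer):
        """Toy local black-box explainer: scores each feature by the change in
        the top predicted probability when the feature is set to a baseline."""

        def __init__(self, predict_proba_fn, baseline):
            super().__init__()
            self._predict = predict_proba_fn
            self._baseline = np.asarray(baseline, dtype=float)

        def set_params(self, *args, **kwargs):
            pass  # hook for hyperparameters; this toy explainer has none

        def explain_instance(self, x, **kwargs):
            x = np.asarray(x, dtype=float)
            p0 = self._predict(x.reshape(1, -1))[0].max()
            scores = np.empty(x.size)
            for i in range(x.size):
                z = x.copy()
                z[i] = self._baseline[i]   # "remove" feature i
                scores[i] = p0 - self._predict(z.reshape(1, -1))[0].max()
            return scores                  # higher = more influential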