Welcome to TrustyAI 👋
TrustyAI is an open source Responsible AI toolkit supported by Red Hat and IBM. TrustyAI provides tools for a variety of responsible AI workflows, such as:
- Local and global model explanations
- Fairness metrics
- Drift metrics
- Text detoxification
- Language model benchmarking
- Language model guardrails
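As a flavor of what a fairness metric computes, here is a minimal sketch of statistical parity difference (SPD), one of the standard group-fairness metrics. This is a hand-rolled illustration of the concept, not the TrustyAI API; the group names and outcome data are made up for the example.

```python
# Statistical parity difference (SPD): a simple group-fairness metric.
# SPD = P(favorable outcome | unprivileged group) - P(favorable outcome | privileged group).
# Values near 0 indicate parity; a commonly used "fair" band is [-0.1, 0.1].
# Illustration only -- not the TrustyAI library API.

def statistical_parity_difference(privileged_outcomes, unprivileged_outcomes):
    """Compute SPD from two lists of binary outcomes (1 = favorable, 0 = unfavorable)."""
    p_priv = sum(privileged_outcomes) / len(privileged_outcomes)
    p_unpriv = sum(unprivileged_outcomes) / len(unprivileged_outcomes)
    return p_unpriv - p_priv

# Hypothetical loan-approval outcomes (1 = approved, 0 = denied)
privileged = [1, 1, 1, 0, 1, 1, 0, 1]      # 6/8 approved = 0.75
unprivileged = [1, 0, 0, 1, 0, 1, 0, 0]    # 3/8 approved = 0.375

spd = statistical_parity_difference(privileged, unprivileged)
print(f"SPD = {spd:.3f}")  # -0.375: well outside the [-0.1, 0.1] parity band
```

TrustyAI's fairness tooling exposes metrics of this kind (along with drift and explainability algorithms) through the service REST API and the Python library described below.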
TrustyAI is a default component of Open Data Hub and Red Hat OpenShift AI, and has integrations with projects like KServe, Caikit, and vLLM.
🗂️ Our Projects 🗂️
- TrustyAI core, the core TrustyAI Java module, containing fairness metrics, AI explainers, and other XAI utilities.
- TrustyAI service, TrustyAI-as-a-service: a REST service for fairness metrics and explainability algorithms, including ModelMesh integration.
- TrustyAI operator, a Kubernetes operator for the TrustyAI service.
- Python TrustyAI, a Python library for using TrustyAI's toolkit from Jupyter notebooks.
- KServe explainer, a TrustyAI sidecar that integrates with KServe's built-in explainability features.
- LM-Eval, a generative text model benchmark and evaluation service, leveraging lm-evaluation-harness and Unitxt.
📖 Resources 📖
Documentation
The Components tab in the sidebar provides documentation for a number of TrustyAI components. Also check out:
Tutorials
- The Tutorials sidebar tab provides walkthroughs of a variety of TrustyAI flows, like bias monitoring, drift monitoring, and language model evaluation.
- trustyai-explainability-python-examples: examples of how to get started with the Python TrustyAI library.
- trustyai-odh-demos: demos of the TrustyAI Service within Open Data Hub.
Development Notes
- The TrustyAI Reference provides scratch notes on common development and testing flows.
🤝 Join Us 🤝
Check out our community repository for discussions and our Community Meeting information.
The project roadmap offers a view of the new tools and integrations the project developers are planning to add.
TrustyAI uses the ODH governance model and code of conduct.
📖 Glossary 📖
| Term | Definition |
| --- | --- |
| XAI | Explainable AI (XAI) refers to artificial intelligence systems designed to provide clear, understandable explanations of their decisions and actions to human users. |
| Fairness | AI fairness refers to the design, development, and deployment of AI systems in a way that ensures they operate equitably and do not encode biases or discrimination against any individual or group. |