Welcome to TrustyAI 👋

TrustyAI is an open source Responsible AI toolkit supported by Red Hat and IBM. TrustyAI provides tools for a variety of responsible AI workflows, such as:

  • Local and global model explanations

  • Fairness metrics (a minimal sketch of one such metric follows this list)

  • Drift metrics

  • Text detoxification

  • Language model benchmarking

  • Language model guardrails
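
As a taste of one of these workflows, here is a minimal, self-contained sketch of a common group fairness metric, statistical parity difference (SPD), which measures the gap in favorable-outcome rates between an unprivileged and a privileged group. The function and data below are illustrative only and are not the TrustyAI API; TrustyAI ships implementations of this and related metrics in its Java core and Python library.

```python
# Statistical parity difference (SPD):
#   SPD = P(favorable | unprivileged) - P(favorable | privileged)
# Values near 0 suggest parity; a commonly used "fair" band is [-0.1, 0.1].
# Illustrative sketch only; not the TrustyAI API.

def statistical_parity_difference(outcomes, groups, favorable=1, privileged="A"):
    """outcomes: model outcomes; groups: parallel list of group labels."""
    def favorable_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(o == favorable for o in selected) / len(selected)

    unprivileged = next(g for g in groups if g != privileged)
    return favorable_rate(unprivileged) - favorable_rate(privileged)

# Toy data: group "A" is approved 75% of the time, group "B" only 25%.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups))  # -0.5: strong disparity
```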

TrustyAI is a default component of Open Data Hub and Red Hat OpenShift AI, and integrates with projects like KServe, Caikit, and vLLM.

🗂️ Our Projects 🗂️

  • TrustyAI core, the core Java module containing fairness metrics, AI explainers, and other XAI utilities.

  • TrustyAI service, TrustyAI-as-a-service, a REST service providing fairness metrics and explainability algorithms, including ModelMesh integration.

  • TrustyAI operator, a Kubernetes operator for the TrustyAI service.

  • Python TrustyAI, a Python library for using TrustyAI's toolkit from Jupyter notebooks (a conceptual sketch follows this list).

  • KServe explainer, a TrustyAI sidecar that integrates with KServe's built-in explainability features.

  • LM-Eval, a benchmarking and evaluation service for generative language models, leveraging lm-evaluation-harness and Unitxt.
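
The explainers surfaced by the core library, the Python bindings, and the KServe sidecar include local, post-hoc techniques such as LIME and SHAP, which attribute a single prediction to the input features by probing the model around that input. Below is a minimal, self-contained sketch of that underlying idea as a perturbation-based saliency score; the names and the finite-difference approach are illustrative assumptions, not the TrustyAI API.

```python
# Sketch of the idea behind local, post-hoc explanation: perturb each
# input feature and observe how the model's output moves.
# Illustrative only; not the TrustyAI API.

def black_box(features):
    # Stand-in for an opaque model: income dominates, age barely matters.
    income, age = features
    return 0.8 * income + 0.01 * age

def local_saliency(model, point, eps=1e-3):
    """Central-difference estimate of each feature's local influence."""
    scores = []
    for i in range(len(point)):
        up, down = list(point), list(point)
        up[i] += eps
        down[i] -= eps
        scores.append((model(up) - model(down)) / (2 * eps))
    return scores

# The scores recover the model's local behavior around this input.
print(local_saliency(black_box, [1.0, 1.0]))  # ~[0.8, 0.01]
```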

📖 Resources 📖

Documentation

The Components tab in the sidebar provides documentation for a number of TrustyAI components. Also check out:

Tutorials

Demos

  • Coming Soon

Development Notes

  • TrustyAI Reference provides scratch notes on common development and testing flows.

🤝 Join Us 🤝

The project roadmap offers a view of the new tools and integrations the project developers are planning to add.

TrustyAI uses the ODH governance model and code of conduct.

📖 Glossary 📖

XAI

XAI (explainable AI) refers to artificial intelligence systems designed to provide clear, understandable explanations of their decisions and actions to human users.

Fairness

AI fairness refers to the design, development, and deployment of AI systems in a way that ensures they operate equitably, without bias or discrimination against any individual or group. For example, a hiring model that recommends one demographic group at a markedly higher rate than equally qualified members of another would fail a group fairness check such as statistical parity.