Saliency explanations

These tutorials walk you through setting up and using TrustyAI to produce saliency explanations for model predictions through its various integrations.

You will find instructions for producing saliency explanations using TrustyAI (a brief request sketch follows the list):

  • As a service running on Open Data Hub

  • As a pluggable explainer for KServe
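
As a quick orientation before diving into either tutorial, the sketch below shows what requesting an explanation can look like once an explainer is attached to a KServe InferenceService, using the KServe v1 protocol's :explain verb. The host, model name, and input values are placeholders, and the exact payload and response fields depend on your deployment; the tutorials above cover the details.

  # Minimal sketch: ask a KServe InferenceService that has an attached
  # explainer for a saliency explanation via the v1 ":explain" verb.
  # The URL, model name, and feature values below are placeholders;
  # adapt them to your own deployment.
  import requests

  EXPLAIN_URL = "http://my-model.example.com/v1/models/my-model:explain"

  # KServe v1 protocol payload: a batch of input instances.
  payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}

  response = requests.post(EXPLAIN_URL, json=payload, timeout=60)
  response.raise_for_status()

  # The response typically carries per-feature saliency scores for each
  # instance; the exact fields depend on the configured explainer.
  print(response.json())

The Open Data Hub tutorial covers the equivalent workflow against the TrustyAI service itself.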
