KServe explainer
Deployment on KServe
The TrustyAI explainer can be added to a KServe InferenceService and configured to use either the LIME or SHAP explanation method through the service’s YAML configuration.
When deployed, KServe manages the routing of requests to the appropriate container. Calls to /v1/models/model:predict
will be sent to the predictor container, while calls to /v1/models/model:explain
will be sent to the explainer container. The payloads for both endpoints are the same, but the :predict
endpoint returns the model’s prediction, while the :explain
endpoint returns an explanation of the prediction.
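For example, with KServe’s v1 inference protocol and a model named model, the two calls differ only in the endpoint verb. The sketch below assumes a placeholder hostname and a numeric feature vector; adjust both to match your deployment.

```shell
# Identical payloads: :predict returns the model's output,
# :explain returns a saliency explanation of that output.
# ${INFER_HOST} is a placeholder for the InferenceService hostname.
curl -s -X POST "http://${INFER_HOST}/v1/models/model:predict" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1.0, 2.0, 3.0, 4.0]]}'

curl -s -X POST "http://${INFER_HOST}/v1/models/model:explain" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[1.0, 2.0, 3.0, 4.0]]}'
```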
LIME Explainer
By default, the TrustyAI KServe explainer uses LIME. To deploy it, specify the TrustyAI explainer container image, along with any necessary configuration, in the explainer section of the InferenceService YAML, as sketched below.
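A minimal sketch of such an InferenceService follows. The predictor spec, storage URI, and explainer image reference are illustrative placeholders rather than verified values; substitute the TrustyAI explainer image appropriate for your environment.

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: explainer-test
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://example-bucket/model  # placeholder model location
  explainer:
    containers:
      - name: explainer
        # Illustrative image reference; check the image and tag
        # published for your TrustyAI release.
        image: quay.io/trustyai/trustyai-kserve-explainer:latest
```

Because LIME is the default, no further configuration is needed to enable it.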
SHAP Explainer
To use the SHAP explainer instead, select it with an environment variable on the explainer container in the InferenceService YAML, as in the fragment below.
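This fragment sketches the explainer section with SHAP selected. The variable name EXPLAINER_TYPE is an assumption based on the description above; confirm the exact name against the release you deploy.

```yaml
  explainer:
    containers:
      - name: explainer
        image: quay.io/trustyai/trustyai-kserve-explainer:latest  # illustrative reference
        env:
          # Assumed variable name; switches the explanation method
          # from the default (LIME) to SHAP.
          - name: EXPLAINER_TYPE
            value: "SHAP"
```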
Interacting with the Explainer
You can interact with the explainer through the :explain endpoint. Send a JSON payload containing the input data, and the explainer returns an explanation of the model’s prediction. The response includes a saliency score for each input feature, indicating how much that feature contributed to the prediction.
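For illustration, a response might look roughly like the sketch below. The field names are assumptions about the response shape, not a verified schema; consult the tutorial linked below for the exact structure.

```json
{
  "predictions": [
    {
      "saliencies": {
        "output-0": [
          {"name": "feature-0", "score": 0.42},
          {"name": "feature-1", "score": -0.13}
        ]
      }
    }
  ]
}
```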
A full tutorial on how to deploy the TrustyAI KServe explainer is available at Saliency Explanations with KServe.