SDK Reference
Evaluate
Detect faults in LLM output
Create evaluations
Evaluations are created automatically by the Chat endpoint. The model's output is what is evaluated, with the original request used as context.
- If a `callback` is provided, the evaluation is passed to that function, and inference is not affected.
- If no `callback` is provided and `stream` is `true`, the evaluation is available on the last chunk.
- If no `callback` is provided and `stream` is `false` or `none`, the evaluation can be found on the completion response.
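The three delivery rules above can be sketched as follows. This is a minimal, self-contained illustration of the behavior, not the actual Maitai SDK: the names `chat`, `Chunk`, `Completion`, and `Evaluation` are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Iterator, Optional, Union

@dataclass
class Evaluation:
    # Stand-in for an evaluation result (fields assumed for illustration).
    evaluation_request_id: str

@dataclass
class Chunk:
    content: str
    evaluation: Optional[Evaluation] = None  # populated only on the last chunk

@dataclass
class Completion:
    content: str
    evaluation: Optional[Evaluation] = None

def chat(
    prompt: str,
    stream: bool = False,
    callback: Optional[Callable[[Evaluation], None]] = None,
) -> Union[Iterator[Chunk], Completion]:
    # Hypothetical sketch of the delivery rules described above.
    evaluation = Evaluation("eval-123")
    if callback is not None:
        # Rule 1: the evaluation goes to the callback; the inference
        # output itself is unaffected.
        callback(evaluation)
        if stream:
            return iter([Chunk("Hello"), Chunk(" world")])
        return Completion("Hello world")
    if stream:
        # Rule 2: no callback, streaming -> evaluation rides on the last chunk.
        return iter([Chunk("Hello"), Chunk(" world", evaluation)])
    # Rule 3: no callback, no streaming -> evaluation is on the response.
    return Completion("Hello world", evaluation)
```

In the streaming case a consumer would iterate the chunks and check `.evaluation` on the final one; with a callback, the evaluation arrives asynchronously relative to the returned output.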
Evaluate Response
- The Maitai identifier for the application.
- The identifier of the evaluated Chat Completion request.
- A list of individual Sentinel results.
- The unique identifier for the evaluation request.
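The response fields above could be modeled as a simple container. The field names here are assumptions for illustration only; the reference lists the field descriptions, but this excerpt does not show their exact names.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SentinelResult:
    # One Sentinel's result (fields assumed for illustration).
    sentinel_name: str
    passed: bool

@dataclass
class EvaluateResponse:
    # Field names are illustrative assumptions, not the SDK's actual names.
    application_id: str                 # Maitai identifier for the application
    chat_completion_request_id: str     # the evaluated Chat Completion request
    sentinel_results: List[SentinelResult]  # individual Sentinel results
    evaluation_request_id: str          # unique id for the evaluation request
```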