orca_sdk.telemetry#

FeedbackCategory #

A category of feedback for predictions.

Categories are created automatically the first time feedback with a new name is recorded. The value type of the category is inferred from the first recorded value. Subsequent feedback for the same category must be of the same type. Categories are not model-specific.

Attributes:

  • id (str) –

    Unique identifier for the category.

  • name (str) –

    Name of the category.

  • value_type (type[bool] | type[float]) –

    Type that values for this category must have.

  • created_at (datetime) –

    When the category was created.

all classmethod #

all()

Get a list of all existing feedback categories.

Returns:

  • list[FeedbackCategory] –

    A list of all existing feedback categories.

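Examples:

List all existing categories (output is illustrative; the repr format and category names are hypothetical):

>>> FeedbackCategory.all()
[FeedbackCategory({name: "accepted", value_type: bool}),
 FeedbackCategory({name: "rating", value_type: float})]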
drop classmethod #

drop(name)

Drop all feedback for this category and drop the category itself, allowing it to be recreated with a different value type.

Warning

This will delete all feedback in this category across all models.

Parameters:

  • name (str) –

    Name of the category to drop.

Raises:

  • LookupError

    If no feedback category with the given name is found.

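Examples:

Drop a hypothetical "rating" category, for example to recreate it with a bool value type instead of float:

>>> FeedbackCategory.drop("rating")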
ClassificationPrediction #

Bases: _Prediction

Labeled prediction result from a ClassificationModel

Attributes:

  • prediction_id (str) –

    Unique identifier of this prediction used for feedback

  • label (int) –

    Label predicted by the model

  • label_name (str) –

    Human-readable name of the label

  • confidence (float) –

    Confidence of the prediction

  • anomaly_score (float) –

    Anomaly score of the input

  • input_value (str) –

    The input value used for the prediction

  • expected_label (int | None) –

    Expected label for the prediction, useful when evaluating the model

  • expected_label_name (str | None) –

    Human-readable name of the expected label

  • memory_lookups (list[LabeledMemoryLookup]) –

    Memories used by the model to make the prediction

  • explanation (str) –

    Natural language explanation of the prediction, only available if the model has the Explain API enabled

  • tags (set[str]) –

    Tags for the prediction, useful for filtering and grouping predictions

  • model (ClassificationModel) –

    Model used to make the prediction

  • memoryset (LabeledMemoryset) –

    Memoryset that was used to lookup memories to ground the prediction

explain #

explain(refresh=False)

Print an explanation of the prediction as a stream of text.

Parameters:

  • refresh (bool, default: False ) –

    Force the explanation agent to re-run even if an explanation already exists.

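Examples:

Stream an explanation for a prediction (the output text is illustrative):

>>> prediction.explain()
The model predicted <positive: 1> because the input "I am happy" closely
matches positive memories in the memoryset ...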
get classmethod #

get(prediction_id: str) -> Self
get(prediction_id: Iterable[str]) -> list[Self]
get(prediction_id)

Fetch a prediction or predictions

Parameters:

  • prediction_id (str | Iterable[str]) –

    Unique identifier of the prediction or predictions to fetch

Returns:

  • Self | list[Self] –

    The prediction with the given id, or a list of predictions if an iterable of ids is given.

Raises:

  • LookupError

    If no prediction with the given id is found

Examples:

Fetch a single prediction:

>>> LabelPrediction.get("0195019a-5bc7-7afb-b902-5945ee1fb766")
LabelPrediction({
    label: <positive: 1>,
    confidence: 0.95,
    anomaly_score: 0.1,
    input_value: "I am happy",
    memoryset: "my_memoryset",
    model: "my_model"
})

Fetch multiple predictions:

>>> ClassificationPrediction.get([
...     "0195019a-5bc7-7afb-b902-5945ee1fb766",
...     "019501a1-ea08-76b2-9f62-95e4800b4841",
... ])
[
    ClassificationPrediction({
        label: <positive: 1>,
        confidence: 0.95,
        anomaly_score: 0.1,
        input_value: "I am happy",
        memoryset: "my_memoryset",
        model: "my_model"
    }),
    ClassificationPrediction({
        label: <negative: 0>,
        confidence: 0.05,
        anomaly_score: 0.2,
        input_value: "I am sad",
        memoryset: "my_memoryset",
        model: "my_model"
    }),
]

refresh #

refresh()

Refresh the prediction data from the OrcaCloud

add_tag #

add_tag(tag)

Add a tag to the prediction

Parameters:

  • tag (str) –

    Tag to add to the prediction

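Examples:

Tag a prediction for later review (the tag name "needs_review" is arbitrary):

>>> prediction.add_tag("needs_review")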
remove_tag #

remove_tag(tag)

Remove a tag from the prediction

Parameters:

  • tag (str) –

    Tag to remove from the prediction

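Examples:

Remove a previously added tag (assuming "needs_review" was added earlier):

>>> prediction.remove_tag("needs_review")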
record_feedback #

record_feedback(category, value, *, comment=None)

Record feedback for the prediction.

We support recording feedback in several categories for each prediction. A FeedbackCategory is created automatically the first time feedback with a new name is recorded. Categories are global across models. The value type of the category is inferred from the first recorded value. Subsequent feedback for the same category must be of the same type.

Parameters:

  • category (str) –

    Name of the category under which to record the feedback.

  • value (bool | float) –

    Feedback value to record. Pass True for positive feedback and False for negative feedback, or a float between -1.0 and +1.0, where negative values indicate negative feedback and positive values indicate positive feedback.

  • comment (str | None, default: None ) –

    Optional comment to record with the feedback.

Examples:

Record whether a suggestion was accepted or rejected:

>>> prediction.record_feedback("accepted", True)

Record a star rating as a normalized continuous score between -1.0 and +1.0:

>>> prediction.record_feedback("rating", -0.5, comment="2 stars")

Raises:

  • ValueError

    If the value does not match previous value types for the category, or is a float that is not between -1.0 and +1.0.

delete_feedback #

delete_feedback(category)

Delete prediction feedback for a specific category.

Parameters:

  • category (str) –

    Name of the category of the feedback to delete.

Raises:

  • LookupError

    If no feedback with the given category is found for this prediction.

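Examples:

Delete feedback previously recorded under a hypothetical "rating" category:

>>> prediction.delete_feedback("rating")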
inspect #

inspect(**kwargs)

Display an interactive UI with the details about this prediction

Parameters:

  • **kwargs

    Additional keyword arguments to pass to the display function

Note

This method is only available in Jupyter notebooks.

update #

update(*, tags=UNSET, expected_label=UNSET)

Update the prediction.

Note

If a field is not provided, it will default to UNSET and not be updated.

Parameters:

  • tags (set[str] | None, default: UNSET ) –

    New tags to set for the prediction. Set to None to remove all tags.

  • expected_label (int | None, default: UNSET ) –

    New expected label to set for the prediction. Set to None to remove.

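Examples:

Set an expected label and replace the tags (values are illustrative); fields left at UNSET are not modified:

>>> prediction.update(expected_label=1, tags={"evaluated"})

Remove the expected label without changing the tags:

>>> prediction.update(expected_label=None)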
RegressionPrediction #

Bases: _Prediction

Score-based prediction result from a RegressionModel

Attributes:

  • prediction_id (str) –

    Unique identifier of this prediction used for feedback

  • score (float) –

    Score predicted by the model

  • confidence (float) –

    Confidence of the prediction

  • anomaly_score (float) –

    Anomaly score of the input

  • input_value (str) –

    The input value used for the prediction

  • expected_score (float | None) –

    Expected score for the prediction, useful when evaluating the model

  • memory_lookups (list[ScoredMemoryLookup]) –

    Memories used by the model to make the prediction

  • explanation (str) –

    Natural language explanation of the prediction, only available if the model has the Explain API enabled

  • tags (set[str]) –

    Tags for the prediction, useful for filtering and grouping predictions

  • model (RegressionModel) –

    Model used to make the prediction

  • memoryset (ScoredMemoryset) –

    Memoryset that was used to lookup memories to ground the prediction

explanation property #

explanation

The explanation for this prediction. Requires lighthouse_client_api_key to be set.

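Examples:

Access the explanation for a prediction (the returned text is illustrative):

>>> prediction.explanation
'The predicted score is grounded in memories with similarly positive sentiment.'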
explain #

explain(refresh=False)

Print an explanation of the prediction as a stream of text.

Parameters:

  • refresh (bool, default: False ) –

    Force the explanation agent to re-run even if an explanation already exists.

get classmethod #

get(prediction_id: str) -> Self
get(prediction_id: Iterable[str]) -> list[Self]
get(prediction_id)

Fetch a prediction or predictions

Parameters:

  • prediction_id (str | Iterable[str]) –

    Unique identifier of the prediction or predictions to fetch

Returns:

  • Self | list[Self] –

    The prediction with the given id, or a list of predictions if an iterable of ids is given.

Raises:

  • LookupError

    If no prediction with the given id is found

Examples:

Fetch a single prediction:

>>> LabelPrediction.get("0195019a-5bc7-7afb-b902-5945ee1fb766")
LabelPrediction({
    label: <positive: 1>,
    confidence: 0.95,
    anomaly_score: 0.1,
    input_value: "I am happy",
    memoryset: "my_memoryset",
    model: "my_model"
})

Fetch multiple predictions:

>>> RegressionPrediction.get([
...     "0195019a-5bc7-7afb-b902-5945ee1fb766",
...     "019501a1-ea08-76b2-9f62-95e4800b4841",
... ])
[
    RegressionPrediction({
        score: 0.85,
        confidence: 0.95,
        anomaly_score: 0.1,
        input_value: "I am happy",
        memoryset: "my_memoryset",
        model: "my_model"
    }),
    RegressionPrediction({
        score: 0.15,
        confidence: 0.05,
        anomaly_score: 0.2,
        input_value: "I am sad",
        memoryset: "my_memoryset",
        model: "my_model"
    }),
]

refresh #

refresh()

Refresh the prediction data from the OrcaCloud

add_tag #

add_tag(tag)

Add a tag to the prediction

Parameters:

  • tag (str) –

    Tag to add to the prediction

remove_tag #

remove_tag(tag)

Remove a tag from the prediction

Parameters:

  • tag (str) –

    Tag to remove from the prediction

record_feedback #

record_feedback(category, value, *, comment=None)

Record feedback for the prediction.

We support recording feedback in several categories for each prediction. A FeedbackCategory is created automatically the first time feedback with a new name is recorded. Categories are global across models. The value type of the category is inferred from the first recorded value. Subsequent feedback for the same category must be of the same type.

Parameters:

  • category (str) –

    Name of the category under which to record the feedback.

  • value (bool | float) –

    Feedback value to record. Pass True for positive feedback and False for negative feedback, or a float between -1.0 and +1.0, where negative values indicate negative feedback and positive values indicate positive feedback.

  • comment (str | None, default: None ) –

    Optional comment to record with the feedback.

Examples:

Record whether a suggestion was accepted or rejected:

>>> prediction.record_feedback("accepted", True)

Record a star rating as a normalized continuous score between -1.0 and +1.0:

>>> prediction.record_feedback("rating", -0.5, comment="2 stars")

Raises:

  • ValueError

    If the value does not match previous value types for the category, or is a float that is not between -1.0 and +1.0.

delete_feedback #

delete_feedback(category)

Delete prediction feedback for a specific category.

Parameters:

  • category (str) –

    Name of the category of the feedback to delete.

Raises:

  • LookupError

    If no feedback with the given category is found for this prediction.

inspect #

inspect(**kwargs)

Display an interactive UI with the details about this prediction

Parameters:

  • **kwargs

    Additional keyword arguments to pass to the display function

Note

This method is only available in Jupyter notebooks.

update #

update(*, tags=UNSET, expected_score=UNSET)

Update the prediction.

Note

If a field is not provided, it will default to UNSET and not be updated.

Parameters:

  • tags (set[str] | None, default: UNSET ) –

    New tags to set for the prediction. Set to None to remove all tags.

  • expected_score (float | None, default: UNSET ) –

    New expected score to set for the prediction. Set to None to remove.
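
Examples:

Set an expected score (the value is illustrative); fields left at UNSET are not modified:

>>> prediction.update(expected_score=0.8)

Remove the expected score without changing the tags:

>>> prediction.update(expected_score=None)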