
orca_sdk.telemetry#

FeedbackCategory #

A category of feedback for predictions.

Categories are created automatically the first time feedback with a new name is recorded. The value type of the category is inferred from the first recorded value. Subsequent feedback for the same category must be of the same type. Categories are not model specific.
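
For example, recording a boolean value under a new name creates a bool category with that name, and a later float value for the same name is rejected. A minimal illustrative sketch, assuming an existing prediction object and no prior "accepted" category:

>>> prediction.record_feedback("accepted", True)  # first value creates a bool category named "accepted"
>>> prediction.record_feedback("accepted", 0.5)   # raises ValueError: subsequent values must also be bool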

Attributes:

  • id (str) –

    Unique identifier for the category.

  • name (str) –

    Name of the category.

  • value_type (type[bool] | type[float]) –

    Type that values for this category must have.

  • created_at (datetime) –

    When the category was created.

all classmethod #

all()

Get a list of all existing feedback categories.

Returns:

  • list[FeedbackCategory] –

    A list of all existing feedback categories.
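
Examples:

List the names of all existing categories (output is illustrative and assumes categories like those in the record_feedback examples below have been created):

>>> [category.name for category in FeedbackCategory.all()]
['accepted', 'rating']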

drop classmethod #

drop(name)

Drop all feedback for this category and drop the category itself, allowing it to be recreated with a different value type.

Warning

This will delete all feedback in this category across all models.

Parameters:

  • name (str) –

    Name of the category to drop.
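
Examples:

Drop the "rating" category (an illustrative name taken from the record_feedback examples below) together with all feedback recorded under it, so it can be recreated with a different value type:

>>> FeedbackCategory.drop("rating")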

Raises:

LabelPrediction #

A prediction made by a model

Attributes:

  • prediction_id (str) –

    Unique identifier for the prediction

  • label (int) –

    Predicted label for the input value

  • label_name (str | None) –

    Name of the predicted label

  • confidence (float) –

    Confidence of the prediction

  • memory_lookups (list[LabeledMemoryLookup]) –

    List of memories used to ground the prediction

  • input_value (str | None) –

    Input value that this prediction was for

  • model (ClassificationModel) –

    Model that was used to make the prediction

  • memoryset (LabeledMemoryset) –

    Memoryset that was used to lookup memories to ground the prediction

  • expected_label (int | None) –

    Optional expected label that was set for the prediction

  • tags (set[str]) –

    Tags that were set for the prediction

  • feedback (dict[str, bool | float]) –

    Feedback recorded, mapping from category name to value

get classmethod #

get(prediction_id: str) -> LabelPrediction
get(prediction_id: Iterable[str]) -> list[LabelPrediction]
get(prediction_id)

Fetch a prediction or predictions

Parameters:

  • prediction_id (str | Iterable[str]) –

    Unique identifier of the prediction or predictions to fetch

Returns:

  • LabelPrediction | list[LabelPrediction] –

    The prediction with the given id, or a list of predictions if an iterable of ids is given.

Raises:

  • LookupError

    If no prediction with the given id is found

Examples:

Fetch a single prediction:

>>> LabelPrediction.get("0195019a-5bc7-7afb-b902-5945ee1fb766")
LabelPrediction({
    label: <positive: 1>,
    confidence: 0.95,
    input_value: "I am happy",
    memoryset: "my_memoryset",
    model: "my_model"
})

Fetch multiple predictions:

>>> LabelPrediction.get([
...     "0195019a-5bc7-7afb-b902-5945ee1fb766",
...     "019501a1-ea08-76b2-9f62-95e4800b4841",
... ])
[
    LabelPrediction({
        label: <positive: 1>,
        confidence: 0.95,
        input_value: "I am happy",
        memoryset: "my_memoryset",
        model: "my_model"
    }),
    LabelPrediction({
        label: <negative: 0>,
        confidence: 0.05,
        input_value: "I am sad",
        memoryset: "my_memoryset",
        model: "my_model"
    }),
]

refresh #

refresh()

Refresh the prediction data from the OrcaCloud
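
Examples:

Re-fetch the prediction from the OrcaCloud, for example to pick up feedback or tags recorded elsewhere:

>>> prediction.refresh()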

inspect #

inspect()

Open a UI to inspect the memories used by this prediction

update #

update(*, expected_label=UNSET, tags=UNSET)

Update editable prediction properties.

Parameters:

  • expected_label (int | None, default: UNSET) –

    Value to set for the expected label; if not provided, the expected label is left unchanged. Pass None to remove it.

  • tags (set[str] | None, default: UNSET) –

    Set of tags to replace the existing tags with; if not provided, the tags are left unchanged. Pass None to remove them.

Examples:

Update the expected label:

>>> prediction.update(expected_label=1)

Add a new tag:

>>> prediction.update(tags=prediction.tags | {"new_tag"})

Remove expected label and tags:

>>> prediction.update(expected_label=None, tags=None)

add_tag #

add_tag(tag)

Add a tag to the prediction

Parameters:

  • tag (str) –

    Tag to add to the prediction
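
Examples:

Add a tag to the prediction (the tag name mirrors the one used in the update example above):

>>> prediction.add_tag("new_tag")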

remove_tag #

remove_tag(tag)

Remove a tag from the prediction

Parameters:

  • tag (str) –

    Tag to remove from the prediction
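
Examples:

Remove a previously added tag:

>>> prediction.remove_tag("new_tag")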

record_feedback #

record_feedback(category, value, *, comment=None)

Record feedback for the prediction.

We support recording feedback in several categories for each prediction. A FeedbackCategory is created automatically the first time feedback with a new name is recorded. Categories are global across models. The value type of the category is inferred from the first recorded value, and subsequent feedback for the same category must be of the same type.

Parameters:

  • category (str) –

    Name of the category under which to record the feedback.

  • value (bool | float) –

    Feedback value to record: either True for positive feedback and False for negative feedback, or a float between -1.0 and +1.0, where negative values indicate negative feedback and positive values indicate positive feedback.

  • comment (str | None, default: None) –

    Optional comment to record with the feedback.

Examples:

Record whether a suggestion was accepted or rejected:

>>> prediction.record_feedback("accepted", True)

Record star rating as normalized continuous score between -1.0 and +1.0:

>>> prediction.record_feedback("rating", -0.5, comment="2 stars")

Raises:

  • ValueError

    If the value does not match previous value types for the category, or is a float that is not between -1.0 and +1.0.

delete_feedback #

delete_feedback(category)

Delete prediction feedback for a specific category.

Parameters:

  • category (str) –

    Name of the category of the feedback to delete.
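
Examples:

Delete the feedback recorded under the "rating" category (category name from the record_feedback examples above):

>>> prediction.delete_feedback("rating")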

Raises: