# orca_sdk.classification_model

## ClassificationModel

A handle to a classification model in the OrcaCloud
Attributes:

- `id` (`str`) – Unique identifier for the model
- `name` (`str`) – Unique name of the model
- `description` (`str | None`) – Optional description of the model
- `memoryset` (`LabeledMemoryset`) – Memoryset that the model uses
- `head_type` (`RACHeadType`) – Classification head type of the model
- `num_classes` (`int`) – Number of distinct classes the model can predict
- `memory_lookup_count` (`int`) – Number of memories the model uses for each prediction
- `weigh_memories` (`bool | None`) – If using a KNN head, whether the model weighs memories by their lookup score
- `min_memory_weight` (`float | None`) – If using a KNN head, minimum lookup score a memory must exceed to not be ignored
- `locked` (`bool`) – Whether the model is locked to prevent accidental deletion
- `created_at` (`datetime`) – When the model was created
### last_prediction (property)

Last prediction made by the model

Note: If the last prediction was part of a batch prediction, the last prediction from the batch is returned. If no prediction has been made yet, a `LookupError` is raised.
### create (classmethod)

```python
create(
    name,
    memoryset,
    head_type="KNN",
    *,
    description=None,
    num_classes=None,
    memory_lookup_count=None,
    weigh_memories=True,
    min_memory_weight=None,
    if_exists="error"
)
```
Create a new classification model
Parameters:

- `name` (`str`) – Name for the new model (must be unique)
- `memoryset` (`LabeledMemoryset`) – Memoryset to attach the model to
- `head_type` (`Literal["BMMOE", "FF", "KNN", "MMOE"]`, default: `"KNN"`) – Type of model head to use
- `num_classes` (`int | None`, default: `None`) – Number of classes this model can predict; will be inferred from the memoryset if not specified
- `memory_lookup_count` (`int | None`, default: `None`) – Number of memories to look up for each prediction; by default the system uses a simple heuristic to choose a number of memories that works well in most cases
- `weigh_memories` (`bool`, default: `True`) – If using a KNN head, whether the model weighs memories by their lookup score
- `min_memory_weight` (`float | None`, default: `None`) – If using a KNN head, minimum lookup score a memory must exceed to not be ignored
- `if_exists` (`CreateMode`, default: `"error"`) – What to do if a model with the same name already exists; defaults to `"error"`. The other option is `"open"`, which opens the existing model.
- `description` (`str | None`, default: `None`) – Optional description for the model. This is used in agentic flows, so make sure it is concise and describes the purpose of your model.
Returns:

- `ClassificationModel` – Handle to the new model in the OrcaCloud
Raises:

- `ValueError` – If the model already exists and `if_exists` is `"error"`, or if it is `"open"` and the existing model has different attributes
Examples:
Create a new model using default options:
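For instance (a minimal sketch; the top-level import path and the memoryset name are assumptions):

```python
from orca_sdk import ClassificationModel, LabeledMemoryset

# Open an existing labeled memoryset (name is illustrative)
memoryset = LabeledMemoryset.open("my_memoryset")

# Create a model with the default KNN head; num_classes is inferred from the memoryset
model = ClassificationModel.create("my_model", memoryset)
```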
Create a new model with non-default model head and options:
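For instance (a sketch; parameter values are illustrative):

```python
# Use a feed-forward head instead of the default KNN head
model = ClassificationModel.create(
    "my_ff_model",
    memoryset,
    head_type="FF",
    num_classes=3,
    memory_lookup_count=15,
    description="Classifies support tickets by urgency",
    if_exists="open",
)
```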
### open (classmethod)
Get a handle to a classification model in the OrcaCloud
Parameters:

- `name` (`str`) – Name or unique identifier of the classification model
Returns:

- `ClassificationModel` – Handle to the existing classification model in the OrcaCloud
Raises:

- `LookupError` – If the classification model does not exist
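For instance (a minimal sketch; the model name is illustrative):

```python
model = ClassificationModel.open("my_model")
```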
### exists (classmethod)

Check whether a classification model with the given name or id exists in the OrcaCloud
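For instance (a sketch; that `exists` takes a name or id and returns a `bool` is an assumption, as this extract does not document its signature):

```python
if not ClassificationModel.exists("my_model"):
    model = ClassificationModel.create("my_model", memoryset)
```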
### all (classmethod)
Get a list of handles to all classification models in the OrcaCloud
Returns:

- `list[ClassificationModel]` – List of handles to all classification models in the OrcaCloud
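For instance (a minimal sketch using the documented attributes):

```python
for model in ClassificationModel.all():
    print(model.name, model.head_type)
```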
### drop (classmethod)
Delete a classification model from the OrcaCloud

Warning: This will delete the model and all associated data, including predictions, evaluations, and feedback.
Parameters:

- `name_or_id` (`str`) – Name or id of the classification model
- `if_not_exists` (`DropMode`, default: `"error"`) – What to do if the classification model does not exist; defaults to `"error"`. The other option is `"ignore"`, which does nothing if the classification model does not exist.
Raises:

- `LookupError` – If the classification model does not exist and `if_not_exists` is `"error"`
### set

Update editable attributes of the model.
Note: If a field is not provided, it will default to `UNSET` and not be updated.
Parameters:

- `description` (`str | None`, default: `UNSET`) – Value to set for the description
- `locked` (`bool`, default: `UNSET`) – Value to set for the locked status
Examples:
Update the description:
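For instance (a sketch; `model` is an existing handle and the description text is illustrative):

```python
model.set(description="Classifies customer reviews by sentiment")
```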
Remove description:
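Passing `None` clears the description:

```python
model.set(description=None)
```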
Lock the model:
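Locking protects the model from accidental deletion:

```python
model.set(locked=True)
```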
### predict

Predict label(s) for the given input value(s), grounded in similar memories
Parameters:

- `value` (`list[str] | str`) – Value(s) to predict labels for
- `expected_labels` (`list[int] | int | None`, default: `None`) – Expected label(s) for the given input, recorded for model evaluation
- `tags` (`set[str] | None`, default: `None`) – Tags to add to the prediction(s)
- `save_telemetry` (`Literal["off", "on", "sync", "async"]`, default: `"on"`) – Whether to save telemetry for the prediction(s). One of:
    - `"off"`: Do not save telemetry
    - `"on"`: Save telemetry asynchronously unless the `ORCA_SAVE_TELEMETRY_SYNCHRONOUSLY` environment variable is set
    - `"sync"`: Save telemetry synchronously
    - `"async"`: Save telemetry asynchronously
Returns:

- `list[ClassificationPrediction] | ClassificationPrediction` – Label prediction or list of label predictions
Examples:
Predict the label for a single value:
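For instance (a sketch; the input text is illustrative and the `label` attribute on the returned prediction is an assumption):

```python
prediction = model.predict("I love this product!")
print(prediction.label)
```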
Predict the labels for a list of values:
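Batch prediction works the same way (a sketch; labels and tags are illustrative):

```python
predictions = model.predict(
    ["Great service!", "Never buying this again."],
    expected_labels=[1, 0],
    tags={"smoke-test"},
)
```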
### predictions

Get a list of predictions made by this model
Parameters:

- `limit` (`int`, default: `100`) – Optional maximum number of predictions to return
- `offset` (`int`, default: `0`) – Optional offset of the first prediction to return
- `tag` (`str | None`, default: `None`) – Optional tag to filter predictions by
- `sort` (`list[tuple[PredictionSortItemItemType0, PredictionSortItemItemType1]]`, default: `[]`) – Optional list of columns and directions to sort the predictions by. Predictions can be sorted by `timestamp` or `confidence`.
- `expected_label_match` (`bool | None`, default: `None`) – Optional filter to only include predictions where the expected label does (`True`) or doesn't (`False`) match the predicted label
Returns:

- `list[ClassificationPrediction]` – List of label predictions
Examples:
Get the last 3 predictions:
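For instance (a sketch; the `"desc"` direction literal is an assumption):

```python
last_three = model.predictions(limit=3, sort=[("timestamp", "desc")])
```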
Get second most confident prediction:
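Sort by confidence and skip the first result (same assumption about the direction literal):

```python
second_most_confident = model.predictions(
    limit=1, offset=1, sort=[("confidence", "desc")]
)
```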
Get predictions where the expected label doesn’t match the predicted label:
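```python
mismatched = model.predictions(expected_label_match=False)
```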
### evaluate

```python
evaluate(
    data,
    *,
    value_column="value",
    label_column="label",
    record_predictions=False,
    tags={"evaluation"},
    batch_size=100,
    background=False
)
```
Evaluate the classification model on a given dataset or datasource
Parameters:

- `data` (`Datasource | Dataset`) – Dataset or Datasource to evaluate the model on
- `value_column` (`str`, default: `"value"`) – Name of the column that contains the input values to the model
- `label_column` (`str`, default: `"label"`) – Name of the column containing the expected labels
- `record_predictions` (`bool`, default: `False`) – Whether to record `ClassificationPrediction`s for analysis
- `tags` (`set[str]`, default: `{"evaluation"}`) – Optional tags to add to the recorded `ClassificationPrediction`s
- `batch_size` (`int`, default: `100`) – Batch size for processing Dataset inputs (only used when the input is a Dataset)
- `background` (`bool`, default: `False`) – Whether to run the operation in the background and return a job handle
Returns:

- `ClassificationMetrics | Job[ClassificationMetrics]` – Metrics including accuracy, F1 score, ROC AUC, PR AUC, and anomaly score statistics; a `Job` handle is returned instead when `background` is `True`
Examples:
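For instance (a sketch; the dataset variable, column names, and metric attribute names are assumptions):

```python
metrics = model.evaluate(eval_dataset, value_column="text", label_column="label")
print(metrics.accuracy, metrics.f1_score)
```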
### use_memoryset

Temporarily override the memoryset used by the model for predictions
Parameters:

- `memoryset_override` (`LabeledMemoryset`) – Memoryset to override the default memoryset with
Examples:
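For instance (a sketch; that `use_memoryset` acts as a context manager is an assumption based on the override being temporary):

```python
with model.use_memoryset(other_memoryset):
    prediction = model.predict("Some input")
```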
### record_feedback

Record feedback for a list of predictions.

We support recording feedback in several categories for each prediction. A `FeedbackCategory` is created automatically the first time feedback with a new name is recorded. Categories are global across models. The value type of a category is inferred from the first recorded value; subsequent feedback for the same category must be of the same type.
Parameters:

- `feedback` (`Iterable[dict[str, Any]] | dict[str, Any]`) – Feedback to record; this should be dictionaries with the following keys:
    - `category`: Name of the category under which to record the feedback.
    - `value`: Feedback value to record; should be `True` for positive feedback and `False` for negative feedback, or a `float` between `-1.0` and `+1.0` where negative values indicate negative feedback and positive values indicate positive feedback.
    - `comment`: Optional comment to record with the feedback.
Examples:
Record whether predictions were correct or incorrect:
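For instance (a sketch using only the keys documented above; how feedback is linked to a specific prediction is not shown in this extract):

```python
model.record_feedback([
    {"category": "correct", "value": True},
    {"category": "correct", "value": False, "comment": "user flagged this prediction"},
])
```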
Record a star rating as a normalized continuous score between `-1.0` and `+1.0`:
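A sketch of one possible normalization, mapping a 1-5 star rating onto [-1.0, +1.0]:

```python
stars = 4  # 1-5 star rating from a user; 1 maps to -1.0, 3 to 0.0, 5 to +1.0
model.record_feedback({"category": "rating", "value": (stars - 3) / 2})
```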
Raises:

- `ValueError` – If the value does not match previous value types for the category, or is a `float` that is not between `-1.0` and `+1.0`