# orca_sdk.regression_model

## RegressionModel

A handle to a regression model in the OrcaCloud.
Attributes:

- `id` (`str`) – Unique identifier for the model
- `name` (`str`) – Unique name of the model
- `description` (`str | None`) – Optional description of the model
- `memoryset` (`ScoredMemoryset`) – Memoryset that the model uses
- `head_type` (`RARHeadType`) – Regression head type of the model
- `memory_lookup_count` (`int`) – Number of memories the model uses for each prediction
- `locked` (`bool`) – Whether the model is locked to prevent accidental deletion
- `created_at` (`datetime`) – When the model was created
- `updated_at` (`datetime`) – When the model was last updated
### last_prediction (property)

Last prediction made by the model.

**Note:** If the last prediction was part of a batch prediction, the last prediction from the batch is returned. If no prediction has been made yet, a `LookupError` is raised.
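A minimal sketch of reading the property, assuming an existing model handle; the model name is hypothetical, and the `score` attribute on the returned prediction is assumed from the sortable fields listed under `predictions`:

```python
from orca_sdk import RegressionModel

model = RegressionModel.open("review_score_model")  # hypothetical model name

try:
    prediction = model.last_prediction
    print(prediction.score)  # assumed attribute on RegressionPrediction
except LookupError:
    print("No predictions have been made yet")
```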
### create (classmethod)

Create a regression model.

Parameters:

- `name` (`str`) – Name of the model
- `memoryset` (`ScoredMemoryset`) – The scored memoryset to use for prediction
- `memory_lookup_count` (`int | None`, default: `None`) – Number of memories to retrieve for prediction. Defaults to 10.
- `description` (`str | None`, default: `None`) – Description of the model
- `if_exists` (`CreateMode`, default: `'error'`) – How to handle existing models with the same name

Returns:

- `RegressionModel` – RegressionModel instance

Raises:

- `ValueError` – If a model with the same name already exists and `if_exists` is `"error"`
- `ValueError` – If the memoryset is empty
- `ValueError` – If `memory_lookup_count` exceeds the number of memories in the memoryset
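A minimal sketch of creating a model. The memoryset name and `ScoredMemoryset.open` call are assumptions for illustration, and the positional order follows the parameter listing above:

```python
from orca_sdk import RegressionModel, ScoredMemoryset

# Assumes a scored memoryset named "review_scores" already exists in the OrcaCloud
memoryset = ScoredMemoryset.open("review_scores")

model = RegressionModel.create(
    "review_score_model",
    memoryset,
    memory_lookup_count=15,
    description="Predicts normalized review scores",
)
```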
### open (classmethod)

Get a handle to a regression model in the OrcaCloud.

Parameters:

- `name` (`str`) – Name or unique identifier of the regression model

Returns:

- `RegressionModel` – Handle to the existing regression model in the OrcaCloud

Raises:

- `LookupError` – If the regression model does not exist
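A short sketch of opening a handle and handling the documented `LookupError`; the model name is hypothetical:

```python
from orca_sdk import RegressionModel

try:
    model = RegressionModel.open("review_score_model")  # hypothetical name
except LookupError:
    print("No regression model with that name or id exists")
```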
### exists (classmethod)

Check whether a regression model with the given name or id exists in the OrcaCloud.
### all (classmethod)

Get a list of handles to all regression models in the OrcaCloud.

Returns:

- `list[RegressionModel]` – List of handles to all regression models in the OrcaCloud
### drop (classmethod)

Delete a regression model from the OrcaCloud.

**Warning:** This will delete the model and all associated data, including predictions, evaluations, and feedback.

Parameters:

- `name_or_id` (`str`) – Name or id of the regression model
- `if_not_exists` (`DropMode`, default: `'error'`) – What to do if the regression model does not exist. Defaults to `"error"`; the other option is `"ignore"`, which does nothing if the regression model does not exist.

Raises:

- `LookupError` – If the regression model does not exist and `if_not_exists` is `"error"`
### set

Update editable attributes of the model.

**Note:** If a field is not provided, it will default to `UNSET` and not be updated.

Parameters:

- `description` (`str | None`, default: `UNSET`) – Value to set for the description
- `locked` (`bool`, default: `UNSET`) – Value to set for the locked status

Examples:

Update the description:

Remove the description:

Lock the model:
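The three operations above can be sketched as follows, assuming an open model handle with a hypothetical name:

```python
from orca_sdk import RegressionModel

model = RegressionModel.open("review_score_model")  # hypothetical name

# Update the description:
model.set(description="Predicts 1-5 star review scores")

# Remove the description:
model.set(description=None)

# Lock the model to prevent accidental deletion:
model.set(locked=True)
```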
### predict

Make predictions using the regression model.

Parameters:

- `value` (`str | list[str]`) – Input text(s) to predict scores for
- `expected_scores` (`float | list[float] | None`, default: `None`) – Expected score(s) for telemetry tracking
- `tags` (`set[str] | None`, default: `None`) – Tags to associate with the prediction(s)
- `save_telemetry` (`Literal['off', 'on', 'sync', 'async']`, default: `'on'`) – Whether to save telemetry for the prediction(s). Defaults to `'on'`, which saves telemetry asynchronously unless the `ORCA_SAVE_TELEMETRY_SYNCHRONOUSLY` environment variable is set to `"1"`. You can also pass `"sync"` or `"async"` to explicitly set the save mode.

Returns:

- `RegressionPrediction | list[RegressionPrediction]` – Single RegressionPrediction or list of RegressionPrediction objects

Raises:

- `ValueError` – If `expected_scores` length doesn't match `value` length for batch predictions
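A sketch of single and batch prediction, assuming a hypothetical model name; per the signature, a single string returns one prediction and a list returns a list:

```python
from orca_sdk import RegressionModel

model = RegressionModel.open("review_score_model")  # hypothetical name

# Single prediction
prediction = model.predict("Great product, works as advertised!")
print(prediction.score)  # assumed attribute on RegressionPrediction

# Batch prediction with expected scores recorded for telemetry
predictions = model.predict(
    ["Great product!", "Terrible, broke after a day."],
    expected_scores=[0.9, 0.1],
    tags={"smoke-test"},
)
```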
### predictions

Get a list of predictions made by this model.

Parameters:

- `limit` (`int`, default: `100`) – Optional maximum number of predictions to return
- `offset` (`int`, default: `0`) – Optional offset of the first prediction to return
- `tag` (`str | None`, default: `None`) – Optional tag to filter predictions by
- `sort` (`list[tuple[PredictionSortItemItemType0, PredictionSortItemItemType1]]`, default: `[]`) – Optional list of columns and directions to sort the predictions by. Predictions can be sorted by `created_at`, `confidence`, `anomaly_score`, or `score`.

Returns:

- `list[RegressionPrediction]` – List of score predictions

Examples:

Get the last 3 predictions:

Get the second most confident prediction:
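The two examples above can be sketched as follows. The model name is hypothetical, and the `("column", "direction")` string tuples are an assumption about the `PredictionSortItemItemType0`/`PredictionSortItemItemType1` types:

```python
from orca_sdk import RegressionModel

model = RegressionModel.open("review_score_model")  # hypothetical name

# Get the last 3 predictions:
last_three = model.predictions(limit=3, sort=[("created_at", "desc")])

# Get the second most confident prediction:
second = model.predictions(limit=1, offset=1, sort=[("confidence", "desc")])[0]
```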
### evaluate

```python
evaluate(
    data,
    *,
    value_column="value",
    score_column="score",
    record_predictions=False,
    tags={"evaluation"},
    batch_size=100,
    background=False,
)
```

Evaluate the regression model on a given dataset or datasource.

Parameters:

- `data` (`Datasource | Dataset`) – Dataset or Datasource to evaluate the model on
- `value_column` (`str`, default: `'value'`) – Name of the column that contains the input values to the model
- `score_column` (`str`, default: `'score'`) – Name of the column containing the expected scores
- `record_predictions` (`bool`, default: `False`) – Whether to record `RegressionPrediction`s for analysis
- `tags` (`set[str]`, default: `{'evaluation'}`) – Optional tags to add to the recorded `RegressionPrediction`s
- `batch_size` (`int`, default: `100`) – Batch size for processing Dataset inputs (only used when the input is a Dataset)
- `background` (`bool`, default: `False`) – Whether to run the operation in the background and return a job handle

Returns:

- `RegressionMetrics | Job[RegressionMetrics]` – RegressionMetrics containing metrics including MAE, MSE, RMSE, R2, and anomaly score statistics

Examples:
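A sketch of an evaluation run. The model and datasource names, the column names, and the `Datasource.open` call are assumptions for illustration:

```python
from orca_sdk import Datasource, RegressionModel

model = RegressionModel.open("review_score_model")  # hypothetical name
data = Datasource.open("review_eval_data")  # assumed Datasource API

metrics = model.evaluate(
    data,
    value_column="review_text",
    score_column="stars_normalized",
    record_predictions=True,
    tags={"evaluation", "v2"},
)
print(metrics)  # RegressionMetrics: MAE, MSE, RMSE, R2, anomaly score statistics
```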
### use_memoryset

Temporarily override the memoryset used by the model for predictions.

Parameters:

- `memoryset_override` (`ScoredMemoryset`) – Memoryset to override the default memoryset with

Examples:
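A sketch under the assumption that `use_memoryset` behaves as a context manager, which the "temporarily override" wording suggests; the names are hypothetical:

```python
from orca_sdk import RegressionModel, ScoredMemoryset

model = RegressionModel.open("review_score_model")  # hypothetical name
experimental = ScoredMemoryset.open("review_scores_v2")  # hypothetical name

# Predictions inside the block are assumed to use the override;
# outside it, the model's default memoryset applies again.
with model.use_memoryset(experimental):
    prediction = model.predict("Great product!")
```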
### record_feedback

Record feedback for a list of predictions.

Feedback can be recorded in several categories for each prediction. A `FeedbackCategory` is created automatically the first time feedback with a new name is recorded. Categories are global across models. The value type of a category is inferred from the first recorded value; subsequent feedback for the same category must be of the same type.

Parameters:

- `feedback` (`Iterable[dict[str, Any]] | dict[str, Any]`) – Feedback to record. Each dictionary should have the following keys:
    - `category`: Name of the category under which to record the feedback.
    - `value`: Feedback value to record. Should be `True` for positive feedback and `False` for negative feedback, or a `float` between `-1.0` and `+1.0` where negative values indicate negative feedback and positive values indicate positive feedback.
    - `comment`: Optional comment to record with the feedback.

Examples:

Record whether predictions were accurate:

Record a star rating as a normalized continuous score between `-1.0` and `+1.0`:
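The two examples above can be sketched as follows, using only the documented dictionary keys. The model name is hypothetical, and how a feedback dictionary is linked to a specific prediction is not shown in this excerpt:

```python
from orca_sdk import RegressionModel

model = RegressionModel.open("review_score_model")  # hypothetical name

# Record whether predictions were accurate (boolean category):
model.record_feedback(
    {"category": "accurate", "value": True, "comment": "matched the human label"}
)

# Record a star rating as a normalized continuous score between -1.0 and +1.0:
stars = 4  # hypothetical 1-5 star rating; maps 1->-1.0, 3->0.0, 5->+1.0
model.record_feedback(
    {"category": "star_rating", "value": (stars - 3) / 2.0}
)
```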
Raises:

- `ValueError` – If the value does not match previous value types for the category, or is a `float` that is not between `-1.0` and `+1.0`.