# OrcaSDK Release Notes
This document tracks notable changes to the OrcaSDK.
## v0.1.13
- Added methods to retrieve the currently used org ID, scopes, API key name, and config.
- Added progress bars for batch operations in `predict`, `insert`, `update`, and `delete` methods.
- Added a `"replace"` option to the `if_exists` parameter in model creation methods to allow replacing existing models (see the sketch after this list).
- Enhanced explanation output to include input and prediction information.
- Refactored the memory suggestion API to return a list of `LabeledMemorySuggestion` objects that can be directly inserted into a memoryset.
- Replaced `head_type` with the `balance_classes` parameter in the classification model `create` method.
- Fixed prediction explanation timeout issue.
- Aligned and fixed `__repr__` methods across the SDK.
- Fixed predictions not storing the correct memoryset when a memoryset override is used.
- Fixed API health check in `OrcaSDK` to properly validate responses and fail faster.
- Removed default file type from datasource `download`.
- Added ability for root users to upload locally finetuned embedding models.
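A minimal sketch of the new `if_exists="replace"` option. The `ClassificationModel.create` and `LabeledMemoryset.open` calls and their argument names are assumptions for illustration; the notes above only confirm that model creation methods accept `if_exists="replace"`.

```python
from orca_sdk import ClassificationModel, LabeledMemoryset  # import paths assumed

# Assumed accessor; the notes do not specify how an existing memoryset is fetched.
memoryset = LabeledMemoryset.open("support-tickets")

# With if_exists="replace", an existing model with the same name is replaced
# instead of raising an "already exists" error.
model = ClassificationModel.create(
    "ticket-router",
    memoryset=memoryset,
    if_exists="replace",
)
```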
## v0.1.12
- Added a `partitioned` property on memorysets to check whether a memoryset uses partitioning.
- Added a `partitioned` parameter to memoryset `create` to create empty partitioned memorysets (see the sketch after this list).
- Added ability to change whether a memoryset is partitioned during `clone` via the `partitioned` parameter.
- Added a static `compute` method to classification and regression metrics to calculate metrics from a list of predictions.
- Added a shared `logger` to be used throughout the SDK that can be customized.
- Added a `consistency` parameter to `get`, `query`, `search`, and `predict` methods.
- Added support for model `evaluate` with pandas data frames and generic iterables of dictionaries.
- Fixed a bug where expected labels/scores were not being saved on predictions when telemetry was disabled.
- Removed `scikit-learn` and `numpy` dependencies (metrics calculation now happens on the API server).
- Removed `datasets` dependency (it was only needed for types and torch data parsing, which was refactored to no longer need it).
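A rough sketch of the partitioned-memoryset and `consistency` additions. The `LabeledMemoryset` class, the `create` and `search` signatures, and the `"strong"` consistency value are assumptions for illustration; only the `partitioned` and `consistency` parameter names come from the notes above.

```python
from orca_sdk import LabeledMemoryset  # class name and import path assumed

# Create an empty partitioned memoryset via the new `partitioned` parameter.
memoryset = LabeledMemoryset.create("support-tickets", partitioned=True)

# New property reporting whether the memoryset uses partitioning.
print(memoryset.partitioned)  # True

# The new `consistency` parameter is also accepted by get/query/search/predict;
# the "strong" value shown here is a placeholder, not a documented option.
results = memoryset.search("refund request", consistency="strong")
```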
## v0.1.11
- Added a `cascade` parameter to the `drop` method on memorysets and finetuned embedding models to allow deleting related resources in one call (avoids foreign-key errors).
- Added `classification_models` and `regression_models` properties on memorysets to list the models associated with a memoryset.
- Added support for updating all memories that match a filter via the memoryset `update` method (see the sketch after this list).
- Changed the memoryset `update` method to return the number of updated memories (instead of the updated objects) to reduce network usage.
- Added support for deleting all memories that match a filter via the memoryset `delete` method.
- Added a `truncate` method on memorysets to delete all memories, or only those in a specific partition (`partition_id` defaults to `UNSET`; passing `None` truncates the global partition).
- Removed partition parameters from the memoryset `delete` and `query` methods; use `filter` to target one or more partitions, or use `truncate` to clear a partition.
- Fixed a bug where updating non-metadata fields on a memory could clear its metadata.
- Removed `torch`, `pandas`, and `pyarrow` dependencies (they were only needed for typing).
- Made `gradio` optional; install the notebook UI extras via `orca_sdk[ui]`.
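A sketch of the filter-based bulk operations, `truncate`, and `cascade` drop. The filter expression, the `metadata` keyword, the partition ID, and the `open` accessor are illustrative assumptions; the notes above only establish that `update` and `delete` accept filters, that `update` returns a count, and that `truncate` and `drop(cascade=True)` exist.

```python
from orca_sdk import LabeledMemoryset  # class name and import path assumed

memoryset = LabeledMemoryset.open("support-tickets")  # assumed accessor

# Bulk update by filter; the return value is now the number of updated memories.
# The filter expression and metadata keyword are illustrative only.
updated_count = memoryset.update(
    filters=[("source", "==", "import-2024")],
    metadata={"reviewed": True},
)
print(f"updated {updated_count} memories")

# Clear one partition; per the notes, passing partition_id=None would truncate
# the global partition instead, and omitting it deletes all memories.
memoryset.truncate(partition_id="tenant-42")

# Drop the memoryset together with dependent resources in one call.
memoryset.drop(cascade=True)
```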
## v0.1.10
- Removed deprecation warning from `OrcaCredentials.set_api_key`.
## v0.1.9
- Added support for Python 3.14, including updating `datasets` to 4.4.2, `pyarrow` to 22.0.0, and `gradio` to 6.3.0, and fixing several incompatibility issues.
- Changed `predictions` to return all predictions by default when `limit` is `None`.
- Changed `predict` and `apredict` to automatically batch requests to reduce network overhead (see the sketch after this list).
- Fixed `evaluate` to also include the confusion matrix when running evaluate with a local dataset.
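A small sketch of the auto-batched `predict` and the `limit=None` behavior of `predictions`. The `ClassificationModel.open` accessor and the exact input and return shapes are assumptions for illustration.

```python
from orca_sdk import ClassificationModel  # import path assumed

model = ClassificationModel.open("ticket-router")  # assumed accessor

# Large inputs are now split into multiple requests behind the scenes.
predictions = model.predict(["please refund my order", "reset my password"] * 500)

# With limit=None, all stored predictions are returned rather than a capped page.
history = model.predictions(limit=None)
```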
## v0.1.7
- Added confusion matrix to classification metrics
- Added ability to create empty memorysets
- Added stricter checks for `if_exists="open"` during memoryset creation
- Added support for running `distribution`, `duplicate`, `cluster`, and `projection` analyses on `ScoredMemoryset`
- Tweaked representation of predictive models and embedding models
- Fixed classification metrics calculation when test set classes don’t match model’s predicted classes.
## v0.1.6
- Fixed bug that could lead to division by zero during metrics calculation
## v0.1.5
- Added support for partitioned memorysets and models
## v0.1.4
- Added `use_gpu` parameter to prediction methods to allow CPU-based predictions (see the sketch after this list)
- Added support for using string columns as label columns
- Added `sample` parameter to memoryset creation methods and model evaluate methods to allow sampling of rows
- Added `ignore_unlabeled` parameter to prediction and evaluate methods
- Added method to query datasource rows
- Added support to finetune embedding models for regression tasks
- Added support for querying prediction telemetry on memories
- Updated SDK to use new job endpoints
- Improved prediction caching
- Fixed dependency vulnerability
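A sketch of the new prediction and evaluation parameters. The `open` accessor, the way the datasource is referenced, and the `0.1` sampling value are assumptions; only the `use_gpu`, `sample`, and `ignore_unlabeled` parameter names come from the notes above.

```python
from orca_sdk import ClassificationModel  # import path assumed

model = ClassificationModel.open("ticket-router")  # assumed accessor

# Run the prediction on CPU instead of GPU.
predictions = model.predict(["where is my package"], use_gpu=False)

# Evaluate on a sample of the rows and skip rows without labels; the datasource
# reference and the sampling fraction are illustrative assumptions.
metrics = model.evaluate("tickets-eval-datasource", sample=0.1, ignore_unlabeled=True)
```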
## v0.1.3
- Added async `ClassificationModel.apredict` and `Memoryset.ainsert` methods (see the sketch after this list).
- Added batching to `Memoryset.insert`, `Memoryset.update`, and `Memoryset.delete` methods to reduce network issues.
- Renamed the `"neighbor"` analysis to the `"distribution"` analysis.
- Allowed injecting custom httpx clients via context to cleanly override API keys and control client lifecycle.
- Fixed creation of orphaned datasources when using the `if_exists="open"` option during memoryset creation.
- Removed the deprecated `Memoryset.run_embedding_evaluation` method; use `EmbeddingModel.evaluate` instead.
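A sketch of the new async methods. The `open` accessors and the memory payload shape are assumptions; only `Memoryset.ainsert` and `ClassificationModel.apredict` are named in the notes above.

```python
import asyncio

from orca_sdk import ClassificationModel, Memoryset  # import paths assumed


async def main() -> None:
    memoryset = Memoryset.open("support-tickets")       # assumed accessor
    model = ClassificationModel.open("ticket-router")   # assumed accessor

    # Awaitable counterparts of insert and predict; the memory payload shape
    # shown here is illustrative only.
    await memoryset.ainsert([{"value": "please refund my order", "label": "refund"}])
    predictions = await model.apredict(["how do I reset my password"])
    print(predictions)


asyncio.run(main())
```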
## v0.1.2
- Added support for None labels and scores to memorysets and models.
- Added automatic retrying of requests to mitigate transient network and service issues.
- Fixed a bug when receiving additional fields in API responses for metrics.
- Updated dependencies to resolve vulnerabilities