orcalib.rac.rac
RACModel
Bases: Module
predict
Predicts the label for the given input.
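A minimal usage sketch (illustrative, not taken from this page): it assumes an already-constructed RACModel instance and a plain text input; the input and return types shown are assumptions.

```python
def predict_example(model) -> None:
    # Hypothetical sketch: `model` is assumed to be a ready-to-use RACModel instance;
    # how it is constructed is not covered on this page.
    # Predict the label for a single input (input and return types are assumptions).
    prediction = model.predict("The battery lasts all day and charges quickly.")
    print(prediction)
```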
evaluate
Benchmarks the model on the given dataset and returns a collection of evaluation metrics.
For dict-like or list-of-dict-like datasets, there must be a label key and one of the following keys: text, image, or value. If there are only two keys and one is label, the other is inferred to be value.
For list-like datasets, the first element of each tuple must be the value and the second must be the label.
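A hedged sketch of the two dataset shapes described above; the model instance, the example records, and the exact metrics returned are assumptions.

```python
def evaluate_example(model) -> None:
    # `model` is assumed to be a ready-to-use RACModel instance.

    # Dict-like dataset: each record needs a "label" key plus one of "text", "image", or "value".
    dict_dataset = [
        {"text": "Great product, works exactly as advertised.", "label": 1},
        {"text": "Stopped working after two days.", "label": 0},
    ]

    # List-like dataset: each tuple is (value, label).
    tuple_dataset = [
        ("Great product, works exactly as advertised.", 1),
        ("Stopped working after two days.", 0),
    ]

    metrics = model.evaluate(dict_dataset)        # the exact metric fields returned are an assumption
    more_metrics = model.evaluate(tuple_dataset)
    print(metrics, more_metrics)
```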
evaluate_and_explain
Benchmarks the model on the given dataset and returns a collection of evaluation metrics.
For dict-like or list-of-dict-like datasets, there must be a label key and one of the following keys: text, image, or value. If there are only two keys and one is label, the other is inferred to be value.
For list-like datasets, the first element of each tuple must be the value and the second must be the label.
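A hedged sketch illustrating the two-key inference rule described above; the model instance and the structure of the returned results are assumptions.

```python
def evaluate_and_explain_example(model) -> None:
    # `model` is assumed to be a ready-to-use RACModel instance.
    # Each record has only two keys and one is "label", so the other ("review")
    # is inferred to be the value.
    dataset = [
        {"review": "Arrived broken and support never replied.", "label": 0},
        {"review": "Five stars, would buy again.", "label": 1},
    ]
    results = model.evaluate_and_explain(dataset)  # return structure is an assumption
    print(results)
```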
inspect_last_run
Displays information about the memories accessed during the last run (including their weights, etc.).
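A hedged sketch; it assumes an already-constructed RACModel instance and that a preceding predict call counts as the "last run" to inspect.

```python
def inspect_last_run_example(model) -> None:
    # `model` is assumed to be a ready-to-use RACModel instance.
    # Run a prediction first so there is a "last run" to inspect (assumption).
    model.predict("Absolutely loved it, would recommend to anyone.")
    # Show the memories (and their weights) that were accessed for that prediction.
    model.inspect_last_run()
```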
explain
Like predict, but instead of the prediction result, returns an explanation of the prediction (accessed memories, etc.); a usage sketch follows the parameter list below.
:param inpt: The input to explain
:param plot: If True, displays graphs via matplotlib
:param pretty_print: If True, returns a pretty-printed string of the explanation; if False, returns a dictionary
:param interactive: DISABLED - If True, would display the explanation in an interactive way (e.g. with a GUI)
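A hedged sketch using the parameters documented above (inpt, plot, pretty_print); the model instance and input type are assumptions, and interactive is omitted since it is disabled.

```python
def explain_example(model) -> None:
    # `model` is assumed to be a ready-to-use RACModel instance; parameter names
    # come from the docstring above, but the input type is an assumption.
    explanation = model.explain(
        inpt="Stopped working after two days.",
        plot=True,           # display graphs via matplotlib
        pretty_print=False,  # return a dictionary rather than a formatted string
    )
    print(explanation)
```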