qxmt.evaluation.metrics.base module#

class qxmt.evaluation.metrics.base.BaseMetric(name)#

Bases: ABC

Base class for evaluation metrics. This class defines the evaluation metric used for model evaluation and visualization, and provides a common interface within the QXMT library by absorbing the differences between individual metrics.

Examples

>>> import numpy as np
>>> from typing import Any
>>> from qxmt.evaluation.metrics.base import BaseMetric
>>> class CustomMetric(BaseMetric):
...     def __init__(self, name: str) -> None:
...         super().__init__(name)
...
...     @staticmethod
...     def evaluate(actual: np.ndarray, predicted: np.ndarray, **kwargs: Any) -> float:
...         return np.mean(np.abs(actual - predicted))
...
>>> metric = CustomMetric("mean_absolute_error")
>>> metric.set_score(np.array([1, 3, 3]), np.array([1, 2, 3]))
>>> metric.output_score()
mean_absolute_error: 0.33
Parameters:

name (str)

__init__(name)#

Initialize the evaluation metric.

Parameters:

name (str) – name of the metric. It is used as the column name in the output and the results DataFrame.

Return type:

None

abstract static evaluate(actual, predicted, **kwargs)#

Define the evaluation method for each metric. Each concrete subclass must implement this method.

Parameters:
  • actual (np.ndarray) – array of actual values

  • predicted (np.ndarray) – array of predicted values

  • **kwargs (dict) – additional arguments

Returns:

evaluated score

Return type:

float
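For illustration, here is a minimal sketch of a concrete implementation: a root-mean-squared-error metric. The RMSEMetric name is hypothetical and not part of QXMT; a subclass only needs to override this static method.

>>> import numpy as np
>>> from typing import Any
>>> from qxmt.evaluation.metrics.base import BaseMetric
>>> class RMSEMetric(BaseMetric):  # hypothetical subclass for illustration
...     def __init__(self) -> None:
...         super().__init__("rmse")
...
...     @staticmethod
...     def evaluate(actual: np.ndarray, predicted: np.ndarray, **kwargs: Any) -> float:
...         # root mean squared error between the actual and predicted arrays
...         return float(np.sqrt(np.mean((actual - predicted) ** 2)))
...
>>> RMSEMetric.evaluate(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 5.0]))
1.1547005383792515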

output_score(logger=<Logger qxmt.evaluation.metrics.base (INFO)>)#

Output the evaluated score to standard output.

Parameters:

logger (Logger, optional) – logger object used for the output. Defaults to the module-level LOGGER.

Raises:

ValueError – if the score has not been evaluated yet

Return type:

None
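A brief usage sketch, reusing the CustomMetric class from the Examples section above. The score must be set via set_score() before calling output_score(); passing your own Logger is optional (the "my_experiment" logger name below is only illustrative).

>>> import logging
>>> metric = CustomMetric("mean_absolute_error")
>>> metric.set_score(np.array([1, 3, 3]), np.array([1, 2, 3]))
>>> metric.output_score()  # uses the module-level default logger
mean_absolute_error: 0.33
>>> custom_logger = logging.getLogger("my_experiment")  # illustrative logger name
>>> metric.output_score(logger=custom_logger)  # route the output through your own logger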

set_score(actual, predicted, **kwargs)#

Evaluate the score and set it to the score attribute.

Parameters:
  • actual (np.ndarray) – array of actual values

  • predicted (np.ndarray) – array of predicted values

  • **kwargs (dict) – additional arguments

Return type:

None
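A minimal sketch of the evaluate-then-read flow, again reusing CustomMetric from the Examples section: after set_score(), the result is available on the score attribute.

>>> metric = CustomMetric("mean_absolute_error")
>>> metric.set_score(np.array([1, 3, 3]), np.array([1, 2, 3]))
>>> print(f"{metric.score:.2f}")  # the evaluated score is stored on the instance
0.33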