qxmt.experiment.evaluation_factory module#
- class qxmt.experiment.evaluation_factory.EvaluationFactory
Bases: object
Factory class that instantiates and executes the appropriate evaluation.
This class centralizes the mapping between (model_type, task_type) and concrete Evaluation implementation, decoupling Experiment/Executor classes from evaluation details and making it easy to add new metrics or model types.
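The (model_type, task_type) dispatch described above can be sketched as a minimal stand-alone factory. All names and metric implementations below are illustrative assumptions, not qxmt's actual internals:

```python
# Hypothetical sketch of the dispatch mechanism; not qxmt's real implementation.
from typing import Any, Optional


def _evaluate_classification(params: dict[str, Any]) -> dict[str, Any]:
    # Toy metric: fraction of labels where actual == predicted.
    actual, predicted = params["actual"], params["predicted"]
    correct = sum(a == p for a, p in zip(actual, predicted))
    return {"accuracy": correct / len(actual)}


def _evaluate_vqe(params: dict[str, Any]) -> dict[str, Any]:
    # Toy metric: final value of the optimization cost history.
    return {"final_cost": params["cost_history"][-1]}


# Central mapping from (model_type, task_type) to a concrete evaluation.
_DISPATCH = {
    ("qkernel", "classification"): _evaluate_classification,
    ("vqe", None): _evaluate_vqe,
}


def evaluate(*, model_type: str, task_type: Optional[str],
             params: dict[str, Any]) -> dict[str, Any]:
    try:
        impl = _DISPATCH[(model_type, task_type)]
    except KeyError:
        # Mirrors the documented ValueError for invalid combinations.
        raise ValueError(
            f"Invalid combination: model_type={model_type!r}, "
            f"task_type={task_type!r}"
        )
    return impl(params)
```

Because callers only touch the single `evaluate` entry point, adding a new model or task type means registering one more entry in the mapping rather than editing the Experiment/Executor classes.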
- Class Attributes:
QKERNEL_MODEL_TYPE_NAME (str): Constant representing the qkernel model type.
VQE_MODEL_TYPE_NAME (str): Constant representing the VQE model type.
- static evaluate(*, model_type, task_type, params, default_metrics_name=None, custom_metrics=None)
Perform the evaluation and return the result as a plain dictionary.
- Parameters:
model_type (str) – Type of the model to evaluate. Must be either “qkernel” or “vqe”.
task_type (Optional[str]) – Type of the task. For qkernel, must be “classification” or “regression”. For vqe, must be None.
params (dict[str, Any]) – Arguments forwarded to the evaluation class. For supervised tasks, this should include “actual” and “predicted”; for VQE, it should include “cost_history”.
default_metrics_name (Optional[list[str]]) – List of default metric names to use.
custom_metrics (Optional[list[dict[str, Any]]]) – List of custom metrics to use.
- Returns:
Dictionary containing the evaluation results.
- Return type:
dict[str, Any]
- Raises:
ValueError – If the combination of model_type and task_type is invalid.
Examples
>>> result = EvaluationFactory.evaluate(
...     model_type="qkernel",
...     task_type="classification",
...     params={"actual": [0, 1], "predicted": [0, 1]},
...     default_metrics_name=["accuracy"],
... )