QXMT reference#
Release: 0.2.0 (Date: September 11, 2024)
Subpackages#
Submodules#
qxmt.configs module#
- class qxmt.configs.DatasetConfig(*, openml=None, file=None, generate=None, split, features=None, raw_preprocess_logic=None, transform_logic=None)#
Bases:
BaseModel
- Parameters:
openml (OpenMLConfig | None)
file (FileConfig | None)
generate (GenerateDataConfig | None)
split (SplitConfig)
features (list[str] | None)
raw_preprocess_logic (list[dict[str, Any]] | dict[str, Any] | None)
transform_logic (list[dict[str, Any]] | dict[str, Any] | None)
- features: list[str] | None#
- file: FileConfig | None#
- generate: GenerateDataConfig | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- openml: OpenMLConfig | None#
- raw_preprocess_logic: list[dict[str, Any]] | dict[str, Any] | None#
- split: SplitConfig#
- transform_logic: list[dict[str, Any]] | dict[str, Any] | None#
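A minimal sketch of building a DatasetConfig programmatically from a local file source; the paths, label column, and feature names below are illustrative placeholders, not files shipped with QXMT:
>>> from qxmt.configs import DatasetConfig, FileConfig, SplitConfig
>>> dataset_config = DatasetConfig(
...     file=FileConfig(
...         data_path="data/features.csv",   # illustrative path
...         label_path="data/labels.csv",    # illustrative path
...         label_name="target",             # illustrative label column
...     ),
...     split=SplitConfig(
...         train_ratio=0.8, validation_ratio=0.0, test_ratio=0.2, shuffle=True,
...     ),
...     features=["feature_a", "feature_b"],  # optional subset of feature columns
... )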
- class qxmt.configs.DeviceConfig(*, platform, device_name, backend_name=None, n_qubits, shots=None, random_seed=None, save_shots_results=False)#
Bases:
BaseModel
- Parameters:
platform (str)
device_name (str)
backend_name (str | None)
n_qubits (int)
shots (int | None)
random_seed (int | None)
save_shots_results (bool)
- backend_name: str | None#
- check_real_machine_setting()#
- Return type:
- check_save_shots()#
- Return type:
- classmethod check_shots(value)#
- Parameters:
value (int)
- Return type:
int
- device_name: str#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- n_qubits: int#
- platform: str#
- random_seed: int | None#
- save_shots_results: bool#
- shots: int | None#
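A minimal sketch of a simulator device configuration; the platform and device names are assumptions for illustration and must match what your QXMT installation supports:
>>> from qxmt.configs import DeviceConfig
>>> device_config = DeviceConfig(
...     platform="pennylane",         # assumed platform name
...     device_name="default.qubit",  # assumed simulator device name
...     n_qubits=4,
...     shots=1024,                   # leave as None for analytic simulation
...     random_seed=42,
... )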
- class qxmt.configs.EvaluationConfig(*, default_metrics, custom_metrics=None)#
Bases:
BaseModel
- Parameters:
default_metrics (list[str])
custom_metrics (list[dict[str, Any]] | None)
- custom_metrics: list[dict[str, Any]] | None#
- default_metrics: list[str]#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
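A minimal sketch of an EvaluationConfig; the metric names mirror the DataFrame columns shown in the Experiment example further below and must correspond to metrics available in your environment:
>>> from qxmt.configs import EvaluationConfig
>>> evaluation_config = EvaluationConfig(
...     default_metrics=["accuracy", "precision", "recall", "f1_score"],  # illustrative metric names
... )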
- class qxmt.configs.ExperimentConfig(*, path='', description='', global_settings, dataset, device, feature_map=None, kernel=None, model, evaluation)#
Bases:
BaseModel
- Parameters:
path (Path | str)
description (str)
global_settings (GlobalSettingsConfig)
dataset (DatasetConfig)
device (DeviceConfig)
feature_map (FeatureMapConfig | None)
kernel (KernelConfig | None)
model (ModelConfig)
evaluation (EvaluationConfig)
- __init__(**data)#
Initialize the experiment configuration.
- Case 1:
Load the configuration from a file path. In this case, data is a dictionary with a single key, “path”.
- Case 2:
Load the configuration from a dictionary. In this case, data is a dictionary containing the configuration values. (A minimal usage sketch of both cases appears at the end of this class entry.)
- Parameters:
data (Any)
- Return type:
None
- dataset: DatasetConfig#
- description: str#
- device: DeviceConfig#
- evaluation: EvaluationConfig#
- feature_map: FeatureMapConfig | None#
- global_settings: GlobalSettingsConfig#
- kernel: KernelConfig | None#
- load_from_path(path)#
- Parameters:
path (str)
- Return type:
dict[str, Any]
- model: ModelConfig#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- path: Path | str#
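A minimal sketch of the two initialization cases described in __init__ above; the YAML path is the template path used in the Experiment example below, and config_dict is a hypothetical, already-parsed configuration dictionary:
>>> from qxmt.configs import ExperimentConfig
>>> config_from_file = ExperimentConfig(path="../configs/template.yaml")  # Case 1: single "path" key
>>> config_from_dict = ExperimentConfig(**config_dict)  # Case 2: config_dict is a hypothetical parsed dict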
- class qxmt.configs.FeatureMapConfig(*, module_name, implement_name, params=None)#
Bases:
BaseModel
- Parameters:
module_name (str)
implement_name (str)
params (dict[str, Any] | None)
- implement_name: str#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- module_name: str#
- params: dict[str, Any] | None#
- class qxmt.configs.FileConfig(*, data_path, label_path, label_name)#
Bases:
BaseModel
- Parameters:
data_path (Path | str)
label_path (Path | str | None)
label_name (str | None)
- data_path: Path | str#
- label_name: str | None#
- label_path: Path | str | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(_FileConfig__context)#
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- Parameters:
_FileConfig__context (dict[str, Any])
- Return type:
None
- class qxmt.configs.GenerateDataConfig(*, generate_method, params={})#
Bases:
BaseModel
- Parameters:
generate_method (Literal['linear'])
params (dict[str, Any] | None)
- generate_method: Literal['linear']#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- params: dict[str, Any] | None#
- class qxmt.configs.GlobalSettingsConfig(*, random_seed, task_type)#
Bases:
BaseModel
- Parameters:
random_seed (int)
task_type (Literal['classification', 'regression'])
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- random_seed: int#
- task_type: Literal['classification', 'regression']#
- class qxmt.configs.KernelConfig(*, module_name, implement_name, params=None)#
Bases:
BaseModel
- Parameters:
module_name (str)
implement_name (str)
params (dict[str, Any] | None)
- implement_name: str#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- module_name: str#
- params: dict[str, Any] | None#
- class qxmt.configs.ModelConfig(*, name, params, feature_map=None, kernel=None)#
Bases:
BaseModel
- Parameters:
name (str)
params (dict[str, Any])
feature_map (FeatureMapConfig | None)
kernel (KernelConfig | None)
- feature_map: FeatureMapConfig | None#
- kernel: KernelConfig | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- name: str#
- params: dict[str, Any]#
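A minimal sketch combining ModelConfig with the FeatureMapConfig and KernelConfig classes documented above; the model name, module paths, class names, and parameters are assumptions for illustration only:
>>> from qxmt.configs import FeatureMapConfig, KernelConfig, ModelConfig
>>> model_config = ModelConfig(
...     name="qsvm",                                # illustrative model name
...     params={"C": 1.0},                          # illustrative hyperparameters
...     feature_map=FeatureMapConfig(
...         module_name="my_project.feature_maps",  # hypothetical module
...         implement_name="MyFeatureMap",          # hypothetical class
...         params={"reps": 2},
...     ),
...     kernel=KernelConfig(
...         module_name="my_project.kernels",       # hypothetical module
...         implement_name="MyKernel",              # hypothetical class
...         params={},
...     ),
... )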
- class qxmt.configs.OpenMLConfig(*, name=None, id=None, return_format='numpy', save_path=None)#
Bases:
BaseModel
- Parameters:
name (str | None)
id (int | None)
return_format (str)
save_path (Path | str | None)
- id: int | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(_OpenMLConfig__context)#
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- Parameters:
_OpenMLConfig__context (dict[str, Any])
- Return type:
None
- name: str | None#
- return_format: str#
- save_path: Path | str | None#
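A minimal sketch of an OpenMLConfig that fetches a dataset by name; the dataset name and save path are illustrative, and the exact semantics of save_path are assumed to be a local cache location:
>>> from qxmt.configs import OpenMLConfig
>>> openml_config = OpenMLConfig(
...     name="mnist_784",         # illustrative OpenML dataset name
...     return_format="numpy",
...     save_path="data/openml",  # illustrative local save location
... )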
- class qxmt.configs.SplitConfig(*, train_ratio, validation_ratio=0.0, test_ratio, shuffle=True)#
Bases:
BaseModel
- Parameters:
train_ratio (float)
validation_ratio (float)
test_ratio (float)
shuffle (bool)
- check_ratio()#
- Return type:
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- shuffle: bool#
- test_ratio: float#
- train_ratio: float#
- validation_ratio: float#
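A minimal sketch of a SplitConfig; each ratio is constrained to [0.0, 1.0], and check_ratio() is assumed to verify that the three ratios are consistent (e.g. sum to 1.0):
>>> from qxmt.configs import SplitConfig
>>> split_config = SplitConfig(
...     train_ratio=0.8,
...     validation_ratio=0.0,
...     test_ratio=0.2,
...     shuffle=True,
... )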
qxmt.constants module#
qxmt.decorators module#
- qxmt.decorators.notify_long_running(func)#
Decorator to notify the user that a long-running function is still in progress.
- Parameters:
func (Callable)
- Return type:
Callable
- qxmt.decorators.retry_on_exception(retries, delay)#
Decorator to retry a function if an exception is raised.
- Parameters:
retries (int)
delay (float)
- Return type:
Callable
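A minimal sketch assuming retry_on_exception is used as a decorator factory, consistent with the signature above; fetch_remote_result is a hypothetical function:
>>> from qxmt.decorators import retry_on_exception
>>> @retry_on_exception(retries=3, delay=1.0)
... def fetch_remote_result():
...     ...  # hypothetical operation that may raise a transient exception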
qxmt.exceptions module#
- exception qxmt.exceptions.DeviceSettingError#
Bases:
Exception
- exception qxmt.exceptions.ExperimentNotInitializedError#
Bases:
Exception
- exception qxmt.exceptions.ExperimentRunSettingError#
Bases:
Exception
- exception qxmt.exceptions.ExperimentSettingError#
Bases:
Exception
- exception qxmt.exceptions.IBMQSettingError#
Bases:
Exception
- exception qxmt.exceptions.InputShapeError#
Bases:
Exception
- exception qxmt.exceptions.InvalidConfigError#
Bases:
Exception
- exception qxmt.exceptions.InvalidFileExtensionError#
Bases:
Exception
- exception qxmt.exceptions.InvalidModelNameError#
Bases:
Exception
- exception qxmt.exceptions.InvalidPlatformError#
Bases:
Exception
- exception qxmt.exceptions.InvalidQunatumDeviceError#
Bases:
Exception
- exception qxmt.exceptions.JsonEncodingError#
Bases:
Exception
- exception qxmt.exceptions.ModelSettingError#
Bases:
Exception
- exception qxmt.exceptions.ReproductionError#
Bases:
Exception
qxmt.logger module#
- qxmt.logger.set_default_logger(logger_name)#
- Parameters:
logger_name (str)
- Return type:
Logger
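A minimal usage sketch based on the signature above; the logger name is illustrative:
>>> from qxmt.logger import set_default_logger
>>> logger = set_default_logger("my_experiment")  # logger name is illustrative
>>> logger.info("logger is ready")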
qxmt.types module#
Module contents#
- class qxmt.DatasetConfig(*, openml=None, file=None, generate=None, split, features=None, raw_preprocess_logic=None, transform_logic=None)#
Bases:
BaseModel
- Parameters:
openml (OpenMLConfig | None)
file (FileConfig | None)
generate (GenerateDataConfig | None)
split (SplitConfig)
features (list[str] | None)
raw_preprocess_logic (list[dict[str, Any]] | dict[str, Any] | None)
transform_logic (list[dict[str, Any]] | dict[str, Any] | None)
- features: list[str] | None#
- file: FileConfig | None#
- generate: GenerateDataConfig | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- openml: OpenMLConfig | None#
- raw_preprocess_logic: list[dict[str, Any]] | dict[str, Any] | None#
- split: SplitConfig#
- transform_logic: list[dict[str, Any]] | dict[str, Any] | None#
- class qxmt.DeviceConfig(*, platform, device_name, backend_name=None, n_qubits, shots=None, random_seed=None, save_shots_results=False)#
Bases:
BaseModel
- Parameters:
platform (str)
device_name (str)
backend_name (str | None)
n_qubits (int)
shots (int | None)
random_seed (int | None)
save_shots_results (bool)
- backend_name: str | None#
- check_real_machine_setting()#
- Return type:
- check_save_shots()#
- Return type:
- classmethod check_shots(value)#
- Parameters:
value (int)
- Return type:
int
- device_name: str#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- n_qubits: int#
- platform: str#
- random_seed: int | None#
- save_shots_results: bool#
- shots: int | None#
- exception qxmt.DeviceSettingError#
Bases:
Exception
- class qxmt.EvaluationConfig(*, default_metrics, custom_metrics=None)#
Bases:
BaseModel
- Parameters:
default_metrics (list[str])
custom_metrics (list[dict[str, Any]] | None)
- custom_metrics: list[dict[str, Any]] | None#
- default_metrics: list[str]#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class qxmt.Experiment(name=None, desc=None, auto_gen_mode=False, root_experiment_dirc=PosixPath('/home/runner/work/qxmt/qxmt/experiments'), llm_model_path='microsoft/Phi-3-mini-128k-instruct', logger=<Logger qxmt.experiment.experiment (INFO)>)#
Bases:
object
Experiment class for managing an experiment and the data of each run. The Experiment class provides methods for initializing the experiment, running the experiment, saving the experiment data, and reproducing the model.
All experiment data is stored in an ExperimentDB instance and saved to the local directory as a JSON file (root_experiment_dirc/experiments/your_exp_name/experiment.json).
An Experiment can be initialized and started from scratch by calling the init() method. Another option is to load existing experiment data from the JSON file (experiment.json) by calling the load() method.
The Experiment class can be used in two ways:
1. Provide config_path: This method accepts the path to a config file or a config instance. It is more flexible but requires a YAML-based config file. It tracks the experiment settings and results and can reproduce the model. We officially recommend the config file method.
2. Directly provide dataset and model instances: This method accepts dataset and model instances directly. It is easy to use but does “NOT” track the experiment settings. It is useful for ad hoc experiments, quick testing, or debugging.
Examples
>>> import qxmt
>>> exp = qxmt.Experiment(
...     name="my_qsvm_algorithm",
...     desc="""This is an experiment for the new qsvm algorithm.
...     This experiment is applied and evaluated on multiple datasets.
...     """,
...     auto_gen_mode=True,
... ).init()
>>> config_path = "../configs/template.yaml"
>>> artifact, result = exp.run(
...     config_source=config_path)
>>> exp.runs_to_dataframe()
   run_id  accuracy  precision  recall  f1_score
0       1      0.45       0.53    0.66      0.59
- Parameters:
name (str | None)
desc (str | None)
auto_gen_mode (bool)
root_experiment_dirc (str | Path)
llm_model_path (str)
logger (Logger)
- __init__(name=None, desc=None, auto_gen_mode=False, root_experiment_dirc=PosixPath('/home/runner/work/qxmt/qxmt/experiments'), llm_model_path='microsoft/Phi-3-mini-128k-instruct', logger=<Logger qxmt.experiment.experiment (INFO)>)#
Initialize the Experiment class. Set the experiment name, description, and other settings such as auto_gen_mode, root_experiment_dirc, and logger. auto_gen_mode controls whether to use the LLM-based DescriptionGenerator; to use it, set the environment variable “USE_LLM” to True. root_experiment_dirc is the root directory where the experiment data is saved; each artifact and result is stored in a subdirectory of this root directory.
- Parameters:
name (Optional[str], optional) – experiment name. If None, the name is generated from the execution time. Defaults to None.
desc (Optional[str], optional) – description of the experiment. It is intended for search and notes, and is not used in the code. Defaults to None.
auto_gen_mode (bool, optional) – whether to use the DescriptionGenerator for generating the description of each run. Defaults to USE_LLM.
root_experiment_dirc (str | Path, optional) – root directory to save the experiment data. Defaults to DEFAULT_EXP_DIRC.
llm_model_path (str, optional) – path to the LLM model. Defaults to LLM_MODEL_PATH.
logger (Logger, optional) – logger instance for warning or error messages. Defaults to LOGGER.
- Return type:
None
- get_run_record(runs, run_id)#
Get the run record of the target run_id.
- Parameters:
run_id (int) – target run_id
runs (list[RunRecord])
- Raises:
ValueError – if the run record does not exist
- Returns:
target run record
- Return type:
RunRecord
- init()#
Initialize the experiment directory and DB.
- Returns:
initialized experiment
- Return type:
- load(exp_dirc, exp_file_name=PosixPath('experiment.json'))#
Load existing experiment data from a json file.
- Parameters:
exp_dirc (str | Path) – path to the experiment directory
exp_file_name (str | Path)
- Raises:
FileNotFoundError – if the experiment file does not exist
ExperimentSettingError – if the experiment directory does not exist
- Returns:
loaded experiment
- Return type:
- reproduce(run_id, check_commit_id=False)#
Reproduce the model of the target run_id from its config file. If the target run_id does not have a config file path, an error is raised. The reproduce method is not supported for runs executed directly from dataset and model instances.
- Parameters:
run_id (int) – target run_id
check_commit_id (bool, optional) – whether to check the commit_id. Defaults to False.
- Returns:
artifact and record of the reproduced run_id
- Return type:
tuple[RunArtifact, RunRecord]
- Raises:
ReproductionError – if the run_id does not have a config file path
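A minimal sketch that continues the Examples block above; it assumes run 1 was executed from a config file, which is required for reproduction:
>>> artifact, record = exp.reproduce(run_id=1, check_commit_id=False)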
- run(task_type=None, dataset=None, model=None, config_source=None, default_metrics_name=None, custom_metrics=None, n_jobs=2, desc='', repo_path=None, add_results=True)#
Start a new run for the experiment.
The run() method can be called in two ways:
1. Provide dataset and model instance: This method directly accepts dataset and model instances. It is easy to use but less flexible and does “NOT” track the experiment settings.
2. Provide config_path: This method accepts the path to the config file or config instance. It is more flexible but requires a config file.
- Parameters:
task_type (str, optional) – type of the task (classification or regression). Defaults to None.
dataset (Dataset) – the dataset object.
model (BaseMLModel) – the model object.
config_source (ExperimentConfig, str | Path, optional) – config source can be either an ExperimentConfig instance or the path to a config file. If a path is provided, it loads and creates an ExperimentConfig instance. Defaults to None.
default_metrics_name (list[str], optional) – list of default metric names. Defaults to None.
custom_metrics (list[dict[str, Any]], optional) – list of user defined custom metric configurations. Defaults to None.
n_jobs (int, optional) – number of jobs for parallel processing. Defaults to DEFAULT_N_JOBS.
desc (str, optional) – description of the run. Defaults to “”.
repo_path (str, optional) – path to the git repository. Defaults to None.
add_results (bool, optional) – whether to add the run record to the experiment. Defaults to True.
- Returns:
Returns a tuple containing the artifact and run record of the current run_id.
- Return type:
tuple[RunArtifact, RunRecord]
- Raises:
ExperimentNotInitializedError – Raised if the experiment is not initialized.
- run_evaluation(task_type, actual, predicted, default_metrics_name, custom_metrics)#
Run evaluation for the current run.
- Parameters:
actual (np.ndarray) – array of actual values
predicted (np.ndarray) – array of predicted values
default_metrics_name (Optional[list[str]]) – list of default metric names
custom_metrics (Optional[list[dict[str, Any]]]) – list of user defined custom metric configurations
task_type (str)
- Returns:
evaluation result
- Return type:
dict
- runs_to_dataframe()#
Convert the run data to a pandas DataFrame.
- Returns:
DataFrame of run data
- Return type:
pd.DataFrame
- Raises:
ExperimentNotInitializedError – if the experiment is not initialized
- save_experiment(exp_file=PosixPath('experiment.json'))#
Save the experiment data to a json file.
- Parameters:
exp_file (str | Path, optional) – name of the file to save the experiment data. Defaults to DEFAULT_EXP_DB_FILE.
- Raises:
ExperimentNotInitializedError – if the experiment is not initialized
- Return type:
None
- class qxmt.ExperimentConfig(*, path='', description='', global_settings, dataset, device, feature_map=None, kernel=None, model, evaluation)#
Bases:
BaseModel
- Parameters:
path (Path | str)
description (str)
global_settings (GlobalSettingsConfig)
dataset (DatasetConfig)
device (DeviceConfig)
feature_map (FeatureMapConfig | None)
kernel (KernelConfig | None)
model (ModelConfig)
evaluation (EvaluationConfig)
- __init__(**data)#
Initialize the experiment configuration.
- Case 1:
Load the configuration from a file path. In this case, data is a dictionary with a single key, “path”.
- Case 2:
Load the configuration from a dictionary. In this case, data is a dictionary containing the configuration values.
- Parameters:
data (Any)
- Return type:
None
- dataset: DatasetConfig#
- description: str#
- device: DeviceConfig#
- evaluation: EvaluationConfig#
- feature_map: FeatureMapConfig | None#
- global_settings: GlobalSettingsConfig#
- kernel: KernelConfig | None#
- load_from_path(path)#
- Parameters:
path (str)
- Return type:
dict[str, Any]
- model: ModelConfig#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- path: Path | str#
- exception qxmt.ExperimentNotInitializedError#
Bases:
Exception
- exception qxmt.ExperimentRunSettingError#
Bases:
Exception
- exception qxmt.ExperimentSettingError#
Bases:
Exception
- class qxmt.FeatureMapConfig(*, module_name, implement_name, params=None)#
Bases:
BaseModel
- Parameters:
module_name (str)
implement_name (str)
params (dict[str, Any] | None)
- implement_name: str#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- module_name: str#
- params: dict[str, Any] | None#
- class qxmt.FileConfig(*, data_path, label_path, label_name)#
Bases:
BaseModel
- Parameters:
data_path (Path | str)
label_path (Path | str | None)
label_name (str | None)
- data_path: Path | str#
- label_name: str | None#
- label_path: Path | str | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(_FileConfig__context)#
Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
- Parameters:
_FileConfig__context (dict[str, Any])
- Return type:
None
- class qxmt.GenerateDataConfig(*, generate_method, params={})#
Bases:
BaseModel
- Parameters:
generate_method (Literal['linear'])
params (dict[str, Any] | None)
- generate_method: Literal['linear']#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- params: dict[str, Any] | None#
- class qxmt.GlobalSettingsConfig(*, random_seed, task_type)#
Bases:
BaseModel
- Parameters:
random_seed (int)
task_type (Literal['classification', 'regression'])
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- random_seed: int#
- task_type: Literal['classification', 'regression']#
- exception qxmt.IBMQSettingError#
Bases:
Exception
- exception qxmt.InputShapeError#
Bases:
Exception
- exception qxmt.InvalidFileExtensionError#
Bases:
Exception
- exception qxmt.InvalidModelNameError#
Bases:
Exception
- exception qxmt.InvalidPlatformError#
Bases:
Exception
- exception qxmt.InvalidQunatumDeviceError#
Bases:
Exception
- exception qxmt.JsonEncodingError#
Bases:
Exception
- class qxmt.KernelConfig(*, module_name, implement_name, params=None)#
Bases:
BaseModel
- Parameters:
module_name (str)
implement_name (str)
params (dict[str, Any] | None)
- implement_name: str#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- module_name: str#
- params: dict[str, Any] | None#
- class qxmt.ModelConfig(*, name, params, feature_map=None, kernel=None)#
Bases:
BaseModel
- Parameters:
name (str)
params (dict[str, Any])
feature_map (FeatureMapConfig | None)
kernel (KernelConfig | None)
- feature_map: FeatureMapConfig | None#
- kernel: KernelConfig | None#
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid', 'frozen': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- name: str#
- params: dict[str, Any]#
- exception qxmt.ModelSettingError#
Bases:
Exception
- exception qxmt.ReproductionError#
Bases:
Exception
- class qxmt.SplitConfig(*, train_ratio, validation_ratio=0.0, test_ratio, shuffle=True)#
Bases:
BaseModel
- Parameters:
train_ratio (Annotated[float, Ge(ge=0.0), Le(le=1.0)])
validation_ratio (Annotated[float, Ge(ge=0.0), Le(le=1.0)])
test_ratio (Annotated[float, Ge(ge=0.0), Le(le=1.0)])
shuffle (bool)
- check_ratio()#
- Return type:
- model_config: ClassVar[ConfigDict] = {'extra': 'forbid'}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- shuffle: bool#
- test_ratio: float#
- train_ratio: float#
- validation_ratio: float#