optuna_transformers

See also

Full documentation with examples can be found on the documentation page.

Integration of Optuna and Transformers.

class hpoflow.optuna_transformers.OptunaMLflowCallback(trial, log_training_args=True, log_model_config=True)[source]

Bases: TrainerCallback

Integration of Optuna and Transformers.

A transformers.TrainerCallback subclass that integrates with OptunaMLflow to send logs to MLflow and Optuna during model training. A usage sketch follows the constructor parameters below.

Constructor.

Parameters:
  • trial (OptunaMLflow) – The OptunaMLflow object.

  • log_training_args (bool) – Whether to log all Transformers TrainingArguments as MLflow params.

  • log_model_config (bool) – Whether to log the Transformers model config as MLflow params.
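
A minimal usage sketch, assuming that model, training_args, and train_ds (a Transformers model, TrainingArguments, and a training dataset) are defined elsewhere, and that trial is the OptunaMLflow object passed into the Optuna objective:

from transformers import Trainer

from hpoflow.optuna_transformers import OptunaMLflowCallback


def objective(trial):
    """Optuna objective; ``trial`` is assumed to be the OptunaMLflow object."""
    callback = OptunaMLflowCallback(
        trial,
        log_training_args=True,   # log all TrainingArguments as MLflow params
        log_model_config=True,    # log the model config as MLflow params
    )
    trainer = Trainer(
        model=model,              # placeholder: your (TF)PreTrainedModel
        args=training_args,       # placeholder: your TrainingArguments
        train_dataset=train_ds,   # placeholder: your training dataset
        callbacks=[callback],     # metrics and params flow to MLflow and Optuna
    )
    train_output = trainer.train()
    return train_output.training_loss  # value for Optuna to optimize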

on_log(args, state, control, logs, model=None, **kwargs)[source]

Event called after logging the last logs.

Log all metrics from Transformers logs as MLflow metrics at the appropriate step.

Parameters:
  • args (TrainingArguments) –

  • state (TrainerState) –

  • control (TrainerControl) –

  • logs (Dict[str, Number]) –

  • model (Optional[Union[PreTrainedModel, TFPreTrainedModel]]) –
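
As a rough illustration of that behaviour (a conceptual sketch, not the actual hpoflow implementation), numeric log entries could be forwarded to MLflow at the step taken from TrainerState.global_step like this:

import mlflow


def log_transformers_metrics(logs, state):
    # Forward numeric Transformers log entries to MLflow as metrics at the
    # current global step; non-numeric entries are skipped.
    for key, value in logs.items():
        if isinstance(value, (int, float)):
            mlflow.log_metric(key, value, step=state.global_step)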

on_train_begin(args, state, control, model=None, **kwargs)[source]

Event called at the beginning of training.

Call setup if not yet initialized.

Parameters:
  • args (TrainingArguments) –

  • state (TrainerState) –

  • control (TrainerControl) –

  • model (Optional[Union[PreTrainedModel, TFPreTrainedModel]]) –

Return type:

None
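
A conceptual sketch of that lazy-initialization pattern (not the actual hpoflow code; the _initialized flag and the setup() body are hypothetical placeholders):

class LazySetupCallbackSketch:
    _initialized = False

    def setup(self, args, state, model):
        print("initializing MLflow logging")  # placeholder for the real setup work

    def on_train_begin(self, args, state, control, model=None, **kwargs):
        # Run setup exactly once, on the first training-begin event.
        if not self._initialized:
            self.setup(args, state, model)
            self._initialized = True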

on_train_end(args, state, control, **kwargs)[source]

Event called at the end of training.

Log the training output as MLflow artifacts if logging artifacts is enabled.

Parameters:
  • args (TrainingArguments) –

  • state (TrainerState) –

  • control (TrainerControl) –
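
As a conceptual sketch of that behaviour (not the actual hpoflow code), the Trainer output directory could be uploaded as MLflow artifacts when artifact logging is enabled:

import mlflow


def log_training_output(args, log_artifacts_enabled):
    # Copy everything in the Trainer output directory to the MLflow artifact store.
    if log_artifacts_enabled:
        mlflow.log_artifacts(args.output_dir)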

setup(args, state, model)[source]

Set up the optional MLflow integration.

You can set the environment variable HF_MLFLOW_LOG_ARTIFACTS to use mlflow.log_artifacts() to log artifacts. This only makes sense when logging to a remote server, e.g. S3 or GCS. If set to True or 1, whatever is in the TrainingArguments' output_dir is copied to the local or remote artifact storage. Using it without remote storage will just copy the files to your artifact location.
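
For example, artifact logging can be enabled at the top of the training script (values "True" or "1" both work, as described above):

import os

os.environ["HF_MLFLOW_LOG_ARTIFACTS"] = "1"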

Parameters:
  • args (TrainingArguments) –

  • state (TrainerState) –

  • model (Optional[Union[PreTrainedModel, TFPreTrainedModel]]) –