oumi.core.trainers#

Core trainers module for the Oumi (Open Universal Machine Intelligence) library.

This module provides various trainer implementations for use in the Oumi framework. These trainers are designed to facilitate the training process for different types of models and tasks.

Example

>>> from oumi.core.trainers import Trainer
>>> trainer = Trainer(model=my_model, dataset=my_dataset) 
>>> trainer.train() 

Note

For detailed information on each trainer, please refer to their respective class documentation.

class oumi.core.trainers.BaseTrainer[source]#

Bases: ABC

abstract save_model(config: TrainingConfig, final: bool = True) → None[source]#

Saves the model’s state dictionary to the specified output directory.

Parameters:
  • config (TrainingConfig) – The Oumi training config.

  • final (bool) – Whether this is the final model being saved during training.

Returns:

None

abstract save_state() → None[source]#

Saves the Trainer state.

In a distributed environment, this is done only for the process with rank 0.

abstract train(resume_from_checkpoint: str | None) → None[source]#

Trains a model.

class oumi.core.trainers.HuggingFaceTrainer(hf_trainer: Trainer, processor: BaseProcessor | None = None)[source]#

Bases: BaseTrainer

save_model(config: TrainingConfig, final: bool = True) → None[source]#

Saves the model’s weights to the specified output directory.

Parameters:
  • config – The Oumi training config.

  • final – Whether this is the final model being saved during training.

    – Applies optimizations for the final model checkpoint.

    – In the case of FSDP, this will always save the FULL_STATE_DICT instead of the default STATE_DICT.

Returns:

None

save_state() → None[source]#

See base class.

Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model.

HuggingFace normally writes state into “trainer_state.json” under output_dir.

train(resume_from_checkpoint: str | None = None) → None[source]#

Trains a model.
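HuggingFaceTrainer adapts an already-constructed Hugging Face Trainer to the BaseTrainer interface by delegating to it. The sketch below shows only that delegation pattern; StubHFTrainer is a hypothetical stand-in for transformers.Trainer so the example stays self-contained, and the method bodies are assumptions about the wrapper's shape, not Oumi's actual implementation.

```python
class StubHFTrainer:
    """Hypothetical stand-in for transformers.Trainer (keeps the sketch runnable)."""

    def __init__(self):
        self.trained = False

    def train(self, resume_from_checkpoint=None):
        self.trained = True

    def save_state(self):
        # The real HF trainer writes "trainer_state.json" under output_dir.
        print("wrote trainer_state.json")


class HuggingFaceTrainer:
    """Sketch of the wrapper: calls are forwarded to the inner HF trainer."""

    def __init__(self, hf_trainer, processor=None):
        self._hf_trainer = hf_trainer
        self._processor = processor

    def train(self, resume_from_checkpoint=None):
        self._hf_trainer.train(resume_from_checkpoint=resume_from_checkpoint)

    def save_state(self):
        # Trainer.save_model saves only the tokenizer with the model,
        # so the trainer state is saved explicitly here.
        self._hf_trainer.save_state()


inner = StubHFTrainer()
trainer = HuggingFaceTrainer(inner)
trainer.train()
```

In real use, the inner object would be a fully configured transformers.Trainer; the wrapper exists so the rest of Oumi can treat Hugging Face training and native training uniformly through BaseTrainer.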

class oumi.core.trainers.Trainer(model: Module, tokenizer: PreTrainedTokenizerBase | None, args: TrainingParams, train_dataset: Dataset, processor: BaseProcessor | None = None, eval_dataset: Dataset | None = None, callbacks: list[TrainerCallback] | None = None, data_collator: Callable | None = None, config: TrainingConfig | None = None, **kwargs)[source]#

Bases: BaseTrainer

evaluate() → dict[str, float][source]#

Evaluates the model on the evaluation dataset.

log(message: str)[source]#

Logs a message if the process is the local process zero.

log_metrics(metrics: dict[str, Any], step: int) → None[source]#

Logs metrics to Weights & Biases (wandb) and TensorBoard.

save_model(config: TrainingConfig, final: bool = True) → None[source]#

Saves the model.

save_state()[source]#

Saves the training state.

train(resume_from_checkpoint: str | None = None)[source]#

Trains the model.
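The log method above writes only from the local process zero, so that multi-GPU runs do not duplicate every message once per rank. A minimal sketch of that gating, assuming the launcher exposes the rank via a LOCAL_RANK environment variable (a common convention, e.g. with torchrun; the helper names here are illustrative, not Oumi's API):

```python
import os


def is_local_process_zero() -> bool:
    # Distributed launchers such as torchrun typically set LOCAL_RANK;
    # a single-process run has no such variable and defaults to rank 0.
    return int(os.environ.get("LOCAL_RANK", "0")) == 0


def log(message: str) -> None:
    # Only the local process zero writes, so logs appear once per node
    # rather than once per GPU.
    if is_local_process_zero():
        print(message)


log("step 100: loss=1.23")
```

The same gate applies naturally to log_metrics: metrics are pushed to wandb and TensorBoard from a single process to avoid duplicate series.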