Acknowledgments

Oumi makes use of several libraries and tools from the open-source community. 🚀

We would like to acknowledge these projects:

| Project | Usage in Oumi |
| --- | --- |
| Accelerate | Distributed training and mixed-precision computations |
| bitsandbytes | Quantization and efficient optimizers for QLoRA |
| Datasets | Dataset loading and processing |
| jsonlines | JSON Lines data format handling |
| llama.cpp | Efficient inference for quantized models |
| LM Evaluation Harness | Comprehensive suite for evaluating language models |
| NumPy | Numerical operations and data manipulation |
| OmegaConf | Configuration system for managing model and training parameters |
| pandas | Data manipulation and analysis |
| PEFT | Parameter-efficient fine-tuning techniques |
| Pydantic | Type checking and validation for configuration objects |
| PyTorch | Primary deep learning framework for model training and inference |
| SkyPilot | Cloud-agnostic deployment and management of training jobs |
| TensorBoard | Training visualization and monitoring |
| tqdm | Progress bars for long-running operations |
| Transformers | Core model architectures and utilities for transformer-based models |
| TRL | SFT and DPO training implementations |
| Typer | Command-line interface for Oumi commands |
| vLLM | Fast inference for large language models |
| Weights & Biases | Experiment tracking and visualization |

We are grateful to the developers and maintainers of these projects for their valuable contributions to the open-source community. 🙏