# Acknowledgments
Oumi makes use of several libraries and tools from the open-source community. 🚀
We would like to acknowledge these projects:
| Project | Usage in Oumi |
|---|---|
| Accelerate | Distributed training and mixed-precision computations |
| BitsAndBytes | Quantization and efficient optimizers for QLoRA |
| Datasets | Dataset loading and processing |
| jsonlines | JSON Lines data format handling |
| llama.cpp | Efficient inference for quantized models |
| LM Evaluation Harness | Comprehensive suite for evaluating language models |
| NumPy | Numerical operations and data manipulation |
| OmegaConf | Configuration system for managing model and training parameters |
| pandas | Data manipulation and analysis |
| PEFT | Parameter-efficient fine-tuning techniques |
| Pydantic | Type checking and validation for configuration objects |
| PyTorch | Primary deep learning framework for model training and inference |
| SkyPilot | Cloud-agnostic deployment and management of training jobs |
| TensorBoard | Training visualization and monitoring |
| tqdm | Progress bars for long-running operations |
| Transformers | Core model architectures and utilities for transformer-based models |
| TRL | SFT and DPO training implementations |
| Typer | Command-line interface for Oumi commands |
| vLLM | Fast inference for large language models |
| Weights & Biases | Experiment tracking and visualization |
We are grateful to the developers and maintainers of these projects for their valuable contributions to the open-source community. 🙏