Description
Axolotl is an innovative open-source tool crafted to enhance the fine-tuning process of a variety of AI models, accommodating numerous configurations and architectures. This platform empowers users to train models using diverse methods such as full fine-tuning, LoRA, QLoRA, ReLoRA, and GPTQ. Additionally, users have the flexibility to customize their configurations through straightforward YAML files or by employing command-line interface overrides, while also being able to load datasets in various formats, whether custom or pre-tokenized. Axolotl seamlessly integrates with cutting-edge technologies, including xFormers, Flash Attention, Liger kernel, RoPE scaling, and multipacking, and it is capable of operating on single or multiple GPUs using Fully Sharded Data Parallel (FSDP) or DeepSpeed. Whether run locally or in the cloud via Docker, it offers robust support for logging results and saving checkpoints to multiple platforms, ensuring users can easily track their progress. Ultimately, Axolotl aims to make the fine-tuning of AI models not only efficient but also enjoyable, all while maintaining a high level of functionality and scalability. With its user-friendly design, it invites both novices and experienced practitioners to explore the depths of AI model training.
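To make the configuration-driven workflow concrete, here is a minimal sketch that writes an illustrative QLoRA config to disk as YAML. The key names (base_model, adapter, lora_r, and so on) follow Axolotl's documented config style, but the specific model, dataset path, and hyperparameter values are assumptions chosen for illustration, not a verified recipe.

```python
# Minimal sketch of an Axolotl-style QLoRA config, written out as YAML.
# Key names follow Axolotl's documented config format; the model, dataset
# path, and hyperparameters below are illustrative assumptions.
import yaml  # pip install pyyaml

config = {
    "base_model": "meta-llama/Llama-3.1-8B",   # assumed base model
    "load_in_4bit": True,                      # 4-bit quantization for QLoRA
    "adapter": "qlora",                        # could also be "lora"
    "datasets": [
        {"path": "data/train.jsonl", "type": "alpaca"},  # hypothetical dataset
    ],
    "sequence_len": 2048,
    "sample_packing": True,        # multipacking, as mentioned above
    "flash_attention": True,       # Flash Attention integration
    "lora_r": 16,
    "lora_alpha": 32,
    "lora_dropout": 0.05,
    "micro_batch_size": 2,
    "gradient_accumulation_steps": 8,
    "num_epochs": 3,
    "learning_rate": 2.0e-4,
    "output_dir": "./outputs/qlora-run",
}

with open("qlora.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)

# A run would then typically be launched from the project's CLI, e.g.
#   axolotl train qlora.yaml
# (verify the exact command against the version you have installed)
```

Any of these keys can also be overridden at launch time from the command line, which is the override mechanism the description refers to.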
Description
Tinker is an innovative training API tailored for researchers and developers, providing comprehensive control over model fine-tuning while simplifying the complexities of infrastructure management. It offers essential primitives that empower users to create bespoke training loops, supervision techniques, and reinforcement learning workflows. Currently, it facilitates LoRA fine-tuning on open-weight models from both the Llama and Qwen families, accommodating a range of model sizes from smaller variants to extensive mixture-of-experts configurations. Users can write Python scripts to manage data, loss functions, and algorithmic processes, while Tinker autonomously takes care of scheduling, resource distribution, distributed training, and recovery from failures. The platform allows users to download model weights at various checkpoints without the burden of managing the computational environment. Delivered as a managed service, Tinker executes training jobs on Thinking Machines’ proprietary GPU infrastructure, alleviating users from the challenges of cluster orchestration and enabling them to focus on building and optimizing their models. This seamless integration of capabilities makes Tinker a vital tool for advancing machine learning research and development.
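To make the "primitives for custom training loops" idea concrete, here is a deliberately simplified sketch. The client class below is a local stand-in stub, not Tinker's actual SDK; the method names (forward_backward, optim_step, save_state) and the model name are assumptions meant only to show the shape of the workflow, in which data handling, loss computation, and loop logic stay in user-written Python while scheduling and distributed execution are left to the service.

```python
# Illustrative sketch of a user-written training loop over low-level
# fine-tuning primitives. The client below is a local stub, NOT Tinker's
# real SDK; method names and signatures are assumptions.
import random
from typing import Callable, List


class StubLoRATrainingClient:
    """Stand-in for a managed LoRA training client (hypothetical API)."""

    def __init__(self, base_model: str, lora_rank: int = 16):
        self.base_model = base_model
        self.lora_rank = lora_rank
        self._step = 0

    def forward_backward(self, batch: List[str],
                         loss_fn: Callable[[List[str]], float]) -> float:
        # A managed service would run the forward/backward pass remotely;
        # here we just evaluate the user-supplied loss on the batch.
        return loss_fn(batch)

    def optim_step(self) -> None:
        # A managed service would apply the accumulated gradients here.
        self._step += 1

    def save_state(self, path: str) -> None:
        # Stands in for downloading adapter weights at a checkpoint.
        print(f"[step {self._step}] checkpoint saved to {path}")


def toy_loss(batch: List[str]) -> float:
    # Placeholder standing in for a language-model loss.
    return 2.0 + random.uniform(-0.1, 0.1)


client = StubLoRATrainingClient(base_model="Qwen2.5-7B")  # assumed model name
dataset = [f"example {i}" for i in range(256)]

for step in range(0, len(dataset), 32):          # user-controlled loop
    batch = dataset[step:step + 32]
    loss = client.forward_backward(batch, toy_loss)
    client.optim_step()
    if step % 96 == 0:
        client.save_state(f"checkpoints/step_{step}.bin")
        print(f"step {step}: loss={loss:.3f}")
```

In the actual service, the same loop structure would run against Thinking Machines' managed GPU infrastructure, with scheduling, distributed training, and failure recovery handled on the user's behalf, as the description above notes.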
API Access
Has API
API Access
Has API
Integrations
Qwen
Cerebras
Comet
Docker
Gemma
Hugging Face
Latitude
Llama 3
Llama 3.1
Llama 3.2
Integrations
Qwen
Cerebras
Comet
Docker
Gemma
Hugging Face
Latitude
Llama 3
Llama 3.1
Llama 3.2
Pricing Details
Free
Free Trial
Free Version
Pricing Details
No price information available.
Free Trial
Free Version
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Deployment
Web-Based
On-Premises
iPhone App
iPad App
Android App
Windows
Mac
Linux
Chromebook
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Customer Support
Business Hours
Live Rep (24/7)
Online Support
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Types of Training
Training Docs
Webinars
Live Training (Online)
In Person
Vendor Details
Company Name
Axolotl
Country
United States
Website
axolotl.ai/
Vendor Details
Company Name
Thinking Machines Lab
Country
United States
Website
thinkingmachines.ai/tinker/