Together AI Enhances Fine-Tuning Platform with Larger Models and Hugging Face Integration

Lawrence Jengar
Sep 10, 2025 19:13
Together AI unveils major upgrades to its Fine-Tuning Platform, including support for 100B+ parameter models, extended context lengths, and improved integration with Hugging Face Hub.
Together AI has announced significant upgrades to its Fine-Tuning Platform, aiming to streamline model customization for AI developers. The latest enhancements include the ability to train models with over 100 billion parameters, extended context lengths, and tighter integration with the Hugging Face Hub, according to Together AI.
Expanding Model Capacity
The platform now supports a range of new large models, including DeepSeek-R1, Qwen3-235B, and Llama 4 Maverick. These open models are built for complex tasks and in some cases rival proprietary models. Engineering optimizations allow these large-scale models to be trained efficiently, reducing both cost and training time.
Longer Context Lengths
Responding to the growing need for long-context processing, Together AI has overhauled its training systems to support increased context lengths. Developers can now utilize context lengths of up to 131k tokens for certain models, enhancing the platform’s capability to handle complex and lengthy data inputs.
Integration with Hugging Face Hub
The integration with the Hugging Face Hub allows developers to fine-tune a wide array of models hosted there. Users can start from a model that has already been adapted and customize it further for specific tasks. Additionally, outputs from training runs can be saved directly to a repository on the Hub, simplifying model management.
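To make the workflow concrete, a fine-tuning job that starts from a Hub-hosted model and writes checkpoints back to a Hub repository might be configured roughly as sketched below. This is illustrative only: the field names (`model`, `training_file`, `output_repo`, `batch_size`) are assumptions for the sake of example, not Together's documented API, and all identifiers are placeholders.

```python
# Illustrative sketch only: field names are assumptions about how a job
# referencing a Hugging Face Hub model might be configured; consult
# Together AI's documentation for the actual parameters.
job_config = {
    "model": "some-org/model-on-hf-hub",      # assumed: HF Hub model identifier
    "training_file": "file-abc123",           # placeholder ID of an uploaded dataset
    "n_epochs": 3,
    "batch_size": "max",                      # per the article: largest batch size
                                              # supported for the model and mode
    "output_repo": "your-org/finetuned-model" # assumed: Hub repo for checkpoints
}
```

The key idea is that both the starting point and the destination of a training run can live on the Hub, so the platform slots into an existing Hugging Face-centric workflow.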
Advanced Training Objectives
Together AI has also expanded its support for preference optimization with new training objectives, such as length-normalized DPO and SimPO, offering more flexibility when training on preference data. The platform additionally accepts a maximum batch-size setting, which automatically selects the largest batch size available for a given model and training mode.
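For intuition, SimPO scores each response by its length-normalized sequence log-probability (an implicit reward) and penalizes the model when the preferred response does not beat the rejected one by a target margin. A minimal sketch, with illustrative values and hyperparameters:

```python
import math

def simpo_loss(logp_chosen: float, logp_rejected: float,
               len_chosen: int, len_rejected: int,
               beta: float = 2.0, gamma: float = 1.0) -> float:
    """Sketch of the SimPO objective: implicit rewards are
    length-normalized sequence log-probabilities, compared against a
    target margin gamma inside a logistic (Bradley-Terry) loss."""
    reward_chosen = beta * logp_chosen / len_chosen
    reward_rejected = beta * logp_rejected / len_rejected
    margin = reward_chosen - reward_rejected - gamma
    # Negative log-sigmoid of the margin: near zero when the chosen
    # response is clearly preferred, large when the preference is violated.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(simpo_loss(-10.0, -40.0, 10, 10))  # small loss: chosen clearly preferred
print(simpo_loss(-40.0, -10.0, 10, 10))  # large loss: preference violated
```

Length-normalized DPO applies the same normalization idea to DPO's reference-model log-ratios; the common thread is removing the bias toward longer responses that raw sequence log-probabilities introduce.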
These enhancements are part of Together AI's commitment to providing cutting-edge tools for AI researchers and engineers. With these new features, the Fine-Tuning Platform is positioned to support even the most demanding AI development tasks, making it a cornerstone for innovation in machine learning.