

TranslateGemma is a new collection of open translation models built on Gemma 3, available in 4B, 12B, and 27B parameter sizes. It represents a significant step forward in open translation, helping people communicate across 55 languages, no matter where they are or what device they own.
The models support translation across 55 languages, covering both major and low-resource languages. TranslateGemma retains strong multimodal capabilities, showing improved ability to translate text within images even without specific multimodal fine-tuning. The three parameter sizes are optimized for different deployment environments.
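To give a sense of how these checkpoints could be used for plain text translation, here is a minimal sketch with Hugging Face transformers. The model identifier and the exact prompt format are assumptions for illustration, not confirmed details of the release.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder model ID -- substitute the official TranslateGemma checkpoint name.
MODEL_ID = "google/translategemma-4b-it"  # assumption, not a confirmed identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

# Instruction-style prompt asking for a translation; the prompt format
# expected by the released checkpoints may differ from this sketch.
messages = [
    {"role": "user",
     "content": "Translate the following text from English to French:\n"
                "Open models help people communicate across languages."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```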
TranslateGemma achieves its translation quality through a specialized two-stage fine-tuning process. First, supervised fine-tuning uses a diverse corpus of parallel data, including human-translated texts and synthetic translations from Gemini models. Second, reinforcement learning uses an ensemble of reward models to guide the models toward contextually accurate translations.
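As a rough illustration of the reward-ensemble idea, the sketch below scores a candidate translation with a weighted average of several reward functions. The scorers shown are hypothetical stand-ins; the actual reward models and weighting used for TranslateGemma are not described in this post.

```python
from typing import Callable, List

# A reward function maps (source, candidate, reference) to a scalar score.
RewardFn = Callable[[str, str, str], float]

def ensemble_reward(
    source: str,
    candidate: str,
    reference: str,
    scorers: List[RewardFn],
    weights: List[float],
) -> float:
    """Weighted average of individual reward-model scores for one candidate."""
    assert len(scorers) == len(weights)
    total = sum(w * fn(source, candidate, reference)
                for fn, w in zip(scorers, weights))
    return total / sum(weights)

# Two toy scorers, purely illustrative: one rewards a length ratio close to the
# source, the other rewards word overlap with the reference translation.
def length_ratio_score(src: str, cand: str, ref: str) -> float:
    return 1.0 - abs(len(cand) - len(src)) / max(len(src), 1)

def overlap_score(src: str, cand: str, ref: str) -> float:
    cand_words, ref_words = set(cand.lower().split()), set(ref.lower().split())
    return len(cand_words & ref_words) / max(len(ref_words), 1)

reward = ensemble_reward(
    "Bonjour le monde", "Hello world", "Hello, world",
    scorers=[length_ratio_score, overlap_score], weights=[0.3, 0.7],
)
print(f"ensemble reward: {reward:.3f}")
```

In an RL fine-tuning loop, scores like this would be computed for sampled translations and used as the training signal that steers the policy model toward higher-reward outputs.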
The models deliver high-fidelity translations with fewer parameters than baseline models, enabling higher throughput and lower latency without sacrificing accuracy. Compared with baseline Gemma models, they considerably reduce error rates across all tested languages.
TranslateGemma is designed for researchers and developers working on translation-related tasks. It provides powerful and adaptable tools for breaking down language barriers and fostering greater understanding across cultures. The models serve as a robust foundation for further adaptation, making them ideal for researchers to fine-tune their own state-of-the-art models for specific language pairs or to improve quality for low-resource languages.
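For teams adapting the models to a specific language pair, a lightweight approach is to attach parameter-efficient adapters and train only those. The sketch below uses LoRA via the peft library; the model ID and target modules are assumptions and should be adjusted to the released checkpoints.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

MODEL_ID = "google/translategemma-4b-it"  # placeholder name, not confirmed

base = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype=torch.bfloat16)

lora = LoraConfig(
    r=16,                  # low-rank adapter dimension
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights will be updated

# From here, train on parallel data for the target language pair using your
# preferred trainer (e.g. the Hugging Face Trainer or trl's SFTTrainer).
```

Because only the adapter weights are trained, this kind of fine-tuning can be run on modest hardware while leaving the base TranslateGemma weights untouched.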