The French company Mistral AI has announced Mistral Small 3.2-24B Instruct-2506, an updated release of its open AI model. The new version builds on the previous Mistral Small 3.1 and focuses on more accurate instruction following, more stable responses, and improved function calling. The model’s architecture is unchanged, but the developers have made several refinements that show up in both internal assessments and external benchmark results.
Introducing Mistral Small 3.2, a small update to Mistral Small 3.1 to improve:
– Instruction following: Small 3.2 is better at following precise instructions
– Repetition errors: Small 3.2 produces less infinite generations or repetitive answers
– Function calling: Small… pic.twitter.com/cYptesvpFY
— Mistral AI (@MistralAI) June 20, 2025
According to Mistral AI, Small 3.2 adheres more closely to given instructions and produces infinite or repetitive responses less often, an issue the previous version had with long or ambiguous queries. The model runs on a single Nvidia A100 or H100 GPU with 80 GB of memory, which broadens deployment options for companies with limited resources.
Compared to version 3.1, Small 3.2’s instruction accuracy in internal tests rose from 82.75% to 84.78%. On the external WildBench v2 set, the score rose by nearly 10 percentage points, and on Arena Hard v2 it more than doubled. The share of infinite responses fell from 2.11% to 1.29%. The model also improved on the code and text benchmarks HumanEval Plus, MBPP Pass@5, and SimpleQA, while the average result on vision tasks remained almost unchanged.
Mistral Small 3.2 is distributed under the Apache 2.0 license and is available on the Hugging Face platform. The model supports the vLLM and Transformers frameworks, requiring about 55 GB of video memory in bf16 or fp16 modes.
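The roughly 55 GB figure is consistent with simple arithmetic: 24 billion parameters at two bytes each in bf16/fp16 come to about 48 GB of weights, and serving frameworks add overhead for activations, the KV cache, and buffers. A minimal back-of-the-envelope sketch (the 15% overhead factor is an illustrative assumption, not a figure published by Mistral AI):

```python
# Rough VRAM estimate for serving a 24B-parameter model in bf16/fp16.
PARAMS = 24e9           # parameter count of Mistral Small 3.2
BYTES_PER_PARAM = 2     # bf16 and fp16 both store 2 bytes per weight
GB = 1e9

weights_gb = PARAMS * BYTES_PER_PARAM / GB   # 48.0 GB of raw weights

# Overhead for activations, KV cache, and framework buffers:
# an illustrative assumption, not a published number.
OVERHEAD_FACTOR = 1.15
total_gb = weights_gb * OVERHEAD_FACTOR      # ≈ 55.2 GB

print(f"weights: {weights_gb:.1f} GB, with overhead: {total_gb:.1f} GB")
```

Under these assumptions the estimate lands near the stated 55 GB requirement, which also explains why a single 80 GB A100 or H100 leaves headroom for longer contexts.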