Mistral AI has announced Codestral 25.01, an updated version of its code generation model aimed at improving software development workflows. The release builds on the previous Codestral model, which was widely used by developers for code completion, bug fixing, and test generation.
Codestral 25.01 brings several improvements: the model is roughly twice as fast as its predecessor, thanks to architectural optimizations and a more efficient tokenizer, and its 256,000-token context window supports larger codebases and more complex tasks. In benchmarks, it outperforms leading models of up to 100 billion parameters on fill-in-the-middle (FIM) tasks across various programming languages, achieving an average FIM score of 95.3% and surpassing competitors such as OpenAI's GPT-3.5 Turbo API.
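To make the benchmark concrete: in a fill-in-the-middle task, the model receives the code before and after the cursor and must generate the missing span between them. A minimal sketch of how such a prompt is typically assembled is below; the sentinel tokens are illustrative placeholders, since each FIM-trained model defines its own special tokens.

```python
# Illustrative sketch of a fill-in-the-middle (FIM) prompt.
# The <fim_*> sentinel tokens are placeholders for illustration;
# real FIM-trained models each define their own special tokens.

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix and suffix so the model generates the middle."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The model sees the code before and after the cursor and is asked
# to produce the missing span in between (here, the addition logic).
prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(2, 3))"
prompt = build_fim_prompt(prefix, suffix)
print(prompt)
```

This prefix/suffix framing is what distinguishes FIM benchmarks from plain left-to-right completion: the model is scored on whether the generated middle is consistent with both sides of the surrounding code.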
The release is available to developers through IDE plugins for Visual Studio Code and JetBrains, provided via the Continue platform. Enterprise users who require on-premises deployment or data residency can deploy the model locally. The API is available on Google Cloud's Vertex AI, is in private preview on Azure AI Foundry, and is planned for launch on Amazon Bedrock.
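For developers using the hosted API rather than an IDE plugin, a completion request pairs the model name with the prefix and suffix text. The sketch below builds such a request body; the endpoint path, model identifier, and field names are assumptions modeled on Mistral's public API documentation and should be verified against the current docs.

```python
import json

# Sketch of a request body for a Codestral-style FIM completion API.
# API_URL and the field names are assumptions based on Mistral's
# published API shape; check the current documentation before use.

API_URL = "https://api.mistral.ai/v1/fim/completions"  # assumed endpoint

payload = {
    "model": "codestral-latest",           # assumed model identifier
    "prompt": "def fibonacci(n):\n    ",   # code before the cursor
    "suffix": "\n    return result",       # code after the cursor
    "max_tokens": 64,
    "temperature": 0.2,
}

# An actual call would POST this JSON with an Authorization header, e.g.:
#   requests.post(API_URL,
#                 headers={"Authorization": f"Bearer {api_key}"},
#                 data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```

The same prefix/suffix pair works unchanged across the cloud offerings listed above, since each exposes the model behind its own endpoint while keeping the request semantics.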