The Chinese startup MiniMax has introduced MiniMax-M1, a new open language model for working with large volumes of text and complex reasoning tasks. The model supports a context window of up to one million tokens and a "thinking budget" of up to 80,000 tokens, allowing it to analyze long texts in depth. MiniMax-M1 was trained with an efficient reinforcement learning approach, making it more economical than comparable open models.
MiniMax-M1 is available for free under the Apache-2.0 license in two versions on the Hugging Face platform. In testing, notably on OpenAI MRCR, a benchmark that evaluates reasoning over long, multi-turn contexts, the model outperformed DeepSeek-R1-0528 and Qwen3-235B-A22B and approached the level of the closed model Gemini 2.5 Pro.
MiniMax was founded in Shanghai at the end of 2021 and is backed by investors including Alibaba. The company specializes in language and multimodal AI. Earlier this year, it released MiniMax-Text-01, which can work with a context of up to four million tokens, as well as MiniMax-VL-01, a system for processing text and images.