The Chinese company Moonshot AI has introduced Kimi-K2, an open-weight language model with one trillion parameters designed to handle complex tasks without a dedicated reasoning module. The model uses a mixture-of-experts (MoE) architecture, activating 32 billion parameters per token. Kimi-K2 ships in two versions: Kimi-K2-Base for research and fine-tuning, and Kimi-K2-Instruct for chat and agentic scenarios.
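For readers unfamiliar with the mixture-of-experts idea, the sketch below shows generic top-k expert routing in PyTorch: a router scores every expert for each token and only the highest-scoring few are executed, which is how a model can carry a huge total parameter count while activating only a fraction of it per token. The layer sizes, expert count, and top-k value here are arbitrary illustrations, not Kimi-K2's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoELayer(nn.Module):
    """Generic top-k mixture-of-experts layer (illustrative only;
    sizes and routing details are hypothetical, not Kimi-K2's design)."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=64, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        scores = self.router(x)                        # (batch, seq, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        # Only the selected experts run for each token; the rest stay idle,
        # so the active parameter count is far smaller than the total.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = TopKMoELayer()
tokens = torch.randn(1, 8, 512)
print(layer(tokens).shape)  # torch.Size([1, 8, 512])
```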
On standard language-model benchmarks, Kimi-K2-Instruct matches or exceeds closed commercial models. On SWE-bench Verified, for example, it reached 65.8% in agentic mode, surpassing GPT-4.1 and approaching Claude Sonnet 4. Kimi-K2 also leads on LiveCodeBench and OJBench, benchmarks that measure the ability to solve programming problems.
The model also posts strong results in mathematics and science, outperforming competitors on benchmarks such as AIME, GPQA-Diamond, and MATH-500. In an informal test, Kimi-K2 produced a detailed SVG drawing, a task that often trips up other models.
Moonshot AI positions Kimi-K2 as optimized for agentic scenarios: it can execute commands, call external tools, write and debug code, and carry out complex multi-step tasks without human intervention. In demonstrations, the model independently analyzed data, ran statistical calculations, and built interactive web pages.
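Because the endpoint is advertised as OpenAI-compatible, agent-style tool use can be sketched with the standard chat-completions tool-calling format. The base URL, model id, and the weather function below are assumptions for illustration, not values taken from the article or from Moonshot's documentation.

```python
import json
from openai import OpenAI

# Base URL, API key, and model id are assumptions for illustration.
client = OpenAI(base_url="https://api.moonshot.ai/v1", api_key="YOUR_KEY")
MODEL = "kimi-k2-instruct"  # hypothetical model id

def get_weather(city: str) -> str:
    """Hypothetical local tool the model is allowed to call."""
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 23})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Beijing?"}]
resp = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
msg = resp.choices[0].message

# If the model decided to call the tool, run it locally and feed the result back.
if msg.tool_calls:
    call = msg.tool_calls[0]
    result = get_weather(**json.loads(call.function.arguments))
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)
    print(final.choices[0].message.content)
```

An agent loop simply repeats this call-tool-return cycle until the model stops requesting tools.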
The model is accessible through an OpenAI-compatible API and can be deployed locally with popular inference engines. Pricing starts at $0.15 per million input tokens for cached requests. Running Kimi-K2 yourself requires powerful hardware, but instructions for local deployment are available for research use. The license is a modified MIT license that adds a requirement to prominently display the "Kimi K2" name in very large-scale products.
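As a rough sketch of what local deployment looks like with one popular open inference engine, the snippet below uses vLLM's Python API. The Hugging Face repository id, the degree of tensor parallelism, and the sampling settings are assumptions for illustration; a one-trillion-parameter MoE model needs a multi-GPU server well beyond a single consumer card.

```python
# Minimal local-serving sketch with vLLM (illustrative assumptions throughout).
from vllm import LLM, SamplingParams

llm = LLM(
    model="moonshotai/Kimi-K2-Instruct",  # assumed repository id
    tensor_parallel_size=8,               # shard weights across 8 GPUs (assumption)
    trust_remote_code=True,
)

params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Explain mixture-of-experts routing in two sentences."], params)
print(outputs[0].outputs[0].text)
```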