Anthropic has introduced Claude Haiku 4.5, a new compact AI model now available to all users worldwide. The model is aimed at developers, businesses, and anyone needing real-time processing for tasks such as chat assistants, customer support, or code prototyping. Claude Haiku 4.5 is accessible through the Claude API and Claude Code, as well as on the Amazon Bedrock and Google Cloud Vertex AI platforms.
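For developers getting started, a minimal sketch of calling the model through the official `anthropic` Python SDK might look like the following; the exact model identifier used here (`claude-haiku-4-5`) and the sample prompt are assumptions for illustration, not confirmed by the announcement.

```python
# Minimal sketch: send one prompt to Claude Haiku 4.5 via the Anthropic Python SDK.
# Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set in the
# environment; the model identifier below is an assumption, not from the article.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-haiku-4-5",  # assumed model ID for Claude Haiku 4.5
    max_tokens=512,
    messages=[
        {"role": "user", "content": "Draft a short reply to a customer asking about shipping times."}
    ],
)

print(message.content[0].text)
```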
The model is priced at $1 per million input tokens and $5 per million output tokens, making it cost-effective for users with large query volumes. According to Anthropic, Haiku 4.5 delivers performance on par with more complex models such as Claude Sonnet 4.5 while running more than twice as fast at roughly one-third the cost. In programming and tool-use tests, the model scored close to the flagship systems and significantly outperformed its predecessor in response speed.
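As a rough illustration of that pricing, the snippet below estimates the cost of a hypothetical workload; the rates come from the announcement, while the token volumes are made-up figures for the example.

```python
# Rough cost estimate at the published Haiku 4.5 rates ($1 / $5 per million tokens).
# The workload figures below are illustrative assumptions, not from the announcement.
INPUT_PRICE_PER_M = 1.00   # USD per million input tokens
OUTPUT_PRICE_PER_M = 5.00  # USD per million output tokens

input_tokens = 20_000_000   # e.g. a month of support-chat prompts (assumed)
output_tokens = 4_000_000   # e.g. generated replies (assumed)

cost = (input_tokens / 1e6) * INPUT_PRICE_PER_M + (output_tokens / 1e6) * OUTPUT_PRICE_PER_M
print(f"Estimated monthly cost: ${cost:.2f}")  # -> Estimated monthly cost: $40.00
```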
Claude Haiku 4.5 is particularly well suited for integration into workflows with high-speed demands, such as chat, support automation, or collaborative coding. The model can also work alongside larger Claude models, which can split a job into subtasks and distribute them across multiple Haiku instances for parallel execution, as in the sketch below. This lets businesses and developers build flexible solutions without excessive computational load.
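A minimal sketch of that fan-out pattern, assuming the `anthropic` Python SDK and a plain thread pool for the parallel calls; the subtask list and model identifier are illustrative assumptions, and in practice a larger model or the application itself would generate the subtasks.

```python
# Sketch: dispatch independent subtasks to Claude Haiku 4.5 in parallel.
# The subtask list is hard-coded here; model ID and prompts are assumptions.
from concurrent.futures import ThreadPoolExecutor

import anthropic

client = anthropic.Anthropic()

def run_subtask(prompt: str) -> str:
    """Send one subtask to Haiku 4.5 and return its text response."""
    response = client.messages.create(
        model="claude-haiku-4-5",  # assumed model ID
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

subtasks = [
    "Summarize this bug report: ...",
    "Write a unit test skeleton for the parser module.",
    "Draft release notes for version 1.2.",
]

# Fan the subtasks out across worker threads; each thread makes one API call.
with ThreadPoolExecutor(max_workers=len(subtasks)) as pool:
    results = list(pool.map(run_subtask, subtasks))

for task, result in zip(subtasks, results):
    print(f"--- {task}\n{result}\n")
```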
According to published testing results, Claude Haiku 4.5 achieved high scores on tasks involving Python tool use, mathematical problem solving, and programming. Early users already note its ability to execute complex work scenarios quickly and provide real-time feedback.
Anthropic also reported improved safety metrics for the new model: Claude Haiku 4.5 was released under AI Safety Level 2, a less restrictive standard than the one applied to the company's more powerful systems. The model shows lower rates of undesirable behavior than previous versions and is described as the safest among the company's models to date.

