Google has introduced Gemini 3 Flash, a new version of its AI model that is now the default in the Gemini app and will soon arrive in Google Search. Gemini 3 Flash combines fast query processing with high-quality handling of complex tasks while keeping costs moderate for users.
The model is available to millions of people through the Gemini app and AI Mode in Search, as well as through the Gemini API, Google AI Studio, Vertex AI, Gemini Enterprise, Gemini CLI, and Android Studio. It is aimed at developers, businesses, and everyday users who rely on AI for programming and for text, image, and video analysis.
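For developers reaching the model through the Gemini API, a request can be as short as the sketch below. It uses the google-genai Python SDK with an API key from Google AI Studio; the model identifier string "gemini-3-flash" is an assumption for illustration, since the article does not state the exact name.

```python
# Minimal sketch: one text request to the model via the Gemini API
# using the google-genai Python SDK.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # key obtained in Google AI Studio

response = client.models.generate_content(
    model="gemini-3-flash",  # assumed identifier for Gemini 3 Flash
    contents="Summarize this article in two sentences.",
)
print(response.text)
```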
Gemini 3 Flash runs faster than its predecessor, Gemini 2.5 Flash, and delivers accurate answers while using about 30% fewer tokens on everyday tasks. Pricing is $0.50 per million input tokens and $3 per million output tokens, which keeps the model affordable for a wide range of workloads.
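To put those rates in concrete terms, the sketch below estimates the cost of a single request at the listed prices; the token counts in the example are hypothetical.

```python
# Illustrative cost estimate at the stated Gemini 3 Flash rates:
# $0.50 per 1M input tokens, $3.00 per 1M output tokens.
INPUT_RATE_PER_M = 0.50
OUTPUT_RATE_PER_M = 3.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 10,000-token prompt with a 2,000-token reply
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0110
```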
The model scored highly on the GPQA Diamond, MMMU Pro, and SWE-bench Verified benchmarks, reflecting strong performance in coding and multimodal tasks. Among the first companies to integrate Gemini 3 Flash were JetBrains, Bridgewater Associates, and Figma, which highlighted the model's speed, quality of reasoning, and efficiency.

