Black Forest Labs has announced FLUX.1 Kontext, a new line of generative image models. The PRO and MAX models let users create and edit images with text prompts and use reference photos for fine-grained control over the result. Users can change clothing, restyle a scene, or rewrite text directly on an image without losing character consistency or artistic style.
Today we're releasing FLUX.1 Kontext – a suite of generative flow matching models that allow you to generate and edit images. Unlike traditional text-to-image models, Kontext understands both text AND images as input, enabling true in-context generation and editing. pic.twitter.com/zleJGuXDge
— Black Forest Labs (@bfl_ml) May 29, 2025
Kontext [pro] supports iterative refinement of a result, which helps preserve detail and consistency across successive edits. Kontext [max] focuses on speed, stability, and precise instruction following. Both models run faster than most competitors: according to internal tests, image generation time has been cut eightfold compared to previous versions.
The new features are already available through the Black Forest Labs API and partner services, including KreaAI, Freepik, Lightricks, Leonardo, Replicate, FAL, Runware, and Together. For those who want to try the models without registration, there is a web platform called Playground, where new users receive 200 credits, enough to generate roughly twelve images in Kontext [pro] mode.
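The hosted models are reached over HTTP APIs from Black Forest Labs or partners such as Replicate. The sketch below shows how an image-edit request might be assembled before sending; the parameter names (`prompt`, `input_image`) and the model slug are assumptions for illustration, not the documented schema of any provider.

```python
import json

# Hypothetical payload shape; check the provider's API reference before
# relying on these field names -- this is an illustrative sketch only.
def build_edit_request(prompt: str, image_url: str,
                       model: str = "flux-kontext-pro") -> dict:
    """Assemble a JSON payload for an image-editing call."""
    return {
        "model": model,
        "prompt": prompt,          # the edit instruction, e.g. "make the jacket red"
        "input_image": image_url,  # reference photo the edit is applied to
    }

payload = build_edit_request(
    "Rewrite the sign to say OPEN, keep the original style",
    "https://example.com/storefront.png",
)
print(json.dumps(payload, indent=2))
```

An actual call would POST this payload, along with an API key, to the provider's endpoint; both the endpoint and authentication differ per service, so consult the respective documentation.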
For researchers and developers, the company has opened a private beta of the Kontext [dev] model with open weights, which it plans to publish on Hugging Face after completing security checks. Early users report clean edits and stable characters across multi-step scenarios, as well as lower costs than alternatives such as OpenAI's gpt-image-1.