It seems more accurate than 4o image generation at preserving original details. If I give it my 3D animal character and ask for a minor change like adjusting the lighting, 4o will completely mangle the character's face and subtly alter the body and other details. This Flux model keeps the visible geometry almost perfectly intact even when asked to significantly change the pose or lighting.
gpt-image-1 (aka "4o") is still the most useful general purpose image model, but damn does this come close.
I'm deep in this space and feel really good about FLUX.1 Kontext. It fills a gap that badly needed filling, and it makes sure that OpenAI / Google aren't the runaway victors of images and video.
Prior to gpt-image-1, the biggest problems in images were consistency (keeping a specific character and style stable across generations) and control (pose, composition).
Fine tunes, LoRAs, and IPAdapters fixed a lot of this, but they were a real pain in the ass. ControlNets solved for pose, but it was still awkward and ugly. ComfyUI orchestrated that whole layer of hacks well enough to get the job done, but it was unmaintainable glue, and it always felt like a fly-by-night solution.

OpenAI's gpt-image-1 solved all of these things with a single multimodal model. You could throw out ComfyUI and all the other accumulated garbage and work directly with the model itself. It was magic.
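To make the contrast concrete, here is a rough sketch of the old stack next to the single-model approach. This is illustrative only; the model IDs and file paths are placeholders, not a specific recommended setup:

    # The old stack: a ControlNet for pose plus a character LoRA, glued together
    # with diffusers. Every piece has to be separately found, version-matched,
    # and babysat.
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("my_character_lora.safetensors")  # identity hack via LoRA

    pose = load_image("pose_reference.png")
    old_way = pipe("my character, studio lighting", image=pose).images[0]

    # The single-model approach: one multimodal edit call, no glue.
    from openai import OpenAI

    client = OpenAI()
    new_way = client.images.edit(
        model="gpt-image-1",
        image=open("character.png", "rb"),
        prompt="same character, change the lighting to golden hour",
    )

Everything in the first half has to be assembled and maintained by hand; the single edit call is the whole workflow.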
Unfortunately, gpt-image-1 is ridiculously slow, insanely expensive, and highly censored (you can't use a lot of copyrighted characters or celebrities, and plenty of totally SFW prompts get blocked). It can't be fine tuned, so you're stuck with the "ChatGPT style" and what the community calls the "piss filter" (perpetually yellowish images).
And the biggest problem with gpt-image-1: because it manipulates image and text tokens in the same space, it can't retain the pixel-precise structure of reference images. Because of that, it cannot function as an inpainting/outpainting model whatsoever. You can't use it to edit existing images where the original image matters.
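To be concrete about why that matters: a conventional diffusion inpainter can re-impose the original pixels outside the mask at every denoising step, so the untouched region comes back essentially bit-exact, while a model that re-tokenizes and regenerates the whole image can't make that guarantee. A minimal sketch of the compositing trick; the function arguments here are placeholders for whatever a particular scheduler and model actually provide:

    # RePaint-style mask compositing: generated content only ever lands inside
    # the mask, and the original is copied back everywhere else.
    def inpaint(original, mask, num_steps, denoise_step, add_noise, random_noise):
        x = random_noise(original.shape)          # start from pure noise
        for t in reversed(range(num_steps)):
            x = denoise_step(x, t)                # model predicts a less-noisy image
            known = add_noise(original, t)        # original content at this noise level
            x = mask * x + (1 - mask) * known     # keep generated pixels only inside the mask
        return mask * x + (1 - mask) * original   # unmasked region is exact by construction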
Even with those flaws, gpt-image-1 was a million times better than Flux, ComfyUI, and all the other ball of wax hacks we've built up. Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train the competition and that OpenAI would win the space forever. We'd be left with only hyperscalers of AI building these models. And it would suck if Google and OpenAI were the only providers of tools for artists.
Black Forest Labs just proved that wrong in a big way! While this model doesn't do everything as well as gpt-image-1, it's within the same order of magnitude. And it's ridiculously fast (10x faster) and cheap (10x cheaper).
Kontext doesn't follow instructions as well as gpt-image-1. You can't give it multiple pictures and ask it to copy characters from one image into the pose of another. You can't have it follow complex compositing requests. But it's close, and that makes it immediately useful. It fills a real gap in the space.
Black Forest Labs did the right thing by developing this instead of a video model. We need much more innovation in the image model space, and we need more gaps to be filled:
Something like this that works in real time would literally change the game forever. Please build it, Black Forest Labs.
All of those feature requests aside, Kontext is a great model. I'm going to be learning it over the next few weeks.
Keep at it, BFL. Don't let OpenAI win. This model rocks.
Now let's hope Kling or Runway (or, better, someone who does open weights -- BFL!) develops a Veo 3 competitor.
I need my AI actors to "Meisner", and so far only Veo 3 comes close.
When I first saw gpt-image-1, I was equally scared that OpenAI had used its resources to push so far ahead that more open models would be left completely in the dust for the foreseeable future.
Glad to see this release. It also puts more pressure onto OpenAI to make their model less lobotomized and to increase its output quality. This is good for everyone.
>Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train the competition
OpenAI models are expensive to train because it's beneficial to OpenAI for them to be expensive, and there's no incentive to optimize when they're gonna run in a server farm anyway.
Probably a bunch of teams never bothered trying to replicate Dall-E 1+2 because the training run cost millions, yet SD1.5 showed us comparable tech can run on a home computer and be trained from scratch for thousands or fine tuned for cents.
Your comment is def why we come to HN :)
Thanks for the detailed info
Thought the SAME thing
this breakdown made my day thank you!
I'm building a web-based paint/image editor with AI inpainting, etc.,
and this is going to be a great model to use, price-wise and capability-wise.
Completely agree, so happy it's not any one of these big companies controlling the whole space!
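For anyone curious, the backend for that kind of editor can be pretty small. A minimal sketch, assuming a FastAPI server and a generic diffusers inpainting pipeline as a stand-in (the model name and route here are illustrative; swap in whatever you actually deploy):

    # The browser sends the canvas image plus the user-painted mask; the server
    # runs an inpainting pipeline and returns the edited PNG.
    import io
    import torch
    from PIL import Image
    from fastapi import FastAPI, UploadFile
    from fastapi.responses import Response
    from diffusers import AutoPipelineForInpainting

    app = FastAPI()
    pipe = AutoPipelineForInpainting.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    @app.post("/inpaint")
    async def inpaint(image: UploadFile, mask: UploadFile, prompt: str):
        img = Image.open(io.BytesIO(await image.read())).convert("RGB")
        msk = Image.open(io.BytesIO(await mask.read())).convert("RGB")
        result = pipe(prompt=prompt, image=img, mask_image=msk).images[0]
        buf = io.BytesIO()
        result.save(buf, format="PNG")
        return Response(content=buf.getvalue(), media_type="image/png")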
What are you building? Ping me if you want a tester for half-finished, breaking stuff.
Thanks for the detailed post!
Anything is more accurate than the LLMs at generating images. ChatGPT, Google Gemini, all of them... they're not optimized for image generation. It's why Veo is an entirely different model from Google, for example. And even Veo isn't the best video model either. People dedicated to images and video are just spending more time here (such as Black Forest Labs), and as a result those specialized models are better.
What's better than veo?