gpt-image-1 (aka "4o") is still the most useful general purpose image model, but damn does this come close.
I'm deep in this space and feel really good about FLUX.1 Kontext. It fills a real gap, and it makes sure that OpenAI / Google aren't the runaway victors of images and video.
Prior to gpt-image-1, the biggest problems in images were:
- prompt adherence
- generation quality
- instructiveness (e.g., "put the sign above the second door")
- consistency of styles, characters, settings, etc.
- deliberate, exact posing of characters and set pieces
- compositing different images or layers together
- relighting
Fine-tunes, LoRAs, and IPAdapters fixed a lot of this, but they were a real pain in the ass. ControlNets solved for pose, but the workflow was still awkward and ugly. ComfyUI orchestrated that whole layer of hacks well enough to get the job done, but it was fragile, unmaintainable glue. It always felt like a fly-by-night solution.
OpenAI's gpt-image-1 solved all of these things with a single multimodal model. You could throw out ComfyUI and all the rest of that pre-gpt-image-1 garbage and work directly with the model itself. It was magic.
Unfortunately, gpt-image-1 is ridiculously slow, insanely expensive, and highly censored (you can't use a lot of copyrighted characters or celebrities, and plenty of totally SFW prompts are blocked). It can't be fine-tuned, so you're stuck with the "ChatGPT style" and what the community calls the "piss filter" (perpetually yellowish images).
And the biggest problem with gpt-image-1 is that, because it manipulates image and text tokens in the same space, it can't retain the pixel-precise structure of reference images. Because of that, it cannot function as an inpainting/outpainting model whatsoever. You can't use it to edit existing images if the original pixels matter.
Even with those flaws, gpt-image-1 was a million times better than Flux, ComfyUI, and the whole ball-of-wax stack of hacks we'd built up. Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train a competitor and that OpenAI would win the space forever. We'd be left with only the AI hyperscalers building these models. And it would suck if Google and OpenAI were the only providers of tools for artists.
Black Forest Labs just proved that wrong in a big way! While this model doesn't do everything as well as gpt-image-1, it's within the same order of magnitude of quality. And it's ridiculously fast (10x faster) and cheap (10x cheaper).
Kontext isn't as instructive as gpt-image-1. You can't give it multiple pictures and ask it to copy characters from one image into the pose of another. You can't have it follow complex compositing requests. But it's close, and that makes it immediately useful. It fills a real gap in the space.
Black Forest Labs did the right thing by developing this instead of a video model. We need much more innovation in the image model space, and there are still gaps to fill:
- Fast
- Truly multimodal like gpt-image-1
- Instructive
- Posing built into the model. No ControlNet hacks.
- References built into the model. No IPAdapter, no required character/style LoRAs, etc.
- Ability to address objects, characters, mannequins, etc. for deletion / insertion.
- Ability to pull sources from across multiple images with or without "innovation" / change to their pixels.
- Fine-tunable (so we can get higher quality and precision)
Something like this that works in real time would literally change the game forever. Please build it, Black Forest Labs.
Feature requests aside, Kontext is a great model. I'm going to be learning it over the next few weeks.
Keep at it, BFL. Don't let OpenAI win. This model rocks.
Now let's hope Kling or Runway (or, better, someone who does open weights -- BFL!) develops a Veo 3 competitor.
I need my AI actors to "Meisner", and so far only Veo 3 comes close.
When I first saw gpt-image-1, I was equally scared that OpenAI had used its resources to push so far ahead that more open models would be left completely in the dust for the foreseeable future.
Glad to see this release. It also puts more pressure on OpenAI to make their model less lobotomized and to increase its output quality. This is good for everyone.
>Given the expense of training gpt-image-1, I was worried that nobody else would be able to afford to train the competition
OpenAI models are expensive to train because it's beneficial for OpenAI that they be expensive, and there's no incentive to optimize when they're going to run in a server farm anyway.
Probably a bunch of teams never bothered trying to replicate DALL-E 1 and 2 because the training runs cost millions, yet SD 1.5 showed that comparable tech can run on a home computer, be trained from scratch for thousands of dollars, and be fine-tuned for cents.
Your comment is def why we come to HN :)
Thanks for the detailed info
Thought the SAME thing
this breakdown made my day thank you!
I'm building a web-based paint/image editor with AI inpainting etc., and this is going to be a great model to use, both price-wise and capability-wise.
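For anyone curious, here's roughly the call flow I have in mind for the editor's "AI edit" action: submit the instruction plus the source image, then poll until the result is ready. This is just a minimal sketch assuming BFL's async API shape -- the endpoint path, header name, body fields, and status strings below are my guesses from skimming their docs, so verify them against the official reference before using any of this.

```typescript
// Sketch: instruction-based image edit with FLUX.1 Kontext from the editor.
// CAVEAT: endpoint, header, field names, and status values are assumptions,
// not verified against BFL's actual API reference.

const BFL_API = "https://api.bfl.ml/v1"; // assumed base URL
const API_KEY = "YOUR_BFL_API_KEY";      // placeholder key

async function editImage(prompt: string, imageBase64: string): Promise<string> {
  // Submit the edit: a plain-language instruction plus the source image.
  const submit = await fetch(`${BFL_API}/flux-kontext-pro`, {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-key": API_KEY },
    body: JSON.stringify({ prompt, input_image: imageBase64 }),
  });
  const { id } = (await submit.json()) as { id: string };

  // The API is async: poll for the task, return the result image URL when ready.
  for (;;) {
    await new Promise((r) => setTimeout(r, 1000));
    const poll = await fetch(`${BFL_API}/get_result?id=${id}`, {
      headers: { "x-key": API_KEY },
    });
    const data = (await poll.json()) as {
      status: string;
      result?: { sample: string };
    };
    if (data.status === "Ready" && data.result) return data.result.sample;
    if (data.status !== "Pending") throw new Error(`Edit failed: ${data.status}`);
  }
}
```

On the editor side I'd base64-encode the selected region client-side, push it through something like this, and swap the returned image back into the canvas layer.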
Completely agree. So happy it's not any one of these big companies controlling the whole space!
What are you building? Ping me if you want a tester for half-finished, breaking stuff.
Thanks for the detailed post!