I'm building my personal home right now. The AI image models have been a game-changer in designing the look of the house. My architect did an OK job, but the details that Nano Banana added really bring the house up a notch. I just do hundreds of renders from the basic 3D models, find looks that I like, and iterate from there. We're implementing Nano Banana's renders in place of our interior designer's designs. After using Nano Banana for our interiors, we wouldn't hire an interior designer again.

I think part of the issue with architects and designers today is that they use CAD too much. It's easy to design boxes and basic roof lines in CAD. It's harder to put in curves and more craftsman features. Nano Banana's renders have more organic design features IMO.

Our house is looking great and we're very happy with how it's going so far, with a lot of the thanks going to Nano Banana.

Part of the job of interior design is delivering the promised images in … y'know, physical reality? How are you going from Nano Banana images to actual plans, materials, finishes, products, paint codes, … ?

I just gave the renders to the cabinet makers and they had no problems recreating them.

Interesting. I model interior architecture as "here's $xxxK, make it nice" and they do a bunch of work to figure out what you mean by nice, and a bunch more work to codify your definition of nice into, like, SKUs of sconces and so on. Seems like NB helped you figure out your definition of nice, and your subcontractor had a good designer on staff to execute on that.

A sufficiently detailed render won't require a designer to figure out the materials. Any (reasonably competent) contractor can take a sufficiently detailed render with him to the store and find matching products. At least assuming the thing in the render actually exists.

He can also send back a picture of the real product for approval. I think the primary difference here is the level of involvement. A quick consult and then the professional "makes it all work" versus hands on design with the client figuring out all the details for himself.

A designer knows things from experience and would be aware of small details that, if not designed correctly, become very apparent when built in reality.

The interior designer doesn't really do squat. They can do plan drawings and spec some off-the-shelf cupboards and furniture, but they don't implement anything.

Presumably you give the render to a designer and they recreate it using real materials.

Not the OP, but this is what I did too and bypassed the designer. I iterated with Nano Banana and gave the result to the company that builds the kitchen. The middleman is gone now.

Interesting! Have you discovered any prompting best practices while iterating with Nano Banana?

This is what I would do too

Is any of this intrinsically a strength of Nano Banana rather than of other models/generative tools? Have you tried doing the same with, say, Klein, ZIT, etc.?

NB Pro can do some seriously impressive edits around interior decorating - see the prompt that replaces the window with a mirror which correctly reflects the room. It's not perfect, but it's still damn impressive.

https://mordenstar.com/blog/edits-with-nanobanana

I'm deeply impressed, especially with the "replace window with mirror" edit. Not only did it get the window right, it also changed the illumination of the whole room while keeping all the other details unchanged.

Right? That part kind of blew my mind too - the multimodal model actually altered the overall lighting in the room, eliminating all the reflections and specular highlights when the natural light was taken away, WITHOUT being asked to in the prompt.

Same! I redid my backyard entirely and needed ideas. Gemini took a pile of dirt and gave me countless ideas, improved my plans, recommended materials, etc. A designer gave me two out-of-the-box ideas that Gemini didn't come up with, but it did everything else perfectly. (The designer said: put a patio out in the yard and put your table there, and make your ugly shed the center of attention, since you'll never succeed in trying to hide it.)

Same thing here. I took a picture of some gravel/grass and asked it to show me what it'd look like with tiles. I showed it another part of the property, and asked it to show me what it would look like with a raised lawn. Super impressive to be able to see a cloudy idea in the physical realm like that.

Did you do this in Gemini or Nano Banana? Should I give multiple viewpoints and a top view of the backyard? I'm trying to see how much info to give.

Related: I asked AI to find me a house to buy and went with the first recommendation. It did a better job searching than I did.

Curious on this - what was the prompt like? How did you give it access to listings?

I actually built an app to accomplish this exact thing as I was finishing building my home and was clueless when it came to interior design. I'm genuinely astonished by the capabilities of these models with regards to this, and it feels vastly underutilized by the general populace. Being able to try out multiple paint colors in seconds, or add real furniture or wall decor from Ikea, or move objects around instantly - it still blows my mind.

Can you write a bit more about your workflow? I've been thinking about doing the same, but since I'm very non-interior-design minded have struggled to ask the right things.

Like... What are your inputs to the model? Empty renders of the space, or more fully decorated views/ photos? Do you have a light harness around this to help you discover the style you like and then stay consistent with it?

Do you find that giving a lot of context around the space you're designing helps (it hasn't in my attempts)?

I started with SketchUp to make basic floor plans and house shapes. I had a rough idea of the style of the home; I picked "Transitional English Estate" since the build site is out on a farm that sorta looks like the Cotswolds. I used AI in this process to get rough renders and feedback on the floor plan. I then took that basic floor plan and the house dimensions to a draftsman, who did a lot of tweaking to get it up to code and fix issues. I took his plans to a SketchUp pro on Fiverr, who made a detailed SketchUp model. I then took screenshots of that model from different perspectives and tweaked the prompt to get renders I liked. Those changes were reincorporated into the blueprints. I did the same thing with the interior: took screenshots from SketchUp, put them into AI, and tweaked the prompt. https://imgur.com/a/lSIYTYr
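For anyone who wants to script the screenshot-to-render loop rather than prompting by hand, here's a minimal sketch of how the prompt variations could be batched. The style name, view list, and tweak phrases are illustrative stand-ins, not the OP's actual setup, and the final image call (e.g. via an image-model SDK) is left as a comment rather than assumed.

```python
from itertools import product

def build_prompts(style, views, tweaks):
    """Cross-join camera views with prompt tweaks so each SketchUp
    screenshot gets every style variation rendered once."""
    return [
        (view, f"Photorealistic render of this {view} in a "
               f"{style} style, {tweak}.")
        for view, tweak in product(views, tweaks)
    ]

# Hypothetical values standing in for the OP's real screenshots/prompts.
prompts = build_prompts(
    style="Transitional English Estate",
    views=["front elevation", "kitchen interior"],
    tweaks=["limestone facade", "warm evening lighting"],
)

for view, prompt in prompts:
    # Here each (screenshot, prompt) pair would be sent to the image
    # model; the results are then reviewed and the keepers iterated on.
    print(view, "->", prompt)
```

Even a trivial loop like this makes the "hundreds of renders" part tractable: you review a grid of outputs instead of retyping prompts one at a time.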

Super interesting - can you share some of the other elements: screenshots of the SketchUp model, the AI image output, etc.?

Would you recommend this workflow to others, or are you just noting that it's what you did? Any regrets, roadblocks, frustrations?

A ballpark price would also be interesting: total cost of the SketchUp license + AI token cost + Fiverr modeler + draftsman, etc. I assume under $1k?

Mine was far more lightweight: I just uploaded pics of my yard and prompted manually a bunch of times. Sometimes I'd find reference images to give as context, draw on the image to call out specific areas, etc.

It wouldn't show me the exact things I wanted, but it got close enough that I could test ideas and iterate quickly.

Out of curiosity: what is your input to the model? A CAD file or a drawing?

I find it does a good job with isometric views from floor plans. However, I needed Gemini 3.1 Pro to have a chance at rendering 3D human-point-of-view images from floor plans.

Any chance you'd be willing to share an album? I've considered doing this for my own home and I'd be psyched to study practical examples. Honestly this would make one helluva blog post (imo).

Did you have to change anything based on cost and what the contractors can actually do?

What tooling are you using to drive this and manage it?