Honestly, you could just sketch a basic wireframe in any design software (MS Paint would work), grab a screenshot of a website whose design you like, and tell it "apply the aesthetic from the website in this screenshot to the wireframe", and it would probably get 80% (probably more) of the way there. That's something that would have taken me more than a day in the past.
I've been in web design since images were first introduced to browsers, and modern designs for the majority of sites are more templated than ever. AI can already generate inspiration, prototypes, and designs that go a long way toward matching these, and then you can juice them with transitions/animations or whatever else you might want.
The other day I tested an AI by giving it a folder of images, each named to describe its content, intended use, and proportions (e.g., drone-overview-hero-landscape.jpg), told it which site it was redesigning, and it did a very serviceable job that would at least match a cheap designer. All on the first run, in a few seconds, with a very basic prompt. Obviously, an AI that can actually see the images could understand their contents itself and skip the naming step easily enough.
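For anyone curious what that looks like in practice, here's a rough sketch of the idea (not my actual script; the folder name, site URL, and prompt wording are all placeholders): you just list the descriptively named files and hand the model a single blob of text, leaving the actual model call to whatever tool you happen to be using.

    from pathlib import Path

    IMAGE_DIR = Path("images")      # hypothetical folder of descriptively named assets
    SITE = "https://example.com"    # placeholder for the site being redesigned
    EXTENSIONS = {".jpg", ".jpeg", ".png", ".webp"}

    # Collect the descriptive filenames, e.g. "drone-overview-hero-landscape.jpg"
    assets = sorted(p.name for p in IMAGE_DIR.iterdir() if p.suffix.lower() in EXTENSIONS)

    prompt = (
        f"Redesign {SITE}. Use the following image assets; each filename describes "
        "its content, intended use, and proportions:\n"
        + "\n".join(f"- {name}" for name in assets)
        + "\nProduce a single-page HTML/CSS prototype that places each asset appropriately."
    )

    print(prompt)  # paste into whichever model/tool you're using

The filenames carry all the context the model needs; the layout work is entirely on its side.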