I use AI agents to build UI features daily. The thing that kept annoying me: the agent writes code but never sees what it actually looks like in the browser. It can’t tell if the layout is broken or if the console is throwing errors.

I give the agent either a simple browser or Playwright access to a proper browser to do this. It works quite well, to the point where I can ask Claude to debug GLSL shaders running in WebGL with it.

Agreed. Anthropic added a plugin to CC (accessible under `/plugins`) that makes it even easier to add the Playwright MCP server to your project. It handles taking screenshots automatically.

It's not perfect though - I've personally found CC's VL to be worse than others such as Gemini's, but it's nice to have it completely self-contained.

This project desperately needs a "What does this do differently?" section because automated LLM browser screenshot diffing has been a thing for a while now.

+1

More power to you if you build a product out of this, I don't wanna be that guy who says Dropbox is dead because you can just set up FTP. But with Codex/Claude Code, I was able to achieve this very result just from prompting.

I mean, this is a free and open source project, so I don't think they're trying to make it into a product

Do you use Chrome DevTools MCP or how does it work?

Playwright MCP has screenshotting built in

Likewise, and often the playwright skill will verify using DOM API instead of wasting tokens on screenshots
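To be fair, structural checks do catch a whole class of bugs cheaply. A stdlib-only sketch of the principle (not the actual Playwright skill, just an illustration of checking the markup instead of pixels):

```python
from html.parser import HTMLParser

class IdCollector(HTMLParser):
    """Collect every id attribute that appears in the markup."""
    def __init__(self):
        super().__init__()
        self.ids: set[str] = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "id":
                self.ids.add(value)

def has_element(html: str, element_id: str) -> bool:
    """Structural check: is the element present in the markup at all?"""
    collector = IdCollector()
    collector.feed(html)
    return element_id in collector.ids
```

This flags a component that never made it into the DOM at all, for a fraction of the tokens a screenshot costs, though it obviously says nothing about how the element actually renders.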

> often the playwright skill will verify using DOM API instead of wasting tokens on screenshots

So... bypassing the whole "sees what it actually looks like in the browser / can't tell if the layout is broken" point the parent commenter is making? Seems worse, not better.