I get my best coding ideas when I'm nowhere near my desk — usually right
  as I'm falling asleep. I got tired of losing that momentum, so I built
  LazyGravity.

  It's a local Discord bot that hooks up Antigravity to your phone. I can
  ship fixes, kick off long implementation tasks, or start whole features
  from bed, the train, wherever. You send a message in Discord, Antigravity
  executes it on your home PC, and the results come back as rich embeds you
  can reply to with follow-up instructions.

  How it works: it drives the Antigravity UI directly via Chrome DevTools
  Protocol over WebSocket (Runtime.evaluate on the Electron shell's DOM).
  No private API hacking — no risk of account bans like with tools that
  reverse-engineer proprietary APIs.
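  To make the CDP approach concrete, here's a minimal sketch of building a
  Runtime.evaluate request. It assumes the Electron app was launched with
  --remote-debugging-port=9222 (endpoint discovery via /json is omitted),
  and the DOM selector is hypothetical, not LazyGravity's actual code:

```typescript
// Minimal sketch: driving an Electron app's page via CDP Runtime.evaluate.
// Assumes --remote-debugging-port=9222; target discovery is omitted.

type CdpRequest = { id: number; method: string; params?: Record<string, unknown> };

let nextId = 0;
function evaluateRequest(expression: string): CdpRequest {
  // Runtime.evaluate runs the expression in the page's JS context,
  // which is how a bot can read or drive the UI's DOM.
  return {
    id: ++nextId,
    method: "Runtime.evaluate",
    params: { expression, returnByValue: true },
  };
}

const req = evaluateRequest(
  `document.querySelector('.chat-input')?.textContent`, // hypothetical selector
);
console.log(JSON.stringify(req));

// Sending it (Node 22+ exposes a global WebSocket):
//   const ws = new WebSocket(pageWebSocketDebuggerUrl);
//   ws.onopen = () => ws.send(JSON.stringify(req));
//   ws.onmessage = (e) => console.log(JSON.parse(String(e.data)).result);
```

  Because everything goes through the documented DevTools protocol, this is
  the same surface a regular Chrome DevTools window uses.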

  A few things I care about:

  - Local-first: your code never leaves your machine. No exposed ports,
    no cloud relays, no intermediate server.
  - Secure: whitelist-based access — only your Discord ID can trigger
    commands. (I recommend a dedicated server to keep things private.)
  - Context threading: reply to any result embed to continue the
    conversation with full context preserved.
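  The whitelist gate is the kind of check you'd expect at the top of the
  message handler. A sketch, assuming a discord.js-style event shape (the
  IDs and handler names here are illustrative, not LazyGravity's actual code):

```typescript
// Deny-by-default whitelist: anyone not explicitly listed is ignored.
const ALLOWED_IDS = new Set<string>(["123456789012345678"]); // your Discord user ID

function isAuthorized(authorId: string): boolean {
  return ALLOWED_IDS.has(authorId);
}

// In the bot's message handler (discord.js-style, illustrative):
// client.on("messageCreate", (msg) => {
//   if (!isAuthorized(msg.author.id)) return; // silently drop
//   dispatchToAntigravity(msg);               // hypothetical dispatch
// });

console.log(isAuthorized("123456789012345678")); // true
console.log(isAuthorized("999"));                // false
```

  Silently dropping unauthorized messages (rather than replying with an
  error) avoids leaking the bot's presence to strangers on a shared server.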

  What you can actually do from your phone:

  - Route local projects to Discord categories, sessions to channels
    — automatic workspace management
  - Toggle LLM models or modes (Plan/Code/Architect) with /model and /mode
  - /screenshot to see exactly what's happening on your desktop in real time
  - One-click prompt templates for common tasks
  - Auto-detect and approve/deny file change dialogs from Discord
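  The project-to-category / session-to-channel routing above boils down to
  two lookup tables plus lazy channel creation. A hypothetical sketch (all
  names and IDs are illustrative):

```typescript
// Illustrative routing: each project maps to a Discord category,
// each session gets its own channel inside that category.
interface Session { projectId: string; sessionId: string }

const projectToCategory = new Map<string, string>(); // project path → category ID
const sessionToChannel = new Map<string, string>();  // session key → channel ID

function routeSession(
  s: Session,
  createChannel: (categoryId: string) => string, // would call the Discord API
): string {
  const key = `${s.projectId}:${s.sessionId}`;
  let channel = sessionToChannel.get(key);
  if (!channel) {
    const category = projectToCategory.get(s.projectId) ?? "uncategorized";
    channel = createChannel(category); // created once, then reused
    sessionToChannel.set(key, channel);
  }
  return channel;
}

projectToCategory.set("~/code/myapp", "cat-1");
const ch1 = routeSession({ projectId: "~/code/myapp", sessionId: "s1" }, (c) => `${c}/chan-1`);
const ch2 = routeSession({ projectId: "~/code/myapp", sessionId: "s1" }, () => "never-called");
console.log(ch1 === ch2); // true: a session reuses its channel
```

  Keying channels by project and session is what makes the reply-threading
  work: any reply in that channel can be mapped straight back to its session.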

  Still early alpha (v0.1.0), but it's been a game-changer for my own
  workflow. Looking for folks to try it out, roast the architecture,
  add new features, and help squash bugs.

    npm install -g lazy-gravity
    lazy-gravity setup
  Demo video in the README:
  https://github.com/tokyoweb3/LazyGravity

One thing that doesn't get discussed enough: the gap between people who can use AI tools effectively and those who can't is widening fast.

The engineers who thrive are the ones who understand how the underlying models actually work. They know when to trust the output, when to add constraints, and how to structure prompts for reliable results across different contexts.

The ones struggling tend to treat LLMs as magic boxes. They get inconsistent results and don't know why.

This is creating a real challenge for hiring managers: standard coding interviews don't surface which category a candidate falls into.

Great idea. Driving the Electron shell with CDP over WebSocket is a clean approach: no reverse engineering needed, and you stay within supported boundaries.