A big loss for the Emacs community! emacs-aio is great!

I see the author is spring cleaning:

> I've turned over a new leaf (no more Openbox, Tridactyl, Xorg, xterm), and so some of these things I no longer use. On Linux I now use KDE on Wayland with a minimally-configured browser. I miss the power user features, but I do not miss the friction and constant maintenance.

https://github.com/skeeto/dotfiles/commit/df275005769b654618...

> I am no longer using Mutt nor running my own mail server. In general less terminal stuff for me.

https://github.com/skeeto/dotfiles/commit/e331e367c75f66aaa9...

LLMs have inspired a similar change in me: with a big change in how I work, I feel I can and should be more flexible about adopting new tech, which involves freeing myself from previous choices.

> LLMs have inspired a similar change in me

FWIW, the age of LLMs made me build a deeper, more intimate relationship with Emacs, because it's a Lisp REPL with a built-in editor, not the other way around. When you give an LLM a closed-loop system where it can evaluate code in a live REPL and observe the results, it stops guessing and starts reasoning empirically.

An LLM that I run inside Emacs can fully control the active Emacs instance. I can make it change virtually any aspect of the editor. To load-test things, I even made it play Tetris in Emacs. And not just run it, but actually play it without losing. It was insane.

Also, Emacs is all about plain text - you can easily extract text from anything - from the browser, terminal, CLI apps, Slack, Jira, etc., and you can do that on your own terms - context can appear in a buffer, in your clipboard, become a file or series of API requests. That is really hard to beat.
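To make that "closed loop" concrete, here is a minimal sketch (my own construction, not the commenter's actual setup) of a gptel tool that lets the model evaluate Emacs Lisp in the live session and read back the result. The tool name and descriptions are assumptions; `gptel-make-tool` is gptel's tool-definition API, though exact argument conventions may vary by version.

```elisp
;; Sketch only: a gptel tool that evaluates Emacs Lisp in the running
;; session and returns the printed result.  This is arbitrary code
;; execution by design -- gate it however you see fit.
(gptel-make-tool
 :name "eval-elisp"
 :description "Evaluate an Emacs Lisp expression in the live Emacs session and return the printed result."
 :args (list '(:name "expr" :type string
               :description "An Emacs Lisp expression to evaluate"))
 :function (lambda (expr)
             (condition-case err
                 (format "%S" (eval (read expr) t))
               (error (format "Error: %S" err)))))
```

With a tool like this registered, the model can query buffer state, redefine commands, or drive games like Tetris, all through the same eval loop.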

Absolutely. It doesn't have to be either-or. I use gptel and org-mode when I want to be really hands-on driving the development. It's a very different mode of interacting with models, and the way newer models are trained to play nice with harnesses makes them very obedient.

https://poyo.co/note/20260202T150723/

Interesting. Tnx.

In case anyone else wondered about using gptel to edit thinking (e.g. via Qwen3.6's `preserve thinking`), [1] explains:

> In a multi-turn request, from the time you run `gptel-send`, everything the LLM sends is passed back to it [...during tool calls...] includes multiple reasoning blocks. [...But...] subsequent gptel-send calls read their input from the buffer contents (or active region, etc), so the reasoning blocks in the buffer will not [] be sent as "reasoning_content".

But in org mode, those are apparently `#+begin_reasoning` blocks (`gptel-include-reasoning`?), so editable thought might be an easy addition?

A caution, fwiw: any LLMs that respond with interleaved content and reasoning blocks currently only work when not streaming, and fixing that is non-trivial. [also 1]
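For anyone wanting to experiment, the relevant knob appears to be the variable mentioned above (a sketch per gptel's documentation; the exact accepted values may differ by version, so check its docstring):

```elisp
;; How gptel handles reasoning/thinking blocks in responses.
;; t keeps them with the response text; nil drops them; the docstring
;; lists other values (e.g. diverting reasoning to a separate buffer).
(setq gptel-include-reasoning t)
```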

[1] https://github.com/karthink/gptel/issues/1282

Is this your site? I cannot find an RSS feed for it. I'd like to subscribe.

Same for me!

My .emacs config has improved and I wrote my own Emacs based coding agent https://github.com/mark-watson/coding-agent

Same here. Emacs has been the stable editor through all kinds of language changes, tool changes, and IDE changes. Emacs is great with LLMs, since LLM work is mostly text-related, and Emacs is great at capturing and dealing with text.

So much this. Lisp can do things other languages have a hard time with. I think a resurgence is in order.

Can't agree more. Lisp was discovered/invented for the purpose of AI research. Of course, modern neural nets and transformers are a big departure from McCarthy's vision of AI - logical, interpretable, symbolic. However, if the current wave of AI hits a wall - and many serious researchers think it will, or already has at the margins - there's growing interest in neurosymbolic approaches that combine neural nets with symbolic reasoning. That's closer to McCarthy's original vision, and Lisps are genuinely well-suited for it.

Let's be honest: Lisp probably won't ever get bigger than Python, unless Python for whatever reason starts dying on its own. But if AI ever gets serious about interpretability, formal reasoning, program synthesis - all the stuff Lisp was built for - it just might quietly become relevant again in research contexts, without ever reclaiming mainstream status.

Scicloj has been building out a serious ML stack in Clojure - noj, metamorph.ml, scicloj.ml.tribuo, libpython-clj for Python interop. Besides that, people have been showing that 'code is data' is exactly what makes it a better target for LLMs - Clojure has been measured to be the most token-efficient PL. There are some interesting recent Clojure projects in this vein:

https://github.com/realgenekim/clj-surgeon

https://clojure.getpando.ai

https://github.com/yogthos/chiasmus

Clojure? Forget it, SBCL would be better for that task. Just look what could be done with Coalton.

Well, this is because "normal" programming languages are one step above the AST. So the LLM has to work with program text, which is much easier than regular human text, since it's constrained to a well-defined set of keywords and a grammar, but it's still pretty variable. Lisp is just the AST, so it's one level lower. I guess that at some point LLMs will stop writing human-readable code, as that's an additional obstacle; they'll work directly with binaries or virtual-machine code (as in Java), because that will be easier and use fewer tokens.
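The "Lisp is just the AST" point is easy to demonstrate in Emacs Lisp itself: program text reads directly as a data structure you can pick apart and evaluate, with no parser in sight.

```elisp
;; A function definition is a plain list: inspect it, then run it.
(setq form '(defun greet (name) (message "hi %s" name)))
(car form)      ; => defun   (the head of the "AST" node)
(nth 1 form)    ; => greet   (the function name, as data)
(eval form)     ; defines `greet' in the live session
(greet "world") ; => "hi world"
```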

Can you describe your setup on how you use LLMs within Emacs?

Of course.

I've tried different AI packages and currently gptel and ECA remain the main ingredients. This is a quickly changing landscape, and things may change, but for now it feels very good.

I like gptel because it's enormously extendable and exploitable - it allows me to send LLM requests from just about anywhere - I could be typing a message (like this very one) and suddenly in need of ideas for how to phrase something better, or explain simply, or fact-check my assumptions, whatever. Quick & dirty interaction that gets discarded in the same buffer. For longer investigations and research I would use a dedicated gptel buffer. Those get automatically saved.
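As a sketch of that "send a request from wherever you are" workflow (the command name, prompt handling, and output buffer here are my own inventions; `gptel-request` is gptel's programmatic entry point):

```elisp
(defun my/gptel-ask-region (prompt)
  "Send the active region to the default gptel backend with PROMPT.
The reply lands in a throwaway buffer instead of your message draft."
  (interactive "sInstruction: ")
  (unless (use-region-p) (user-error "No active region"))
  (gptel-request
      (concat prompt "\n\n"
              (buffer-substring-no-properties (region-beginning)
                                              (region-end)))
    :callback (lambda (response _info)
                (when (stringp response)
                  (with-current-buffer (get-buffer-create "*gptel-quick*")
                    (erase-buffer)
                    (insert response)
                    (display-buffer (current-buffer)))))))
```

Bind it to a key and it works from any buffer, half-typed comment drafts included.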

I don't use gptel as a coding assistant - even though you can do that, it's not really optimized for that kind of work. I use ECA. It works much better for me than every other alternative I tried, and I tried more than a few. What's crazy is that I sometimes type a prompt in ECA, then ask gptel (with a different model) to make it more "AI-friendly", changing the prompt in place, and then send it.

All my MCPs are coded in Clojure (mostly babashka)¹ - because (like I said) giving an AI a Lisp REPL makes much more sense (maybe even more than using a statically typed language). I had to employ a few tricks so all the tools, skills and instructions can be shared between gptel, eca-emacs, ECA Desktop, Claude Code CLI, Claude Desktop App, and Copilot CLI. Even though I mostly use gptel and ECA, it's good to keep other options around, just in case. All the AI-related Emacs settings are in my config².

Is this helpful, or do you want some more concrete examples?

¹ https://github.com/agzam/death-contraptions

² https://github.com/agzam/.doom.d/tree/main/modules/custom/ai

Big same. I have been doing a lot of clojure development, and hooking up my app to a live REPL has given me an absolutely fantastic feedback loop for the LLM. I don't think a lot of people understand what they're missing.

> I don't think a lot of people understand what they're missing

Very true. There's an enormous tacit knowledge gap. Check this out:

I have to use Mac for work. My WM is Yabai, which is controlled via Hammerspoon (great tool on its own), which means I can use Fennel, which means I can have a Lisp REPL. MCP connected to that REPL can query and inspect every single window I have on my screen. It can move them around, it can resize them, it can extract some properties of them. It's figuring out stuff like: "pick a selected Slack thread from the app and send it into an Emacs buffer", or "make my app windows work like Emacs buffers" - pick from the list and swap it in place. Or "find the HN thread about retiring from Emacs among my browser tabs and summarize the content"...

Never in my life have I been more grateful to my younger self for grokking the philosophy of Lisp. Recent months have only reinforced my firm belief that this 70-year-old tech is truly everlasting. Thank you, John McCarthy, for the great gift to humanity, even though so weirdly underappreciated.

I am really loving working on a fun Elisp project with pi, a minimal and very extensible agent. I have the agent use emacsclient to control my session, showing me code, running magit ediff for me, testing, formatting, reloading -- it's all working great.
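For reference, the only Emacs-side prerequisite for that kind of emacsclient-driven control is a running server; the agent then shells out with `--eval`. (The invocations in the comments below are hypothetical examples, not pi's actual commands.)

```elisp
;; In init.el: let external processes drive this Emacs via emacsclient.
(require 'server)
(unless (server-running-p)
  (server-start))

;; An agent can then run things like:
;;   emacsclient --eval '(find-file "/tmp/agent-scratch.el")'
;;   emacsclient --eval '(progn (magit-status) "ok")'
```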

I'm still exploring all the ways the agent and I can collaborate using Emacs as a shared medium, but at the moment am super optimistic about it.

> LLM that I run inside Emacs can fully control the active Emacs instance [...] you can easily extract text from anything

This is what gives me the most pause.

Care to explain? Why is that? Do you think it's dangerous, or is it some other reason?

It's definitely dangerous.

Do you have credentials anywhere within reach of that session? Can you open your bank account in a browser ... within reach of that session? Are your contacts available within reach of that session? What about personal notes/emails/goals or other sensitive information? That people think these can't be added together in one very socially/monetarily destructive fell swoop is ... telling.

Ignoring the obvious bad-actor concerns of giving an LLM running on someone else's server root access to your whole life, LLMs themselves can act in ways that are extremely counterproductive to their organization/host/etc.

A quote/warning I learned in the late 90s is just as relevant today, "Computers make very fast, very accurate mistakes."

Emacs has full system access with arbitrary code execution, so full Emacs access -> full system access.

What? You run emacs as root?

Anything an LLM does on your computer should happen in its own account. No sudo config, of course - or at most one that is strictly limited to what you want to allow it to do (there's risk here, as many programs have non-obvious paths to general command execution).

It should have zero access to your private home directory or your system configs. You can have access to its files of course. That's the beauty of separate accounts and permissions.

Not to mention the RCE vulnerabilities - especially with community flavors of Emacs, which hardly come with access control out of the box.

So? My terminal has the same full system access. If I didn't use Emacs, I'd be running Claude Code in it. It's contained locally on my computer; I don't see any problem here. I use Emacs as my OS layer. Why would I complain that my OS has access to something? It would be weird and annoying if it were the opposite.

You have to give Claude Code access to every shell command individually unless you run in yolo mode.

I don't think it's very reasonable to use Claude Code on a computer that has credentials without some kind of sandboxing or validating every command it runs, at which point I'd rather do things manually.

Yeah, that's incredibly unsafe. You made a footgun machine and you're firing it with no shoes on. Don't run that on any machine with credentials you care about.

At the very least, run it in Docker. It's not a security tool, but it's at least some kind of guardrail against data loss and exfiltration.

Ah come on, guys, let's talk pragmatically. "Malleable editor as an OS layer" has benefits beyond subjective reasoning. Emacs has had M-x shell-command and arbitrary elisp eval forever. A metacircular MCP isn't some new capability class. Even if I didn't use Emacs - my shell, my editor, my browser extensions, my npm install, my VSCode plugins, my curl | bash from yesterday - they all have the same access. Singling out the LLM in this context is selection bias.

Of course, reasonable mitigations are a must - just like for any other tool: narrowing MCP scope, tool-routing rules, read-only git defaults, etc. "Docker or nothing" is a lazy answer - Docker-for-everything has real costs: friction, broken integrations, worse ergonomics.

Practical security is all about staying in the goldilocks zone. You shouldn't get relaxed about the basics - sandboxing, 2FA, password managers - they are worth doing. But you can get paranoid about so many things, and yet against a targeted, well-resourced attacker, your sandboxing posture is mostly irrelevant. The interesting attacks bypass the threat model entirely. Read about Ben Nassi's team's research¹ - a pretty cool example. There are multitudes of other avenues, and your Docker container won't stop them. Defend against the boring 99%, and accept that the 1% is someone else's problem (or a much bigger problem than your dev environment).

¹ https://www.nassiben.com/video-based-crypta

TLDR LLM Summary: Researchers showed that a device's power LED subtly flickers in brightness and color while the CPU performs cryptographic work, and these flickers leak information about the secret key. By pointing an ordinary video camera (an iPhone or an internet-connected security camera) at the LED and exploiting the camera's rolling shutter, they boosted the effective sampling rate from 60 to 60,000 measurements per second, enough to do cryptanalysis. Using only this video footage, they recovered full ECDSA and SIKE keys from a smartcard reader and a Samsung Galaxy S8, with no malware on the target devices.

There are many better sandboxing options than docker (in terms of security and/or ease of use), and it sounded like you weren't doing sandboxing.

It's your computer and you can do whatever yolo nonsense you want, my dude, but put those goalposts back where they were.

"Don't run that shit on a credentialed box with data you care about" is addressing real threats, not some goofy nation state thing or abstract security research.

If you let the footgun machine constantly generate new code and run it on your computer, you're just asking for data loss and bad shit to happen.

Docker isn't a great solution, but at least it doesn't let yolo code delete your files, access your env vars, or read the contents of ~/.ssh/.

> my browser extensions, my npm install, my VSCode plugins, my curl | bash

Yeah, and you shouldn't yolo those, either lol. If they didn't come from a trusted source, you need to read through them. If you don't want to, don't use them. That's not paranoia, that's, like, normal.

> If you let the footgun machine constantly generate new code

Are you talking about autonomous LLM projects that automatically write code? Yeah, no shit - I wouldn't run anything like that directly on any machine without sandboxing. My typical LLM use inside my editor is never in self-driving mode - there's not even cruise control: I tell it exactly when to write, where to write, and how to do it. Automated scripts never get run by the LLM and don't get to run at all without prior precise and meticulous inspection. I'm not moving goalposts - at worst we're in disagreement on the level of pragmatism vs. paranoia, that's all.

I don't even get why people are so crazy about LLMs generating code - on both sides. LLMs for me personally are such a great tool for investigating things, for finding things, for bridging the gaps - the stuff that happens 10K feet above code writing. By the time I'm done gathering the details, code generation becomes an almost insignificant touch of the whole endeavor.

I wonder what friction/maintenance he found with Tridactyl

For me the friction always comes when I try to use the internet without it

We're talking about https://addons.mozilla.org/en-US/firefox/addon/tridactyl-vim...?

One example: it disables the default Ctrl-F search function, but its own search function is subpar (no match counts/hlsearch, e.g.) and often clashes with a website's built-in search (on GitHub, e.g.).

It doesn't work on the default newtab either, and changing the default newtab somehow makes opening a new tab slower (that's FF's fault, I guess)…

You can type /phrase and then press ctrl-F for the full search bar. A more annoying problem is that some websites capture / presses, making it harder to initiate a page search. Then you have to shift-esc ctrl-f to search.

Cool to see you in the wild! For me it does work out of the box; however, some sites will break or have too complex a navigation, especially with iframes, and I'll have to swap to a mouse, which is a bummer. I understand it's an inherent limitation of the tech, since the web today isn't built for that.

solid extension, big fan

I'm not the author, but I recently gave up on Firefox, sadly.

Since I needed to keep a Chromium around anyway, and I'm already forced to use one for work, it became simpler to just use Chromium exclusively.

In the process I dropped some extensions.

It's been great.

To be honest I find the use of a separate browser at work a good way of forcing separation - all "work stuff" is done in one browser, and all "personal stuff" is done in a different one.

This time around I'm using Chromium for personal stuff, and Firefox for work-stuff. I do more work-related browsing, so having the vertical tabs in firefox meant that was the better browser to use for official stuff.

(In my previous job I used safari for work, and firefox for personal.)

I used Firefox for 20 years, loved it, defended it. But they just kept removing features that I was used to, and I ran into some bugs with popular websites and decided to hang it up. Currently on Brave and fully convinced it's the new Firefox.

I am running Ubuntu as my desktop operating system. I would never do this without an LLM to do the work of keeping it functional for me. Today, Rise of Nations wouldn't launch. Never had that problem before. Seems the driver for 32-bit games and my Nvidia GPU weren't getting along after an update. Codex was called in and solved the problem for me in about 5 minutes. I just copied and pasted the Steam log and let it tell me what to do. Tadah.

I'm actually excited about the potential for a future where local agents help improve the operating system experience as I go by making changes based on my use case. All local, of course. I do not want to trust a cloud provider with my use cases/behavior on my computer so they can sell me more ads...

LLM discourse inspires me to do a cleaning of my browser tabs every hour.

Does anyone else not understand what people mean when they refer to the "friction" supposedly inherent to these power user tools? Almost none of the configs/scripts/etc I use for my heavily-customized and terminal-heavy setup get changed for years at a time.

If you frequently have to use other computers, a heavily customized setup brings much more friction: either you set up each machine the way you want, or you have to remember how to do things without all the customization (if you can't customize, or it isn't worth the time).

When I graduated college I used Dvorak and Emacs on Linux. Six months of having to use shared Windows lab computers extensively beat me down to surrender all of those points - my brain just couldn't handle switching, so I conformed my desktop to match. Then later I switched jobs to a group that was all Unix, but of many varieties most of which only had vi, not Emacs. And so I learned vi. Sometimes minimizing friction means going with the flow.

A heavily-customised setup is very comfortable.

It's so comfortable that it acts as an impediment to change, since some types of change are uncomfortable.

This can feel like friction to me.

When I remove customisation, I am more "open to experience", and often find preferable tooling.

Arguably NixOS is the most config-heavy platform, but it solves the pain point of having to reconfigure on different systems - especially in the LLM era, where I can configure Emacs and my OS declaratively.

How do you nixify your Emacs configuration? I've looked into it but at the time the advice was to specify dependencies both in Nix and in .emacs.d, which seemed redundant to me. Is there something like callCabal2Nix for Emacs?

Edit: Or do you mean "declaratively" in the sense of using something like straight.el?

> heavily-customized and terminal-heavy setup

this exactly. most people can’t set it up that well.

[flagged]

"more flexible with adopting new tech" and "freeing myself of previous choices" are completely unrelated to what you just wrote.

Especially ridiculous because old-school bash CLI scripting is the only usable protocol for interacting with LLM agents.

How on earth did you get that from the segment of text that you chose to quote?

Our lives are much more than our computing environments. By surrendering a bit of control of our computing environments we free up our brains to devote to other things in life: loved ones, pets, gardening, home maintenance, other hobbies and sports...

Millions of happy Apple users can't be wrong on this.

They're coming for that stuff next

Millions of Apple users can't even have grabable corners. Enjoy.

What if computing environments is our job?

Maybe, but for some of us, the peace of mind comes from stability and minimal friction with our tools.

Whenever I touch my config, it's because I got frustrated with one operation and am trying to see if it can be done faster. If you use your computer like a toaster, you wouldn't care much about power usage. But for me it's a creative lab, and I don't want a generic cubicle.