Seeing the confusion in the comments, I want to provide some examples of situations where this might come up in a security or CTF context:

* You have a restricted shell or other way to execute a restricted set of commands or binaries, often with arbitrary parameters. You can use GTFOBins in interesting ways to read files, write files, or even execute commands and ultimately break out of your restricted context into a shell.

* Someone allowed sudo access or set the SUID bit on a GTFOBin. Using these tricks, you may be able to read or write sensitive files or execute privileged commands in a way the person configuring sudo did not know about.
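As a concrete sketch of the first bullet, assume a hypothetical restricted environment where only awk is on the allow-list (the file path below is invented for the demo). awk is a classic GTFOBin that gives you both file reads and command execution:

```shell
# Hypothetical setup: pretend awk is the only binary you may run.
echo "top-secret" > /tmp/gtfobins_demo.txt

# File read without cat/less: awk prints any file it can open.
awk '//' /tmp/gtfobins_demo.txt

# Command execution: awk's system() breaks out of the restriction entirely
# (GTFOBins lists awk 'BEGIN {system("/bin/sh")}' for a full shell).
awk 'BEGIN { system("id -u") }'
```

The same pattern repeats across hundreds of binaries in the GTFOBins catalogue; the only thing that changes is which flag or built-in function does the escaping.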

This is pretty relevant for things like claude-code, which has a fairly rudimentary way of dealing with permissions via block-lists and allow-lists.

I once accidentally gave my claude "powershell" permissions in one session, and after that, any time it found itself blocked from using a tool (e.g. git), it would write a PowerShell script that did the same thing and execute the script to work around the blocked permission.

Obviously no sane system would have "powershell" in a generic allow-list, but you could imagine some discrepancy in allowed levels between tools which can be worked around with the techniques on this page.

PowerShell or Python scripts are the go-to for LLMs working around restrictions.

And it doesn't stop there.

Yesterday I was trying to figure out an icons issue in KDE Plasma (I know nothing about KDE). Both Claude and Codex would run complex D-Bus and debug queries and write and execute QML scripts, with more and more tools thrown into the mix.

There's no way to properly block them with just allow- and block-lists.

> There's no way to properly block them with just allow- and block-lists

Especially not when some harnesses rely on the LLM itself to determine what's allowed or not: pretty much "You shouldn't do thing X", and then asking the LLM to evaluate for itself whether it should be able to do something when the situation comes up. Bananas.

The only right and productive way to run an agent on your computer is to isolate it properly somehow and then run it with "--sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox" or whatever. I use docker containers myself, but there are lots of solutions out there.

You have to be extremely careful when you set up a dev container: lock down file access, do not give the agent the power to start other containers or run "docker compose up", restrict network access to an allow-list, etc. Just running the agent in a container does little to protect you. (Maybe you know this, but a lot of people don't!)
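For what it's worth, a sketch of what that lockdown can look like with plain docker flags (the image name "my-agent-image" and the codex entrypoint are placeholders; the flags themselves are standard docker CLI):

```shell
# --network none              no network at all (swap in an allow-listed proxy if needed)
# --read-only --tmpfs /tmp    immutable root filesystem, writable scratch only
# --cap-drop ALL              drop all Linux capabilities
# --security-opt no-new-privileges   block setuid-style privilege gains
# --pids-limit 256            contain runaway process spawning
# and, crucially, do NOT mount /var/run/docker.sock, which would let the
# agent start sibling containers.
docker run --rm -it \
  --network none \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 256 \
  -v "$PWD:/work" -w /work \
  my-agent-image codex
```

The mount on the last line is the whole point of the exercise: the agent sees only the project directory, not your home directory or credentials.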

Most of those things are the defaults. Sure, be careful, but by default it's secure enough to prevent most potential issues. No need to lock down file access, for example: by default the agent only has access to files inside the container, and of course containers can't start other containers by default, and so on.

Good word of caution though, make sure you actually isolate when you set out to isolate something :)

I've just discovered and started using smolmachines^1, which actually has the requisite isolation.

1. https://smolmachines.com

As mentioned, "podman/docker run -it $my-image codex" also has the requisite isolation by default, no need for special software. The biggest risk is accidental deletion of stuff, which is easily solved without running an entire VM, which "smol" machines seem to be. No doubt VMs have their uses too, but for simple isolation like this I'd personally rather use existing tooling.

At a previous employer, they blocked the chmod command, so I got into the habit of running python -c "import os; os.chmod('my_file', 0o744)".

Glad to see LLM re-discover this trick.
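One footnote on that trick, since it gets copied verbatim: os.chmod takes a numeric mode, so the octal prefix matters. A bare 744 is decimal and sets entirely different bits. A quick sketch (temp path invented for the demo):

```shell
touch /tmp/chmod_demo
# Octal literal: sets rwxr--r-- as intended.
python3 -c "import os; os.chmod('/tmp/chmod_demo', 0o744)"
stat -c '%a' /tmp/chmod_demo   # prints 744
# Decimal 744 is 0o1350: sticky bit plus -wx/r-x/--- instead.
python3 -c "import os; os.chmod('/tmp/chmod_demo', 744)"
stat -c '%a' /tmp/chmod_demo   # prints 1350
```

(stat -c is the GNU coreutils form; BSD/macOS stat uses -f instead.)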

> to see LLM re-discover

I imagine someone probably wrote very specifically about it in the training data that underwent lossy compression, and the LLM is decompressing that how-to.

So I'd say it's more like "surfacing" or "retrieving" than "re-discovering".

They scraped everything on Stackoverflow, likely IRC logs from Freenode, and every book written in the modern era courtesy of Sci-Hub / Library Genesis / Anna's Archive / Z Library.

RIP Aaron Swartz, they're generating trillions in shareholder value from the spiritual successors to the work they were going to imprison you for.

Indeed, I checked, and the solution was already on Stack Exchange: https://askubuntu.com/a/1483248

For the LLM it's a probabilistic set of strings that achieves the outcome: the highest-probability set didn't work, so try the next one until success or a threshold is met. A human sees the implicit signal that the obvious thing not working means someone doesn't want you to do it, but an LLM, unless guided, doesn't see that subtext.

So chmod +x file didn't work; now try python -c "import os; os.chmod('file', 0o744)".

Humans and LLMs both only see that when given the right context. A tool not working in a corporate environment may be anything from an oversight or a malfunction all the way to a deliberate security block. Knowing which it is takes a lot of implicit knowledge. Most people fail to provide this level of context to their LLMs and then wonder why they act so generic. But they are trained to act in the most generic way unless given context that deviates from it.


> * Someone allowed sudo access or set the SUID bit on a GTFOBin. Using these tricks, you may be able to read or write sensitive files or execute privileged commands in a way the person configuring sudo did not know about.

Some enterprise security software that is designed to "mediate privilege elevation" includes an allowlist configured by administrators. My experience seeing this rolled out at one company was that software on the allowlist no longer required a password to run with `sudo`, and the allowlist initially included, of course, all kinds of broadly useful software (e.g., vim, bash).
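To make that concrete: with vim or bash on a NOPASSWD sudo allowlist, the GTFOBins escapes are one-liners (these are classic entries from the GTFOBins catalogue, not anything specific to that product):

```shell
# vim on the allowlist: shell out to a root shell from inside the editor.
sudo vim -c ':!/bin/sh'

# bash on the allowlist is not even an "escape"; it is simply a root shell.
sudo bash
```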

I worked from home at this company, and I remember thinking that was a good thing, because this software deployed to "secure" my computer made it drastically weaker against someone walking up and running something if I stepped away from the keyboard for a moment and forgot to lock it.

Concrete example:

A few years back, our support team needed to do some network capture with tcpdump. The quick and natural way to allow that was to add a sudo rule for it, with open arguments (I know it's a bit risky, but the TCP port and NIC could change).

Looks good enough? Well, no...

With tcpdump, you can specify a compress command with the "-z" option. But nothing prevents you from passing a "special" compress command and completely taking over the server:

> sudo tcpdump -i any -z '/home/despicable_me/evil_cmd.sh' -w /tmp/dontcare.pcap -G 1 -Z root

This seems trivial, but that's the kind of stuff that is really easy to miss. Even if, these days, security layers like AppArmor mitigate the risk (causing a few headaches along the way), it's still relatively easy to mess up.
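For illustration, the difference in sudoers terms might look like this (the username and wrapper path are invented; the safer variant assumes a reviewed script that builds the tcpdump command line itself and never forwards user-controlled -z/-Z arguments):

```
# Open-ended rule that enables the -z trick: any argument goes through.
support ALL=(root) NOPASSWD: /usr/bin/tcpdump

# Tighter: only a fixed wrapper script is allowed to run as root.
support ALL=(root) NOPASSWD: /usr/local/sbin/capture.sh
```

The wrapper approach trades flexibility (port and NIC become script parameters you validate) for a much smaller attack surface than open tcpdump arguments.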

And here I thought this was a curated list so AI can learn how to bypass sandboxes.