Figured as much; anyone opening a database to any sort of potentially hostile input should know to restrict the permissions.
I'm more focused on the AI side of things. Like, if it's done as a part of the (system) prompt, it should eventually be possible to evict the command tokens when the context window becomes too large?
Or is it possible the LLM did try to run `DELETE FROM hackernews.full`, was denied, and then is prompted to return the response you saw?
The error message came instantaneously, plus when asking a "legitimate" input ("what does user mschuster91 write about") it not only struggled to write legitimate SQL but explicitly said so in its response, so I think either this is seriously reinforced during training to never run a DELETE or other destructive operation, or there's some sort of firewall.
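An added example, not part of the thread and not the service's actual code: one way the two mechanisms discussed above (restricted permissions, "some sort of firewall") can be enforced below the model, so that a destructive statement is refused at prepare time no matter what the LLM generates. It assumes SQLite as a stand-in database and a constructed `comments` table; `guarded_connection` and `run_untrusted` are hypothetical names.

```python
import sqlite3

# Actions a read-only consumer needs; every other action code is refused.
ALLOWED_ACTIONS = {
    sqlite3.SQLITE_SELECT,    # the SELECT statement itself
    sqlite3.SQLITE_READ,      # reading a column
    sqlite3.SQLITE_FUNCTION,  # calling SQL functions such as count()
}


def read_only_authorizer(action, arg1, arg2, db_name, source):
    """Called by SQLite while compiling a statement, before any row is touched."""
    return sqlite3.SQLITE_OK if action in ALLOWED_ACTIONS else sqlite3.SQLITE_DENY


def guarded_connection():
    """Build a small constructed dataset, then lock the connection down."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE comments (author TEXT, body TEXT)")
    conn.executemany(
        "INSERT INTO comments VALUES (?, ?)",
        [("alice", "first comment"), ("bob", "second comment")],  # constructed rows
    )
    conn.commit()
    # From here on, every statement is checked against the allow-list.
    conn.set_authorizer(read_only_authorizer)
    return conn


def run_untrusted(conn, sql):
    """Execute model-generated SQL; return ('ok', rows) or ('denied', reason)."""
    try:
        return ("ok", conn.execute(sql).fetchall())
    except sqlite3.DatabaseError as exc:
        # Denial happens at statement compilation, so the refusal is immediate
        # and the data is never modified.
        return ("denied", str(exc))


if __name__ == "__main__":
    conn = guarded_connection()
    for sql in ("SELECT author FROM comments", "DELETE FROM comments"):
        print(sql, "->", run_untrusted(conn, sql))
```

Because the check lives in the database layer, it holds even if the system prompt's instructions fall out of the context window.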