Four months ago, someone at AWS asked me how Bedrock prevents LLMs from seeing customer data. Casual office conversation. Nothing serious. But it turned into a real discussion: AWS's entire business is built on data protection, and here were all these giants calling AI agents the next big thing. Agents that, by default, see everything. I said AWS hasn't built that layer yet. But they will. That gap became astra.

Your agent gets tokens instead of real data. It reasons, decides, and acts on the tokens; real values only resolve at execution. PHI, PCI, and PII never touch the model context. Two lines of code, and it works with whatever you're already running: codeastra.dev. That's what I'm working on. I've done some testing, but I need more customer feedback on it.
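To make the idea concrete, here's a minimal sketch of the tokenize-then-resolve pattern. This is hypothetical illustration code, not astra's actual API; the names (`TokenVault`, `redact`, `resolve`) and the regex-based matching are my assumptions:

```python
import re
import uuid

class TokenVault:
    """Holds the mapping from opaque tokens to real sensitive values.
    Hypothetical sketch -- not astra's actual implementation."""

    def __init__(self):
        self._store = {}

    def redact(self, text, patterns):
        # Replace each match with a token like <PAN:3f9a1c2e>;
        # the model only ever sees these placeholders.
        for label, pattern in patterns:
            def _sub(m, label=label):
                token = f"<{label}:{uuid.uuid4().hex[:8]}>"
                self._store[token] = m.group(0)
                return token
            text = re.sub(pattern, _sub, text)
        return text

    def resolve(self, text):
        # Swap tokens back to real values at execution time,
        # after the model has finished reasoning.
        for token, value in self._store.items():
            text = text.replace(token, value)
        return text

vault = TokenVault()
prompt = vault.redact(
    "Refund the order paid with card 4111-1111-1111-1111",
    [("PAN", r"\b\d{4}-\d{4}-\d{4}-\d{4}\b")],
)
tool_call = vault.resolve(prompt)  # real PAN reappears only here
```

The key property is that the vault sits outside the model boundary: the LLM reasons over placeholders, and substitution happens only when the action is actually executed.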

This idea is one I floated for my agent framework; it can get a little complicated if the model needs to reason about the redacted data. If not, excellent.

The other half of this equation is correctly marking PII/etc. This is a problem I'm relatively familiar with, at least as far as brute-forcing from raw files. I'd be curious to hear about how you managed this. Or is that something that AWS handles for you?