In the strongest interpretation of that, it would offer only data the user is allowed to access. Why assume that, having implemented a feature to prevent PII from being accessed, they would then turn around and return data the user isn't supposed to see?
If it's PII, the best thing for them to do is to not let the AI access it at all. That's effectively what they're admitting, so I doubt they've gone to the effort of forwarding the user's auth token to the downstream database.
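To make the distinction concrete, here's a minimal sketch (all names and data hypothetical) of the difference between querying with the user's own credential, where the datastore enforces permissions, versus a shared service account, where the AI layer itself must be trusted to redact PII:

```python
# Hypothetical in-memory "datastore" with per-row ownership.
ROWS = [
    {"owner": "alice", "ssn": "xxx-xx-1111"},
    {"owner": "bob",   "ssn": "xxx-xx-2222"},
]

def query_as(user: str) -> list[dict]:
    # Safe pattern: the query runs under the USER's identity, so the
    # AI can only ever receive rows this user could read directly.
    return [r for r in ROWS if r["owner"] == user]

def query_as_service() -> list[dict]:
    # Anti-pattern: a shared service account sees everything, and the
    # AI must then be trusted to filter out PII on its own.
    return list(ROWS)
```

In a real system the `user` argument would be a verified auth token forwarded from the original request, and the ownership check would be enforced by the database (e.g. row-level security), not application code.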
And with security it's always best to assume the worst case (unless you're certain something is safe), because that bias leads you to add more safeguards rather than fewer.
To be fair to them, the architecture description said that each datasource has its own agent, so the orchestrator AI doesn't have direct data access, and that they specifically only allow access to data the user has permissions for.
It's unclear whether each datasource agent is ALSO AI-based, though; if so, the same concern has just been pushed one hop down the line.
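A rough sketch of the described architecture (names hypothetical, with the user name standing in for a verified auth token): the key point is that the permission check happens in plain code beneath each agent, so it holds regardless of whether the agent itself is AI-based.

```python
from dataclasses import dataclass

@dataclass
class DatasourceAgent:
    name: str
    acl: dict[str, set[str]]  # user -> record ids that user may read

    def query(self, user: str) -> set[str]:
        # If this agent is itself an LLM, prompt injection is still a
        # worry one hop down; the real safeguard is that this ACL
        # lookup runs in code, below the AI, not inside a prompt.
        return self.acl.get(user, set())

def orchestrate(user: str, agents: list[DatasourceAgent]) -> set[str]:
    # The orchestrator never touches the datastores directly; it only
    # aggregates what each datasource agent was allowed to return.
    results: set[str] = set()
    for agent in agents:
        results |= agent.query(user)
    return results
```

Example use: `orchestrate("alice", [crm_agent, hr_agent])` returns only the union of records alice can read in each datasource.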