> An app should have absolutely no way of knowing what kind of device it’s running on or what changes the user has made to the system.

and therefore the app cannot give a reasonable guarantee that it is not running in an adversarial environment that actively tries to break the app's integrity. Thus, the app cannot be used as a verified ID with a governmental level of trust.

There's a difference between needing to lock down the whole OS and just the secure element. The secure hardware component can sign a challenge and prove possession of a private key without you being able to extract it. Smartcards have done this for decades (most people here will know an implementation under the name YubiKey).

Conveying authentic information across untrusted channels (your phone screen, say) has been a solved problem since asymmetric cryptography was invented, back before I was born.

All the more reason to not be requiring such things in the first place.

And it is not required. Physical ID is still accepted.

Still. Until you have to prove your age to social media websites, for which you'll be nudged to use a digital ID.

Unless you want to make your face available to third-party verification services.

If your app needs to be protected from harm, it cannot protect the user from said harm. I had hoped software engineering culture was lucky enough not to share the precepts that make lockpicking a crime in the real world, and that we had successfully made it common knowledge that you can't grant any trust to the client, but it seems "trusted computing" is making some of us unlearn that lesson.

While this is HEAVILY off-topic, I just have to say it.

"common knowledge that you can't grant any trust to the client" is the exact reason it annoys me so much when peoples solution to cheaters in video games is basically just "Rootkit my pc please"

As long as the anticheat is client-sided, you shouldn't put trust in it.

[deleted]

You do not have to trust the device if you can verify the information it provides, either cryptographically or by checking with an authoritative trusted server.

> governmental level of trust

This made me laugh out loud. Not because it's a meaningless phrase (where does "governmental" rank on a scale of fully to least trusted?), but because it seems to imply that governments do not have a miserable track record when it comes to IT security.

Though I suppose considering a security model sound because it uses security through obscurity, like a black-box integrity check, would be very... governmental.

Does that mean "governmental level of trust" ranks somewhere between "snake oil" and "cope"?

> an adversarial environment that actively tries to break the app's integrity

Can you elaborate on what this means? Who is the adversary? What kind of 'integrity'? This sounds like the kind of vague language DRM uses to try to obscure the fact that it sees the users as the enemy. An Xbox is 'compromised' when it obeys its owner, not Microsoft.

The app is running in a virtual environment that intercepts its system calls and is designed to patch the app's memory to fake an ID.

> governmental level of trust

For most governments that is a very low bar.