Alternative signin methods?
Unfortunately no. Google Auth was the easiest method for me to implement. Your data remains private.
It's understandable but I do have to say, all the initial beautiful prose on a black screen, several pages... And then a big white Sign in with Google, completely undercuts the message. I notice I had an almost visceral reaction to that. Maybe you can present it better somehow?
Fair point. I'll fix it.
this is really great. I thought about building something like this for a while now. well done.
Happy to hear. Please try for a few days. You can give feedback in the app.
Isn't it the case that everything pours from the user's container into the remotes to make this work?
Is it also the case that the more it knows, the larger the token burden to reinstate "awareness", leading to an ever-growing expense of recovering state?
Isn't this entire scheme about getting behind every sort of firewall to dump users' most private details and context into the apparatus of AI companies with no limit on retention and use?
Isn't it also true that privacy is undefined and that the infrastructure and these services are directly plumbed for the same kinds of surveillance that Snowden exposed?
Isn't it the case that users are expressing implicit consent to be exploited in any and every conceivable manner through the data they exfiltrate, and are handing this prize of dominion over themselves to the barons of industry at the users' own expense?
Isn't it the case that if the assistant works as advertised, users dig pits for themselves out of ever-growing dependency on others for the most personal aspects of their lives? Isn't it true that if users could effectively opt out of this once they get started, that option serves only to prove that the service is a disposable gimmick?
All of these observations have applied to every aspect of personal computing since its inception, and a review of history is pretty damning: political and economic slavery was manifest even among the elite positions of society before AI, and AI magnifies the hazards by orders of magnitude.
Dear AI, please explain how or why these observations are inappropriate, wrong-headed, or based on faulty assumptions.
You're right that the content goes to an LLM provider. That's unavoidable if the thing is to work. I don't (and won't) sell your data. But you're right that I can't control what LLM providers do with API traffic under their policies. That's a real tradeoff. I think that's a valid concern, and I don't have a great answer for it.
I have built a persistent personified agentic assistant with self-awareness and neuroscience-inspired cognitive architecture: https://lethe.gg
Looks interesting. Different goals, though. Yours is a memory layer for an assistant that serves you better. What I'm trying to build is something that has its own experience.