Please, both Hacker News readers and the author, take a closer look at the cracks in the site. I'm fine with using AI generation, but it needs human review, especially for legally binding material like the privacy policy.

1. Look at the concepts pie on the home page. The text in the pie is unreadable: it's overlapping and overflowing, with white text clipping onto a white background, and it includes terms like "topic tagging" that aren't actual examples. It's as if no human looked at the image before putting it on the website. Maybe just a slip-up, we all make mistakes, so let's keep looking.

2. I didn't understand the data-storage/privacy story from the video, so let's look at the privacy policy. At one point the policy says "Do we receive any information from third parties?

No, we do not receive any information from third parties."

Only to say later:

"journal entries or project-related text that you select are sent to the ChatGPT-5 thinking nano API operated by OpenAI."

OpenAI *is a third party*! The answer should be "Yes, we send data to OpenAI under these conditions." That's bad.

3. Let's look deeper. The privacy policy says they store three things, with the first bullet point (in full) being "A unique user ID number that cannot be used to identify you." You're telling me a literal identification (ID) number can't identify me? Then why does it exist? That's borderline nonsensical.

4. The video makes similarly vague claims, saying the data is processed locally right after saying it's sent to ChatGPT-5.

I'm giving harsh feedback because I want a project like this to exist, be done right, and succeed. I understand "ship fast and iterate," but you're going too fast, and you're not shipping an MVP: there's a lot of feature creep.

Even when everything looks good, people should be hella skeptical of an app that wants to (potentially) harvest extremely personal daily journal logs. When every page smells like "I generated this and didn't fully check it," it makes me wonder how many hidden problems there are in the codebase.

- The kinda-rough AI video tells everyone "I don't have time to record a 5-minute video of my own project." If you want me to believe you care, at least hire a narrator on Fiverr for $20 if you don't like speaking and/or showing your face. Why should I trust what you say you'll do with my most personal data when you won't even show yourself, or any human?

- There are only three important things: pricing, privacy, and the data analysis / coach. Leading with price is good; that part is solved. What's missing is clarity about privacy. The Hacker News post is much clearer than the website. I don't need more words; I need to know when the data is and isn't shared, and I need to be convinced you're responsible. Right now, phrasing like "Dlog's private AI model" makes it unclear what's local and what's shipped to OpenAI.

- Even when explained clearly, privacy is going to be a problem. Let me use my own model/token/URL. It's easy to point to a local URL that responds in the exact same format as GPT-5. That kind of feature is 10x more important than changing the color of the background.

- I'm not getting a coaching app because it has a good theme engine. Finish talking about coaching/analysis before going into themes and calendars, etc. I don't even care how data gets entered into the app until I know what useful things it's doing. Give a real example of an insight that changed your daily choices.

- I think you can do it, and I'm glad to see someone trying to meet this use case.

Appreciate the detailed look. A few clarifications and immediate fixes:

• Concepts pie: noted. It's a minor visualizer and not a current priority; there are dozens of other charts in the Dlog Lab tab. I'll queue a fix, but I'm focusing elsewhere first.

• Privacy policy: you're right, OpenAI is a third-party processor. I'm correcting the policy to say exactly when data is sent, what is sent, and under what controls (Enterprise API with no training/retention). I'll also add a simple data-flow diagram.

• Local vs. cloud: journals live on-device; the SEM runs locally. Scoring only happens when you explicitly choose to score; there's no background upload. I'm adding a per-journal "Include in Coach analyses" toggle, and a simple "Remove names" anonymizer (ships in the next few days) so names are stripped before any scoring call.

• BYO endpoint: not on the near-term roadmap. I'm prioritizing clear privacy controls and product focus over supporting custom model URLs right now.

• Copy/video: I'll tighten the site copy to lead with privacy and analysis, and make the local vs. cloud boundary crisp.
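The "Remove names" anonymizer mentioned above can start as a simple substitution pass before any text leaves the device. A minimal sketch, assuming the user supplies the list of names to strip; the function name and placeholder format are hypothetical, not Dlog's actual implementation:

```python
import re


def strip_names(text: str, names: list[str]) -> str:
    """Replace each user-listed name with a neutral placeholder before
    any text is sent to a scoring API. Case-insensitive, whole words only."""
    for i, name in enumerate(names, start=1):
        # \b word boundaries keep "Ann" from matching inside "planning"
        pattern = re.compile(rf"\b{re.escape(name)}\b", re.IGNORECASE)
        text = pattern.sub(f"[PERSON_{i}]", text)
    return text


entry = "Had lunch with Alice; alice and Bob argued about the deadline."
print(strip_names(entry, ["Alice", "Bob"]))
# Had lunch with [PERSON_1]; [PERSON_1] and [PERSON_2] argued about the deadline.
```

A list-based pass like this is deliberately conservative: it only removes names the user has listed, so nicknames and misspellings slip through. That limitation is worth stating in the privacy copy rather than implying complete anonymization.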

Thanks for pushing on clarity—fixes are in motion.