Thanks for sharing that story; it's a great cautionary tale about apps that drift from "helpful" into prescriptive/social enforcement.
Dlog takes the opposite approach:

• Agency first: the Coach proposes; you decide. No phone trees, no task assignments, no social mechanics. You can ignore the guidance, turn it off, or set it to a low cadence.

• Explainability over prescriptions: suggestions come with a brief "why" based on your SEM reports (which factors moved and by how much), plus charts so you can sanity-check against lived experience (see the sketch after this list).

• Local-first privacy: journals live on-device (EventKit), and scoring + SEM run locally. By default no raw text leaves the device. The Coach is optional and currently calls OpenAI, so there is a bit of a leap of faith there until I enable on-device LLMs in due course.

• No hidden incentives: no affiliate nudges, upsells, or growth hacks. It doesn't decide what you wear or eat, or route calls to you; it surfaces patterns (e.g., "energy dips after external calls") so you can choose actions that fit your constraints.
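To make the "why" piece concrete, here's a minimal Swift sketch of the shape I have in mind, not Dlog's actual code: `FactorEffect` and `whySummary` are hypothetical names, and the ranking (absolute effect × change) is just one reasonable way to pick which factors to mention.

```swift
import Foundation

// Hypothetical shape of one entry in a SEM report: a factor, its
// standardized effect on the outcome, and how much it moved this period.
struct FactorEffect {
    let name: String     // e.g. "external calls"
    let effect: Double   // standardized path coefficient toward the outcome
    let delta: Double    // change in the factor since the last report
}

// Build the brief "why" shown next to a suggestion: the top factors
// ranked by |effect * delta|, phrased so the user can sanity-check
// them against the charts and their own experience.
func whySummary(for outcome: String, effects: [FactorEffect], topK: Int = 2) -> String {
    let movers = effects
        .filter { $0.delta != 0 }
        .sorted { abs($0.effect * $0.delta) > abs($1.effect * $1.delta) }
        .prefix(topK)

    guard !movers.isEmpty else {
        return "No factor moved enough this period to explain a change in \(outcome)."
    }

    let clauses = movers.map { f -> String in
        let direction = (f.effect * f.delta) < 0 ? "pulled \(outcome) down" : "pushed \(outcome) up"
        let detail = String(format: "effect %.2f, moved %+.1f", f.effect, f.delta)
        return "\(f.name) \(direction) (\(detail))"
    }
    return "Why: " + clauses.joined(separator: "; ") + "."
}

// Example: more external calls this week, slightly less sleep.
let report = [
    FactorEffect(name: "external calls",   effect: -0.42, delta: 3.0),
    FactorEffect(name: "sleep hours",      effect:  0.31, delta: -0.5),
    FactorEffect(name: "deep-work blocks", effect:  0.18, delta: 0.0),
]
print(whySummary(for: "energy", effects: report))
// "Why: external calls pulled energy down (effect -0.42, moved +3.0); ..."
```

The point is that the explanation is derived from the same numbers the charts show, so you can always trace a suggestion back to something you can verify yourself.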
If that story raised a specific worry (loss of autonomy, privacy creep, community spam), does the above address it? I'm especially interested in whether the "why" behind recommendations is clear enough or needs to be tighter.