I don't think that LLMs are trustworthy companions in managing a complex metabolic disease like diabetes - especially if you deviate (ever so slightly) from the norm (very lean, very active, strict diet, etc.)!

I'm a T1D myself and like to experiment with ChatGPT (or Opus). My experiences have been mixed.

LLMs are overly cautious when it comes to correction doses. They regularly advise against correcting before bed, even if that means my blood glucose stays above 140 mg/dl all night.

I follow a low-to-medium-carb diet (<100 g a day). ChatGPT constantly nudges me to eat more carbohydrates, even though I have a time in range (TIR) of 90% (70-150 mg/dl). Why would I change a diet that currently works very well for me? Still, most LLMs seem to favor carbs for some reason.

I use Fiasp as my fast-acting insulin. I typically inject around 1-4 IU, and its glucose-lowering effect lasts roughly 2-3 hours, so I know it's safe to re-inject after three hours without risking insulin stacking. But ChatGPT regularly advises against that and wants me to wait another 1-2 hours.
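The "no stacking after three hours" reasoning can be made concrete with an insulin-on-board (IOB) estimate. This is a minimal sketch using a linear decay model and a 3-hour duration of insulin action (DIA), matching the comment above; real pumps and AID systems typically use exponential activity curves instead.

```python
def iob_linear(dose_units: float, minutes_since_dose: float,
               dia_minutes: float = 180) -> float:
    """Remaining active insulin, assuming activity decays linearly to
    zero over the duration of insulin action (DIA). The linear curve
    and 180-minute DIA are simplifying assumptions, not Fiasp's actual
    pharmacokinetics."""
    if minutes_since_dose >= dia_minutes:
        return 0.0
    return dose_units * (1 - minutes_since_dose / dia_minutes)

# 3 IU injected: halfway through the DIA half is still active,
# and after 3 hours nothing remains, so a new dose would not stack.
print(iob_linear(3.0, 90))   # 1.5
print(iob_linear(3.0, 180))  # 0.0
```

Under this model, waiting an extra 1-2 hours beyond the DIA (as ChatGPT suggests) adds no safety margin, since IOB is already zero.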

I am not against automating diabetes management. In fact, I really appreciate projects that help with that. But I don't consider LLMs to be helpful in this regard. Their combination of training data bias, liability aversion, lack of context, and one-size-fits-all thinking disqualifies them from such tasks.

I understand this instinct, but I can still see the appeal of capabilities that sit well within the limits of a well-designed agentic system.

Imagine asking such a system, "look at my postprandial response to dosing for the past week and make ratio suggestions for breakfast, lunch, and dinner." That's genuinely helpful, saves time, and is well within an LLM's reasoning limits. You could spot-check the output if you like.
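The kind of review described above doesn't even need an LLM; a few lines of analysis get you most of the way, which is also what makes it easy to spot-check. A sketch, with invented sample data and an arbitrary target and adjustment step (no real CGM or pump export format is assumed):

```python
from statistics import mean

# (meal, carbs_g, insulin_units, bg_pre, bg_2h_post) -- invented sample week
log = [
    ("breakfast", 30, 3.0, 100, 165),
    ("breakfast", 25, 2.5,  95, 170),
    ("lunch",     40, 4.0, 110, 130),
    ("lunch",     35, 3.5, 105, 125),
    ("dinner",    30, 3.0, 100, 150),
]

def review(meal: str, target_rise: float = 40) -> str:
    """Suggest an insulin-to-carb ratio tweak from average 2h post-meal
    rise. The 40 mg/dl target and 10% adjustment step are hypothetical."""
    rows = [r for r in log if r[0] == meal]
    avg_rise = mean(post - pre for _, _, _, pre, post in rows)
    avg_ratio = mean(carbs / units for _, carbs, units, _, _ in rows)
    if avg_rise > target_rise:
        # Running high post-meal: a slightly stronger ratio means
        # fewer grams covered per unit of insulin.
        return (f"{meal}: avg rise {avg_rise:.0f} mg/dl at ~1:{avg_ratio:.0f}; "
                f"consider ~1:{avg_ratio * 0.9:.0f} and re-check")
    return f"{meal}: avg rise {avg_rise:.0f} mg/dl; current ratio looks fine"

for m in ("breakfast", "lunch", "dinner"):
    print(review(m))
```

The value of the agentic framing is mostly in gathering the data and phrasing the question, not in the arithmetic itself.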

Is it worth setting up such an assistant for the value you'd get out of it? That depends on the user and how many similar use cases they have.

> look at my postprandial response to dosing for the past week and make ratio suggestions for breakfast, lunch, and dinner

I'm not so sure about that. A patient absolutely must critically evaluate the LLM's suggestions; a naive user risks severe complications. A user with that kind of competence, however, doesn't need an LLM for such trivial adjustments - they're obvious.
