My side project - https://macrosforhumans.com - is a traditional mobile macro tracker with first-class support for voice (and soon image and text-blob) input for your recipes, ingredients, measurements, units, etc. It's kind of a neat project that may never get far off the ground, considering I'm not a mobile dev, but it's been fun to build so far with the help of Claude Code. It's built with Flutter and a FastAPI backend.

In the AI macro food-logging world, there's really only Cal AI, which estimates macros from an image. I use Cronometer personally, and it's super annoying to have to type everything in manually, so it makes sense why folks reach for something like Cal AI. The problem with Cal AI, though, is accuracy: it's at best a guess based on the image. Macros for Humans tries to be a more traditional weigh-your-food-and-log-it kind of app, while making the main interface for entering that info much friendlier.

I set myself a hard deadline to present a live demo at a local showcase/pitch event at the end of the month. I'm betting the procrastination will kick in hard enough to get the backend hosted with a proper database and a bit more UI polish running on my phone. :-)

Here's a really early demo video I recorded a few weeks ago. I had just spoken the recipe on the left, and when I stop recording you can see the backend stream the objects out as they're parsed from the LLM: https://www.youtube.com/watch?v=K4wElkvJR7I
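
For anyone curious how that kind of streaming can work, here's a minimal sketch of the idea: a FastAPI endpoint that streams newline-delimited JSON, emitting each parsed object as soon as it's complete rather than waiting for the whole LLM response. Everything here is hypothetical, not the app's actual code - the LLM call is stubbed out with a fake token stream, the names (fake_llm_stream, ndjson_objects, the /parse route) are made up, and the brace counter is deliberately naive (a real version would want an incremental JSON parser that handles braces inside strings).

    import json
    from typing import AsyncIterator

    from fastapi import FastAPI
    from fastapi.responses import StreamingResponse

    app = FastAPI()


    async def fake_llm_stream() -> AsyncIterator[str]:
        # Stand-in for a real LLM token stream; yields a JSON array in chunks.
        for chunk in ('[{"name": "flour", "grams": 120},',
                      ' {"name": "butter", "grams": 80}]'):
            yield chunk


    async def ndjson_objects() -> AsyncIterator[bytes]:
        # Track brace depth across chunks and emit each completed {...}
        # object as one NDJSON line. Naive on purpose: it doesn't handle
        # braces that appear inside JSON string values.
        depth = 0
        current: list[str] = []
        async for chunk in fake_llm_stream():
            for ch in chunk:
                if ch == "{":
                    depth += 1
                if depth > 0:
                    current.append(ch)
                if ch == "}":
                    depth -= 1
                    if depth == 0 and current:
                        obj = json.loads("".join(current))
                        current.clear()
                        yield (json.dumps(obj) + "\n").encode()


    @app.get("/parse")
    async def parse_recipe() -> StreamingResponse:
        # Each ingredient reaches the client as soon as its object closes,
        # instead of after the whole LLM response finishes.
        return StreamingResponse(ndjson_objects(),
                                 media_type="application/x-ndjson")

A client (the Flutter app, in my case) can then read the response line by line and JSON-decode each object as it arrives, which is what produces that one-at-a-time effect in the video.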

I believe MacroFactor has had these features for quite a while now.