Any suggestions for a simple tool to set up your own local evals?

Just ask an LLM to write one on top of OpenRouter, the AI SDK, and Bun: it takes your .md input files and saves the outputs as .md files (or whatever you need). Take https://github.com/T3-Content/auto-draftify as an example.

My "tool" is just prompts saved in a text file that I feed to new models by hand. I haven't built a bespoke framework on top of it.

...yet. Crap, do I need to now? =)

Yeah, I’ve wondered the same thing myself… My evals are also a pile of text snippets, as are some of my workflows. I thought I’d have a look at what’s out there and found Promptfoo and Inspect AI. Haven’t tried either yet, but I will for my next round of evals.

Well, you need to stop your evals from getting incorporated into the model’s training data

_Brain backlog project #77 created_
