I'm currently learning Janet, and using ChatGPT as my tutor is absolutely awful. "So what is the difference between local and var if they are both local and not global variables (as you told me earlier)?" "Great question, and now you are really getting to the core of it, ... " and it continues to hallucinate.
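If I remember the docs right, Janet itself has no `local` special form (that one is Fennel's); Janet's two local binding forms are `def` and `var`, and the real difference is mutability. A minimal sketch:

```janet
# `def` creates an immutable binding; `set` on it is a compile error.
(def x 1)

# `var` creates a mutable binding that `set` can update.
(var y 1)
(set y 2)  # y is now 2

(print y)
```

So if a tutor is contrasting `local` and `var` in Janet, that already looks like a hallucination borrowed from Fennel.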
It's a great tutor for things it knows, but it really needs to learn its own limits
>It's a great tutor for things it knows
Things well represented in its training data. Basically a React todo list, a Bootstrap form, tic-tac-toe in Vue.
> I'm currently learning Janet and using ChatGPT as my tutor is absolutely awful
Yeah, anything slightly outside "mainstream" technologies tends to attract lots of hallucinated material. The only thing I've found that works for those cases is O3 with "Search" enabled, which tends to help a lot. What models and tools are/were you using?
For reference, this is what O3 + Search + My custom prompt gets as a result, which seems good and informative: https://chatgpt.com/share/688a0134-0f5c-8001-815d-b18c92875a...
Another note: since your example seems to be part of a longer conversation, try restarting the conversation instead of adding more messages when something is wrong. All models I've tried (even Pro mode on ChatGPT) degrade quickly past the first message, like a lot. The only workaround I've found is restarting with fresh context rather than continuing with a context that already contains errors.
For these unfortunately you should dump most of the guide/docs into its context
I'm using ChatGPT to practice Chinese, and it's perfect and maddening at the same time. It can generate endless cloze exercises and other drills, and it covers sentence structures that even the Chinese Grammar Wiki lacks, but then it has the occasional:
Is DeepSeek any better? Just curious.
I have never tried it but it's a good idea.
It is like a tutor that desperately needs the money, which maybe isn't so inaccurate for OpenAI and all the money they took from petrostates.