This is an excellent example of a local LLM application [1].

It's an AI-driven chat system designed to support students in the Introduction to Computing course (ECE 120) at UIUC, offering assistance with course content, homework, or troubleshooting common problems.

It serves as an educational aid integrated into the course's learning environment via the UIUC Illinois Chat system [2].

Personally, I've found it really useful that it cites the specific portions of the course study materials (for example, slides) directly related to the discussion, so students can check the sources and verify the answers the LLM provides.

It seems to me that RAG is the killer feature for local LLMs [3]. It directly addresses the main pain point of LLM hallucination and helps the model stick to the facts.
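The retrieve-then-prompt loop behind RAG can be sketched as follows. The toy corpus, keyword-overlap scoring, and course facts here are illustrative stand-ins; a real system would use vector embeddings for retrieval and pass the grounded prompt to an actual model:

```python
# Minimal sketch of the RAG pattern: retrieve relevant passages,
# then build a prompt that restricts the model to those sources.
# Naive word-overlap scoring is a stand-in for embedding similarity.

def retrieve(question, corpus, k=2):
    """Rank corpus passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question, passages):
    """Ground the model: answer only from the retrieved sources."""
    sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the sources below; cite them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

# Hypothetical course snippets standing in for indexed slides/notes.
corpus = [
    "ECE 120 covers combinational logic and truth tables.",
    "Office hours are held in the ECEB basement.",
    "LC-3 assembly is introduced in the second half of ECE 120.",
]

question = "When is LC-3 assembly covered?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

Because the prompt carries numbered sources, the model's citations can be checked against the original material, which is exactly what makes the answers auditable.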

[1] Introduction to Computing course (ECE 120) Chatbot:

https://www.uiuc.chat/ece120/chat

[2] UIUC Illinois Chat:

https://uiuc.chat/

[3] Retrieval-augmented generation (RAG):

https://en.wikipedia.org/wiki/Retrieval-augmented_generation

Does this actually need to be local? The chatbot is open to the public, and I assume the course material used for RAG on this page (https://canvas.illinois.edu/courses/54315/pages/exam-schedul...) stays freely accessible - I clicked a few links without being a student - so I'd expect a pre-prompted, larger non-local LLM to outperform the local instance. Though you can imagine an equivalent course with all of its content ACL-gated/'paywalled' that could benefit from local RAG, I guess.