I got the most use out of it on an airplane with no wifi. It let me keep working on a coding solution without the internet because I could ask it quick questions. Magic.
I use it for personal entertainment, both writing and roleplaying. I put quite a bit of effort into my own responses and actively edit the output to get decent results out of the larger 30B and 70B models. Trying out different models and wrangling the LLM to write what you want is part of the fun.
Experimenting, as well as a cheaper alternative to cloud/paid models. Local models don't have the encyclopaedic knowledge of huge models such as GPT-3.5/4, but they can perform tasks well.
I use it to compare outputs from different models (local ones alongside OpenAI and Mistral AI) and pick-and-choose-and-compose from those outputs. I wrote an app[1] that facilitates this. It also lets me work offline and avoid sharing a client's data with OpenAI or Mistral AI.
[1]: https://msty.app
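For anyone curious what the fan-out part of that workflow looks like, here's a minimal sketch: the same prompt goes to a local Ollama server and to OpenAI, and you eyeball the answers side by side. This is my own illustration, not code from msty.app; the model names and the `gpt-4o-mini` choice are assumptions.

```python
# Sketch: send one prompt to several backends and print the answers
# side by side. Endpoints and model names are assumptions.
import os
import requests

PROMPT = "Explain Python's GIL in two sentences."

def ask_ollama(model: str, prompt: str) -> str:
    # Ollama's local HTTP API; stream=False returns a single JSON object.
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["response"]

def ask_openai(model: str, prompt: str) -> str:
    # OpenAI's chat completions endpoint; needs OPENAI_API_KEY set.
    r = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

answers = {
    "mistral (local)": ask_ollama("mistral", PROMPT),
    "gpt-4o-mini": ask_openai("gpt-4o-mini", PROMPT),
}
for name, text in answers.items():
    print(f"--- {name} ---\n{text}\n")
```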
I built myself a hacky alternative to OpenAI's chat UI and wired up Ollama to test different models locally. Also, OpenAI's chat UI sucks; the API doesn't seem to suck as much. The chat UI is just useless for coding at this point.
/e: https://github.com/ChristianSch/theta
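The core of such a tool can be tiny. Below is a minimal terminal chat loop against Ollama's `/api/chat` endpoint, just to show the idea; it is not code from the theta repo, and the model name is an assumption. Ollama keeps no conversation state, so the full history is resent on each turn.

```python
# Minimal terminal chat loop against a local Ollama server.
import requests

MODEL = "codellama"   # assumed: any model pulled via `ollama pull`
history = []          # /api/chat is stateless; we resend the history

while True:
    user = input("you> ")
    if user in ("quit", "exit"):
        break
    history.append({"role": "user", "content": user})
    r = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": MODEL, "messages": history, "stream": False},
        timeout=300,
    )
    r.raise_for_status()
    reply = r.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print(f"{MODEL}> {reply}")
```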
I'm hoping someone will write a tool to do project estimations. Instead of my manager asking me "how long would it take you to implement X, Y, Z ...", he could ask the LLM.
It doesn't even need to be very accurate because my own estimations aren't either :)
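To my knowledge nobody has built this, but a toy version is a one-prompt affair. Everything here (model choice, prompt wording, the units asked for) is an assumption about what such a tool might do.

```python
# Toy sketch of the wished-for estimation tool: feed a task description
# to a local model and ask for a rough range. All details are assumed.
import requests

def estimate(task: str) -> str:
    prompt = (
        "You are a software project estimator. Give an optimistic and a "
        "pessimistic estimate in developer-days, with one sentence of "
        f"justification for each.\n\nTask: {task}"
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=300,
    )
    r.raise_for_status()
    return r.json()["response"]

print(estimate("Add SSO login (Okta) to an existing Django app."))
```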
I used them to extract data from relatively unstructured reports into a structured CSV format. For privacy/GDPR reasons it was not something I could use an online model for. It saved me a lot of manual work, and it did not hallucinate anything as far as I could tell.
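A sketch of how that kind of local extraction can work: ask the offline model to return a fixed set of fields as JSON per report, then write the rows out with `csv.DictWriter`. The field names, schema, and sample report below are invented for illustration, not from the commenter's actual pipeline; the `format: "json"` option is Ollama's built-in way to force JSON output.

```python
# Sketch: extract fixed fields from free-text reports via a local model,
# then write them to CSV. Schema and model name are assumptions.
import csv
import json
import requests

FIELDS = ["date", "site", "incident_type"]  # assumed schema

def extract(report: str) -> dict:
    prompt = (
        f"Extract the fields {FIELDS} from the report below. "
        "Reply with a single JSON object and nothing else.\n\n" + report
    )
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt,
              "stream": False, "format": "json"},  # force JSON output
        timeout=300,
    )
    r.raise_for_status()
    return json.loads(r.json()["response"])

reports = ["On 2024-03-02 the pump at site B4 overheated ..."]
with open("out.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    for rep in reports:
        row = extract(rep)
        # Keep only the expected keys in case the model adds extras.
        writer.writerow({k: row.get(k, "") for k in FIELDS})
```

Spot-checking a sample of rows against the source reports is still worth doing, since nothing here guarantees the model won't occasionally misread a field.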