Option 4: Blegh!
Add my vote to Option 4 too!
Ditto!
Choose your poison:
- Peck's Anchovette
- Redro
- Home-made
- Other, comment below

Y'all make it seem like I'm one dodgy mofo.
Well, the poll was kinda fishy…
What? I actually voted this time!
Love me some fish paste. Was so sad when they stopped making it and rejoiced when it came back!
Yeah, I struggled when they took it off the shelves. That's when I learned to make it myself; the only downside is the even shorter shelf life.
I am curious. What is your AI model of choice?
- ChatGPT
- Claude
- Gemini
- Copilot
- DeepSeek
- Grok
- Perplexity
- Other, comment below.
- I have a paid plan for one or more, comment below.
I want to expand my knowledge and understanding of these tools and use them more effectively. I'm also considering whether a paid option is going to add much more value, as they are pricey.
That being said, I do use Capacities.io as my preferred PKM and note-taking space, which I pay for to get access to their AI integration.
Other:
Ollama running a Qwen 30B model locally
What quant are you running? And how much VRAM do you have? I'm still on my RTX 3080 10GB and can't run more than a 7B or 14B parameter model on a 3- or 4-bit quant without performance dipping to unusable levels.
I've mostly reverted to smaller models, using a Qwen Coder model (around 1.5B parameters IIRC).
Edit: To add, LMStudio is much easier to set up for personal/home use. Ollama needs more technical know-how to run, especially if you want to run a chatbot GUI on top of it. But it can be more efficient, resource-wise.
Well, we got a 3090 with 24GB VRAM specifically to run this in the office. We needed a model with tools so the smart developers can tap into it properly and create their own interfaces and APIs. The idea is for the model to eventually be able to "know" our DB and how to answer human queries, with the results sent back in human language again.
The devs have big plans…
I set up Ollama for them and they go in via the Ollama API, so I have no idea how much harder they make it for themselves. I had installed OpenWebUI before and used it, but they moaned that I tapped into their precious resources… so I put Unigine Heaven Benchmark on repeat and they could not figure out why everything ran so slowly.
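For anyone else pointing their own scripts at a setup like this, here's a minimal sketch of talking to Ollama's `/api/chat` endpoint from Python using only the standard library. The port (11434) is Ollama's default; the model tag `qwen3:30b` is just a placeholder for whatever model you've actually pulled.

```python
import json
import urllib.request

# Assumptions: Ollama is serving on its default port, and a Qwen model
# has been pulled under this tag -- adjust both to match your install.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "qwen3:30b"

def build_chat_request(prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete JSON reply instead of a token stream
    }
    return json.dumps(payload).encode("utf-8")

def chat(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_chat_request(prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["message"]["content"]

# Example (requires a running Ollama server):
#   print(chat("Summarise our schema in one sentence."))
```

From here the devs can wrap `chat()` in whatever interface they like; setting `"stream": True` instead would return newline-delimited JSON chunks for a typing-style UI.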
What kind of person are you?
- Sock sock, shoe shoe
- Sock shoe, sock shoe
Shoe shoe, sock sock.