Topic of Polls

Option 4: Blegh!

3 Likes

Add my vote to Option 4 too! :nauseated_face:

3 Likes

Ditto!

2 Likes

Choose your poison:

  • Peck’s Anchovette
  • Redro
  • Home made
  • Other, comment below
0 voters

Sad Jim Carrey GIF

2 Likes

Y’all make it seem like I’m one dodgy mofo.

1 Like

Well, the poll was kinda fishy…

3 Likes

What? I actually voted this time!

2 Likes

Love me some fish paste. Was so sad when they stopped making it and rejoiced when it came back!

1 Like

Yeah, I struggled when they took it off the shelves. That’s when I learned to make it myself; the only downside is the even shorter shelf life.

2 Likes

I am curious. What is your AI model of choice?

  • ChatGPT
  • Claude
  • Gemini
  • Copilot
  • DeepSeek
  • Grok
  • Perplexity
  • Other, comment below.
  • I have a paid plan for one or more, comment below.
0 voters

I want to expand my knowledge and understanding of these tools and use them more effectively. I’m also considering whether a paid option would add much more value, as they are pricey.

That being said, I do use Capacities.io as my preferred PKM and note-taking space, which I pay for partly to get access to their AI integration.

Other:
Ollama running Qwen 30b model locally :wink:

1 Like

What quant are you running? And how much VRAM do you have? I’m still on my RTX 3080 with 10GB and can’t run more than a 7B or 14B parameter model at a 3- or 4-bit quant without performance dipping to unusable levels.
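For anyone curious how the 10GB vs. 24GB numbers shake out, here's a rough back-of-the-envelope sketch. The `overhead_gb` figure is my own assumption to cover runtime buffers, and it deliberately ignores KV-cache growth with context length, so treat it as a lower bound rather than a real requirement:

```python
# Rough VRAM estimate for a quantized model's weights alone.
# overhead_gb is an assumed fudge factor; KV cache (which grows with
# context length) is NOT included, so real usage will be higher.
def vram_estimate_gb(params_billion: float, bits_per_weight: float,
                     overhead_gb: float = 1.5) -> float:
    """Weights take params * bits / 8 bytes; add overhead for buffers."""
    weights_gb = params_billion * bits_per_weight / 8
    return weights_gb + overhead_gb

# 14B at 4-bit: 7 GB of weights + overhead -> already tight on a 10GB card
print(vram_estimate_gb(14, 4))  # 8.5
# 30B at 4-bit: 15 GB of weights -> wants a 24GB card like the 3090
print(vram_estimate_gb(30, 4))  # 16.5
```

Which roughly matches the experience above: a 3- or 4-bit 14B model is about the ceiling for 10GB, while a 30B model at 4-bit fits comfortably only on 24GB.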

I’ve mostly reverted to smaller models, mainly a Qwen Coder model (around 1.5B parameters, IIRC).

Edit: To add, LM Studio is much easier to set up for personal/home use. Ollama needs more technical know-how, especially if you want to run a chatbot GUI on top of it, but it can be more efficient resource-wise.

2 Likes

Well, we got a 3090 with 24GB of VRAM specifically to run this in the office. We needed a model with tool support so the smart developers can tap into it properly and create their own interfaces and APIs. The idea is for the model to eventually ā€œknowā€ our DB and answer human queries, with the results sent back in human language again.

The devs have big plans…

I set up Ollama for them and they go in via the Ollama API, so I have no idea how much harder they make it for themselves. I have installed OpenWebUI before and used it, but they moaned that I was tapping into their precious resources… So I put the Unigine Heaven benchmark on repeat and they could not figure out why everything ran so slowly :smiley:
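For the "model with tools that knows the DB" idea, a request against Ollama's `/api/chat` endpoint would look something like the sketch below. The `query_db` tool schema and the `qwen3:30b` model tag are hypothetical placeholders, not what the devs actually built; the point is just the shape of a tool-enabled chat request:

```python
import json

# Ollama listens on port 11434 by default; /api/chat accepts a "tools"
# list of function schemas the model may choose to call.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, question: str) -> dict:
    """Build a tool-enabled /api/chat payload (query_db is hypothetical)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "query_db",  # hypothetical wrapper around the real DB
                "description": "Run a read-only query against our database",
                "parameters": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            },
        }],
        "stream": False,
    }

payload = build_chat_request("qwen3:30b", "How many orders shipped today?")
print(json.dumps(payload, indent=2))
# POST this to OLLAMA_URL; if the model answers with a tool_call, run the
# query yourself and feed the result back as a "tool" role message.
```

The loop is the interesting part: the model doesn't touch the DB itself, it only asks the client code to run `query_db`, and the client sends the result back so the model can phrase the answer in human language.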

2 Likes

What kind of person are you?

  • Sock sock, shoe shoe
  • Sock shoe, sock shoe
0 voters
1 Like

Shoe shoe, sock sock.

4 Likes