

Depends what OP was using before, but going from something like GPT5.2 to Llama 3 8B will be a massive difference (although OP says they only use it for basic tasks, so that does offset it).
Llama 3 already being a very old model doesn't help either.
I run Qwen3.5-35B-A3B-AWQ-4bit, which, while leagues ahead of Llama 3 8B, is still a very noticeable step down from the big closed models.
This is not to say open source is bad; if you had the resources to run something like Qwen3.5-397B-A17B, it would also be up there.
I'm running 2x4090, and the 35B fits very comfortably in that.
For large models like the 397B there are several ways to go without spending a ton of money; I've seen posts of people using arrays of used 3090s with good results.
The other option is CPU inference, although with current RAM prices that's less cost effective.
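To sanity-check why the 35B fits in 2x4090 (48 GB total) while the 397B needs an array, here's a rough back-of-envelope VRAM estimate for 4-bit quantized weights. The ~4.5 bits/weight figure (4 bits plus scales/zeros overhead for AWQ-style quantization) is an assumption, and this ignores KV cache and activation memory, which add on top:

```python
def quantized_weight_gb(n_params_billion: float, bits_per_weight: float = 4.5) -> float:
    # AWQ 4-bit stores ~4 bits per weight plus quantization metadata
    # (scales/zero points), so ~4.5 bits/weight is a common rule of thumb.
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"35B  @ 4-bit: ~{quantized_weight_gb(35):.0f} GB weights")   # ~20 GB
print(f"397B @ 4-bit: ~{quantized_weight_gb(397):.0f} GB weights")  # ~223 GB
```

So the 35B leaves plenty of headroom for KV cache on 48 GB, while the 397B needs on the order of 230+ GB even quantized, hence the 3090 arrays or CPU/RAM route.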
I was looking at maybe an array of Milk-V JUPITER2 boards, since vLLM added RISC-V support, which could be very cost effective.