My office computer has a Ryzen 7 5700, an RX 580x, and 32 GB of RAM. Running Ollama with DeepSeek-V2 or Llama 3 is much slower than ChatGPT in the browser. Same with my newer, more powerful home computer.
What kind of hardware do you need to run with comparable responsiveness to ChatGPT? How much does it cost? And presuming such hardware is commercially available, where do you find it?
If all you care about is response time, you can get that easily by just using a smaller model. The quality of the responses will be poor, though, and it’s not feasible to self-host a model like ChatGPT on consumer hardware.
For some quick math: a small Llama model is 7 billion parameters. Unquantized, that’s 4 bytes per parameter (32-bit floats), meaning it requires 28 billion bytes (28 GB) of memory. You can get it to fit in less memory with quantization, which basically trades quality for lower memory usage (fewer than 32 bits per parameter means less precision, but also less memory).
Inference performance will still vary a lot depending on your hardware, even if you manage to fit it all in VRAM. A 5090 will be faster than an iPhone, obviously.
… But with a model competitive with ChatGPT, like DeepSeek R1, we’re talking about 671 billion parameters. Even if you quantized down to a useless 1 bit per parameter, that would be over 83 GB just to fit the model in memory (unquantized it’s ~2.6 TB). Running inference over that many parameters requires serious compute too, much more than a 5090 can handle. You get into specialized high-end architectures to reach that performance, and it’s not something a typical prosumer would be able to build (or afford).
So the TL;DR is: no.
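If you want to sanity-check the arithmetic above, here’s a quick back-of-the-envelope sketch in Python. It’s nothing more than bytes-per-parameter math and deliberately ignores KV cache, activations, and runtime overhead:

```python
# Rough memory needed just to hold the weights.
# Ignores KV cache, activations, and runtime overhead.
def weights_gb(params_billion: float, bits_per_param: int) -> float:
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

for name, params in [("Llama 7B", 7), ("DeepSeek R1 671B", 671)]:
    for bits in (32, 16, 8, 4, 1):
        print(f"{name:17s} @ {bits:2d}-bit: {weights_gb(params, bits):7.1f} GB")
```

That reproduces the 28 GB and ~84 GB figures above, and shows that even an aggressive 4-bit quant of R1 still needs roughly 335 GB just for the weights.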
At this point, retail devices capable of 96 GB of memory aren’t too difficult to find, if your budget allows, but how can one enter the TB zone?
96 GB+ of RAM is relatively easy, but for LLM inference you want VRAM. You can achieve that on a consumer PC by using multiple GPUs, although performance won’t be as good as a single GPU with 96 GB of VRAM. Swapping out to system RAM during inference slows things down a lot.
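As a rough sketch of what the multi-GPU route looks like in practice, this uses llama-cpp-python to split a quantized GGUF model across two cards. The model file and split ratios here are placeholders, and actual throughput depends heavily on the GPUs and interconnect:

```python
from llama_cpp import Llama

# Hypothetical example: spread a quantized GGUF model across two GPUs.
llm = Llama(
    model_path="models/llama-70b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,          # offload all layers to the GPUs
    tensor_split=[0.5, 0.5],  # put half the tensors on each card
    n_ctx=4096,
)

out = llm("Why is VRAM the bottleneck for local LLMs?", max_tokens=64)
print(out["choices"][0]["text"])
```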
On architectures with unified memory (like Apple’s latest machines), the CPU and GPU share memory, so you can actually find a system with a very large amount of memory directly accessible to the GPU. Mac Pros can be configured with up to 192 GB of memory, although I doubt it’d be worth it, as the GPU probably isn’t powerful enough.
Also, the 83 GB number I gave was for a hypothetical 1-bit quantization of DeepSeek R1, which (if it’s even possible) would probably be really shitty, maybe even shittier than Llama 7B.
but how can one enter the TB zone?
Data centers use NVLink to connect multiple Nvidia GPUs. I don’t know what the limits are, but it lets you combine multiple GPUs and pool their resources much more efficiently, and at a much larger scale, than is possible on consumer hardware. A single Nvidia H200 GPU has 141 GB of VRAM, so you can link a bunch of them up to build some monster data centers.
Nvidia also sells prebuilt machines like the HGX B200, which can have 1.4 TB of GPU memory in a single system. That’s less than the 2.6 TB for unquantized DeepSeek, but for inference-only applications you could definitely quantize it enough to fit within that limit with little to no quality loss… so if you’re really interested and really rich, you could probably buy one of those for your home lab.
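For a sense of scale, here’s the same bytes-per-parameter math applied to H200s. Again, this is only a rough sketch: it counts the weights alone and ignores KV cache and activations.

```python
import math

H200_VRAM_GB = 141       # HBM per H200 GPU
R1_PARAMS_BILLION = 671  # DeepSeek R1

def h200s_needed(bits_per_param: int) -> int:
    weights_gb = R1_PARAMS_BILLION * 1e9 * bits_per_param / 8 / 1e9
    return math.ceil(weights_gb / H200_VRAM_GB)

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: at least {h200s_needed(bits)}x H200")
# 16-bit: 10 GPUs (~1.34 TB), 8-bit: 5, 4-bit: 3
```

So at 16 bits per parameter the weights alone (~1.34 TB) would just about squeeze into that 1.4 TB figure, and 8-bit or 4-bit leaves plenty of headroom.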
It’s all dependent on VRAM. If you can load the distilled models onto your GPU without maxing out your VRAM, they will run just as fast as on any server farm.
RX 580x
It looks like your video card only has 8 GB of VRAM. That will be your bottleneck.
Also no ROCm support afaik, so it’s running completely on CPU
Yeah, that’ll do it too. I’ve got a 6800 XT, which isn’t technically supported, but it works well.
Install LocalAI and ensure it’s using acceleration. It’s one of the best solutions we have at the moment.
Are you sure you’re not running these small models on the CPU with no acceleration? Because I’m running these small models pretty quickly: nearly instant responses using an NVIDIA Titan Xp from a gaming rig I built around 2017.
AMD is quite awful in this regard. Right now, with my RX 6650 XT using Vulkan acceleration, I get the same speed as running on my R5 7600.
You did not specify what size of model you’re trying to run. DeepSeek R1, for example, comes in various sizes; if you’re trying to run the massive ones, it’s not gonna work. You need to use a smaller model. I have an RX 6600 and run the 14B-parameter model, and it does well.
To be clear, btw, your CPU basically doesn’t matter as far as I know; just the GPU should be getting used, so any old CPU works. You CAN run it on a CPU, but it’s gonna be very slow. But yeah, the RX 6600 was decently cheap (I got it for like $150), so it’s not super expensive to run one of these models.
You’d need basically a small server rack filled with datacenter GPUs. Expect mid to high 5 digit numbers.
But: running smaller models on a typical gaming GPU is quite doable.
What kind of hardware do you need to run with comparable responsiveness to ChatGPT?
Generally you need $8-10,000 worth of equipment to get comparable responsiveness from a self-hosted LLM.
Anyone downvoting clearly doesn’t understand the hardware requirements for running an LLM with a significant model that rivals ChatGPT. ChatGPT runs on a multi-billion-dollar AI cluster…
OP specifically asked what kind of hardware you need to run a similar AI model with the same relative responsiveness, and GPT-4 has 1.8 trillion parameters… Why would you lie and pretend you can run a model like that on a fucking Raspberry Pi? You’re living in a dream world… Offline models like that require 128 GB of RAM, which is $900-1,200 in RAM alone…
It depends on what you mean by “relative responsiveness”, but you can absolutely get ~4 tokens/sec of performance on R1 671b (Q4 quantized) from a system costing a fraction of the number you quote.
This is the point everyone downvoting me seems to be missing. OP wanted something comparable to the responsiveness of chat.chatgpt.com… which is simply not possible without insane hardware. Sure, if you don’t care about token generation speed, you can install an LLM on incredibly underpowered hardware and it technically works, but that’s not at all what OP was asking for. They wanted a comparable experience, which requires a lot of money.
Yeah I definitely get your point (and I didn’t downvote you, for the record). But I will note that ChatGPT generates text way faster than most people can read, and 4 tokens/second, while perhaps slower than reading speed for some people, is not that bad in my experience.
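To put a rough number on that (a back-of-the-envelope estimate, assuming the common ~0.75 words-per-token heuristic for English):

```python
# Is 4 tokens/sec readable in real time?
tokens_per_sec = 4
words_per_token = 0.75   # rough heuristic for English text
words_per_minute = tokens_per_sec * words_per_token * 60
print(f"~{words_per_minute:.0f} words/min")  # ~180 wpm, close to a typical adult reading speed
```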
Excellent, I’ll try the $8 option