There is some quality improvement going from 4-bit to 8-bit quantization, but if you have the VRAM to spare for that, you usually get more benefit from running a 2x larger model at 4-bit. So in scenarios where a model already fits the existing VRAM budget, I would expect people to reach for a larger model instead.
The other thing is that VRAM is used not just for the weights, but also for the context (the KV cache), and that part grows linearly as you increase the context size. For example, the aforementioned QwQ-32B has a weights footprint of ~18 GB at 4-bit quantization and a full context length of 32k, and you need ~10 GB of extra VRAM on top of the weights if you intend to use the entirety of that context. So in practice, while 30b models fit into 24 GB (= a single RTX 3090 or 4090) at 4-bit quantization, you're going to run out of VRAM once you get past 8k context. Thus the other possibility is that VRAM saved by tricks like sparse models can be used to push that further - for many tasks, context size is the limiting factor.
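To make the arithmetic above concrete, here's a rough back-of-the-envelope sketch in Python. The helper names are mine, the QwQ-32B shape numbers (64 layers, 8 KV heads via GQA, head dim 128) are my assumption about what it inherits from Qwen2.5-32B, and the estimate ignores quantization scales and compute buffers, so real usage lands somewhat higher:

    # Back-of-the-envelope VRAM math; assumed numbers, not measurements.

    def weight_vram_gb(params_billion: float, bits_per_weight: float) -> float:
        """Weights only; ignores quantization scales/zero-points and runtime buffers."""
        return params_billion * 1e9 * bits_per_weight / 8 / 2**30

    def kv_cache_vram_gb(context_len: int, n_layers: int, n_kv_heads: int,
                         head_dim: int, bytes_per_elem: int = 2) -> float:
        """KV cache: one K and one V vector per token, per layer, per KV head (fp16 by default)."""
        return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len / 2**30

    # 32B @ 4-bit vs 16B @ 8-bit: same ~15 GB weights footprint either way
    print(weight_vram_gb(32, 4), weight_vram_gb(16, 8))

    # Assumed QwQ-32B-like shape at the full 32k context: ~8 GB of KV cache in fp16
    print(kv_cache_vram_gb(32 * 1024, n_layers=64, n_kv_heads=8, head_dim=128))

The gap between that ~8 GB of KV cache and the ~10 GB I quoted is roughly what the runtime's compute buffers take, and it varies by backend.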
For readability I'm using the same convention that is generally used for these models, where "-Nb" after a model name refers to the parameter count in billions. I have never once seen "p" for "parameter", never mind terms like "giga-parameter". And if you go searching for models on HuggingFace etc., you'll have to deal with "30b"-style terminology whether you like it or not.
As for VRAM, it quite obviously refers to the actual amount that high-end GPUs have, and I even specifically listed which ones I have in mind, so you can just look up their specs if you genuinely don't know the meaning in this context.
If a 32B model @4bit normally requires 16 GB of VRAM, then a model half that size could be run @8bit in the same 16 GB?
Isn't that a great tradeoff? I assume the improved bit precision would more than compensate for the loss from removing parameters?