
I am not a PyTorch user, but Intel already supplies Arc GPU acceleration for PyTorch: https://www.intel.com/content/www/us/en/developer/articles/t...
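
For anyone curious what that looks like in practice, here is a minimal sketch of targeting an Arc GPU from PyTorch via the "xpu" device. It assumes a PyTorch build with Intel XPU support installed (either a recent PyTorch release with the native XPU backend or intel-extension-for-pytorch); the model and shapes are just placeholders.

    # Minimal sketch: run a toy model on an Intel Arc GPU via the "xpu" device.
    # Assumes a PyTorch build with XPU support; falls back to CPU otherwise.
    import torch

    device = "xpu" if hasattr(torch, "xpu") and torch.xpu.is_available() else "cpu"

    model = torch.nn.Linear(1024, 1024).to(device)
    x = torch.randn(8, 1024, device=device)

    with torch.no_grad():
        y = model(x)

    print(device, y.shape)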

Needing half the number of GPUs in a workstation/local-server setup to reach the same total VRAM might make up for whatever slowdown comes from using less-optimized code. For instance, running or training a model that requires 192 GB of VRAM would take four 48 GB GPUs instead of eight 24 GB GPUs.


