Nvidia? Ew. Put it back in the chest.
Good format though ty.
The Linux users seeing an Nvidia GPU
I'm not on Linux and I have the same sentiment. Fuck Nvidia. I'd rather give my money to some other company. I'm using an Nvidia GPU now, but it's the 1050 Ti I got an age ago. I'll run it into the ground before I upgrade, and I won't be getting an Nvidia one when I do.
AMD for life!
Afaik AMD still doesn't have the same kind of support for AI software.
TBH I haven't had any issues gaming with my Nvidia card either
AMD's support for AI is just fine, you just have to choose a path: if you're on Linux, use their ROCm stack (HIP gives you a CUDA-like API and tooling to port CUDA code); if you're on Windows, use DirectML.
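For PyTorch specifically, the "choose a path" part mostly comes down to which device you ask for. A minimal sketch, assuming a ROCm build of PyTorch on Linux and the torch-directml package on Windows (exact APIs can differ between versions):

```python
import torch

# On a ROCm build of PyTorch (Linux), AMD GPUs show up through the
# normal "cuda" device namespace, so most existing CUDA code just runs:
if torch.cuda.is_available():
    device = torch.device("cuda")  # backed by ROCm/HIP on AMD hardware
else:
    # On Windows, the torch-directml package exposes a DirectML device instead
    # (assumes `pip install torch-directml` has been done):
    import torch_directml
    device = torch_directml.device()

x = torch.randn(1024, 1024, device=device)
y = x @ x  # matmul runs on the AMD GPU via ROCm or DirectML
```

The catch is that this only covers ops the chosen backend actually implements, which is where the reply below comes in.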
This is quite untrue, especially if you do actual research and not just run other people's models. For example, ROCm support is missing from many sparse autograd frameworks, e.g. pytorch_sparse, and there's no viable alternative to Nvidia's MinkowskiEngine. You need those for any state-of-the-art convnets with attention-like sparsity.
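For anyone wondering what kind of op is at stake here: pytorch_sparse ships compiled extensions (CUDA builds only) for things like sparse-dense matrix products. A minimal sketch of the library's spmm call, assuming torch-sparse is installed; the point is that the GPU kernels behind it have no ROCm wheels:

```python
import torch
from torch_sparse import spmm  # compiled torch-sparse extension, CUDA-only on GPU

# A 2x3 sparse matrix in COO form times a 3x2 dense matrix.
index = torch.tensor([[0, 0, 1],
                      [0, 2, 1]])        # row / col indices of the nonzeros
value = torch.tensor([1.0, 2.0, 3.0])    # nonzero values
dense = torch.randn(3, 2)

out = spmm(index, value, 2, 3, dense)    # (2x3 sparse) @ (3x2 dense) -> 2x2
```

Sparse convnet stacks like MinkowskiEngine build whole architectures out of kernels like these, which is why "just use ROCm" falls apart for that kind of research.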