• filister@lemmy.world · 6 months ago

    How is the situation with ROCm on consumer GPUs for AI/DL and PyTorch? Is it usable, or should I stick with NVIDIA? I am planning to buy a GPU in the next 2-3 months; so far I am thinking of getting either the 7900 XTX or the 4070 Ti Super, and I will wait to see how the reviews and AMD's pricing develop.

    • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 6 months ago

      Works out of the box on my laptop (the export below is to force ROCm to accept my APU, since it isn't officially supported yet, but the 7900 XTX should have official support):
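      A sketch of that kind of override: HSA_OVERRIDE_GFX_VERSION is the standard knob for making the ROCm runtime treat an unsupported GPU as a supported GFX target. The 11.0.0 value below is illustrative, not the exact export from this comment, and has to match the ISA family of your own hardware:

      ```bash
      # Illustrative value -- must match your GPU's ISA family.
      # Spoof a supported GFX target (11.0.0 = gfx1100, RDNA3) so the ROCm
      # runtime accepts the APU:
      export HSA_OVERRIDE_GFX_VERSION=11.0.0
      # Quick sanity check: ROCm builds of PyTorch expose the GPU through
      # the regular torch.cuda API.
      python -c "import torch; print(torch.cuda.is_available())"
      ```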

      Last year, only compiling and running your own kernels with hipcc worked on this same laptop; the AMD devs are really doing god's work here.

      • filister@lemmy.world · 6 months ago

        Is anything still broken, or does anything work better on CUDA? It is really hard to get the whole picture of where ROCm stands, since most people are not using it, and when I ran some tests in the past it wasn't working well.

        • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 6 months ago

          Hard to tell, as it really depends on your use case. I'm mostly writing my own kernels (so, basically as if you were doing CUDA) and doing “scientific ML” (SciML) work that doesn't need anything beyond backprop through matrix multiplications, elementwise nonlinearities, and some convolutions, and so far everything works. If you want specific simple examples from computer vision: ResNet18 and VGG19 work fine.
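          A minimal sketch of that kind of workload on a ROCm build of PyTorch (all layer sizes and names here are illustrative, not taken from the thread):

          ```python
          import torch
          import torch.nn as nn

          # ROCm builds of PyTorch expose the GPU through the usual torch.cuda API.
          device = "cuda" if torch.cuda.is_available() else "cpu"

          model = nn.Sequential(
              nn.Conv2d(1, 8, kernel_size=3, padding=1),  # a convolution
              nn.Tanh(),                                  # an elementwise nonlinearity
              nn.Flatten(),
              nn.Linear(8 * 28 * 28, 10),                 # a matrix multiplication
          ).to(device)

          x = torch.randn(4, 1, 28, 28, device=device)
          loss = model(x).pow(2).mean()
          loss.backward()  # backprop; if this runs on the GPU, the basics work
          print("grads live on:", next(model.parameters()).grad.device)
          ```

          The ResNet18/VGG19 checks can be reproduced the same way via torchvision.models (e.g. torchvision.models.resnet18()), assuming a torchvision wheel matching the ROCm PyTorch build.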