If even half of Intel’s claims are true, this could be a big shake-up in the midrange market, which has been entirely abandoned by both Nvidia and AMD.

  • @[email protected]
    link
    fedilink
    English
    44
    edit-2
    1 day ago

    If they double up the VRAM with a 24GB card, this would be great for a “self hosted LLM” home server.

    3060 and 3090 prices have been rising like crazy because Nvidia is gouging on VRAM and AMD inexplicably refuses to compete. Even ancient P40s (essentially double-VRAM 1080 Tis with no display outputs) are getting expensive. 16GB on the A770 is kinda meager, but 24GB is the point where you can fit the Qwen 2.5 32B models, which are starting to perform like the big corporate API ones.

    And if they could fit 48GB with new ICs… Well, it would sell like mad.
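    Rough back-of-the-envelope on why 24GB is the threshold — a minimal sketch, assuming ~4-bit quantization; the `vram_gb` helper and the flat 4 GB overhead figure are illustrative guesses, not from any vendor spec:

    ```python
    def vram_gb(params_b: float, bits_per_weight: float, overhead_gb: float = 4.0) -> float:
        """Weights footprint plus a flat (assumed) allowance for KV cache/activations."""
        weight_gb = params_b * bits_per_weight / 8  # billions of params ~= GB at 8 bits each
        return weight_gb + overhead_gb

    # A 32B model at 4-bit: ~16 GB of weights + ~4 GB headroom ~= 20 GB,
    # which is why a 16GB card falls short but a 24GB card works.
    print(f"32B @ 4-bit: ~{vram_gb(32, 4):.0f} GB")
    print(f"32B @ fp16:  ~{vram_gb(32, 16):.0f} GB")
    ```

    Same arithmetic shows why 48GB would be a big deal: it fits a 4-bit ~70B-class model, or a 32B model with a huge context window.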

    • @[email protected]
      link
      fedilink
      English
      221 day ago

      I always wondered who they were making those mid- and low-end cards with a ridiculous amount of VRAM for… It was you.

      All this time I thought they were scam cards to fool people who believe that bigger number always = better.

      • @[email protected]
        link
        fedilink
        English
        101 day ago

        Yeah, AMD and Intel should be running high-VRAM SKUs for hobbyists. I doubt it would cost them much to double the RAM, and they could mark those cards up a bit.

        I’d buy the B580 if it had 24GB of RAM; at 12GB, I’ll probably give it a pass because my 6650 XT is still fine.

      • @[email protected]
        link
        fedilink
        English
        91 day ago

        Also “ridiculously” is relative lol.

        The LLM/workstation crowd would buy a 48GB 4060 without even blinking, if that were possible. These workloads are almost entirely VRAM-constrained.

      • @[email protected]
        link
        fedilink
        English
        11 day ago

        Like the 3060? And 4060 Ti?

        It’s ostensibly because they’re “too powerful” for their VRAM to be cut in half (which would mean 6GB on the 3060 and 8GB on the 4060 Ti), but yes, more generally speaking, these cards are the sweet spot for VRAM-heavy workstation/compute workloads. Local LLMs are just the most recent one.

        Nvidia cuts VRAM at the high end to protect its server/workstation cards; AMD does it… just because?

        • @[email protected]
          link
          fedilink
          English
          1
          edit-2
          1 day ago

          More like back in the day when you would see vendors slapping 1GB on a card like the Radeon 9500, when the 9800 came with 128MB.

          • @[email protected]
            link
            fedilink
            English
            3
            edit-2
            1 day ago

            Ah yeah, those were the good old days when vendors were free to do that, before AMD/Nvidia restricted them. It wasn’t even that long ago; I remember some AMD 7970s shipping with double VRAM.

            And, again, I’d like to point out how insane this restriction is for AMD given their market struggles…

    • @Fedegenerate (2 points · edited · 1 day ago)

      An LLM card with QuickSync would be the kick I need to turn my N100 mini into a router. Right now, my only drive to move is that my storage is connected via USB, and SATA alone is just not enough value for a whole new box. £300 for Ollama, much faster ML in Immich, etc., and all the transcodes I could want would be a “buy now, figure the rest out later” moment.

      • @[email protected]
        link
        fedilink
        English
        41 day ago

        Oh, also, you might look at Strix Halo from AMD in 2025?

        Its iGPU is beefy enough for LLMs, and it will be WAY lower power than any dGPU setup, with enough VRAM to be “sloppy” and run stuff in parallel with a good LLM.

      • @[email protected]
        link
        fedilink
        English
        21 day ago

        You could get that with 2x B580s in a single server, I guess, though you could have already done that with A770s.

        • @Fedegenerate (3 points · 1 day ago)

          … That’s nuts. I only just graduated to a mini PC from a Pi; I didn’t consider a dual-GPU setup. Arbitrary budget aside, I should have added an “idle power” constraint too. It’s reasonable to assume that as soon as LLMs get involved, all concept of “power efficiency” goes out the window. Don’t mind me, just wishing for a unicorn.

          • @[email protected]
            link
            fedilink
            English
            31 day ago

            Strix Halo is your unicorn: idle power should be very low (assuming AMD’s VCE is an acceptable substitute for QuickSync).