submitted 9 months ago* (last edited 9 months ago) by GlowHuddy@lemmy.world to c/linux@lemmy.ml

I currently have an RX 6700 XT and I'm quite happy with it for gaming and regular desktop usage, but I was recently doing some local ML stuff and was just made aware of the huge gap NVIDIA has over AMD in that space.

But yeah, going back to NVIDIA (I used to run a 1080) after going AMD... feels kinda dirty ;-; I was very happy to move to AMD and finally be free from the walled garden.

I thought at first I'd just buy a second GPU, keep using the 6700 XT for gaming, and use the NVIDIA card for ML, but unfortunately my motherboard doesn't have two PCIe slots I could use for GPUs, so I need to choose. I'd be able to buy a used RTX 3090 for a fair price; I don't want to go for the current gen because of the current pricing.

So my question is: how is NVIDIA nowadays? I specifically mean Wayland compatibility, since I just recently switched and it would suck to go back to Xorg. Other than that, are there any hurdles, issues, or annoyances, or is it smooth and seamless nowadays? Would you upgrade in my case?

EDIT: Forgot to mention, I'm currently using GNOME on Arch(btw), since that might be relevant

[-] arcidalex@lemmy.world 32 points 9 months ago* (last edited 9 months ago)

Better, but still shit. The main holdup right now, as far as I can see, is wayland-protocols and the compositors adding explicit sync support, since the proprietary driver does not implement implicit sync. It's part of a larger move for the graphics stack toward explicit sync:

https://gitlab.freedesktop.org/wayland/wayland-protocols/-/merge_requests/90

Once this is in, the flickering issues will be solved and NVIDIA on Wayland will be a viable daily driver in most situations.

[-] GlowHuddy@lemmy.world 7 points 9 months ago

Yeah, I was just reading about it and it kind of sucks, since one of the main reasons I wanted to go Wayland was multi-monitor VRR, and I can see that's also an issue without explicit sync :/

[-] arcidalex@lemmy.world 13 points 9 months ago* (last edited 9 months ago)

Yeah. I have a multi-monitor VRR setup as well, and I happen to have a 3090, and not being able to take advantage of Wayland really sucks. And it's not like Xorg is any good in that department either, so you're just stuck between a rock and a hard place until explicit sync is in.

Let's see what happens first: me getting a 7900 XTX or this protocol being merged.

[-] GlowHuddy@lemmy.world 4 points 9 months ago

Now I'm actually considering that one as well. Or I'll wait a generation I guess, since maybe by then Radeon will at least be comparable to NVIDIA in terms of compute/ML.

Damn you NVIDIA

[-] filister@lemmy.world 2 points 9 months ago

Yes, I'm pretty much in the same boat: running Linux as a daily driver, currently on an ancient AMD GPU, and thinking of buying NVIDIA exactly for ML. But I really don't want to give them my money, since I dislike the company and its management, yet AMD is subpar in that department, so there's not much of a choice.

[-] russjr08@bitforged.space 2 points 9 months ago

I haven't kept up with explicit sync support since I migrated over to AMD in October, after the 545 NVIDIA driver came out and didn't impress me at all. However, I did hear in passing that you can already get the explicit sync patch in some form: a quick search shows Arch has it in the AUR as xorg-xwayland-explicit-sync-git, and Nobara might already ship it (I can't find official confirmation of this).

I noticed there was also some debate as to whether you would need a patched version of the compositor as well - but someone claims that just the XWayland patch worked for them (links to Reddit, as a heads up).

So your mileage may vary and it might require a varying level of work depending on what distro you run, however it might be worth looking into a bit more.

[-] arcidalex@lemmy.world 1 points 9 months ago* (last edited 9 months ago)

The patches exist for Wayland/XWayland, but the compositor itself has to be patched as well for it to work completely. KDE does not have its explicit sync fix merged yet, so it's not patched as of Plasma 6.0.1. The XWayland patch did make things slightly better, but all it did was make the flickering less frequent; it's still there.

[-] russjr08@bitforged.space 1 points 9 months ago

I see, well hopefully it'll get merged in the various compositors soon then! Wayland was a non-starter for me with that issue, and is precisely what led me to AMD.

[-] d3Xt3r@lemmy.nz 12 points 9 months ago* (last edited 9 months ago)

What sort of ML tasks exactly, and is it personal or professional?

If it's for LLMs, you can just use Petals, a distributed inference network, so you don't need your own GPU.

If it's for SD / image generation, there are four ways you can go about it. The first is to rent a cloud GPU from a service like vast.ai, runpod.io, vagon.io, etc., then run SD on the machine you're renting. It's relatively cheap: generate as much as you want for the duration you've rented. Last I checked, prices were around $0.33 per hour, which is far cheaper than buying a top-end NVIDIA card for casual workloads.

The second option is using a website/service where an SD frontend is presented to you and you generate images through a credit system: buy a number of credits and spend them per image. E.g. sites like RunDiffusion, dreamlike.art, seek.art, lexica, etc.

The third option is a monthly/yearly subscription, where you can generate as much as you want, such as Midjourney, DALL-E, etc. This can be cheaper than a pay-as-you-go service if you've got a ton of stuff to generate. There's also Adobe Firefly, which is like a hybrid option (X credits per month).

Finally, there are plenty of free Google Colabs for SD. There's also Stable Horde, which uses distributed computing for SD, and an easy web UI for it called ArtBot.

So yeah, there are plenty of options these days depending on what you want to do; you no longer need to own an NVIDIA card, and in fact for most users renting is the cheaper option. Say you wanted to buy a 4090, which costs ~$2000. If you instead spent that on cloud services at $20 per month, you'd get 8.3 years of usage, and most GPUs would become outdated in that time and you'd have to buy a new one (whereas cloud GPUs keep getting better, and with as-a-service models you get better GPUs at the same price). And I'm not even factoring in other expenses like power consumption and time spent on maintenance and troubleshooting. So for most people it's a waste to buy a card just for ML, unless you're going to use it 24x7 and actually make money off it.
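The break-even arithmetic above works out like this (the prices are the ballpark figures from this comment, not current market data):

```python
# Break-even: how long could cloud subscriptions run on the price of one card?
card_price = 2000.0      # approximate RTX 4090 price, USD
cloud_monthly = 20.0     # subscription image-generation service, USD/month

months = card_price / cloud_monthly
print(f"{months:.0f} months, i.e. {months / 12:.1f} years")  # → 100 months, i.e. 8.3 years
```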

Edit: A used 3090 is going for ~$715-850 at the moment, which works out to the equivalent of ~3+ years of image generation via cloud services, assuming you go for paid subscription options. If you factor in the free options or casual pay-as-you-go systems, it can still work out a lot cheaper.

[-] Amongussussyballs100@sh.itjust.works 6 points 9 months ago* (last edited 9 months ago)

Petals seems like a great project, barring the fact that your prompts can be looked at and altered by other members of the swarm.

[-] null@slrpnk.net 11 points 9 months ago

On my 1660 Ti -- tons of graphical glitches. Lots of extreme stuttering in Steam games.

[-] foreverunsure@pawb.social 9 points 9 months ago

Native Wayland apps run great. Can't say the same about those using XWayland, as most of them suffer from graphical glitches and flickering (especially Steam and Minecraft). Secure Boot works with some manual configuration.

[-] Kanedias@lemmy.ml 9 points 9 months ago

Same here, but it turned out a lot of frameworks like TensorFlow and PyTorch do support AMD's ROCm. I managed to run most models just by installing the ROCm version of these dependencies instead of the default one.
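As a quick sanity check (a sketch, assuming a reasonably recent PyTorch; ROCm wheels set `torch.version.hip`, and ROCm reuses the `torch.cuda` API), you can verify which build you actually ended up with:

```python
import importlib.util

def rocm_torch_status():
    """Report whether a ROCm (HIP) build of PyTorch is installed and sees a GPU."""
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch
    # torch.version.hip is set on ROCm builds, None on CUDA/CPU builds
    if torch.version.hip is None:
        return "torch installed, but not a ROCm build"
    # ROCm builds expose the GPU through the regular torch.cuda interface
    return "ROCm build, GPU visible" if torch.cuda.is_available() else "ROCm build, no GPU"

print(rocm_torch_status())
```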

[-] GlowHuddy@lemmy.world 13 points 9 months ago

Yeah, I'm currently using that one, and I would happily stick with it, but it seems AMD hardware just isn't up to par with NVIDIA when it comes to ML.

Just take a look at the benchmarks for stable diffusion:

[-] AnUnusualRelic@lemmy.world 2 points 9 months ago* (last edited 9 months ago)

Aren't those things written specifically for NVIDIA hardware? (I used to be a developer, but this is not at all my area of expertise.)

[-] gabmartini@lemmy.world 8 points 9 months ago

It depends. GNOME on Wayland + NVIDIA runs great. But if you go the tiling window manager route, you will run into several issues in Sway and Hyprland: things like having to use a software cursor because (insert NVIDIA excuse here) and high CPU usage by just moving the mouse.

Well... I don't know, I would recommend GNOME on Wayland, or maybe KDE (I haven't tried the latest Plasma 6 release), but outside of those, I'd avoid it.

[-] Brewchin@lemmy.world 4 points 9 months ago* (last edited 9 months ago)

> high cpu usage by just moving the mouse

This sounds like co-operative multi-tasking on a single CPU. I remember this with Windows 3.1x around 30 years ago, where the faster you moved your mouse, the more impact it would have on anything else you were running. That text scrolling too fast? Wiggle the mouse to slow it down (etc, etc).

I thought we'd permanently moved on with pre-emptive multi-tasking, multi-threading and multiple cores... 🤦🏼‍♂️

[-] FluffyPotato@lemm.ee 7 points 9 months ago

Depends on the moon phase, your horoscope and your palm reading.

I have tried it on several setups and I get different results every time.

[-] waitmarks@lemmy.world 6 points 9 months ago

I don't have an answer to your NVIDIA question, but before you go and spend $2000 on an NVIDIA card, you should give the ROCm Docker containers a shot with your existing card. https://hub.docker.com/u/rocm https://github.com/ROCm/ROCm-docker

It's made my use of ROCm 1000x easier than actually installing it on my system, and it was sufficient for my use case of running inference on my 6700 XT.
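For reference, a typical invocation looks something like this (a sketch; `rocm/pytorch` is one of the images on the Docker Hub account linked above, and the exact image tag should match your ROCm version). `/dev/kfd` is the ROCm compute interface and `/dev/dri` exposes the GPUs:

```shell
# Hand the AMD GPU through to a ROCm container and check PyTorch can see it.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --security-opt seccomp=unconfined \
  --group-add video \
  rocm/pytorch:latest \
  python3 -c 'import torch; print(torch.cuda.is_available())'
```

Your user may also need to be in the `video` (and on some distros `render`) group for the device nodes to be accessible.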

[-] astrsk@piefed.social 5 points 9 months ago

Discord and Steam flicker / render weirdly, and I get massive input lag for seemingly no reason in almost any app. I stick with X11 and have little to no issues now that the Firefox offset-cursor regression was fixed. I'm running a 3090 on EndeavourOS.

[-] RedNight@lemmy.ml 2 points 9 months ago

VSCodium, and I suspect VS Code, flickers as well. X11 is good though

[-] StrawberryPigtails@lemmy.sdf.org 2 points 9 months ago

Mostly good, though I've got a bug on my desktop. It's a two-monitor setup, and if I run a game like Minecraft fullscreen on the second display and then close it, Plasma crashes to the login screen. It works fine if I disable the second display. That system is running Plasma 5 on Wayland on NixOS 23.11.

Otherwise, I occasionally run into an app that just doesn’t work, but that’s about all. Sometimes it’s a Plasma on Wayland thing (like with Element) sometimes not.

[-] RedWeasel@lemmy.world 2 points 9 months ago

Plasma 6 (Arch) is pretty excellent. There is the bug mentioned in other comments with XWayland that won't be (fully) fixed until the explicit sync Wayland protocol is finalized and implemented, but that should apply to any Wayland compositor.

As to Wayland vs X11: if you want to game or do anything else that is X11-only, use X11; otherwise, most everything else can use Wayland.

[-] Brewchin@lemmy.world 1 points 9 months ago

Thank you. My laptop is EndeavourOS + KDE 6, which is solid, and I've spent today preparing to nuke my gaming desktop PC (Ubuntu and an NVIDIA RTX card) to rebuild it with Endeavour tomorrow. The only doubt I had was about Wayland and NVIDIA with Lutris/Heroic/Proton gaming.

[-] warmaster@lemmy.world 2 points 9 months ago* (last edited 9 months ago)

I have an NVIDIA 3080 Ti and an AMD RX 7900 XTX.

The AMD runs great on any distro; I love it. The NVIDIA is such a huge pain that I installed Windows on that PC.

[-] mmus@lemmy.ml 6 points 9 months ago

Are you from the future? Please tell us more about the RDNA4 architecture if so :)

[-] warmaster@lemmy.world 1 points 9 months ago* (last edited 9 months ago)

Oops, typo. 7900 🤣 edited. Thx for the heads up.

[-] ikidd@lemmy.world 1 points 9 months ago

Get an eGPU enclosure, then you can swap it around easily.

[-] TerraRoot@sh.itjust.works 1 points 9 months ago

Can't answer your nvidia/wayland question since I'm not going back, so I'm just going to shill for my new fav bit of software.

Your 6700 XT is miles ahead of my RX 570. I could get mine working with some ROCm and PyTorch bodgery, but I found fastsdcpu was just a lot less hassle for the occasional image.

https://github.com/rupeshs/fastsdcpu

this post was submitted on 16 Mar 2024