this post was submitted on 24 Sep 2024
147 points (98.7% liked)

Linux

A community for everything relating to the Linux operating system

top 17 comments
[–] Chewy7324@discuss.tchncs.de 24 points 1 month ago* (last edited 1 month ago) (3 children)

It'd be great if this meant SR-IOV for all GPUs, but this seems to only allow sharing a single GPU with multiple guests. And even then, with most of the driver living on the GPU itself, this might not help regular consumer GPUs at all (features being disabled in firmware). But I really don't know much about what this actually means.

[–] Telorand@reddthat.com 7 points 1 month ago (1 children)

Even if it is minor (and like you, I don't know whether it is or not), Nvidia opening their source, even if only little by little, is probably still a good thing.

[–] Chewy7324@discuss.tchncs.de 11 points 1 month ago (1 children)

Agreed. It seems like Nvidia is under pressure from their commercial customers for better, directly integrated open source drivers.

There is a lot of demand for this kind of simplified virtualization infrastructure on the host side.

[–] emuspawn@orbiting.observer 9 points 1 month ago

Valve has moved the Linux agenda pretty far forward. I would not be surprised if some of the pressure is from Valve's ARM-based improvements. I can see why vGPU passthrough support would be desirable for certain computing applications... or just emulation.

[–] biscuitswalrus@aussie.zone 5 points 1 month ago

SR-IOV works already though? That's not needed for this. The motherboard presents the PCI bus to the guest regardless of what's plugged in. Works fine.

This is for when you want many guests to share graphics by partitioning a GPU. The host still retains the card and presents a slice of it to each guest. You have to partition the VRAM into equal shares though, so it's generally only useful in VDI, where you want to split an RTX A6000-class card (48 GB) across, say, six guests with 8 GB of VRAM each: they share the GPU but keep their individual video RAM. Economy of scale can work out in graphics or maybe ML situations.

It's not so useful at home, since you'll probably have something like an RTX 3080 with 10-12 GB of VRAM, and you wouldn't want to split it below 8 GB for modern games; partitions need to be equally sized, so a 10 GB card split two ways = 2x5 GB, which would probably be a poor experience. Lots of frame stutters as it shuffles data between system RAM and video RAM.

Hope that helps. Unless this technology unlocks better partitioning, it's more about opening up VDI and machine learning in a fully open source context like Proxmox, rather than the feature being locked behind Hyper-V, VMware or Citrix Hypervisor/Xen plus a big yearly license. Maybe it still needs that yearly license.
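To make the "partitioning" concrete: on the host this kind of vGPU slicing goes through the kernel's VFIO mediated-device (mdev) interface in sysfs. A minimal sketch is below; the PCI address and profile ID are made-up placeholders, and it assumes a vGPU-capable host driver is already loaded.

```python
#!/usr/bin/env python3
"""Rough sketch of host-side vGPU partitioning via the kernel's VFIO
mediated-device (mdev) sysfs interface. Run as root; the PCI address and
profile ID below are placeholders, and a vGPU-capable driver must already
be loaded on the host."""
import uuid
from pathlib import Path

GPU_PCI_ADDR = "0000:41:00.0"  # hypothetical PCI address of the physical GPU
types_dir = Path(f"/sys/bus/pci/devices/{GPU_PCI_ADDR}/mdev_supported_types")

# List the vGPU profiles the driver exposes and how many instances are left.
for profile in sorted(types_dir.iterdir()):
    name = (profile / "name").read_text().strip()
    avail = (profile / "available_instances").read_text().strip()
    print(f"{profile.name}: {name} ({avail} instances available)")

# Create one mediated device (one vGPU slice) by writing a UUID to 'create'.
chosen_profile = "nvidia-259"   # placeholder; the real IDs depend on the card
mdev_uuid = str(uuid.uuid4())
(types_dir / chosen_profile / "create").write_text(mdev_uuid)
print(f"created vGPU {mdev_uuid} under /sys/bus/mdev/devices/")
```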

[–] biscuitswalrus@aussie.zone 3 points 1 month ago

This is possible now, but on Xen or VMware you need to buy an NVIDIA license to unlock the feature. You can trial it for a minute in a lab, but you can't give 4 guests 2 GB of VRAM each on your graphics card without NVIDIA's specialist proprietary driver on both the host and the guest.

For VDI, where you can buy 48 GB RTX A6000 cards and give each user (architects, for example) about 8 GB, you can run six guests concurrently per card. At a few hundred architects that scales better than buying a pile of $5000 workstations that struggle with WFH.

For a home user, maybe being able to split a standard RTX 3070 with its 8 GB between your two kids might be OK? Probably not, though.

Right now I have a hacky way, not really supported by NVIDIA, to split a graphics card between two guest VMs, but it's neither license-compliant nor what I'd call "production ready". I'd like Proxmox to be able to handle this out of the box, because the support is already in the kernel.
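For anyone curious, handing one of those slices to a plain QEMU/KVM guest is just a VFIO device pointed at the mdev's sysfs node; a sketch is below, where the mdev UUID and disk image are placeholders (Proxmox wraps the same mechanism in its own config).

```python
#!/usr/bin/env python3
"""Sketch: attach an already-created vGPU slice (mediated device) to a
QEMU/KVM guest via VFIO. The mdev UUID and disk image are placeholders."""
import subprocess

MDEV_UUID = "a1b2c3d4-0000-0000-0000-000000000000"  # placeholder UUID

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",
    "-cpu", "host",
    "-m", "8192",                               # 8 GiB of guest RAM
    "-drive", "file=guest.qcow2,if=virtio",     # placeholder disk image
    # The vGPU slice is exposed to the guest as a VFIO PCI device backed
    # by its mdev sysfs node:
    "-device", f"vfio-pci,sysfsdev=/sys/bus/mdev/devices/{MDEV_UUID}",
], check=True)
```

The guest still needs NVIDIA's guest driver on top of that to actually use the slice.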

I've no idea what this means for licensing though. The yearly license just to be allowed to use the driver is stupidly expensive, and the RTX A-series cards are already dumb money.

Either way it's a good thing, but probably not much news for the average enthusiast

[–] hubobes@sh.itjust.works 7 points 1 month ago (2 children)

Would that allow me to split my main GPU between my host and my VM so I can play through Looking Glass? I'm currently running a 4070 Ti and a 4060 Ti in the same tower, which is quite stupid as I basically have one of them idling all the time.

[–] JAWNEHBOY@reddthat.com 4 points 1 month ago

Also trying to avoid this setup

[–] possiblylinux127@lemmy.zip 3 points 1 month ago (1 children)
[–] hubobes@sh.itjust.works 1 points 1 month ago

I don't, but I also want to be able to play on the host and in the VM whenever I want, so a weak GPU for the host was out of the question. I first tried using a 970 as the host GPU and handing the 4070 Ti to the VM only when the VM started up, but that never worked.

[–] trolololol@lemmy.world 4 points 1 month ago (2 children)

Can someone dumb down what this means for people at home? Any impact on Linux gaming?

[–] Fizz@lemmy.nz 3 points 1 month ago* (last edited 1 month ago) (1 children)

It means you can split your GPU into multiple vGPUs and pass them through to virtual machines, I believe. Similar to what we already do with CPUs.

[–] Cort@lemmy.world 5 points 1 month ago (1 children)

So it's easier to do 2 gamers on 1 PC?

[–] x00za@lemmy.dbzer0.com 6 points 1 month ago
[–] jbk@discuss.tchncs.de 2 points 1 month ago (1 children)

Pretty much nothing, except that NVIDIA doesn't care about the average gamer on Linux, only corps

[–] trolololol@lemmy.world 2 points 1 month ago

Ooh so nothing to see here

[–] AI_toothbrush@lemmy.zip 1 points 1 month ago

Isn't this literally the reason SOG Mutahar said he would switch to Windows? This is great news.