this post was submitted on 27 Jul 2024
344 points (99.7% liked)

Linux

top 31 comments
[–] GenderNeutralBro@lemmy.sdf.org 109 points 3 months ago (6 children)

As a reminder, the same (closed-source) user-space components for OpenGL / OpenCL / Vulkan / CUDA are used regardless of which NVIDIA kernel driver option you pick in their official driver stack.

CUDA hell remains. :(
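One quick way to see which kernel-side flavor is actually installed is to inspect the module's license field. A minimal sketch in Python, assuming `modinfo` is on the PATH and that the open modules still identify themselves with a "Dual MIT/GPL" license while the proprietary ones report "NVIDIA":

```python
import subprocess

def nvidia_module_license() -> str:
    """Return the license field of the installed nvidia kernel module."""
    out = subprocess.run(
        ["modinfo", "--field", "license", "nvidia"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    # "Dual MIT/GPL" -> open kernel modules; "NVIDIA" -> proprietary ones
    # (assumed license strings; verify against your installed driver version)
    print(f"nvidia.ko license: {nvidia_module_license()}")
```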

[–] possiblylinux127@lemmy.zip 53 points 3 months ago (1 children)

AMD needs to get their ducks in a row. They already have the advantage of not being Nvidia.

[–] john89@lemmy.ca 3 points 3 months ago (1 children)

They already have the advantage of not being Nvidia

That's just because they release worse products.

If AMD had Nvidia's marketshare, they would be just as scummy as the business climate allows.

In fact, AMD piggybacks off of Nvidia's scumbaggery to charge more for their GPUs rather than engage in an actual price war.

[–] Cornelius@lemmy.ml 4 points 3 months ago (1 children)

Who would've thunk that big, for-profit tech companies don't care about us :T

[–] john89@lemmy.ca 1 points 3 months ago

It's all by design.

[–] istanbullu@lemmy.ml 28 points 3 months ago (1 children)

It's breaking down. PyTorch supports ROCm now.
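For a sense of what that support looks like in practice, a minimal sketch assuming a ROCm build of PyTorch is installed (ROCm builds expose HIP through the `torch.cuda` namespace, so the same code path serves both vendors):

```python
import torch

if torch.cuda.is_available():
    # torch.version.hip is set on ROCm builds; torch.version.cuda on CUDA builds
    backend = "ROCm/HIP" if torch.version.hip else "CUDA"
    print(f"{backend} device: {torch.cuda.get_device_name(0)}")
    x = torch.randn(1024, 1024, device="cuda")  # lands on the AMD GPU under ROCm
    y = x @ x
    print(f"matmul OK, result on {y.device}")
else:
    print("no GPU backend available")
```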

[–] ProdigalFrog@slrpnk.net 20 points 3 months ago (1 children)

ROCm is its own hell (unless they've finally put some resources into it in the past couple of years)

[–] Cornelius@lemmy.ml 2 points 3 months ago

They put in the absolute minimum amount of resources for it.

It's also littered with bugs, as the ZLUDA project has noted.

[–] filister@lemmy.world 16 points 3 months ago (1 children)

Yes, CUDA is the only reason I consider NVIDIA. I really hate this company, but the AMD tech stack is really inferior.

[–] laurelraven@lemmy.blahaj.zone 8 points 3 months ago

I've heard this but don't really understand it... At a high level, what makes CUDA so much better?

[–] phoenixz@lemmy.ca 4 points 3 months ago (1 children)

So is CUDA good or bad?

I keep reading that it's hell, but also the best. Apparently it's the single reason why Nvidia is so big in AI, but it sucks.

What is it?

[–] GenderNeutralBro@lemmy.sdf.org 3 points 3 months ago* (last edited 3 months ago) (1 children)

Both.

The good: CUDA is required for maximum performance and compatibility with machine learning (ML) frameworks and applications. It is a legitimate reason to choose Nvidia, and if you have an Nvidia card you will want to make sure you have CUDA acceleration working for any compatible ML workloads.

The bad: Getting CUDA to actually install and run correctly is a giant pain in the ass for anything but the absolute most basic use case. You will likely need to maintain multiple framework versions, because new ones are not backwards-compatible. You'll need to source custom versions of Python modules compiled against specific versions of CUDA, which opens a whole new circle of Dependency Hell. And you know how everyone and their dog publishes shit with Docker now? Yeah, have fun with that.
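To make the version-matching problem concrete, a small diagnostic sketch (assuming a CUDA build of PyTorch; `torch.version.cuda` reports the CUDA runtime the wheel was compiled against):

```python
import torch

# Each PyTorch wheel is built against one specific CUDA runtime. A driver
# that's too old for that runtime typically fails at kernel launch rather
# than at import, so it pays to surface the mismatch up front.
print(f"torch {torch.__version__}, built against CUDA {torch.version.cuda}")

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible: check the driver installation")

name = torch.cuda.get_device_name(0)
cap = ".".join(map(str, torch.cuda.get_device_capability(0)))
print(f"GPU 0: {name} (compute capability {cap})")
```

Pinning wheels to a matching runtime (e.g. `pip install torch --index-url https://download.pytorch.org/whl/cu121`) is the usual workaround, and juggling those pins per project is exactly the treadmill described above.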

That said, AMD's equivalent (ROCm) is just as bad, and AMD is lagging about a full generation behind Nvidia in terms of ML performance.

The easy way is to just use OpenCL. But that's not going to give you the best performance, and it's not going to be compatible with everything out there.
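A minimal sketch of that OpenCL route, assuming the `pyopencl` package and at least one vendor's OpenCL ICD are installed:

```python
import pyopencl as cl  # pip install pyopencl

# OpenCL is vendor-neutral: the same enumeration works for NVIDIA, AMD,
# and Intel devices through whichever ICDs are present on the system.
for platform in cl.get_platforms():
    print(f"platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        mem_mib = device.global_mem_size // 2**20
        print(f"  device: {device.name}, {mem_mib} MiB global memory")
```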

[–] Swedneck@discuss.tchncs.de 1 points 3 months ago

almost sounds like god doesn't want us doing machine learning

[–] magikmw@lemm.ee 2 points 3 months ago

The fact that "cuda" means "wonders" in Polish has been living in my mind rent-free for several days after I read the Nvidia news.

[–] Supermariofan67@programming.dev 1 points 3 months ago (1 children)

I think this will change. Nvidia hired devs to work on Nouveau, NVK is coming along, etc.

[–] leopold@lemmy.kde.social 4 points 3 months ago (1 children)

Last I checked, there is no evidence Nvidia has hired anyone to work on Nouveau.

[–] Supermariofan67@programming.dev 0 points 3 months ago (1 children)
[–] leopold@lemmy.kde.social 1 points 3 months ago* (last edited 3 months ago)

Right, I'm well aware that that article is the reason why a bunch of people have been making the unsubstantiated claim that Nvidia has hired people to work on Nouveau.

Nvidia hired the former lead Nouveau maintainer and he contributed a bunch of patches a couple of months ago after they hired him. That was his first contribution since stepping down and I'm fairly certain it was his last because there's no way Phoronix would miss the opportunity to milk this some more if they could. He had said when stepping down that he was open to contributing every once in a while, so this wasn't very surprising either way. To be clear, it is not evidence that he or anyone else was hired by Nvidia to work on Nouveau. Otherwise, I'd like to ask what he's been doing since, because that was over three months ago.

[–] boredsquirrel@slrpnk.net 31 points 3 months ago* (last edited 3 months ago)

Well... it is an out-of-tree kernel driver made by the same company, and the userspace drivers are still proprietary.

This says NOTHING other than "wow, NVIDIA can write good code (open source) that doesn't suck"?

[–] sunzu@kbin.run 14 points 3 months ago (2 children)

How is it different? Wouldn't it just be the same software with the source code available?

[–] SMillerNL@lemmy.world 27 points 3 months ago (1 children)

It's not. They're not open-sourcing their existing driver; they've made a new open-source driver.

[–] sunzu@kbin.run 6 points 3 months ago (3 children)

Is there a reason to reinvent the wheel?

[–] seaQueue@lemmy.world 29 points 3 months ago (1 children)

Usually this is done for licensing reasons. They probably don't want the old code caught up in the open license they're shipping the new driver under.

My understanding is that the new open driver separates proprietary code into a black box binary blob that isn't distributed under an open source license. I'm guessing that they've been very careful not to include anything they want to keep closed into the new open driver, whereas the old driver wasn't written with this separation in mind.

[–] sunzu@kbin.run 6 points 3 months ago

I was wondering about what they were doing with their "secret sauce", thanks for explaining.

[–] CMDR_Horn@lemmy.world 10 points 3 months ago

Control, precedent, bean-counter analysis, etc. Pick your poison.

[–] Supermariofan67@programming.dev 3 points 3 months ago

Some of it probably comes from other companies that are unable or unwilling to relicense it, even if Nvidia wanted to.

[–] KarnaSubarna@lemmy.ml 4 points 3 months ago* (last edited 3 months ago)

Anyone tried this beta version yet? Any idea how stable it is?

[–] Unyieldingly@lemmy.world 2 points 3 months ago

I've been using the open kernel driver on my Debian workstation, and it has worked far better than the default driver with the Debian backports kernel. I installed it from the Nvidia CUDA repo.

[–] Molecular0079@lemmy.world 2 points 3 months ago

Performance parity? Heck no, not until this bug with the GSP firmware is solved: https://github.com/NVIDIA/open-gpu-kernel-modules/issues/538