[–] Juujian@lemmy.world 54 points 11 months ago (1 children)

That sounds cool... Wish the article said what it does.

[–] AlmightySnoo@lemmy.world 95 points 11 months ago* (last edited 11 months ago) (2 children)

Double and triple buffering are techniques from GPU rendering (they're also used in GPU compute, though only up to double buffering there, since triple buffering is pointless when running headless).

Without them, if you want to do some number crunching on your GPU and your data lives in host ("CPU") memory, you'd basically transfer a chunk of that data from the host to a buffer in device (GPU) memory and then run your GPU algorithm on it. There's one big issue here: during the memory transfer your GPU is idle, because you're waiting for the copy to finish, so you're wasting precious GPU compute.
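
In CUDA-ish code the naive version looks roughly like this (just a sketch: process_chunk, the sizes and the missing error handling are all placeholders, not anyone's real code):

```cuda
// Naive version: one device buffer, synchronous copy, then the kernel.
// The GPU does nothing useful while each cudaMemcpy is in flight.
#include <cuda_runtime.h>

__global__ void process_chunk(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;  // stand-in for the real number crunching
}

void run_naive(const float* host_chunks, int num_chunks, int chunk_size) {
    float* d_buf;
    cudaMalloc((void**)&d_buf, chunk_size * sizeof(float));

    for (int c = 0; c < num_chunks; ++c) {
        // blocking host->device copy: the GPU is idle until it completes
        cudaMemcpy(d_buf, host_chunks + c * chunk_size,
                   chunk_size * sizeof(float), cudaMemcpyHostToDevice);
        process_chunk<<<(chunk_size + 255) / 256, 256>>>(d_buf, chunk_size);
        cudaDeviceSynchronize();  // wait for the kernel before the next chunk
    }
    cudaFree(d_buf);
}
```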

So GPU programmers came up with a trick to reduce or even hide that latency: double buffering. As the name suggests, the idea is to allocate not one but two buffers of the same size on your GPU; call them buffer_0 and buffer_1. If your algorithm is iterative, and you have a bunch of chunks in host memory that you want to run the same GPU code on, then at the first iteration you take a chunk from host memory, send it to buffer_0, and run your GPU code asynchronously on that buffer. While it's running, your CPU gets control back and can do something else, so you immediately prepare the next iteration: pick another chunk and send it asynchronously to buffer_1. When the previous asynchronous kernel run finishes, you run the same kernel again, this time on buffer_1, again asynchronously. Then you copy, asynchronously again, another chunk from the host, this time into buffer_0, and you keep swapping the buffers like this for the rest of your loop.
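
Here's roughly the same workload with double buffering, sketched with two CUDA streams (again, the names and sizes are made up, and pinned host memory allocation and error handling are left out):

```cuda
// Double buffering: two device buffers and two streams, so the copy into
// one buffer can overlap with the kernel running on the other.
#include <cuda_runtime.h>

__global__ void process_chunk(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;  // same stand-in kernel as above
}

void run_double_buffered(const float* host_chunks, int num_chunks, int chunk_size) {
    float* d_buf[2];
    cudaStream_t stream[2];
    for (int i = 0; i < 2; ++i) {
        cudaMalloc((void**)&d_buf[i], chunk_size * sizeof(float));
        cudaStreamCreate(&stream[i]);
    }

    for (int c = 0; c < num_chunks; ++c) {
        int b = c % 2;  // swap between buffer_0 and buffer_1 every iteration
        // async copy of the next chunk; the host memory must be pinned
        // (cudaMallocHost) for this to really overlap with the other stream
        cudaMemcpyAsync(d_buf[b], host_chunks + c * chunk_size,
                        chunk_size * sizeof(float),
                        cudaMemcpyHostToDevice, stream[b]);
        // the kernel is queued on the same stream, so it waits for its own
        // copy, but not for whatever is still running on the other buffer
        process_chunk<<<(chunk_size + 255) / 256, 256, 0, stream[b]>>>(
            d_buf[b], chunk_size);
        // the CPU gets control back here immediately and loops around to
        // queue the copy for the other buffer
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < 2; ++i) {
        cudaStreamDestroy(stream[i]);
        cudaFree(d_buf[i]);
    }
}
```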

Now some GPU programmers don't just want to compute stuff, they also want to render it on the screen. So what happens when they try to copy from one of those buffers to the screen? It depends: if they copy synchronously, we get the initial latency problem back. If they copy asynchronously, the host->GPU copy and/or the GPU kernel will keep overwriting buffers before they're done being displayed on the screen, which causes tearing.

So those programmers pushed the double buffering idea a bit further: just add an additional buffer to hide the latency from sending stuff to the screen, and that gives us triple buffering. You can guess how this one will work because it's exactly the same principle.
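
If it helps, the bookkeeping looks something like this toy host-side sketch (nothing here is real display code, it just shows the three roles rotating):

```cuda
// Triple buffering, schematically: at any moment one buffer is being shown
// on screen, one holds the last finished frame, and one is being written
// into, so the producer never has to wait for the display.
#include <cstdio>

int main() {
    float buffer[3][4] = {};  // three small stand-in buffers
    int shown = 0, ready = 1, writing = 2;

    for (int frame = 0; frame < 6; ++frame) {
        buffer[writing][0] = float(frame);  // "render" into the free buffer

        // rotate roles: the freshly written buffer is the next one the
        // display picks up, and the previously shown one becomes free to
        // overwrite without tearing
        int prev_shown = shown;
        shown   = ready;
        ready   = writing;
        writing = prev_shown;

        printf("frame %d: showing buffer %d, next up %d, writing into %d\n",
               frame, shown, ready, writing);
    }
    return 0;
}
```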

[–] QuazarOmega@lemy.lol 7 points 11 months ago

I love this explanation, I thought I'd never understand

[–] MonkderZweite@feddit.ch 1 points 11 months ago* (last edited 11 months ago) (3 children)

And why does a desktop environment need to do that?

[–] Chewy7324@discuss.tchncs.de 20 points 11 months ago (1 children)

If the system can't keep up with an animation, e.g. GNOME's overview, the fps momentarily halves because of double-buffered vsync. This is perceived as stutter.

With triple-buffered vsync the fps only drops a little (e.g. 60 fps -> 55 fps), which isn't as big a drop, so the stutter isn't as noticeable (if it's noticeable at all).
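
Back-of-the-envelope numbers for why it halves, assuming a 60 Hz display and frames that take just over the 16.7 ms budget (the 17 ms figure is made up for illustration):

```cuda
#include <cstdio>

int main() {
    const double refresh_ms = 1000.0 / 60.0;  // 16.7 ms budget per refresh
    const double frame_ms   = 17.0;           // slightly too slow to keep up

    // double-buffered vsync: a late frame has to wait for the *next* vblank,
    // so it ends up occupying two refresh intervals -> fps halves
    double double_buffered_fps = 1000.0 / (2.0 * refresh_ms);  // 30 fps

    // triple-buffered vsync: rendering continues into the third buffer,
    // so throughput is limited by the render time itself
    double triple_buffered_fps = 1000.0 / frame_ms;            // ~59 fps

    printf("double buffering: %.0f fps, triple buffering: %.0f fps\n",
           double_buffered_fps, triple_buffered_fps);
    return 0;
}
```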

[–] MonkderZweite@feddit.ch 3 points 11 months ago* (last edited 11 months ago)

Maybe make the animation a bit simpler...?

Less animation is usually better UX for something that's used often, unless it's there to hide the slowness of something else.

[–] jmcs@discuss.tchncs.de 5 points 11 months ago (1 children)

To reduce input lag and provide smoother visuals.

[–] MonkderZweite@feddit.ch 2 points 11 months ago (2 children)

You say the animations are too much?

[–] Moltz@lemm.ee 2 points 11 months ago* (last edited 11 months ago)

Lol, why own up to adding animations the system can't handle when you can blame app and web devs? Gnome users always know where the blame should be laid, and it's never Gnome.

[–] jmcs@discuss.tchncs.de 2 points 11 months ago

If by animations you mean smoothly moving the mouse and windows while badly optimized apps and websites are rendering, yes.

[–] AlmightySnoo@lemmy.world 1 points 11 months ago* (last edited 11 months ago) (1 children)

Biased opinion here, since I haven't used GNOME since the switch to version 3 and I dislike it a lot: the animations are so slow that they demand a good GPU with fast VRAM to hide it, and thus they need to borrow techniques from game/GPU programming to make GNOME more fluid for users with less beefy cards.

[–] Moltz@lemm.ee 2 points 11 months ago* (last edited 11 months ago)

Not only slow, it drops frames constantly. Doesn't matter how good your hardware is.

There's always the Android route: why fix the animations when you can just add high framerate screens to all the hardware to hide the jank? Ah, who am I kidding, Gnome wouldn't know how to properly support high framerates across multiple monitors either. How many years did fractional scaling take?