this post was submitted on 18 Jul 2024
805 points (99.5% liked)

Technology


Companies are going all-in on artificial intelligence right now, investing millions or even billions into the area while slapping the AI initialism on their products, even when doing so seems strange and pointless.

Heavy investment and increasingly powerful hardware tend to mean more expensive products. To discover if people would be willing to pay extra for hardware with AI capabilities, the question was asked on the TechPowerUp forums.

The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn't know, while just under 2,000 voters said yes.

[–] BlackLaZoR@kbin.run 50 points 2 months ago (4 children)

There's really no point unless you work in specific fields that benefit from AI.

Meanwhile every large corpo tries to shove AI into every possible place they can. They'd introduce ChatGPT to your toilet seat if they could

[–] br3d@lemmy.world 23 points 2 months ago (2 children)

"Shits are frequently classified into three basic types..." and then gives 5 paragraphs of bland guff

[–] Krackalot@discuss.tchncs.de 21 points 2 months ago

With how much scraping of reddit they do, there's no way it doesn't try ordering a poop knife off of Amazon for you.

[–] catloaf@lemm.ee 2 points 2 months ago (1 children)

It's seven types, actually, and it's called the Bristol scale, after the Bristol Royal Infirmary where it was developed.

[–] br3d@lemmy.world 1 points 1 month ago

I know. But I was satirising GPT's bland writing style, not providing facts

[–] x4740N@lemm.ee 11 points 2 months ago (3 children)

Imagining a chatgpt toilet seat made me feel uncomfortable

[–] Davel23@fedia.io 3 points 2 months ago (1 children)
[–] Lost_My_Mind@lemmy.world 2 points 2 months ago

Aw maaaaan. I thought you were going to link that youtube sketch I can't find anymore. Hide and go poop.

[–] BlackLaZoR@kbin.run 1 points 2 months ago (2 children)

Don't worry, if Apple does it, it will sell like fresh cookies worldwide.

[–] Arbiter@lemmy.world 2 points 2 months ago

Idk, they can’t even sell VR.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 4 points 2 months ago (2 children)

Someone did a demo recently of AI acceleration for 3D upscaling (think DLSS or AMD's equivalent) and it showed a nice boost in performance. It could be useful in the future.

I think it's kind of like ray tracing. We don't have a real use for it now, but eventually someone will figure out something it's actually good for and use it.

[–] nekusoul@lemmy.nekusoul.de 3 points 2 months ago* (last edited 2 months ago) (1 children)

AI acceleration for 3d upscaling

Isn't that not only similar to, but exactly what DLSS already is? A neural network that upscales games?

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 2 points 2 months ago (1 children)

But instead of relying on the GPU to power it, a dedicated AI chip did the work. It had its own distinct chip on the graphics card that would handle the upscaling.

I forget who demoed it, and searching for anything related to "AI" and "upscaling" just gets buried under what they're already doing.

[–] barsoap@lemm.ee 4 points 2 months ago* (last edited 2 months ago) (2 children)

That's already the Nvidia approach; upscaling runs on the tensor cores.

And no, it's not something magical, it's just matrix math. AI workloads are lots of convolutions on gigantic, low-precision, floating-point matrices. Low precision because neural networks are robust against random perturbation, and more rounding is exactly that: random perturbation. There's no point in spending electricity and heat on high precision if it doesn't make the output any better.

The kicker? Those tensor cores are less complicated than ordinary GPU cores. For general-purpose hardware, and that includes consumer-grade GPUs, it's way more sensible to make sure the ALUs can deal with 8-bit floats and leave everything else the same. That stuff is going to be standard by the next generation of even potatoes: every SoC with an included GPU has enough oomph to sensibly run reasonable inference loads. And by "reasonable" I mean actually quite big; as far as I'm aware, e.g. Firefox's built-in translation runs on the CPU, and the models are small enough.
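
To make the "more rounding is just a small random perturbation" point concrete, here's a minimal numpy sketch (my own illustration, not any vendor's code): it snaps a layer's weights onto an 8-bit-style grid and compares the matmul output against full precision; the relative error stays small.

```python
# Minimal sketch: simulate low-precision weights and measure the damage.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 512)).astype(np.float32)             # one input vector
w = (rng.standard_normal((512, 512)) * 0.05).astype(np.float32)  # one dense layer's weights

def quantize(a, bits=8):
    """Snap values onto 2**bits evenly spaced levels across the tensor's range."""
    lo, hi = a.min(), a.max()
    scale = (hi - lo) / (2**bits - 1)
    return np.round((a - lo) / scale) * scale + lo

y_full = x @ w              # full-precision matmul
y_low = x @ quantize(w)     # same matmul with coarsely rounded weights

rel_err = np.linalg.norm(y_full - y_low) / np.linalg.norm(y_full)
print(f"relative output error with 8-bit weights: {rel_err:.3%}")
```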

Nvidia, OTOH, is very much in the market for AI accelerators and figured it could corner the upscaling market and sell another new generation of cards by making its software rely on those cores, even though it could run on the other cores. As AMD demonstrated, their stuff also runs on Nvidia hardware.

What's actually special sauce in that area are the RT cores, that is, accelerators for ray casting through BSP trees. That's indeed specialised hardware, but those things are nowhere near fast enough to compute enough rays for even remotely tolerable outputs, which is where all that upscaling/denoising comes into play.
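
For a sense of what those cores actually accelerate, here's an illustrative slab test in Python: the ray-versus-bounding-box check that gets run enormous numbers of times per frame while walking an acceleration structure. (Modern GPUs typically walk BVHs of axis-aligned boxes; this is just a sketch of the math, not how the silicon works.)

```python
# Illustrative only: the inner-loop test that ray-tracing hardware accelerates.
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does origin + t*direction (t >= 0) hit the axis-aligned box?"""
    inv_dir = 1.0 / direction               # assumes no zero components in direction
    t1 = (box_min - origin) * inv_dir
    t2 = (box_max - origin) * inv_dir
    t_near = max(np.minimum(t1, t2).max(), 0.0)
    t_far = np.maximum(t1, t2).min()
    return t_near <= t_far

origin = np.array([0.0, 0.0, -5.0])
direction = np.array([0.05, 0.05, 1.0])     # pointing roughly towards +z
box_min, box_max = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
print(ray_hits_aabb(origin, direction, box_min, box_max))   # True: the ray hits the box
```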

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 2 points 1 month ago (1 children)

Found it.

https://www.neowin.net/news/powercolor-uses-npus-to-lower-gpu-power-consumption-and-improve-frame-rates-in-games/

I can't find a picture of the PCB though; that might have been a pre-reveal leak, and now that it's officially revealed, good luck finding it.

[–] AdrianTheFrog@lemmy.world 2 points 1 month ago (1 children)

Having to send full frames off of the GPU for extra processing has got to come with some extra latency/problems compared to just doing it on the GPU itself... and I'd be shocked if they have the motion vectors and other engine data that DLSS uses, which would require games to be specifically modified for this adaptation. IDK, but I don't think we have enough details about this to really judge whether it's useful or not, although I'm leaning towards 'not' for this particular implementation. They never showed any actual comparisons to DLSS either.
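
A rough back-of-envelope on that latency point, with assumed numbers (uncompressed 4K RGBA frame, ~25 GB/s of effective PCIe 4.0 x16 bandwidth, a 240 FPS target; none of this is from the article):

```python
# Back-of-envelope: cost of copying a full frame off the card and back.
width, height, bytes_per_pixel = 3840, 2160, 4      # uncompressed 4K RGBA8 frame
frame_bytes = width * height * bytes_per_pixel      # ~33 MB

pcie_bytes_per_s = 25e9         # assumed ~25 GB/s effective PCIe 4.0 x16 bandwidth
round_trip_ms = 2 * frame_bytes / pcie_bytes_per_s * 1e3
frame_budget_ms = 1000 / 240    # per-frame budget at an assumed 240 FPS

print(f"frame size: {frame_bytes / 1e6:.1f} MB")
print(f"copy out and back: {round_trip_ms:.2f} ms of a {frame_budget_ms:.2f} ms frame budget")
```

Even under these generous assumptions, the transfer alone eats a sizeable chunk of the frame budget before the external chip does any work.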

As a side note, I found this other article on the same topic where they obviously didn't know what they were talking about and mixed up frame rates and power consumption; it's very entertaining to read:

The NPU was able to lower the frame rate in Cyberpunk from 263.2 to 205.3, saving 22% on power consumption, and probably making fan noise less noticeable. In Final Fantasy, frame rates dropped from 338.6 to 262.9, resulting in a power saving of 22.4% according to PowerColor's display. Power consumption also dropped considerably, as it shows Final Fantasy consuming 338W without the NPU, and 261W with it enabled.

[–] nekusoul@lemmy.nekusoul.de 1 points 1 month ago* (last edited 1 month ago) (1 children)

I've been trying to find some better/original sources [1] [2] [3], and from what I can gather it's even worse. It's not even an upscaler of any kind; it apparently uses an NPU just to control clocks and fan speeds to reduce power draw, dropping FPS by ~10% in the process.

So yeah, I'm not really sure why they needed an NPU to figure out that running a GPU at its limit has always been wildly inefficient. Outside of getting that investor money of course.

[–] AdrianTheFrog@lemmy.world 2 points 1 month ago

Ok, I guess it's just kinda similar to dynamic overclocking/underclocking with a dedicated NPU. I don't really see why a tiny $2 microcontroller or just the CPU can't accomplish the same task, though.
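
For illustration, the control loop being described could be as simple as the sketch below; the numbers and thresholds are made up, and this is not PowerColor's actual algorithm.

```python
# Purely illustrative: nudge a clock limit toward a power target, one step per tick.
def adjust_clock_limit(clock_mhz, power_w, power_target_w,
                       step_mhz=15, min_mhz=1500, max_mhz=2500):
    """Return a new clock limit that steers power draw toward the target."""
    if power_w > power_target_w:
        clock_mhz -= step_mhz           # over budget: back off the clock
    elif power_w < 0.95 * power_target_w:
        clock_mhz += step_mhz           # comfortably under budget: speed back up
    return max(min_mhz, min(max_mhz, clock_mhz))

# Example tick: drawing 320 W against a 260 W target -> lower the clock limit.
print(adjust_clock_limit(clock_mhz=2400, power_w=320, power_target_w=260))  # 2385
```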

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 1 points 1 month ago (1 children)

Nvidia's tensor cores are inside the GPU; this was outside the GPU but on the same card (the PCB looked like an abomination). If I remember right, in total it used slightly less power but performed about 30% faster than normal DLSS.

[–] AdrianTheFrog@lemmy.world 1 points 1 month ago

From the articles I've found, it sounds like they're comparing it to native...

[–] AdrianTheFrog@lemmy.world 1 points 1 month ago

We have plenty of real uses for ray tracing right now, from Blender to whatever that Avatar game was doing, to Lumen, to partial RT, to full path tracing. You just can't do real-time GI with any semblance of fine detail without RT from what I've seen (although the Lumen SDF mode gets pretty close).

Although the RT cores themselves are more debatably useful, they still give a decent performance boost most of the time over "software" RT.

[–] Lost_My_Mind@lemmy.world 2 points 2 months ago* (last edited 2 months ago)

Which would be appropriate, because with AI, there's nothing but shit in it.