this post was submitted on 08 Jan 2025
81 points (96.6% liked)

Hardware


many people seem to be excited about nVidia's new line of GPUs, which is reasonable, since at CES they really made it seem like these new bois are insane for their price.

Jensen (the CEO guy) said that, with the power of AI, the 5070 at a sub-$600 price is in the same class as the 4090, which sits at an over-$1500 price point.

Here's my idea: they talk a lot about upscaling, generating frames and pixels, and so on. I think what they mean by the two having similar performance is that the 4090 with no AI upscaling achieves about the same frame rate as the 5070 with DLSS and whatever else enabled.
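
A quick back-of-the-napkin sketch of how I read that claim (everything here is hypothetical: the fps numbers are made up, and the 4x factor is just my reading of the multi-frame-generation marketing):

```python
# Hypothetical sketch: how "5070 == 4090" can be true on a marketing slide.
# Assumption: DLSS 4 multi frame generation displays ~4 frames for every
# traditionally rendered frame.

native_fps_4090 = 120  # 4090 rendering every frame natively (made up)
native_fps_5070 = 30   # 5070 rendering the same scene natively (made up)

mfg_factor = 4  # 1 rendered frame + 3 AI-generated frames (marketing claim)
displayed_fps_5070 = native_fps_5070 * mfg_factor

print(f"4090 native:         {native_fps_4090} fps")
print(f"5070 native:         {native_fps_5070} fps")
print(f"5070 with frame gen: {displayed_fps_5070} fps")  # "matches" on paper

# The catch: input latency still tracks the 30 rendered fps, and the
# generated frames are where the artifacts come from.
```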

So yes, for pure "gaming" performance, in games that support it, the GPU will deliver the same performance. But there will be artifacts.

For ANYTHING besides these "gaming" use cases, it will probably be closer to the 4080 or whatever (idk GPU naming..).

So if you care about inference, Blender, or literally anything non-gaming: you probably shouldn't care about this.

i'm totally up for counter arguments. maybe i'm missing something here, maybe i'm being a dumdum <3

imma wait for amd to announce their stuff and just get the top one, for the open drivers. not an nvidia person myself, but their research seems spicy. currently still slogging along with a 1060 6GB

top 13 comments
[–] AmazingAwesomator@lemmy.world 37 points 18 hours ago

the fine print on their comparison charts said the cards were not tested equally; they made up different benchmarking conditions for each card to produce a first-party slide. you are absolutely right to call out their BS <3

always wait for 3rd-party benchmarks. if you are looking for accurate 3rd-party benchmarks, Gamers Nexus and Hardware Unboxed (on YouTube) have extremely good benchmarking standards.

[–] donuts@lemmy.world 19 points 19 hours ago (2 children)

I was ready to do some due diligence, but the specs don't lie: the 5070 is lower in all the specs that matter, like CUDA cores, Shader cores, Tensor cores, VRAM, and even base clock speed.
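
A rough sketch of that gap using the announced core counts and VRAM (figures from memory, so treat them as approximate):

```python
# Spec ratios, 5070 vs 4090. These are the commonly cited announcement
# figures from memory; double-check them. Clock speeds omitted because
# I'm less sure of the exact numbers.
specs = {
    "RTX 5070": {"CUDA cores": 6144, "VRAM (GB)": 12},
    "RTX 4090": {"CUDA cores": 16384, "VRAM (GB)": 24},
}

for metric in specs["RTX 5070"]:
    ratio = specs["RTX 5070"][metric] / specs["RTX 4090"][metric]
    print(f"{metric}: the 5070 has {ratio:.0%} of the 4090's")

# CUDA cores: the 5070 has 38% of the 4090's
# VRAM (GB): the 5070 has 50% of the 4090's
```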

There might be some improved use cases because of more modern architecture and offloading certain tasks to a powerful CPU, but it's looking bleak, yeah.

Minor pet peeve: it's NVIDIA, full caps.

[–] deranger@sh.itjust.works 13 points 19 hours ago (1 children)

Regarding your pet peeve, when was the change? I always want to write it as nVidia too, or maybe now Nvidia. Was that something back from the early GeForce days or am I just imagining things?

[–] glimse@lemmy.world 4 points 18 hours ago* (last edited 18 hours ago) (2 children)

It was never either of those. ~~It started as nVIDIA and made the N uppercase during the pandemic (or around that time)~~

[Edit] I could have just checked before commenting but no...I decided I'd rather be wrong I guess. This is the correct answer:

They started as nVIDIA but used NVIDIA interchangeably for decades. In 2020, all caps became "official"

[–] deranger@sh.itjust.works 3 points 18 hours ago (1 children)

I think I conflated capitalization between Nvidia and their nForce chipset. I had an nForce motherboard for my Athlon build.

[–] glimse@lemmy.world 1 points 17 hours ago

Oh yeah, I forgot about those things! nVIDIA nForce... Should have gone with nFORCE

[–] catloaf@lemm.ee 1 points 17 hours ago (1 children)

I found a couple brand guides from the 2010s showing all caps. If they changed, it was before that.

[–] glimse@lemmy.world 3 points 16 hours ago

It's been their official name since like the 2000s but they didn't seem to nForce it (sorry lol) until a few years ago. I remember reading an article about the updated guidelines.

I think it's funny that their logo still shows a lowercase n

> CUDA cores, Shader cores, Tensor cores

You should never compare those across architectures. Just like CPUs, GPUs can do more or less work per clock per core. Within an architecture you can use them to get an idea, but across architectures it's apples to oranges.

ex: the GTX 680 had 3x the cores of the GTX 580, but only performed 2x as fast at best, and closer to 1.5x in practice.
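
A tiny sketch of that arithmetic (core counts are the published ones; the performance ratio is the rough figure above, not a benchmark):

```python
# Why raw core counts mislead across architectures: in this comparison,
# Kepler's cores did roughly half the per-clock work of Fermi's.
cores_gtx580 = 512    # GTX 580 (Fermi)
cores_gtx680 = 1536   # GTX 680 (Kepler), 3x the cores

perf_ratio = 1.5      # rough real-world speedup (up to 2x at best)
core_ratio = cores_gtx680 / cores_gtx580

print(f"core ratio: {core_ratio:.0f}x, perf ratio: {perf_ratio:.1f}x")
print(f"relative per-core throughput: {perf_ratio / core_ratio:.2f}x")  # 0.50x
```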

[–] Sir_Kevin@lemmy.dbzer0.com 9 points 18 hours ago

Will it have more VRAM than my ancient 2060 with 12GB?

[–] Boomkop3@reddthat.com 3 points 18 hours ago

I mostly noticed the 5070 matched my overclocked 3080 in performance. I won't need to upgrade anytime soon

[–] noride@lemm.ee 3 points 19 hours ago

I will wait for more in-depth reviews of DLSS4 before I make my choice, but from what I've seen thus far, I think I may finally replace my trusty 1080. The 1080 was my first Nvidia card and I have always assumed I'd switch back, but I am cautiously optimistic about their 5 series changes.

This is only their first iteration moving from a convolutional neural net to a transformer model for frame-gen/upscaling, and I think it is already significantly improved. There is still likely a lot of headroom for improving the model further to reduce artifacts and blurring. I also like the idea of improving the performance of existing hardware through software, and I personally respect that they made many of the new features available to older-generation cards as well.

We'll see though... AMD may clap back.

[–] vikingtons@lemmy.world 1 points 15 hours ago* (last edited 8 hours ago)

I feel like they've been doing this with product releases since Ampere