This post was submitted on 20 May 2024
147 points (98.7% liked)

[–] FunderPants@lemmy.ca 24 points 5 months ago* (last edited 5 months ago) (2 children)

I feel like I've been hearing about AMD's "next" CPU having dozens of cores on a bunch of chiplets for the last few generations, and then the main gaming consumer parts end up with 6 or 8 cores or something.

[–] Ranvier@sopuli.xyz 32 points 5 months ago* (last edited 5 months ago) (2 children)

The 7950X has 16 cores. I think what the article is suggesting is that the very top of the line in the next gen could potentially double that, up to 32. I would imagine, though, that if that happened the more midline parts would still be in the 12-16 core range. I guess we'll see when they come out.

[–] FunderPants@lemmy.ca 13 points 5 months ago (1 children)

Yeah, here's hoping. I'm skipping the 7000 series parts and sticking with my 5800X3D. I really want a higher-core-count part that still has all the single-CCD X3D advantages, since I game and do CPU-heavy work on the same rig.

[–] Ranvier@sopuli.xyz 5 points 5 months ago (1 children)

Same here. The 5800X3D is great, and I'd rather not buy a new motherboard and everything that goes with it just yet.

[–] FunderPants@lemmy.ca 2 points 5 months ago

Yes, no desire for all the extras that will have to come with this upgrade. I want a huge boost, so I'm sitting out this first wave.

[–] ILikeBoobies@lemmy.ca 4 points 5 months ago

The x950 parts have had 16 cores going back to the 1950X.

[–] WolfLink@lemmy.ml 6 points 5 months ago

Most games can't take advantage of more than a couple of cores anyway, and the high-core-count CPUs often sacrifice a little clock speed.

The optimal gaming CPU is something like 4-8 cores with a high clock speed. The 32+ core machines are for compute-heavy tasks like CAD, running simulations, and sometimes compiling.
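
A rough way to see why piling on cores stops helping games is Amdahl's law. The sketch below is a minimal Python illustration; the parallel fractions (60% for a "game-like" workload, 95% for a simulation-style workload) are made-up assumptions, not measurements of any real title.

```python
# Amdahl's law: if only a fraction p of the per-frame work parallelizes,
# the serial remainder caps the speedup no matter how many cores you add.
def speedup(p: float, cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / cores)

# Illustrative parallel fractions (assumptions, not benchmarks).
for label, p in [("game-like, 60% parallel", 0.60), ("simulation, 95% parallel", 0.95)]:
    print(label)
    for n in (4, 8, 16, 32):
        print(f"  {n:2d} cores -> {speedup(p, n):4.2f}x speedup")
```

With a 60% parallel fraction, going from 8 to 32 cores only moves the speedup from about 2.1x to about 2.4x, which is roughly the diminishing-returns picture gaming benchmarks show.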

[–] barsquid@lemmy.world 11 points 5 months ago (1 children)

I thought they were already up there on Threadrippers, or am I misunderstanding and that's either not counting as a CPU or not a single die?

[–] Flex@lemmy.world 17 points 5 months ago

Threadripper 7000 went up to 64 cores across 8 dies (excluding the I/O die), so 8 cores per die.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 11 points 5 months ago (3 children)

I'd kill for a single-CCD 16-core X3D part. The 7950X3D is tempting with its 3D V-Cache CCD and its high-clock-speed CCD, but since not every game/program knows how to use it properly, you end up with hit-or-miss performance.

[–] ChairmanMeow@programming.dev 15 points 5 months ago (1 children)

Honestly, with the 7950X3D being so powerful, you rarely notice it if a game isn't fully utilizing it. I have one and I'm very pleased with it!

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 5 points 5 months ago (1 children)

My biggest concern, from what I've seen, is that the workaround AMD uses to steer programs onto one set of cores versus the other wasn't exactly great last I looked, and it can cause issues when a game tries to move from one CCD to the other. That said, I haven't looked into it since the CPU first came out, so hopefully things are better now.

How sensitive are you to micro-stutters in a game? That was the biggest reason I got the 5800X3D in the first place, but now that I have a better GPU I can tell that chip struggles. And from what I remember, most of the issues you'd have moving from CCD to CCD were micro-stutters rather than normal frame rate dips or just lower average frame rates.
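
For what it's worth, you can take the scheduler out of the equation by pinning the game to the V-Cache CCD yourself. Here's a minimal Python sketch using psutil; this is not AMD's official mechanism, the PID is a placeholder, and the assumption that logical CPUs 0-15 belong to the V-Cache CCD needs to be verified on your own system (with something like lstopo or Coreinfo) before relying on it.

```python
# Pin a running game to the 3D V-Cache CCD so it can't bounce between CCDs.
# Assumptions: psutil is installed, GAME_PID is the game's real PID, and
# logical CPUs 0-15 map to the V-Cache CCD on this machine (verify first!).
import psutil

GAME_PID = 12345                 # hypothetical PID, replace with the real one
VCACHE_CPUS = list(range(16))    # assumed logical CPUs on the V-Cache CCD

game = psutil.Process(GAME_PID)
game.cpu_affinity(VCACHE_CPUS)   # restrict scheduling to those CPUs
print(game.cpu_affinity())       # confirm the new affinity mask
```

As far as I know, the supported route on Windows is still AMD's chipset driver plus Game Bar game detection parking the frequency CCD; the pinning above is just a blunt manual version of the same idea.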

[–] ChairmanMeow@programming.dev 4 points 5 months ago

I only really notice stutters in heavily modded Minecraft, where it's clearly linked to the garbage collector. In more demanding games I don't really notice any stuttering, or at least none that I can't easily link to something triggering in the game.

Sure, perhaps I have slightly lower average FPS compared to a 7800X3D, but I also use this PC for productivity, so there the extra oomph really does help. And 97% of a framerate that's already way higher than what my 144Hz monitors support is still well above what they can display. I don't think the performance difference is really noticeable outside of certain benchmarks, or unless you try really hard to see it.

It's considerably faster than a 5800X3D, though.

[–] 30p87@feddit.de 2 points 5 months ago (1 children)

I'm also wondering why there's even a difference in FPS between higher-class CPUs. Shouldn't the GPU be the bottleneck, especially at 4K with high settings?

1% and 0.1% lows will almost always be CPU bound as the game streams more in, assuming VRAM isn't what's limiting you. Games are pretty CPU-intensive these days, since the PS5 and Xbox no longer have potato CPUs. At 120+ fps I regularly see over 50% CPU usage in most games, and that's with nothing running in the background. In the real world you have a ton of background tasks, YouTube videos, Discord, etc. eating your CPU.

Also, the 4090 is an absolute beast. Honestly, my 5800X3D holds my 4090 back pretty often.
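
If you want to confirm it's the CPU rather than the GPU or VRAM holding you back, a quick sanity check is to watch per-core load while the game runs. A minimal sketch with psutil (assumed installed; the one-second interval and ten samples are arbitrary choices):

```python
# Sample per-core CPU utilization while a game is running. A handful of
# pegged cores alongside a GPU sitting below full load usually points at
# a CPU limit rather than a GPU one.
import psutil

for _ in range(10):  # roughly ten seconds of samples
    per_core = psutil.cpu_percent(interval=1.0, percpu=True)
    avg = sum(per_core) / len(per_core)
    print(f"avg {avg:5.1f}%  busiest core {max(per_core):5.1f}%")
```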

[–] Lojcs@lemm.ee 6 points 5 months ago (2 children)

Doesn't the "c" stand for something like e-cores? Packing in up to 32 e-cores must be easier than doing it with normal cores.

Also, I kinda wish they'd go the other direction a little: cut core counts and put more cache at every level on some cores instead, for better single-thread performance, a "very big" core so to speak. Intel's cache sizes have been larger than AMD's since Alder Lake, and they've stayed competitive despite their process node disadvantage.

[–] Flex@lemmy.world 4 points 5 months ago* (last edited 5 months ago)

Not quite an e-core, but the goal is the same: make more efficient use of the available die space by packing in more, slower cores.

The difference is that Intel's E-cores achieve this with a different architecture and support fewer features than the P-cores; E-cores, for example, don't support hyper-threading. An E-core is about 1/4 the size of a P-core.

AMD's Zen 4c cores support the same features and have the same IPC as full Zen 4 cores, but they operate at a lower clock speed. That reduces the thermal output of the core, which allows the circuitry to be packed much more densely.

Undoubtedly Intel's E-cores take advantage of this effect as well, and they are in fact quite a bit smaller than Zen 4c: a Zen 4c core is about 1/2 the size of a full Zen 4 core. The advantage of AMD's approach is that having the cores be architecturally identical simplifies the software side of things.
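
As a back-of-the-envelope on what those ratios buy you, the tiny sketch below just divides a fixed area budget by the relative core sizes quoted above. The ratios are the rough figures from this thread, not die measurements, and treating an Intel P-core and a Zen 4 core as the same normalized size is itself an assumption.

```python
# How many cores fit in the silicon eight full-size cores would occupy,
# using the rough relative areas quoted above (illustrative, not die shots).
FULL_CORE = 1.0    # normalized Zen 4 / P-core area (assumed comparable)
ZEN_4C = 0.5       # ~half of a full Zen 4 core
E_CORE = 0.25      # ~a quarter of a P-core

budget = 8 * FULL_CORE
for name, area in [("full cores", FULL_CORE), ("Zen 4c cores", ZEN_4C), ("E-cores", E_CORE)]:
    print(f"{name}: {int(budget / area)} in the same area")
```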

[–] jlh@lemmy.jlh.name 3 points 5 months ago* (last edited 5 months ago)

AMD's c cores aren't quite the same as Intel's E-cores. Intel's E-cores are about 1/4 the size of their P-cores, while AMD's c cores are much closer in size to their standard cores, just a bit more square-shaped geometrically.

Intel's E-cores are a completely different architecture from their P-cores, while the only differences in AMD's c cores are a bit less cache and a bit lower frequency.

Intel's is like comparing a Raspberry Pi core to a full x86 core, while AMD's is like a lower-binned regular core.

AMD has "big" cores, too: the 3D V-Cache models trade some multithreaded performance for more cache. Their "three core tiers" approach is very obvious in their server lineup:

https://www.servethehome.com/amd-epyc-bergamo-epyc-9754-cloud-native-sp5/

[–] SharkAttak@kbin.social 4 points 5 months ago (1 children)

Is there really a need for them?

[–] Flex@lemmy.world 1 points 5 months ago

The c variants of Zen are aimed at cloud providers and are more compact versions of the full Zen 5 cores; those customers generally want as many cores in as compact a package as possible.

We might also see Zen 5c show up in SoCs (like the chip in a hypothetical Steam Deck 2), because there the goal is to keep the chip as small as possible so the device can be priced as competitively as possible. I don't think we'll see those go up to 32 cores, however, as there's indeed no need for that many cores on consumer chips.

[–] PiratePanPan@lemmy.dbzer0.com 2 points 5 months ago (1 children)

Cool. When's the ARM chip coming out?

[–] 9488fcea02a9@sh.itjust.works 4 points 5 months ago (2 children)

Arm is dead. The future is RISC-V

[–] qyron@sopuli.xyz 5 points 5 months ago

It should. An open technology standard should gain traction over closed proprietary ones.

[–] PiratePanPan@lemmy.dbzer0.com 1 points 5 months ago (1 children)
[–] Linkerbaan@lemmy.world 1 points 5 months ago (1 children)
[–] kuberoot@discuss.tchncs.de 1 points 5 months ago (1 children)

Aren't the new Apple chips ARM? If they are, then ARM is absolutely in the present, and proven viable for consumers by Apple.

[–] Linkerbaan@lemmy.world 1 points 5 months ago (1 children)

It proves Apple is viable for consumers, not ARM.

Windows and Linux support for ARM is severely lacking compared to macOS's Rosetta.

ARM is further along in development than RISC-V, but neither is near x86 for desktop compatibility yet.

[–] kuberoot@discuss.tchncs.de 1 points 5 months ago (1 children)

If Apple is viable for consumers, and Apple uses ARM, then ARM is viable for consumers.

Windows and Linux being unfortunately behind is not an argument against ARM being viable; it shows they're not ready. However, Apple was in the same situation before they moved to ARM, so theoretically Microsoft could attempt a similar investment and push towards ARM. Apple's control over both hardware and software certainly helped, and it went well for them.

That said, maybe it's a disagreement on terminology. When I say ARM is viable, I mean it's ready for creating hardware and software that does what people need it to do. Apple clearly succeeded; now it's a question of if/when manufacturers start making open hardware and software starts adding compatibility... or if maybe another option will succeed instead.

[–] Linkerbaan@lemmy.world 1 points 5 months ago (1 children)

RISC-V is also viable for creating hardware and software that does what people need.

The software just doesn't exist yet.

[–] kuberoot@discuss.tchncs.de 1 points 5 months ago (1 children)

Maybe it is, maybe it isn't... But we have definitive proof ARM is, in the form of actual consumer systems on the market.

[–] Linkerbaan@lemmy.world 1 points 5 months ago

ARM has been in consumer systems since Android.

We also have proof that ARM sucks for gaming and has many compatibility issues running x86 programs.

Is ARM more mature than RISC-V? Yes, definitely. But just like RISC-V, ARM is not a replacement for x86. Especially for running games or professional proprietary garbage software, x86 is still the way to go.

[–] CosmicCleric@lemmy.world -4 points 5 months ago (2 children)
[–] PiratePanPan@lemmy.dbzer0.com 3 points 5 months ago (1 children)

Did you really copyright your comment 😭

[–] CosmicCleric@lemmy.world -1 points 5 months ago* (last edited 5 months ago)

Did you really copyright your comment 😭

No, I licensed my content with a limited license that does not allow for commercial usage.

~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~

[–] Godort@lemm.ee 2 points 5 months ago (2 children)

Probably negatively, but also likely not enough to matter. CPUs these days run pretty cool.

We're a long way from the days of an idle Pentium 4 sitting at 75°C.

[–] Zer0_F0x@lemmy.world 2 points 5 months ago

We're in the days of Intel's top chips degrading themselves in a matter of weeks because their thermals are simply unmanageable at stock settings under anything less than a beefy 360mm AIO or custom loop cooling.

[–] CosmicCleric@lemmy.world 0 points 5 months ago (1 children)

CPUs these days run pretty cool.

I thought AMD CPUs ran at around 90°C?

~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~

[–] Godort@lemm.ee 2 points 5 months ago (2 children)

My Ryzen 3900X idles at around 50°C, although that's a few generations old now.

[–] mbfalzar@lemmy.dbzer0.com 2 points 5 months ago

My 3900X idles at 35°C and hits 65°C at 100% on all cores. With a decent cooler, modern AMD runs pretty chill.
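
If anyone wants to check what their own chip reports, here's a minimal Python sketch using psutil, which reads the kernel's hardware sensors on Linux/FreeBSD. The "k10temp" chip name for AMD is an assumption about what typically shows up; on Windows this API isn't available at all, which the guard below accounts for.

```python
# Print CPU temperature sensors as reported by the OS. psutil only exposes
# this on Linux/FreeBSD; the hasattr check skips platforms without support.
import psutil

if hasattr(psutil, "sensors_temperatures"):
    for chip, entries in psutil.sensors_temperatures().items():  # e.g. "k10temp" on AMD
        for entry in entries:
            print(f"{chip:10s} {entry.label or 'temp':12s} {entry.current:5.1f}°C")
else:
    print("Temperature sensors aren't exposed by psutil on this platform.")
```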

[–] CosmicCleric@lemmy.world -1 points 5 months ago

My Ryzen 3900X idles at around 50°C, although that's a few generations old now.

There seems to be a big difference between older CPUs and the newer ones, with the newer ones running a lot hotter under load.

I personally use a 5800X and it hits 90°C often.

~Anti~ ~Commercial-AI~ ~license~ ~(CC~ ~BY-NC-SA~ ~4.0)~