this post was submitted on 23 Jun 2024
40 points (60.8% liked)

[–] barsquid@lemmy.world 28 points 6 months ago (1 children)

Putting the claim instead of the reality in the headline is journalistic malpractice. 2x for free is still pretty great tho.

[–] barsquid@lemmy.world 30 points 6 months ago (1 children)

Just finished the article; it's not for free at all. Chips need to be designed to use it. I'm skeptical again. There's no point, IMO. Nobody wants to put the R&D into massively parallel CPUs when they can put that effort into GPUs.

[–] frezik@midwest.social 7 points 6 months ago

Not every problem is amenable to GPUs. If it has a lot of branching, or needs to fetch back and forth from memory a lot, GPUs don't help.
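To make that concrete, here's a minimal C sketch (my own illustration, not from the article) of the kind of branchy, pointer-chasing work that maps poorly onto a GPU's lockstep execution model:

```c
#include <stddef.h>

struct node {
    int value;
    struct node *next;
};

/* Walk a linked list and branch on each element's value.
 * Every step is a data-dependent memory fetch, and the branch
 * outcome differs per element, so GPU threads executing this in
 * lockstep would serialize on divergence and stall on memory. */
int count_matches(const struct node *head, int threshold)
{
    int count = 0;
    for (const struct node *n = head; n != NULL; n = n->next) {
        if (n->value > threshold)   /* unpredictable branch */
            count++;
    }
    return count;
}
```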

Now, does this thing have exactly the same limitations? I'm guessing yes, but it's all too vague to know for sure. It sounds like they're doing what superscalar CPUs have done for a while. On x86, that goes back to the original Pentium in 1993, and Seymour Cray's machines were doing it in the '60s. What are they doing to supercharge this idea?
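For reference, "superscalar" just means the core can issue more than one instruction per cycle when the instructions are independent. A rough C illustration (again my own sketch, not anything from the article):

```c
#include <stddef.h>

/* The four accumulators below have no dependencies on each other,
 * so a superscalar core can issue several of these adds per cycle.
 * A single dependent chain (one accumulator) would limit the loop
 * to one add per cycle no matter how wide the core is. */
float sum4(const float *a, size_t n)
{
    float s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)   /* handle the leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```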

Does this avoid some of the security problems that have popped up with superscalar archs? For example, speculatively executed kernel code at ring 0 ends up in flight alongside userspace code, and side channels can leak ring 0 data across that boundary.
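The best-known example of that class of problem is a Spectre-v1-style bounds-check bypass. A rough sketch of the gadget (purely illustrative, nothing to do with this particular chip):

```c
#include <stdint.h>
#include <stddef.h>

uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 512];

/* Spectre v1 sketch: the CPU may speculatively execute the body
 * before the bounds check resolves. If x is attacker-controlled
 * and out of bounds, the speculative load of array1[x] reads data
 * it shouldn't, and the dependent load into array2 leaves a
 * cache-line footprint the attacker can measure afterwards. */
void victim(size_t x)
{
    if (x < array1_size) {
        volatile uint8_t tmp = array2[array1[x] * 512];
        (void)tmp;
    }
}
```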