this post was submitted on 15 Nov 2024

Futurology

[–] JohnDClay@sh.itjust.works 12 points 22 hours ago (15 children)

And it's hard to tell what the difference is. Apple's 'built from the ground up for AI' chips just have more RAM. What's different on the CPU side? Do they just have more onboard graphics hardware that can also be used for matrix multiplication?

[–] hendrik@palaver.p3x.de 2 points 20 hours ago (2 children)

The Apple chips also have a wide interface to the RAM. That means memory-bound AI workloads like chatbots (LLMs) run at crazy speeds compared to an Intel (or AMD) computer.
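To put rough numbers on "memory-bound" (a back-of-the-envelope sketch, not from this thread): during decoding, each generated token has to stream every weight from RAM once, so peak tokens per second is roughly bandwidth divided by model size. The bandwidth and model-size figures below are illustrative assumptions:

```python
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound for memory-bound LLM decoding: each token
    touches every weight once, so throughput is capped by
    bandwidth / model size."""
    return bandwidth_gb_s / model_size_gb

# Hypothetical numbers: a 7B model quantized to ~4 GB of weights.
print(max_tokens_per_sec(400, 4.0))  # wide Apple-style bus: ~100 tok/s ceiling
print(max_tokens_per_sec(60, 4.0))   # dual-channel DDR5 desktop: ~15 tok/s ceiling
```

Real throughput lands below these ceilings (compute, cache effects, KV cache traffic), but the ratio between the two machines tracks the bandwidth ratio.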

[–] JohnDClay@sh.itjust.works 3 points 19 hours ago (1 children)

Really? How fast is the memory bus compared to x86? And did they just double the bus bandwidth by widening the memory interface?

I'm dubious, because they only now moved to 16 GB of RAM as the base configuration, which has been standard on x86 for almost a decade.

[–] hendrik@palaver.p3x.de 2 points 19 hours ago* (last edited 18 hours ago)

Depending on the chip, they have somewhere from 100 to 400 GB/s. I'm not sure on the numbers for Intel processors. I think the consumer processors have about 50-80 GB/s (~Alder Lake, dual-channel DDR5). Mine seems to have way less. And a recent GPU will be somewhere in the range of 400 to 1000 GB/s. But consumer graphics cards stop at 24 GB of VRAM, and those flagship models are super expensive, even compared to Apple products.
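One way to sanity-check bandwidth numbers on your own machine is a crude copy benchmark (a sketch only: a single-threaded Python copy will understate what the hardware can actually sustain, and real tools like STREAM do this properly):

```python
import time

def copy_bandwidth_gb_s(size_mb: int = 256, repeats: int = 5) -> float:
    """Crude probe of effective memory bandwidth: time a large buffer
    copy, counting the bytes read plus the bytes written."""
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = bytes(buf)  # full read of buf plus a full write of the copy
        best = min(best, time.perf_counter() - t0)
    return (2 * size_mb / 1024) / best  # GB moved per second, best run

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gb_s():.1f} GB/s effective copy bandwidth")
```

Taking the best of several runs reduces noise from page faults and scheduling; the result is a lower bound on what the memory subsystem can do.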

The people from the llama.cpp project did some measurements, and I believe the Apple "Metal" backend seemed to outperform the x86 computers by an order of magnitude or so. I'm not sure; it's been some time since I skimmed the discussions on their GitHub page.
