this post was submitted on 01 Nov 2023
85 points (96.7% liked)

all 13 comments
[–] darkghosthunter@lemmy.ml 20 points 1 year ago

Well, that’s a bummer, but it will be interesting to see how it stacks up on day-to-day usage.

It’s not that folks on the base M3 are going to stress the machine with heavy compute, but the Pro and Max will surely have enough people comparing synthetic benchmarks against real-world ones to see which of Apple’s optimizations are and aren’t paying off.

[–] M500@lemmy.ml 11 points 1 year ago (1 children)

I’m pretty close to getting a used m1 air for $500.

I can probably search a bit and get a slightly better deal.

The price might be a bit high, but I’m not in the US and we have higher prices here.

[–] w3dd1e@lemm.ee 3 points 1 year ago (1 children)

I just got one for around $600 in the US on Swappa. I tried to get one cheaper but couldn’t find one where I live. Anyway, I’m super happy with it. I made sure it had a low number of battery cycles, and it’s in near-mint condition.

The other day, I was coding in VS Code, debugging JavaScript in Chrome with multiple tabs open, and logging issues I found on a template in Excel. Excel alone makes my work computer freeze, and I didn’t notice a single slowdown on this thing. It was fantastic.

I don’t love the way Mac handles open-window management but aside from that I’m very happy.

[–] M500@lemmy.ml 1 points 1 year ago (1 children)

Do you have 8gb of ram in your machine?

There is an electronics market where I live. I have a recent-ish Lenovo that might actually be a year newer than the M1, so I am going to try to swap it. Maybe I can go next week.

[–] w3dd1e@lemm.ee 1 points 1 year ago

Yeah, just 8. I was actually worried about having only 8, but I couldn’t bring myself to spend the extra money on the 16GB model (I have a desktop to fall back on if I need it).

So far so good. I haven’t hit a wall with the low amount of RAM yet. I forgot to mention, I’m just coding websites. Even with the JavaScript, I’m not building AAA games or doing a ton, really.

[–] Perfide@reddthat.com 9 points 1 year ago (1 children)

What the fuck is with this trend of releasing a great product and then 2-3 generations later nerfing the shit out of the memory bandwidth? Nvidia, Intel, I think AMD, and now Apple are all guilty of this recently.

[–] AwesomeSteve@lemmy.world 4 points 1 year ago* (last edited 1 year ago)

This is why competition is always a must. When Apple released the M1 series, the entire tech industry was basically shitting on Intel and spelling the death of x86. Make no mistake, Arm-based chips are years ahead in efficiency and performance per watt. Qualcomm (with the Snapdragon X Elite) and Microsoft have already signed an exclusive deal to sell Arm-based laptops running Windows, coming in 2024. Nvidia and AMD have also announced Arm-based PC chips coming to market in 2025.

On the GPU front, Nvidia has basically abandoned the former customers (gamers) who made it a trillion-dollar company. Jensen Huang is now focused on the server and AI chip market, where chips sell for many times the MSRP of a gamer-tier RTX GPU. Just look at the RTX 4000 series pricing and the VRAM on entry and mid-tier cards. Intel Arc and AMD Radeon are far behind Nvidia in terms of software: the CUDA ecosystem is what allowed Nvidia to basically monopolize the AI field and milk its customers. Gamers are no longer needed by Nvidia; it will keep releasing subpar GPUs that barely keep up with the expected generational leap in CUDA cores, VRAM, and memory bandwidth, with some specs even downgraded, ffs.

[–] bbbbb@lemmy.world 7 points 1 year ago

This was a real bummer for anyone interested in running local LLMs. Memory bandwidth is the limiting factor for performance in inference, and the Mac unified memory architecture is one of the relatively cheaper ways to get a lot of memory rather than buying a specialist AI GPU for $5-10k. I was planning to upgrade the memory a bit further than normal on my next MBP upgrade in order to experiment with AI, but now I’m questioning whether the pro chip will be fast enough to be useful.
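The bandwidth-bound claim is easy to sanity-check with back-of-envelope math: if generating each token requires streaming roughly the whole model’s weights from memory once, bandwidth divided by model size gives an upper bound on tokens per second. A rough sketch (the 4 GB model size, roughly a 7B model at 4-bit quantization, is an illustrative assumption, not a figure from the thread):

```python
# Back-of-envelope estimate of memory-bandwidth-bound LLM inference speed.
# Assumption: each generated token streams the full weights from memory once,
# so tokens/sec is capped at bandwidth / model size.

def est_tokens_per_sec(bandwidth_gbs: float, model_size_gb: float) -> float:
    """Upper-bound tokens/sec if inference is purely bandwidth-bound."""
    return bandwidth_gbs / model_size_gb

# A ~7B-parameter model quantized to 4 bits is roughly 4 GB of weights.
m3_pro = est_tokens_per_sec(150, 4.0)  # M3 Pro: 150 GB/s
m2_pro = est_tokens_per_sec(200, 4.0)  # M2 Pro: 200 GB/s
print(f"M3 Pro ~{m3_pro:.1f} tok/s, M2 Pro ~{m2_pro:.1f} tok/s")
```

Real throughput is lower (compute, KV-cache reads, and overhead all cost something), but the ratio shows why a 25% bandwidth cut translates fairly directly into slower inference.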

[–] irdc@derp.foo 7 points 1 year ago

Easy way to save on power.

[–] mingistech@lemmy.world 4 points 1 year ago

M3 Pro has 150GB/s bandwidth vs 200GB/s for the M2 Pro. I think that can be explained by the M3 Pro using 3 RAM packages (6GB or 12GB each) vs 4 on the M2 Pro.

The M3 Max is listed as “up to” 400GB/s, where the M2 Max doesn’t have that qualifier. The 14-core version, I think, always uses 3 wider packages (12GB or 32GB each) for 300GB/s, while the 16-core always uses 4 for 400GB/s.
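The package counts line up with the quoted figures if you assume LPDDR5-6400 (6400 MT/s) with 64-bit-wide packages on the Pro chips and 128-bit-wide packages on the Max — both assumptions on my part, not specs stated in the thread:

```python
# Sanity-check of the bandwidth figures above.
# Assumptions: LPDDR5-6400 memory (6400 MT/s), 64-bit-wide packages on
# the Pro chips, 128-bit-wide packages on the Max chips.

def bandwidth_gbs(packages: int, bits_per_package: int = 64,
                  transfer_rate_mts: int = 6400) -> float:
    """Peak bandwidth in GB/s: transfer rate x total bus width in bytes."""
    bus_bits = packages * bits_per_package
    return transfer_rate_mts * (bus_bits / 8) / 1000

print(bandwidth_gbs(3, 64))   # M3 Pro, 3 packages:      153.6 (~150 GB/s)
print(bandwidth_gbs(4, 64))   # M2 Pro, 4 packages:      204.8 (~200 GB/s)
print(bandwidth_gbs(3, 128))  # M3 Max 14-core, 3 wide:  307.2 (~300 GB/s)
print(bandwidth_gbs(4, 128))  # M3 Max 16-core, 4 wide:  409.6 (~400 GB/s)
```

Dropping from 4 packages to 3 cuts the bus width, and the bandwidth, by exactly the 25% seen between the M2 Pro and M3 Pro.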

[–] psycho_driver@lemmy.world 0 points 1 year ago

No more Jim Keller architecture design. Same thing will probably happen to AMD when they need to move on from Zen. Bulldozer 2.0.

[–] Nogami@lemmy.world -1 points 1 year ago

Doubt it will make a difference that anyone except benchmarkers will notice.