
tl;dr: he says "x86 took over the server market" because it was the same architecture developers had on their own machines, which made it very easy to develop applications locally and then ship them to the servers.

Now this, among the other points he made, is a very good argument for why it is hard for ARM to go mainstream in the datacenter. However, I also feel like he kind of lost touch with reality on this one...

He's comparing two very different situations, or more precisely two very different eras. Developers aren't tied to the underlying hardware the way they used to be. The software development market has evolved from C to very high-level languages such as JavaScript/TypeScript, and the majority of new software is, or will be, written in those languages, so the CPU architecture becomes irrelevant.
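To make that concrete, here's a minimal sketch (the file name and port are made up for the example): the same Node.js/TypeScript source runs unchanged whether the box underneath is x86-64 or ARM64, and nothing in it cares about the CPU except the log line.

```typescript
// server.ts — minimal sketch; file name and port are arbitrary.
// The same source runs unchanged on x86-64 and ARM64 builds of Node.js.
import { createServer } from "node:http";
import { arch, platform } from "node:os";

const server = createServer((_req, res) => {
  // Plain application logic: no intrinsics, no architecture-specific branches.
  res.writeHead(200, { "Content-Type": "application/json" });
  res.end(JSON.stringify({ message: "hello from whatever CPU this is" }));
});

server.listen(8080, () => {
  // arch() only reports what the runtime was built for ("x64", "arm64", ...);
  // the handler above is identical either way.
  console.log(`listening on :8080 (${platform()}/${arch()})`);
});
```

All the architecture-specific work lives in the runtime underneath, which is exactly the "tax" I mention below.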

Obviously, very big companies such as Google, Microsoft and Amazon are more than happy to pay the little "tax" of making sure JavaScript runs fine on ARM rather than keep paying the big bucks they pay for x86...

What are your thoughts?

[–] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago (1 children)

The only thing I'm worried about is that the ARM architecture is too fragmented. AWS Graviton might behave differently from Ampere Altra, despite both implementing the ARM ISA.

We saw the same with AMD and Intel building their own instruction set extensions. AMD has only recently entered the server market as a serious competitor, and limitations like the lack of AVX-512 don't seem to be a big issue so far. The result is either binaries with multiple code paths, each optimised for a specific design, or binaries that only leverage the common ground.

If one ARM server design comes out on top, I'm sure the ARM server market will centralise around it, the same way the x86 server market centralised around Intel before EPYC gained traction.
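Roughly the same pattern even shows up in the Node world the post is talking about: load an optimised code path if one was built for this machine, otherwise fall back to the common ground. A hand-wavy sketch, with made-up module names standing in for per-architecture optimised native builds:

```typescript
// dispatch.ts — hand-wavy sketch; "./hash-x64.node" and "./hash-arm64.node" are
// made-up names standing in for per-architecture optimised native builds.
import { createRequire } from "node:module";
import { arch } from "node:os";

const require = createRequire(import.meta.url);

type HashFn = (data: Uint8Array) => number;

// The "common ground" path: plain portable JavaScript (FNV-1a here). It runs on
// any CPU the runtime supports, just without architecture-specific tuning.
const portableHash: HashFn = (data) => {
  let h = 0x811c9dc5;
  for (const byte of data) {
    h ^= byte;
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h;
};

export function loadHash(): HashFn {
  try {
    // One optimised code path per design somebody bothered to build and test.
    if (arch() === "x64") return require("./hash-x64.node").hash as HashFn;
    if (arch() === "arm64") return require("./hash-arm64.node").hash as HashFn;
  } catch {
    // Optimised build not shipped for this machine: fall through.
  }
  return portableHash;
}
```

Every extra design with its own quirks means another one of those branches, and another prebuilt artefact somebody has to maintain.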

[–] umami_wasbi@lemmy.ml 1 points 1 year ago (1 children)

With x86, there are AMD and Intel. With ARM, how many designers are there? The more designers there are, the smaller the potential common ground gets, and the more code paths there are to optimize, which makes things more expensive to build.

ARM has their own design team, as does Apple. Google and Microsoft are supposedly also launching their own ARM chips, but they're a couple of years behind Apple. Samsung designs Exynos, Qualcomm designs Snapdragon, and then you have MediaTek and HiSilicon.

Many of these companies start out with the chip designs they're already licensing from ARM and tweak them to their needs. Apple is the big exception here, beating ARM at its own game by using ARM's ISA but designing chips that run the code better than ARM itself has been able to.