
Apple Announces Next-Generation M1 Pro and M1 Max Chips

radio3

Active Member
Joined
Aug 20, 2020
Messages
125
Likes
58
Then again, what is more widespread: video editing (hardware-accelerated in the CPU) or raytracing? Besides, there is nothing stopping the market from offering an external GPU via TB.
Raytracing is just getting started in hardware. I think the current Nvidia cards are the first gen with dedicated raytracing hardware. Pretty sure all games will eventually be using it.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,194
Likes
1,542
Location
USA
That might be a bit too easy. The fact that you have unified memory means that one does not need to copy memory from CPU to GPU and back. Depending on use case, this can be a huge savings in bandwidth that would otherwise be wasted.
Interesting point, if that's how the implementation is actually done. But a 4K frame is only about, what, 70 megabits, so even at 175 frames per second that's only ~1.5GB/sec. So the copy doesn't look onerous with this hardware. Or did I miscalculate? Anyway, since we can't know the implementations to any detail, I'm curious to see a head-to-head comparison.
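If anyone wants to redo that arithmetic, here's a rough sanity check in Python; the per-frame size is an assumption (uncompressed 8-bit RGB), and the real figure varies with bit depth, alpha, and chroma format:

Code:
# Back-of-the-envelope estimate of the CPU->GPU copy cost for 4K frames.
# Assumes uncompressed 8-bit RGB; 10-bit or RGBA frames would be larger.
width, height = 3840, 2160
bytes_per_pixel = 3                      # 8-bit RGB, no alpha
frame_bytes = width * height * bytes_per_pixel
fps = 175

print(f"frame size: {frame_bytes / 1e6:.1f} MB ({frame_bytes * 8 / 1e6:.0f} megabits)")
print(f"copy bandwidth at {fps} fps: {frame_bytes * fps / 1e9:.1f} GB/s")

Swap in different pixel formats or frame rates to see how sensitive the estimate is; the conclusion about whether the copy is onerous depends mostly on that per-frame assumption.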
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
Insane that just the CPU part is faster than the fastest AMD Threadripper. That they do it at 1/5 the power or whatever is exponentially insane. Throw in graphics that are faster than a $4000 GPU and I feel like I've been conned for years. That's one chip in an affordable midrange consumer device that outperforms a $10,000-or-something CPU+GPU combination while consuming a fraction of the power.

What's more interesting is that all the supposed tech experts for the past decade have been saying the reason ARM will never be seen in the serious higher-performance segment of the consumer computing market is that the architecture simply cannot scale within the power envelope required by, and typical of, classical desktop architectures like x86.

This was a definitive statement made by many: that it simply could NEVER scale up without massive thermal and power requirements if performance parity was the goal.

All now utterly incorrect.

Seems Apple really ran with answering the long-standing sentiment of many laymen (the masses who questioned why desktop/laptop-sized devices don't get ARM hardware, given how hilariously mobile chips have been outpacing, year over year, the Moore's Lie stagnation in the desktop sphere of Intel, AMD, and Nvidia products).
 

radio3

Active Member
Joined
Aug 20, 2020
Messages
125
Likes
58
I would like to brag that as soon as I got an iPad with an Apple CPU about 3 years ago and realized that the processor felt more than twice as fast as my 4 GHz Core i7 desktop, had like 5x the battery life of my i7 laptop in spite of a relatively tiny battery, and was a fraction of the size and cost, it seemed obvious that the whole CPU industry would want to immediately drop what they were doing and switch to whatever it was that Apple was onto. Like when the iPhone came out and all other phones were instantly obsolete and survival was only feasible through mimicry.

Yet here we are and it's somehow just the same old, same old. Recently spent close to 15k on an AMD Threadripper machine because some software I use requires Nvidia. Just imagining what that kind of money will buy once Apple comes out with workstation chips.
 

dwkdnvr

Senior Member
Joined
Nov 2, 2018
Messages
418
Likes
698
Insane that just the CPU part is faster than the fastest AMD Threadripper. That they do it at 1/5 the power or whatever is exponentially insane. Throw in graphics that are faster than a $4000 GPU and I feel like I've been conned for years. That's one chip in an affordable midrange consumer device that outperforms a $10,000-or-something CPU+GPU combination while consuming a fraction of the power.
I think the hype has gotten maybe just a bit out of hand - and I say that as someone that just ordered an M1 Max Studio w/64GB.

The M1 Ultra really isn't faster than a high-end Threadripper - if you look at Passmark scores, the 3995X is at 80k+ whereas the M1 Max is only at 23k, so even if the Ultra scales linearly it only hits about half the raw compute of the Threadripper. Of course, for many consumer workloads this isn't going to be representative, as the Threadripper achieves this only by having an insane number of cores, which is an advantage in highly parallel server workloads but not so much on 'the desktop'.
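For the curious, the scaling claim works out roughly like this; a quick sketch using the approximate Passmark scores quoted above (assumptions, not fresh benchmark runs):

Code:
# Rough check of the "about half" claim, using the approximate multi-core
# Passmark scores quoted in this post (assumptions, not new measurements).
threadripper_3995x = 80_000
m1_max = 23_000
m1_ultra_estimate = 2 * m1_max    # assume the Ultra (two Max dies) scales linearly

print(f"estimated M1 Ultra score: {m1_ultra_estimate}")
print(f"fraction of the 3995X: {m1_ultra_estimate / threadripper_3995x:.0%}")  # ~57%, i.e. roughly half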

Similarly, the Nvidia 3090 has far more raw compute power than the Ultra, and in many benchmarks it will very easily outperform the Ultra. I think Apple is being a bit coy here in their comparisons, but to be fair some workloads will be faster on the Ultra.

Where I think Apple has really opened the gap, though, is:
a) power efficiency (as has been covered). They are doing this at a tiny fraction of the overall power budget of the competition, which is seriously impressive.
b) proving the value of an integrated stack. Whatever the raw numbers are, Apple seems to have created a user experience where the Mac just feels much faster than a comparable machine, since they can optimize the full stack to play to its strengths.

I'm a long-time Linux guy and really wouldn't have expected to move to Mac. But we are now a family with 2 M1 MB Airs and a soon-to-be-delivered Studio, and I'll likely be retiring my Windows desktops and even much of my Linux server infrastructure, since I'll be able to run it in a Linux VM on the Studio. (Win 11 ARM under Parallels runs the 2 specific apps I know of that need Windows, and I'll keep my Win laptop around in my eternal hope that someday I'll find time for astronomy, which really needs Windows as well.)
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,383
Likes
18,318
Location
Netherlands
To be fair, the Threadripper alone costs about as much as a complete Mac Studio with the Ultra. And the PSU needed to power the thing is about as large as the whole Studio.

It’s really amazing how even the power supply is integrated in that small box :oops:.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,194
Likes
1,542
Location
USA
What's more interesting is that all the supposed tech experts for the past decade have been saying the reason ARM will never be seen in the serious higher-performance segment of the consumer computing market is that the architecture simply cannot scale within the power envelope required by, and typical of, classical desktop architectures like x86.

This was a definitive statement made by many: that it simply could NEVER scale up without massive thermal and power requirements if performance parity was the goal.

All now utterly incorrect.

Seems Apple really ran with answering the long-standing sentiment of many laymen (the masses who questioned why desktop/laptop-sized devices don't get ARM hardware, given how hilariously mobile chips have been outpacing, year over year, the Moore's Lie stagnation in the desktop sphere of Intel, AMD, and Nvidia products).
Who are these experts? Were they really experts, meaning engineers who actually worked in CPU development, or just industry commentators?

(A short history lesson. The earliest CPU designs with the simplest practical instruction sets were developed by IBM in the late 1970s, led by John Cocke. Today's RISC processor design strategy, and the nomenclature, came out of research at UC Berkeley and Stanford in the early 1980s, led by Turing Award winners Dave Patterson (Berkeley) and John Hennessy (Stanford). Hennessy went on to become president of Stanford University. Patterson has retired from Berkeley and is now a distinguished engineer at Google. Fun fact about Dave: last I saw him, several years ago, he drove an excellent custom replica of the yellow 1932 Ford Deuce Coupe driven by the character John Milner in the film American Graffiti.)

ARM processors are just one example of RISC processor architecture, a class that also includes the IBM Power10, which is a remarkable server processor used in HPC systems. Also, while ARM-based processors were not considered competition for x86 designs, say, ten years ago, that was more due to ARM's corporate strategy than to any inherent limitation of the ARM core strategy. We could have a long discussion of why that was probably the case, but it would be a digression. It should also be noted that the ARM cores of ten years ago have been massively extended in the latest designs, such as the ARM v8 and v9 series. These are very sophisticated, powerful 64-bit designs, fully the equals of many x86 cores. Finally, the Apple cores are not strictly ARM designs; they are based on ARM IP and extended by Apple CPU architects and engineers with designs that (as I mentioned earlier) have architectural features exceeding what Intel and AMD have done in some cases. (The ability to extend the ARM designs requires what ARM calls an "architecture license", which essentially lets you design proprietary CPU cores and surrounding logic using the ARM instruction set.)

Also, take into account that Apple's DRAM-in-package strategy lets them use faster DRAM chips than are cost-effective in the motherboard-mounted DIMMs that x86 systems use. Faster memory is a significant performance factor.
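On the "faster memory" point: a STREAM-style triad is the classic way to see effective memory bandwidth for yourself. A minimal sketch in Python/NumPy, with arrays sized to blow past the caches; treat the result as a ballpark for comparison between machines, not a calibrated benchmark:

Code:
# STREAM-style "triad" to estimate effective memory bandwidth.
# Rough illustration only; array size and timing method are assumptions.
import time
import numpy as np

N = 50_000_000                 # ~400 MB per float64 array, far larger than any cache
a = np.zeros(N)
b = np.random.rand(N)
c = np.random.rand(N)

start = time.perf_counter()
a[:] = b + 2.0 * c             # reads b and c, writes a
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8        # three float64 arrays streamed through memory once
print(f"effective bandwidth ~ {bytes_moved / elapsed / 1e9:.1f} GB/s")

Comparing that number across machines gives a feel for how much of the performance story is simply memory speed rather than the CPU cores themselves.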
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,496
Who are these experts? Were they really experts, meaning engineers who actually worked in CPU development, or just industry commentators?

(A short history lesson. The earliest CPU designs with the simplest practical instruction sets were developed by IBM in the late 1970s, led by John Cocke. Today's RISC processor design strategy, and the nomenclature, came out of research at UC Berkeley and Stanford in the early 1980s, led by Turing Award winners Dave Patterson (Berkeley) and John Hennessy (Stanford). Hennessy went on to become president of Stanford University. Patterson has retired from Berkeley and is now a distinguished engineer at Google. Fun fact about Dave: last I saw him, several years ago, he drove an excellent custom replica of the yellow 1932 Ford Deuce Coupe driven by the character John Milner in the film American Graffiti.)

ARM processors are just one example of RISC processor architecture, a class that also includes the IBM Power10, which is a remarkable server processor used in HPC systems. Also, while ARM-based processors were not considered competition for x86 designs, say, ten years ago, that was more due to ARM's corporate strategy than to any inherent limitation of the ARM core strategy. We could have a long discussion of why that was probably the case, but it would be a digression. It should also be noted that the ARM cores of ten years ago have been massively extended in the latest designs, such as the ARM v8 and v9 series. These are very sophisticated, powerful 64-bit designs, fully the equals of many x86 cores. Finally, the Apple cores are not strictly ARM designs; they are based on ARM IP and extended by Apple CPU architects and engineers with designs that (as I mentioned earlier) have architectural features exceeding what Intel and AMD have done in some cases. (The ability to extend the ARM designs requires what ARM calls an "architecture license", which essentially lets you design proprietary CPU cores and surrounding logic using the ARM instruction set.)

Also, take into account that Apple's DRAM-in-package strategy lets them use faster DRAM chips than are cost-effective in the motherboard-mounted DIMMs that x86 systems use. Faster memory is a significant performance factor.
I mean, it's been years, but it was mostly tech outlets with interviews if I recall.

You mention their DRAM packaging near the end, and I understand the notion. What I don't understand is why they would be the only ones adopting such a model in the consumer market to this level of efficacy. AMD has APUs, and Intel has iGPUs/NUCs in a similar form factor (and custom board designs as of late) that could have let them venture into this territory. To me personally it reeks of complacency, and as far as Intel is concerned there is very little arguing against this point, given that they got trounced by what amounts to an indie company (AMD, by comparison, in terms of funding and such). The Apple ordeal seems to have turned on and ramped up EXTREMELY fast. I have no idea what Gelsinger was talking about when he mentions in recent interviews that he hopes to have Apple back as a customer (this seems like either deranged thinking or just shareholder corporate mouthpiece talk). Intel's upcoming dGPU is supposedly targeting 3080 performance at the absolute high end and has been in the kitchen for a while now, while Apple is already advertising relative performance and perf-per-watt specs outpacing both the CPU and GPU performance of the highest-end consumer offerings of each company (the 12900K from Intel on the CPU front, and the 3090 from Nvidia on the GPU front) on a single package, with lower thermals than each in isolation.

I can grasp the idea of going the chiplet route, or building what is essentially a higher-performance SoC with everything on die, perhaps with great interconnects if not going fully monolithic, all of which can contribute to such performance gains. What I want to know is: if this was self-evident (as it ought to be if we're talking about "smart people" at the respective companies), why wasn't Intel or AMD first on the block with some of this stuff (or Nvidia, considering their desperation to absorb ARM; they had to have seen something of this sort coming miles ahead)?

I just can't understand how Apple could beat them to this sort of paradigm. And speaking of which: given these massive performance headways, and given that you know your history and the players well, I'd imagine surely no hardware company can ignore this approach? Whatever is 20/20 hindsight about the whole "ARM scaling" claims, we now (like all the "smart folks", I imagine) have a better picture of where things are going. So what's the verdict? Are we going to see Apple be the exclusive adopter of this approach to consumer hardware, or are x86's days numbered if it can't be adapted to deliver similar gains and Moore's Law tracking going forward?
 

voodooless

Grand Contributor
Forum Donor
Joined
Jun 16, 2020
Messages
10,383
Likes
18,318
Location
Netherlands
So what's the verdict? Are we going to see Apple be the exclusive adopter of this approach to consumer hardware, or are x86's days numbered if it can't be adapted to deliver similar gains and Moore's Law tracking going forward?
Basically, Apple built what AMD should have built for the gaming consoles: hyper-integration. I think it's indeed complacency for a big part: there was no need to go the extra mile.

I think we will see similar tech from Intel and AMD in the next few years. Computer systems need to be ever smaller and more efficient. There is basically no other way than to integrate more and more into a single package.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,194
Likes
1,542
Location
USA
I mean, it's been years, but it was mostly tech outlets with interviews if I recall.
That's what I figured. The industry is still covered mostly by people who are not computer engineers, and generally don't know what they're talking about.
You mention their DRAM packaging near the end, and I understand the notion. What I don't understand is why they would be the only ones adopting such a model in the consumer market to this level of efficacy. AMD has APUs, and Intel has iGPUs/NUCs in a similar form factor (and custom board designs as of late) that could have let them venture into this territory. To me personally it reeks of complacency, and as far as Intel is concerned there is very little arguing against this point, given that they got trounced by what amounts to an indie company (AMD, by comparison, in terms of funding and such).
The DRAM-in-package strategy does have a drawback - lack of flexibility. Apple uses their CPUs to sell into their specific markets, which are much smaller than the client market as a whole. Intel CPUs go into everything from Chromebooks to gaming desktops, and so do AMD's. Apple products, as you well know, have more targeted markets and are more expensive. Nonetheless, I completely agree with you about complacency, especially at Intel. But getting their butts kicked every once in a while is a big part of Intel's history. Intel was late to integrating memory controllers into the CPU over 15 years ago, which was the last time AMD had a big resurgence. (Apple was still on their PowerPC variant.) Finally, as I mentioned, Intel's and AMD's customer bases for chips are very broad, and the disaggregated strategy they both use is a result of trying to cover those markets with the fewest possible SKUs.
The Apple ordeal seems to have turned on and ramped up EXTREMELY fast.
Apple has been designing their own CPUs and accelerators for the iPhone for a long time. Those CPUs use ARM IP also. And even before that they were co-designing PowerPC chips for Macs with IBM and Motorola. So Apple has been designing CPUs for decades. And Apple is a far bolder company than Intel. Apple fought its way up from the bottom. Intel's success was bootstrapped by IBM choosing Intel's x86 processors for the original PC. Intel's culture has a lot of monopoly thinking and entitlement in it. Apple still has a lot of Steve Jobs culture in it.
I have no idea what Gelsinger was talking about when he mentions in recent interviews that he hopes to have Apple back as a customer (this seems like either deranged thinking or just shareholder corporate mouthpiece talk).
Gelsinger is talking about having Apple as a fab customer, as an alternative to TSMC and Samsung. Apple is gone when it comes to chip designs. Intel even sold them the design and the team for the 5G modem they were working on.
Intel's upcoming dGPU is supposedly targeting 3080 performance at the absolute high end and has been in the kitchen for a while now, while Apple is already advertising relative performance and perf-per-watt specs outpacing both the CPU and GPU performance of the highest-end consumer offerings of each company (the 12900K from Intel on the CPU front, and the 3090 from Nvidia on the GPU front) on a single package, with lower thermals than each in isolation.

I can grasp the idea of going the chiplet route, or building what is essentially a higher-performance SoC with everything on die, perhaps with great interconnects if not going fully monolithic, all of which can contribute to such performance gains. What I want to know is: if this was self-evident (as it ought to be if we're talking about "smart people" at the respective companies), why wasn't Intel or AMD first on the block with some of this stuff (or Nvidia, considering their desperation to absorb ARM; they had to have seen something of this sort coming miles ahead)?
Many - MANY - chip industry executives are more worried about their careers than about the companies they work for. Being years and years removed from work as hands-on engineers makes them risk-averse, and they surround themselves with subservient yes-people because those are the only people they are comfortable with and trust. They usually can't make high-level technical decisions themselves. Lisa Su appears to be above that bullshit, and IMO that's one of the reasons why AMD is doing so well now. (And AMD spun off their fabs as GlobalFoundries and partnered with TSMC for fabrication, and TSMC attained a technology leadership role giving them an edge over Intel, but Su still gets a lot of credit, IMO.) I do expect that Dell and HP, and Microsoft for that matter, would like a CPU to compete with the M1 and upcoming M2. It would not surprise me to hear that Microsoft decides to do a client CPU, since they are rapidly ramping a chip design team in Portland, OR, which is Intel's biggest design site. I suspect Microsoft is aiming at a server chip first (to compete on cost with Amazon's Graviton), because those are easier to design, but I could easily be wrong. Microsoft's Surface line would be an obvious candidate for a Microsoft competitor to the M1/M2.

I'm not sure what Jensen Huang was thinking in trying to buy ARM. He could do what Apple did easily just by getting an ARM architecture license. But Huang is probably the smartest CEO in the computer industry, so I am sure he knows things I don't.
I just can't understand how Apple could beat them to this sort of paradigm. And speaking of which: given these massive performance headways, and given that you know your history and the players well, I'd imagine surely no hardware company can ignore this approach? Whatever is 20/20 hindsight about the whole "ARM scaling" claims, we now (like all the "smart folks", I imagine) have a better picture of where things are going. So what's the verdict? Are we going to see Apple be the exclusive adopter of this approach to consumer hardware, or are x86's days numbered if it can't be adapted to deliver similar gains and Moore's Law tracking going forward?
There are independent CPU design teams to be had for acquisition, and it wouldn't surprise me to see Dell or HP, and especially Microsoft, acquire one of those companies and go into the system-in-a-package market. Samsung could do it too, since they have an internal team for phone CPUs, just like Apple. Qualcomm could enter this market too. But only Microsoft seems to me to be well-managed enough to try it. I do think Apple's success will draw others to the strategy.
 

dwkdnvr

Senior Member
Joined
Nov 2, 2018
Messages
418
Likes
698
I have to admit that I don't quite see the benefit of booting Linux natively on Apple Silicon. I mean, I understand the raw 'techie' desire to do it, but as an end user, if you want to run Linux natively on your machine, I don't see the benefit of doing it on an M1 Mac. We've already covered the fact that in terms of raw compute power there are options in the Intel/AMD world that are better and likely cheaper, so unless form factor and power efficiency are your #1 considerations, the Mac doesn't seem to offer much. Further, it seems clear (to me, at least) that the primary reason the Macs are so amazing is the integrated top-to-bottom stack where the software has been optimized for the platform; the Linux guys don't have full specs on the CPU and instruction set, nor the GPU (or probably the neural engine, either). So it seems inevitable that Linux will be hamstrung by the lack of full driver support.

So, even being something of a Linux guy I think it's pretty obvious that the best approach is to run the Mac under OSX and then spin up a Linux VM under UTM or Parallels for native Linux compatibility.
 

radio3

Active Member
Joined
Aug 20, 2020
Messages
125
Likes
58
I have to admit that I don't quite see the benefit of booting Linux natively on Apple Silicon. I mean, I understand the raw 'techie' desire to do it, but as an end user, if you want to run Linux natively on your machine, I don't see the benefit of doing it on an M1 Mac. We've already covered the fact that in terms of raw compute power there are options in the Intel/AMD world that are better and likely cheaper, so unless form factor and power efficiency are your #1 considerations, the Mac doesn't seem to offer much. Further, it seems clear (to me, at least) that the primary reason the Macs are so amazing is the integrated top-to-bottom stack where the software has been optimized for the platform; the Linux guys don't have full specs on the CPU and instruction set, nor the GPU (or probably the neural engine, either). So it seems inevitable that Linux will be hamstrung by the lack of full driver support.

So, even being something of a Linux guy I think it's pretty obvious that the best approach is to run the Mac under OSX and then spin up a Linux VM under UTM or Parallels for native Linux compatibility.
I actually have a Threadripper Pro machine that cost over 12,000 USD and I’m envious of the Ultra.
 

Ata

Senior Member
Forum Donor
Joined
May 6, 2021
Messages
388
Likes
334
Location
Adelaide, Australia
I can't see the M2 offering more than a relatively small, incremental change in performance and/or capabilities. If this is what the M2 (non-Pro, non-Max, non-Ultra) brings, I think anyone can comfortably go through the next couple of years with their 2020 MacBook Air M1 and not skip a beat.
 

Doodski

Grand Contributor
Forum Donor
Joined
Dec 9, 2019
Messages
21,572
Likes
21,857
Location
Canada
I actually have a Threadripper Pro machine that cost over 12,000 USD
I have a CDN $4K Intel gaming box plus peripherals and I am curious what you have for a USD $12K pooder. Can you give me a shorthand rundown?
 

_thelaughingman

Major Contributor
Forum Donor
Joined
Jan 1, 2020
Messages
1,362
Likes
2,041
The GPU on the M2 isn't any better than the iterations of the M1 chip. Based on the initial die analysis, Apple will still have to play catch-up to a discrete GPU.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,194
Likes
1,542
Location
USA
The GPU on the M2 isn't any better than the iterations of the M1 chip. Based on the initial die analysis, Apple will still have to play catch-up to a discrete GPU.
That's not what the specs say, but I'm not sure the claimed 35% M2 improvement matters much. Any serious gamer is going to want a hot and expensive Nvidia GPU, and more DRAM than Apple offers, albeit significantly slower DRAM than Apple uses. Apple is a no-deviations-from-the-menu company; Intel and AMD are about having it your way. Two different markets.
 

stevenswall

Major Contributor
Forum Donor
Joined
Dec 10, 2019
Messages
1,366
Likes
1,075
Location
Orem, UT
The GPU on the M2 isn't any better than the iterations of the M1 chip. Based on the initial die analysis, Apple will still have to play catch-up to a discrete GPU.

For mobile, only ultra low end legacy GPUs exist. The Nvidia 5090 mobile may be able to compete with an M1 Max, but I'm not betting on it.

Desktop GPU wise, I'm not sure. Given the same power input the M1 Max is what I'd bet on... And yes, they can literally just keep scaling up.

Scaling up a legacy desktop system the same way ends up needing a 240 V dryer outlet, and you'd smell like a pig wallowing in your own sweat while the fans drown out the squeals.

I'm mad that Intel and Nvidia haven't achieved what I want on the high end, and on mobile, laptops are absolutely wretched.

I'd love a ThinkPad with an M1 Max that ran Windows/x86, but if I had eighteen octillion dollars and a planet's worth of resources I couldn't buy it.

Even back after the 3rd-generation i7 laptop CPUs, it seems they focused on power efficiency and didn't really make notable performance gains.

Apple has both now, and they are the absolute god-emperor ruling-class company of stagnation and late-to-the-party dung-heap components... Now, I highly doubt there is a Windows machine that can compete price-wise with the MacBook Air for general college use. Or pro video editing. Or color grading. Or photo editing. Or video streaming/webcam stuff. Or mobile entertainment.
 