
Apple Announces Next-Generation M1 Pro and M1 Max Chips

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
If you can't make extra money with this new M1 Max boat anchor, best to leave it for someone else to buy.

A boat anchor? That would be a pretty small boat. The Ultra model weighs 3.6kg. ;)
 

elvisizer

Active Member
Joined
Sep 16, 2018
Messages
264
Likes
212
Apple laptops used to have a great repair record, but in the last 5 years the keyboards have been horrible, and even with some accommodations by Apple for the problem, they still refuse to fix lots of the MBP keyboards that fail, making the unit totally useless.
so, a couple of things...
  1. that's the stupid butterfly keyboard you're talking about
  2. Apple 100% repairs those, and has bent over backwards to repair them even out of warranty, because they know they're crap
  3. they stopped using them over a year ago
 

amper42

Major Contributor
Forum Donor
Joined
Dec 21, 2020
Messages
1,661
Likes
2,451
The latest MBP is not fully capable of editing multi-camera ProRes video due to two limiting factors: lack of memory bandwidth and lack of processing capacity. You can only scale up to so many instances of editing software before you hit the memory-bandwidth limit.

Have you tried an Intel MacBook Pro using an eGPU with the Radeon RX 6900 XT?
 

elvisizer

Active Member
Joined
Sep 16, 2018
Messages
264
Likes
212
The latest MBP is not fully capable of editing multi-camera ProRes video due to two limiting factors: lack of memory bandwidth and lack of processing capacity. You can only scale up to so many instances of editing software before you hit the memory-bandwidth limit.
huh? the original m1 is capable of this, you don't even need an m1pro/max or ultra
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,324
Location
UK
If you are editing multi-camera ProRes video, you can do that with a standard MacBook Pro. The MBP is advertised as being able to accomplish that task. In all likelihood, if that is your primary use, then it's a business. Like I said, if you can't make money from using this new $4000 Apple toy, then save your money.
Does that MBP cost $500? Why don't you accept that you were wrong instead of muddying your message?
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,324
Location
UK
The latest MBP is not fully capable of editing multi-camera ProRes video due to two limiting factors: lack of memory bandwidth and lack of processing capacity. You can only scale up to so many instances of editing software before you hit the memory-bandwidth limit.
Please do not talk about a subject that you obviously have no knowledge of.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
Have you tried an Intel MacBook Pro using an eGPU with the Radeon RX 6900 XT?
I had an i7 MacBook Pro from my employer. (I don't remember if that was the exact GPU model.) Excellent performance for its day as a desktop system with a monitor. As a laptop used on my lap... too much heat, but I hardly ever used it that way. The new M1-based MacBooks are awesome.
 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,324
Location
UK
You really should take your own advice. :D

If I were you, I would apologise for making baseless comments before continuing to muddy the thread.

Get a standard $500 to $1000 laptop and it will work fine for ANY personal computer need. :D
 
Last edited:

stevenswall

Major Contributor
Forum Donor
Joined
Dec 10, 2019
Messages
1,366
Likes
1,075
Location
Orem, UT
Apple just announced the new M1 Ultra, which is two M1 Max chips connected via a silicon interposer in a package, combined with up to 128GB of DDR5 memory. I'm impressed.


I thought this was something different from DDR5. I didn't know DDR5 had anywhere close to the memory bandwidth of 600GB/s, or whatever Apple claimed in the release. They might have said "10x as fast as competing RAM," if I remember correctly.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
I thought this was something different from DDR5. I didn't know DDR5 had anywhere close to the memory bandwidth of 600GB/s, or whatever Apple claimed in the release. They might have said "10x as fast as competing RAM," if I remember correctly.
As I pointed out in post #27, Apple is using LPDDR5-6400, which it can use more easily because the DRAM chips are in the CPU package. It's not that the RAM itself is 10x as fast; it's the aggregate bandwidth. The M1 Max is rated at 400GB/s of memory bandwidth, while even the latest 16-core Intel i9-12900K manages only 76.8GB/s: the Intel CPU has just two memory channels, and the fastest memory it can use is DDR5-4800. The M1 Ultra is said to be capable of 800GB/s, according to Apple.
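As a sanity check on those figures, peak bandwidth is just transfers per second times bytes per transfer times channel count. A quick sketch in Python (the 512-bit bus width for the M1 Max is from reported specs rather than official documentation, so treat it as an assumption):

```python
def peak_bandwidth_gb_s(mt_per_s: int, bus_bits: int, channels: int = 1) -> float:
    """Theoretical peak memory bandwidth in GB/s:
    megatransfers/s x bytes per transfer x number of channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

# Intel i9-12900K: two 64-bit channels of DDR5-4800
print(peak_bandwidth_gb_s(4800, 64, channels=2))  # 76.8 GB/s

# M1 Max: LPDDR5-6400 on a 512-bit bus (reported spec, assumed here)
print(peak_bandwidth_gb_s(6400, 512))             # 409.6 GB/s; Apple rounds to 400
```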

 

sarumbear

Master Contributor
Forum Donor
Joined
Aug 15, 2020
Messages
7,604
Likes
7,324
Location
UK
I thought this was something different than DDR5. I didn't know DDR5 had anywhere close to the memory bandwidth of 600GB/s or whatever Apple claimed in the release. They might have said "10x as fast as competing RAM" if I remember correctly.
The memory bandwidth of the M1 Ultra is 800GB/s!

 
Last edited:

_thelaughingman

Major Contributor
Forum Donor
Joined
Jan 1, 2020
Messages
1,363
Likes
2,045
Please do not talk about a subject that you obviously have no knowledge of.
Wow, the hostility in this thread! I'm dipping out of this dumpster fire.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,497
So this new Mac Studio thing is being claimed by Apple to be more powerful WHILE being more power efficient than an RTX 3090+12900K system.

What the hell?
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,766
Likes
37,624
So this new Mac Studio thing is being claimed by Apple to be more powerful WHILE being more power efficient than an RTX 3090+12900K system.

What the hell?
I think it's true. The new Apple chips are really something.
 

Tks

Major Contributor
Joined
Apr 1, 2019
Messages
3,221
Likes
5,497
I think it's true. The new Apple chips are really something.
It's just so weird if so. A company that just started designing its own chips is demolishing perf/watt figures from dedicated players in each field? Intel on the CPU front, and Nvidia on the GPU front.

The only question is: what the HELL are these two other clowns doing, then? Is x86 that much of a drag? Because if it isn't (and it seems it really isn't, since there are software translation layers that allow compatibility with Windows apps, albeit with decreased performance), how are they getting trounced by a company like Apple?

A 3090 being beaten by what is essentially a NUC system is simply hilarious. With the prices of some of these GPUs on the second-hand market, it almost makes biting the bullet on the Apple tax for this pricey system a decent deal. Certainly the build quality looks to be really nice, as is always the case with Apple, so...

This is just embarrassing. I can only imagine what's coming with the Mac Pro announcement...
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
So this new Mac Studio thing is being claimed by Apple to be more powerful WHILE being more power efficient than an RTX 3090+12900K system.

What the hell?
x86 processors are not especially power-efficient. Complex Instruction Set Computer (CISC) designs, like the x86, try to attain high performance by adding more and more application-specific instructions. The thinking is that an application-specific instruction (e.g., there's one for the iSCSI CRC-32c calculation) is better than executing a lot of simple instructions to perform the same function. (The alternative is the Reduced Instruction Set Computing (RISC) paradigm, which ARM uses for its CPU cores.) While CISC sometimes does have an advantage for individual operations, across a whole workload it often does not. Modern client CPUs, like those Intel and AMD produce, and the M1, include so-called accelerators (application-specific CPU-offload processors) for functions like graphics, networking, security, and "neural processing" anyway, negating many of the supposed CISC benefits. Add in the fact that many x86 instructions are actually executed as microcode (running on more primitive RISC-like engines embedded in the CPU architecture, no less; a case of irony), and that ARM's fixed-length instructions are easier to decode in hardware than x86's variable-length instructions, and it becomes possible for a well-designed RISC CPU with the right accelerators, not to mention a superior fabrication process, to outperform a classic CISC CPU at a lower power level.
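To make the decode point concrete, here's a toy sketch in Python (the encodings are made up for illustration; real ARM and x86 decoding is far more involved). With a fixed-width ISA, every instruction boundary is known up front, so a wide decoder can slice out many instructions in parallel; with variable-length instructions, each boundary is only known after the previous instruction has been at least partially decoded.

```python
# Toy illustration only: made-up encodings, not real ARM or x86 formats.

def decode_fixed(stream: bytes, width: int = 4) -> list[bytes]:
    """Fixed-width ISA: instruction i always starts at i*width, so all
    boundaries are known up front and decoding can proceed in parallel."""
    return [stream[i:i + width] for i in range(0, len(stream), width)]

def decode_variable(stream: bytes) -> list[bytes]:
    """Variable-length ISA: each instruction's length (pretend the low two
    bits of its first byte encode it) is only known after inspecting that
    byte, so finding the boundaries is inherently sequential."""
    instructions, i = [], 0
    while i < len(stream):
        length = (stream[i] & 0b11) + 1    # 1-4 bytes in this toy format
        instructions.append(stream[i:i + length])
        i += length                        # next boundary depends on this decode
    return instructions

print(decode_fixed(bytes(range(8))))                         # two 4-byte instructions
print(decode_variable(bytes([0x02, 1, 2, 0x00, 0x01, 5])))   # 3-, 1-, and 2-byte instructions
```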

The M1 is also a marvel of modern CPU design. For example, the M1 can decode 8 instructions at once, while Intel's recently announced Sapphire Rapids CPU cores (last I checked) decode only 6. (The previous generation was only 4-wide.) There's also the previously discussed matter of the M1 Max having several times the memory bandwidth of an Intel i9, using faster memory chips. Intel and AMD have a lot of catching up to do.

If you're into the mucky details, this article is the best I've found on the M1 implementation.


Here's a similar discussion for Intel's latest:


As you can see, Intel is still bloating their instruction set to try to compete. It's interesting that Intel and AMD have both been forced into chiplets ("tiled dies" in Intel-speak) while Apple can make do with a single CPU die (or two of them on an interposer, in the Ultra's case). IMO, Apple has become the premier CPU designer on the planet, possibly followed by IBM with their brilliant Power10.

Everything I've read says x86 CPUs are still better for gaming, but Apple is still early in its development timeline.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,766
Likes
37,624
It's just so weird if so. A company that just started designing its own chips is demolishing perf/watt figures from dedicated players in each field? Intel on the CPU front, and Nvidia on the GPU front.

The only question is: what the HELL are these two other clowns doing, then? Is x86 that much of a drag? Because if it isn't (and it seems it really isn't, since there are software translation layers that allow compatibility with Windows apps, albeit with decreased performance), how are they getting trounced by a company like Apple?

A 3090 being beaten by what is essentially a NUC system is simply hilarious. With the prices of some of these GPUs on the second-hand market, it almost makes biting the bullet on the Apple tax for this pricey system a decent deal. Certainly the build quality looks to be really nice, as is always the case with Apple, so...

This is just embarrassing. I can only imagine what's coming with the Mac Pro announcement...
As well as what blueone said, Intel is still on a much larger lithography for its chips. They are on 10 nm and claim they will soon have 7 nm chips; they've struggled to get yields at smaller sizes. That is how AMD caught up and passed them. AMD has had 7 nm for a while and is, I think, getting into 5 nm production. Apple is on 5 nm with the M-series chips. With these chips, Apple is now in a sense a hardware bargain if you need the power. One of the fast Ryzen CPUs, which might run about as fast, slightly faster, or slightly slower depending on the task, costs more than an M1 Mac Mini, which is an entire NUC-sized machine. The Ryzen CPU would also use much more power than the entire Mini.
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,195
Likes
1,545
Location
USA
As well as what blueone said, Intel is still on a much larger lithography for its chips. They are on 10 nm and claim they will soon have 7 nm chips; they've struggled to get yields at smaller sizes. That is how AMD caught up and passed them. AMD has had 7 nm for a while and is, I think, getting into 5 nm production. Apple is on 5 nm with the M-series chips. With these chips, Apple is now in a sense a hardware bargain if you need the power. One of the fast Ryzen CPUs, which might run about as fast, slightly faster, or slightly slower depending on the task, costs more than an M1 Mac Mini, which is an entire NUC-sized machine. The Ryzen CPU would also use much more power than the entire Mini.
Your premise is basically correct. A better fab process can cover for many architecture and design inefficiencies, and Intel had that advantage for years. Nonetheless, I recommend ignoring the so-called "node names" the fabrication industry uses for its process designations. In reality, Intel's 10nm process is approximately as dense as TSMC's 7nm process, which is why Intel renamed its 10nm process "Intel 7" in a fit of marketing-speak. TSMC has moved on to a 5nm process for Apple and a few other selected customers, and it is indeed much better than Intel 7. Intel won't have a competing process until 2024, called Intel 20A, the "A" standing for angstroms. TSMC is promising its 3nm process for 1Q23, which likely keeps it at least a generation ahead of Intel for years yet, and TSMC claims a 2nm process is in development and on track. It should be interesting to see if Intel can catch up.
 

jae

Major Contributor
Joined
Dec 2, 2019
Messages
1,208
Likes
1,509
I was a Mac user for almost 10 years; after some big frustrations with OS X I went back to Linux (laptop, personal/work) and Windows (desktop: rendering/gaming/media/misc). I was drooling over the M1 hardware for the power/efficiency, especially as my ThinkPad is falling apart and starting to get sluggish, the battery is cooked, and there's a broken hinge I've been too lazy to fix. But I was really reluctant to go back to macOS. I caved and bought an M1 Pro and an M1 Max with the intent of comparing practical performance with respect to battery life and returning one of them. Unfortunately there were massive delays getting the Max and I received the Pro early, so I tried the Pro and had to return it before getting the Max. I'm currently using the Max and deciding whether I'm going to keep it.

The keyboard is complete rubbish, as expected; probably worse than my first MacBook Pro from almost 14 years ago. It's practically torture typing on this thing after using nothing but a Lenovo ThinkPad and an IBM Model M keyboard for the better part of the last decade, so I'm not sure I will ever be happy with it. On the software side, I would say the modern Windows 10/11 UI is simply more to my taste, easier on the eyes, and less frustrating to deal with, although I can understand how the UX for the average non-techy person will probably still be better on the Mac. No idea why Apple is obsessed with rounded screens, notches, rounded borders, and drop shadows on windows, which you can't even turn off via the terminal anymore.

No real complaints about the hardware; it's very impressive. I was after the longest battery life possible, and with most laptops rarely having a max-capacity battery anymore and lacking the efficiency of the M1, it definitely exceeds expectations. The Pro seemed to have marginally better battery life for my use case, but as I had to return it, that ship has sailed. If I don't keep the Max, the only other laptop I can really see myself getting is another ThinkPad T-series or X1 Carbon, both of which have 40-50% less battery capacity and less efficiency.

With Nvidia now showing disinterest in acquiring ARM, there probably won't be many better options than Apple as far as good mobile hardware is concerned for quite a while.
 