
New Macs

GeorgeWalk

Senior Member
Joined
Sep 6, 2019
Messages
472
Likes
792
D'oh! Yes, sorry, mistyped - I indeed meant to write "allows Intel apps to run on Apple silicon Macs."

I hear your and others' points about the performance penalty, and as noted in my prior comment I also get that Apple's speed comparisons are not to be taken as average use cases. At the same time, the original Rosetta had what I believe was a pretty minor speed hit for non-graphics-intensive PowerPC apps running in emulation on Intel Macs, and the performance bump from the current Intel chips to the M1 - integrated graphics included - appears to be larger than the jump from PowerPC to the first Intel chips back in 2006 or whenever.

I could be wrong of course - but I don't know that there's yet evidence to call Apple's M1 performance claims BS. Rosy, cherry-picked, sure - but not necessarily false or uniformly exaggerated. There are already metrics out there, I believe, comparing the A14X and other Apple chips to various Intel chips, and they stack up shockingly well.

Looks like some benchmarks on Geekbench showed up. It looks pretty good.
https://www.zdnet.com/article/apple...911203533858162948&mid=13163194&cid=716932111
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,196
Likes
1,548
Location
USA
It will be very interesting to finally see an apples-to-apples comparison. It's been evident for a while that Apple devices were untouchable in performance-per-watt, but it's pretty remarkable to see the suggestion that they may well be right on par in absolute performance with the cutting-edge Ryzen processors that were literally just released and represent a non-trivial step up over previous generations. Having just purchased a new Ryzen 7 3700X desktop (which might actually become a server), I'm not sure I have any immediate interest in these machines, particularly given the 16GB limit. They may well represent a pretty disruptive entry into the market though, particularly since they don't come with a price bump over Apple's already-premium pricing.

I have a suspicion that a substantial part of the M1 performance advantage is having DRAM dies in the SoC package. I have no information about what the interconnect is to the embedded DRAM dies, but I highly doubt they used a DDRx-like half-duplex strategy. So far I've not seen a discussion of the memory interconnect specs, but I'm guessing they use the full-duplex SoC fabric. I would guess the performance increase due to a full-duplex strategy is quite large.
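
To make that concrete: here's the sort of crude streaming-read probe I'd use to compare memory subsystems. A minimal sketch only; the buffer size and pass count are arbitrary picks of mine, not anything from Apple's specs, and a real comparison would also need write and copy patterns.

```c
/* Crude streaming-read bandwidth probe (illustrative sketch only;
 * buffer size and pass count are arbitrary, not from any spec). */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define BUF_BYTES (256u * 1024 * 1024)  /* 256 MiB: well past any cache */
#define PASSES    8

int main(void) {
    size_t n = BUF_BYTES / sizeof(uint64_t);
    uint64_t *buf = malloc(BUF_BYTES);
    if (!buf) return 1;
    for (size_t i = 0; i < n; i++) buf[i] = i;  /* touch every page */

    struct timespec t0, t1;
    volatile uint64_t sink = 0;  /* defeats dead-code elimination */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int p = 0; p < PASSES; p++) {
        uint64_t sum = 0;
        for (size_t i = 0; i < n; i++) sum += buf[i];  /* sequential reads */
        sink += sum;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gib  = (double)BUF_BYTES * PASSES / (1024.0 * 1024.0 * 1024.0);
    printf("sequential read: %.1f GiB/s\n", gib / secs);
    free(buf);
    return 0;
}
```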
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,196
Likes
1,548
Location
USA
They better start working on a 32GB version.

Since they're using a so-called "chiplet" strategy (multiple dies, probably from different fab processes, within the same SoC package), it wouldn't surprise me at all if the memory limitation was a multi-variable cost, power-consumption, and cooling decision. Add more DRAM dies and you increase power consumption, which may demand a different cooling solution and a larger, therefore more costly, chip package; and for laptops you might not get the battery life or cost advantages Apple is looking for. Also, my guess is that, as a fraction of their total Mac market, the demand for memory sizes greater than 16GB is smaller than the geeks here might guess.
 

Vasr

Major Contributor
Joined
Jun 27, 2020
Messages
1,409
Likes
1,926
Also, my guess is that, as a fraction of their total Mac market, the demand for memory sizes greater than 16GB is smaller than the geeks here might guess.

It is not like gamers are rushing to Macs. :)

I wonder how much the COVID-19 lockdowns and the lack of travel and public exposure cut into Mac ultra-portable sales. Sitting in a coffee shop being seen with the latest Mac was part of the appeal to the hipsters.

But seriously, when my laptop died recently I didn't replace it, because the need has decreased significantly and the iPad has stepped up to cover the limited need during these times. Most meetings are virtual over Zoom.

On the few occasions I have needed to be out, the phone and the iPad (w/ external keyboard) running TeamViewer to my home computers have been more than enough.

I do recommend TeamViewer highly. It's better than any other remote desktop solution I've tried, and you don't need to punch a hole in firewalls or VPN in to access internal computers. It's much more responsive than VNC, etc., and the free version is good enough for personal use.
 

dwkdnvr

Senior Member
Joined
Nov 2, 2018
Messages
418
Likes
698
I have a suspicion that a substantial part of the M1 performance advantage is having DRAM dies in the SoC package. I have no information about what the interconnect is to the embedded DRAM dies, but I highly doubt they used a DDRx-like half-duplex strategy. So far I've not seen a discussion of the memory interconnect specs, but I'm guessing they use the full-duplex SoC fabric. I would guess the performance increase due to a full-duplex strategy is quite large.
I think you're right that this is a big part of the improvement, but the article linked earlier does go into the architecture a bit, and there are definitely other aspects. There is an 8-wide instruction decode stage vs. 4 in x64 designs, and the out-of-order instruction buffer is about 2x as deep; relatedly, the number of queued loads/stores is also higher, which I think is beneficial with the deeper buffer. My read is that these changes probably allow the CPU to run without stalling as frequently and thus improve overall throughput.
I suspect there is a certain amount of 'the whole is greater than the sum of the parts' in play too, though - the ability to control the entire hardware and software stack together makes it possible to optimize in ways that aren't possible in the x86 world.
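
As a toy illustration (my own construction, not from the linked article) of why the deeper out-of-order resources matter: a single-accumulator reduction is one long dependency chain, while splitting it into four independent accumulators gives the scheduler work it can keep in flight. On a wide core the second loop should run noticeably faster:

```c
/* Toy dependency-chain demo (my own construction). Compile with
 * auto-vectorization disabled (e.g. -fno-tree-vectorize on gcc) or
 * the compiler hides the effect by rewriting the loops itself. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

#define N      (1u << 16)   /* 64 Ki u64 values = 512 KiB, cache-resident */
#define PASSES 4096

static double now_s(void) {
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec + t.tv_nsec / 1e9;
}

/* One accumulator: every add waits on the previous one. */
static uint64_t sum_serial(const uint64_t *a) {
    uint64_t s = 0;
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < N; i++)
            s += a[i];
    return s;
}

/* Four accumulators: four independent chains the out-of-order
 * engine can overlap, given enough buffer depth. */
static uint64_t sum_ilp(const uint64_t *a) {
    uint64_t s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < N; i += 4) {
            s0 += a[i];     s1 += a[i + 1];
            s2 += a[i + 2]; s3 += a[i + 3];
        }
    return s0 + s1 + s2 + s3;
}

int main(void) {
    uint64_t *a = malloc(N * sizeof *a);
    if (!a) return 1;
    for (size_t i = 0; i < N; i++) a[i] = i;

    double t = now_s();
    uint64_t r1 = sum_serial(a);
    double serial = now_s() - t;

    t = now_s();
    uint64_t r2 = sum_ilp(a);
    double ilp = now_s() - t;

    printf("serial: %.3fs  4-way: %.3fs  (sums equal: %d)\n",
           serial, ilp, r1 == r2);
    free(a);
    return 0;
}
```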
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,782
Likes
8,178
Does it still run well? I have an iMac that I bought around 2010, and while I can get it up and running, it isn't that fast and overheats easily; the fans kick on and stay on even in standby, so I just keep it off.
My mother has a Mac Mini she bought a few years ago, and all she really does on it is spreadsheets; that too has become unbearably slow, with lots of beachball waiting.

That said, I have a 2013 MBP from my work and it holds up.
As for the keyboard, I was using someone's newish MB with the Touch Bar and I hated the keyboard. So if that is still the current gen (and somehow it was worse before), I don't know how I feel about that.

The normal iPad in 32GB for like $300 is a deal that's really screaming to me though.
That said, I did just order the new 12 Pro Max, and the tiny increase in screen size over my current 11 Pro Max may hold me over (can’t be spending too much money).

Anyone know if the iPad can run the browser version of Google Sheets with ease? Running it on my iPhone causes crashes (the app is fine, but isn’t fully featured).

My iMac is the late 2009, 27" screen, lower-end model, so it has a lowly 3.06GHz Core 2 Duo rather than the early "i" series (i3 maybe?) CPU that the top-end iMac of the time had.

Still, it runs just fine for word processing, Zoom, web browsing, YouTube, audio editing, music playback, lightweight Photoshop work, spreadsheets, photo management, and much more. What has kept the iMac humming along is that I removed the original hard drive and replaced it with a 500GB SSD. And when the optical drive finally bit the dust, I got one of those optical-to-laptop-drive adapters and put a 1TB SSD in that spot.

The 27" model is also nice because in addition to the massive amount of workspace it offers, it also has notably more airspace inside than the 21.5" model, meaning it stays cooler. My CPU temp sensor shows 100 degrees Fahrenheit (38 Celsius) or less under most loads, and I can't remember the last time it went over 120F (49C) under any circumstances.

Of course, the internal case volume is not the only reason for the lower temps. The SSD instead of a spinning HD helps, and I imagine both SSDs add even more air space since they take up less space than the original HDD and optical drive. But I think the low temps are also due to the limitations of the Core 2 Duo CPU: it probably doesn't get super hot or thermal throttle much because it's simply not that powerful.

I definitely notice the speed issues in comparison to my early 2020 MacBook Air, which has twice as many cores and a CPU that, while lower-end, is still several generations more advanced than my old iMac's. So for example if I use XLD to convert a music album from FLAC to Apple Lossless, the Air rips through the process at warp speed, while the iMac is snappy enough but takes 1.5 to 2 times as long. Similarly, when I play high-res YouTube videos on the iMac, sometimes there will be a brief stutter a couple of seconds in while it buffers or something. The rest of the video plays fine, though. Ditto for Zoom - if I have a Zoom meeting going, and I'm switching to multiple other apps to refer to documents or check email, I will get some minor stuttering of the video on Zoom while it's in the background, which never happens on the newer Air.

But overall I have been quite happy and quite pleasantly surprised at how usable this Mac has remained over all these years. I did have to replace the CPU fan once, which sucked because doing so requires disassembling almost the entire computer - and in the process I accidentally connected the microphone cable to the wrong motherboard port and now the mic no longer works. I corrected the connection error but no dice. So I had to get a $40 external USB mic (because I didn't want to buy a mic cable off eBay, open up the unit again, replace the cable, and then find out the cable isn't the source of the problem). So it's definitely been through the wars, but it still looks great and works fine. With a little luck it will carry me through for another year or so, when I believe the first Apple Silicon iMacs will be released and I can at long last give the old iMac a long-deserved rest. :)
 
Last edited:

Alexanderc

Addicted to Fun and Learning
Forum Donor
Joined
Jun 11, 2019
Messages
641
Likes
1,018
Location
Florida, USA
I have a late 2013 MacBook Pro that has been and remains my daily-driver home and work computer. I see that this is now the oldest MBP that supports the newest MacOS, so I'll probably be buying one of these new laptops in the next few months.
 

GeorgeWalk

Senior Member
Joined
Sep 6, 2019
Messages
472
Likes
792
I have a late 2013 MacBook Pro that has been and remains my daily-driver home and work computer. I see that this is now the oldest MBP that supports the newest MacOS, so I'll probably be buying one of these new laptops in the next few months.

Yes, my current MBP is mid-2012, so I can't get the newest MacOS. Another reason I ordered the M1 MBP.
 

Veri

Master Contributor
Joined
Feb 6, 2018
Messages
9,599
Likes
12,041
I for one am quite attached to x64. I like being able to boot a Windows partition from my MacBook and run some apps or games natively on the 'other side'. The performance of this new silicon looks amazing, but I'll wait a gen or two and see where this goes...
 

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,196
Likes
1,548
Location
USA
I think you're right that this is a big part of the improvement, but the article linked earlier does go into the architecture a bit, and there are definitely other aspects. There is an 8-wide instruction decode stage vs. 4 in x64 designs, and the out-of-order instruction buffer is about 2x as deep; relatedly, the number of queued loads/stores is also higher, which I think is beneficial with the deeper buffer. My read is that these changes probably allow the CPU to run without stalling as frequently and thus improve overall throughput.
I suspect there is a certain amount of 'the whole is greater than the sum of the parts' in play too, though - the ability to control the entire hardware and software stack together makes it possible to optimize in ways that aren't possible in the x86 world.

It is really difficult to compare the ARM CPU microarchitecture to x86 because ARM is RISC, while AMD and Intel are CISC with micro-op execution engines. (It is sort of weird that the micro-op engines are technically RISC, so Intel and AMD are CISC over RISC for many instructions.) RISC architectures tend to take more cache to make them efficient, which might explain part of the SRAM allocation.

It has been a complaint of mine for many years now, ever since Intel went to the micro-op architecture actually, that the x86 vendors were spending too many resources on compatibility and not enough on innovation. IMO, not only has Apple now exceeded them, but so has IBM (Power CPUs; the Power10 is awesome), and I think Oracle did nice work with the M7 before they killed it. Because the efficacy of these CPU architectures is so dependent on the target applications, I'm not really sure which is best for which application.

I do think IBM and Apple have shown far more original thinking and innovation with their CPUs than Intel or AMD have. I think for client systems it is also possible that the accelerators and their integration into the overall CPU/memory architecture may be more important than the intrinsic speed of the primary CPU cores in benchmarks. The M1 also includes "big" and "small" cores, which complicates software development a bit, but I doubt that scared Apple much. I think that design is better than hardware multi-threading, which software is often finicky to tune for. Intel apparently agrees about the importance of accelerators, if you look at their acquisitions over the past few years and how hard they're pushing the new CXL interconnect.
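
As an aside on the big/small cores: macOS exposes the split to software as quality-of-service classes rather than explicit core pinning, so the development burden is mostly about labeling work honestly. A minimal sketch using the standard pthread QoS call (where threads actually land is still the scheduler's call):

```c
/* Minimal sketch: how macOS software sees the big/small core split.
 * Threads declare a QoS class; the scheduler steers BACKGROUND work
 * toward efficiency cores. Standard pthread QoS API; actual core
 * placement is entirely up to the OS. */
#include <pthread.h>
#include <pthread/qos.h>
#include <stdio.h>

static void *background_work(void *arg) {
    /* Throughput-insensitive work: eligible for the small cores. */
    pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0);
    volatile unsigned long x = 0;
    for (unsigned long i = 0; i < 100000000UL; i++) x += i;
    (void)arg;
    return NULL;
}

int main(void) {
    /* Latency-sensitive main thread: favored for the big cores. */
    pthread_set_qos_class_self_np(QOS_CLASS_USER_INTERACTIVE, 0);

    pthread_t bg;
    pthread_create(&bg, NULL, background_work, NULL);
    pthread_join(bg, NULL);
    puts("done");
    return 0;
}
```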

Overall, I'm very impressed with what Apple has done. The irony is that their CPU design team hired several key senior engineers away from Intel for M1 development (or so it seems from LinkedIn) over the past few years, so if Intel had let those people innovate, who knows where they might be today.

Personally, I still think integrating the DRAM into the SoC was a brilliant move. I doubt Intel and AMD will follow suit immediately, but perhaps they should consider some SKUs for PC implementations.
 

stevenswall

Major Contributor
Forum Donor
Joined
Dec 10, 2019
Messages
1,367
Likes
1,075
Location
Orem, UT
Access to iOS apps is potentially huge, but I'll reserve judgement 'till I see how they make the transition from touchscreen to trackpad.

For no sensible reason save that it seemed really, really cool, I got a new camera which produces 240 megapixel pixel-shifted images and I already know that an 8 GB Mac won't handle raw files that big, but I should try it on my 16 GB Windows notebook. Have so far only tried it on a 32 GB system, and it works fine there. But I wonder what it'd be like to stitch several such images together into a panorama...

I swear the multi-hundred-megapixel images from Hasselblad cameras aren't even 8GB. Should be just fine. A compressed 240MP photo would still only be barely hundreds of megabytes. Ten times that would be... 2GB.

How much RAM did the system use on the 32GB system?
 

maxxevv

Major Contributor
Joined
Apr 12, 2018
Messages
1,872
Likes
1,964
What were you planning on running that needs more than 16GB?

I run industrial CAD software on my Thinkpad P52. I'm currently at 80GB of RAM; it will get stuffed up to 128GB in the next 6 months or so.
 

ElNino

Addicted to Fun and Learning
Joined
Sep 26, 2019
Messages
558
Likes
727
Good lord that's amazing for an entry-level MacBook Air with 8GB RAM and no cooling fan.

Yeah, the initial benchmarks for these are nuts! This is a machine students can buy for $899 that outperforms virtually every laptop on the market except some Zen 3 (Ryzen) laptops, at least for short-term loads, all while maintaining an 18-hour battery life with an IPS screen.

I'm not an Apple devotee, but this changes the nature of what's on the market, and it's hard for me to make a case for anything else for most consumers without specialized needs (32GB+ RAM, CUDA, etc.).
 

NTK

Major Contributor
Forum Donor
Joined
Aug 11, 2019
Messages
2,720
Likes
6,014
Location
US East
I swear the multi-hundred-megapixel images from Hasselblad cameras aren't even 8GB. Should be just fine. A compressed 240MP photo would still only be barely hundreds of megabytes. Ten times that would be... 2GB.

How much RAM did the system use on the 32GB system?
Unfortunately photo processing can only be done with uncompressed images. Given your example, a 240 mp 16-bit raw photo will take 240 x 3 (3 color channels) x 2 (2 bytes per channel) = 1440 MB of RAM to keep the picture in memory. If processing is complicated enough that it cannot be done "in-place", then you'll need RAM for both input and output, i.e. ~3 GB of RAM for just data for a 240 mp pic.

Memory usage can go up fast. If you want to focus stack a dozen (or more) images, it is easy to run out of RAM.
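
Spelled out as a throwaway calculation, with the pixel count, channel layout, and stack depth as example numbers only:

```c
/* The arithmetic above spelled out; pixel count, channels, bit depth,
 * and stack depth are example numbers, not from any specific camera. */
#include <stdio.h>

int main(void) {
    double megapixels = 240.0;  /* e.g. a 240MP pixel-shifted image */
    int channels = 3;           /* RGB */
    int bytes_per_channel = 2;  /* 16-bit */

    double one_image_mb = megapixels * channels * bytes_per_channel;
    printf("one uncompressed image: %6.0f MB\n", one_image_mb);
    printf("input + output copies : %6.0f MB\n", 2 * one_image_mb);

    int stack_depth = 12;       /* a dozen focus-stacked frames */
    printf("%d-image focus stack  : %6.1f GB\n",
           stack_depth, stack_depth * one_image_mb / 1000.0);
    return 0;
}
```

That last figure is already past 16GB before the OS and the application take their share.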
 
OP

Ron Texas

Master Contributor
Forum Donor
Joined
Jun 10, 2018
Messages
6,249
Likes
9,389
Unfortunately photo processing can only be done with uncompressed images. Given your example, a 240 mp 16-bit raw photo will take 240 x 3 (3 color channels) x 2 (2 bytes per channel) = 1440 MB of RAM to keep the picture in memory. If processing is complicated enough that it cannot be done "in-place", then you'll need RAM for both input and output, i.e. ~3 GB of RAM for just data for a 240 mp pic.

Memory usage can go up fast. If you want to focus stack a dozen (or more) images, it is easy to run out of RAM.

I'm making stitched panoramas on a 16GB machine. Even with a dozen 45MP images to start with, the merge to a 180MP image goes smoothly. That said, my new XPS 13 notebook has 32GB and I'm considering getting a dock and retiring the desktop.
 

q3cpma

Major Contributor
Joined
May 22, 2019
Messages
3,060
Likes
4,419
Location
France
What were you planning on running that needs more than 16GB?
I use Gentoo, and compiling massive piles of C++ shit like qtwebengine can max out my 16 GB. Doing stuff like image filtering (waifu2x to remove JPEG artifacts and/or double resolution, imagemagick -distort resize for good scaling, zopflipng/oxipng for final compression) in 16 parallel threads exhausted almost all of it too; it will be a lot worse once I upgrade to a 12- or 16-core Zen 3/4 CPU.

(thank god for zram/zswap, though)
 
Last edited:

blueone

Major Contributor
Forum Donor
Joined
May 11, 2019
Messages
1,196
Likes
1,548
Location
USA
I use Gentoo, and compiling massive piles of C++ shit like qtwebengine can max out my 16 GB. Doing stuff like image filtering (waifu2x to remove JPEG artifacts and/or double resolution, imagemagick -distort resize for good scaling, zopflipng/oxipng for final compression) in 16 parallel threads exhausted almost all of it too; it will be a lot worse once I upgrade to a 12- or 16-core Zen 3/4 CPU.

You’re definitely outside of Apple’s initial target market for the M1. Lack of 10GbE support will also keep the M1 machines out of the media development industry to a significant extent.
 