If you can't make extra money with this new M1 Max boat anchor, best to leave it for someone else to buy.
A boat anchor? That would be a pretty small boat. The Ultra model weighs 3.6kg.
So, a couple of things... Apple laptops used to have a great repair record, but in the last 5 years the keyboards have been horrible, and even with some accommodations by Apple for the problem, they still refuse to fix lots of the MBP keyboards that fail, making the unit totally useless.
The latest MBP is not fully capable of editing multi-camera ProRes video due to two limiting factors: lack of memory bandwidth and lack of processing capacity. You can only scale up to so many instances of editing software before you hit the memory bandwidth limit.
> Latest MBP is not fully capable of editing multi-camera ProRes video due to two limiting factors: lack of memory bandwidth and processing capacity. You can only scale up to multiple instances of editing software until you hit the limiter on memory bandwidth.

Huh? The original M1 is capable of this; you don't even need an M1 Pro/Max or Ultra.
> If you are editing multi-camera ProRes video you can do that with a standard MacBook Pro. The MBP is sold advertising it can accomplish that task. In all likelihood, if that is your primary use then it's a business. Like I said, if you can't make money from using this new $4000 Apple toy, then save your money.

Does that MBP cost $500? Why don't you accept that you were wrong instead of muddying your message?
> Latest MBP is not fully capable of editing multi-camera ProRes video due to two limiting factors: lack of memory bandwidth and processing capacity. You can only scale up to multiple instances of editing software until you hit the limiter on memory bandwidth.

Please do not talk on a subject that you obviously have no knowledge of.
> Please do not talk on a subject that you obviously have no knowledge of.

You really should take your own advice.
> Have you tried an Intel MacBook Pro using an eGPU with the Radeon RX 6900 XT?

I had an i7 MacBook Pro from my employer. (I don't remember the exact GPU model.) Excellent performance for its day as a desktop system with a monitor. As a laptop used on my lap... too much heat, but I hardly ever used it that way. The new M1-based MacBooks are awesome.
Get a standard $500 to $1000 laptop and it will work fine for ANY personal computer need.
Apple just announced the new M1 Ultra, which is two M1 Max chips connected via a silicon interposer in a package, combined with up to 128GB of DDR5 memory. I'm impressed.
Apple unveils M1 Ultra, the world’s most powerful chip for a personal computer
Apple today announced M1 Ultra, the next giant leap for Apple silicon and the Mac. (www.apple.com)
> I thought this was something different than DDR5. I didn't know DDR5 had anywhere close to the memory bandwidth of 600GB/s or whatever Apple claimed in the release. They might have said "10x as fast as competing RAM" if I remember correctly.

As I pointed out in post #27, Apple is using DDR5-6400, which it can use more easily because the DRAM chips sit in the CPU package. It's not that the RAM is 10x as fast: at release, when the M1 was capable of 600GB/s of memory bandwidth, a high-end Intel i9 was capable of less than 1/10th of that. For example, even the latest 16-core Intel i9-12900K is only capable of 76.8GB/s of memory bandwidth, since the Intel CPU has only two memory channels and the fastest memory it can use is DDR5-4800. The M1 Ultra is said to be capable of 800GB/s of memory bandwidth, according to Apple.
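Those bandwidth figures fall straight out of transfer rate times bus width. A minimal sketch of the arithmetic (the 512-bit and 1024-bit LPDDR5 bus widths for the M1 Max and M1 Ultra are widely reported figures, not something stated in this thread):

```python
def peak_bandwidth_gbs(transfers_mt: float, bus_width_bits: int) -> float:
    """Peak DRAM bandwidth in GB/s: (mega-transfers/s) * (bytes per transfer)."""
    return transfers_mt * 1e6 * (bus_width_bits / 8) / 1e9

# Intel i9-12900K: two 64-bit channels of DDR5-4800
print(peak_bandwidth_gbs(4800, 128))   # 76.8 GB/s

# M1 Max: 512-bit LPDDR5-6400 bus; the Ultra doubles it to 1024 bits
print(peak_bandwidth_gbs(6400, 512))   # 409.6 GB/s
print(peak_bandwidth_gbs(6400, 1024))  # 819.2 GB/s
```

Which is why the comparison is really about bus width, not DDR5 itself: with the DRAM in the package, Apple can afford a bus 4 to 8 times wider than a socketed desktop CPU's two channels.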
> I thought this was something different than DDR5. I didn't know DDR5 had anywhere close to the memory bandwidth of 600GB/s or whatever Apple claimed in the release. They might have said "10x as fast as competing RAM" if I remember correctly.

The memory bandwidth of the M1 Ultra is 800GB/s!
> Please do not talk on a subject that you obviously have no knowledge of.

Wow, the hostility on this! I'm dipping out of this dumpster fire.
> So this new Mac Studio thing is being claimed by Apple to be more powerful WHILE being more power efficient than an RTX 3090 + 12900K system.
>
> What the hell?

I think it's true. The new Apple chips are really something.
> I think it's true. The new Apple chips are really something.

It's just so weird if so... A company that just started designing their own chips is demolishing perf/watt figures from dedicated players in each field? Intel on the CPU front, and Nvidia on the GPU front.
> So this new Mac Studio thing is being claimed by Apple to be more powerful WHILE being more power efficient than an RTX 3090 + 12900K system.
>
> What the hell?

x86 processors are not especially power-efficient. Complex Instruction Set Computers (CISC), like the x86, try to attain high performance by adding more and more application-specific instructions; the thinking is that an application-specific instruction (e.g., there's one for iSCSI CRC-32C calculation) is better than executing a lot of simple instructions to perform the same function. (The Reduced Instruction Set Computing (RISC) paradigm, as ARM uses for their CPU cores, takes the opposite approach.) While CISC's performance advantage sometimes holds for individual operations, for a workload as a whole it often does not.

Modern client CPUs, like those Intel and AMD produce, and the M1, include so-called accelerators (application-specific CPU-offload processors) for various functions like graphics, networking, security, and "neural processing" anyway, negating a lot of the supposed CISC benefits. Add in the fact that many x86 instructions are actually executed in microcode (running on more primitive RISC processors embedded in the CPU architecture, no less, a case of irony), and that ARM's fixed-length instructions are easier to decode in hardware than x86's variable-length instructions, and it is possible for a well-designed RISC CPU with the right accelerators, not to mention a superior fabrication process, to outperform a classic CISC CPU at a lower power level.
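The fixed-versus-variable-length decode point is easy to illustrate: with fixed-width instructions every boundary is known up front, so several decoders can start in parallel, while with variable-width encodings each boundary depends on having decoded the previous instruction. A toy sketch (the one-byte length prefix is an invented encoding for illustration, not real x86):

```python
def fixed_width_boundaries(stream: bytes, width: int = 4) -> list[int]:
    # Every instruction start is known immediately from the index alone,
    # so a wide decoder can attack all of them in parallel.
    return list(range(0, len(stream), width))

def variable_width_boundaries(stream: bytes) -> list[int]:
    # Toy encoding: the first byte of each instruction holds its total length.
    # Boundary i+1 cannot be found until instruction i has been decoded,
    # which serializes the scan.
    offsets, pos = [], 0
    while pos < len(stream):
        offsets.append(pos)
        pos += stream[pos]
    return offsets

print(fixed_width_boundaries(bytes(8)))                      # [0, 4]
print(variable_width_boundaries(bytes([2, 0, 3, 0, 0, 1])))  # [0, 2, 5]
```

Real x86 decoders work around this with predecode bits and length-speculation hardware, but that machinery costs transistors and power that a fixed-width ISA simply doesn't need.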
> It's just so weird if so... A company that just started designing their own chips is demolishing perf/watt figures from dedicated players in each field? Intel on the CPU front, and Nvidia on the GPU front.
>
> The only question is, what the HELL are these two other clowns doing then? Is x86 that much of a drag? Because if it isn't (and it seems like it really isn't, given the software translation layers that allow some, albeit slower, compatibility for Windows apps), how are they getting trounced by a company like Apple?
>
> A 3090 being beaten by what is essentially a NUC-sized system is simply hilarious. With the prices some of these GPUs fetch on the second-hand market, it almost makes biting the bullet on the Apple tax for this pricey system a decent deal. Certainly the build quality looks really nice, as always with Apple, so...
>
> This is just embarrassing. I can only imagine what's coming with the Mac Pro announcement...

As well as what Blueone said, Intel is still on a much larger lithography. They are on 10nm and claim they will soon have 7nm chips, but they've struggled to get yields at smaller sizes. That is how AMD caught up and passed them: AMD has been on 7nm for a while and is moving to 5nm, and Apple is on 5nm with the M-series chips. That makes these chips, in a sense, a hardware bargain if you need the power. A fast Ryzen CPU, which might run about as fast, slightly faster, or slightly slower depending on the task, costs more than an M1 Mac Mini, which is an entire NUC-sized machine, and the Ryzen CPU alone would use more power than the whole Mini.
> As well as what Blueone said, Intel is still on a much larger lithography. They are on 10nm and claim they will soon have 7nm chips, but they've struggled to get yields at smaller sizes. That is how AMD caught up and passed them: AMD has been on 7nm for a while and is moving to 5nm, and Apple is on 5nm with the M-series chips. That makes these chips, in a sense, a hardware bargain if you need the power. A fast Ryzen CPU, which might run about as fast, slightly faster, or slightly slower depending on the task, costs more than an M1 Mac Mini, which is an entire NUC-sized machine, and the Ryzen CPU alone would use more power than the whole Mini.

Your premise is basically correct: a better fab process can cover for many architecture and design inefficiencies, and Intel had that advantage for years. Nonetheless, I recommend ignoring the so-called "node names" the fabrication industry uses for its process designations. In reality, the Intel 10nm process is approximately as dense as the TSMC 7nm process, which is why Intel renamed its 10nm process "Intel 7", in a fit of marketing-speak. TSMC has moved on to a 5nm process for Apple and a few other selected customers, and it is indeed much better than Intel 7. Intel won't have a competing process until 2024, called Intel 20A, the "A" standing for angstroms. TSMC is promising its 3nm process for 1Q23, which likely keeps it at least a generation ahead of Intel for years yet, and TSMC claims a 2nm process is in development and on track. It should be interesting to see if Intel can catch up.
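The "ignore the node names" point can be made concrete with transistor densities. A small sketch using widely cited approximate logic-density estimates in MTr/mm² (these are third-party estimates, not official vendor figures):

```python
# Approximate peak logic transistor densities, millions of transistors per mm^2.
# Values are widely cited estimates, not vendor-guaranteed numbers.
density = {
    "Intel 10nm (Intel 7)": 100.8,
    "TSMC N7": 91.2,
    "TSMC N5": 171.3,
}

base = density["TSMC N7"]
for name, d in density.items():
    print(f"{name}: {d / base:.2f}x TSMC N7 density")
```

Despite "10" being a bigger number than "7", Intel's 10nm comes out slightly denser than TSMC N7, while TSMC N5 is nearly double both, which is exactly why the marketing names mislead.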