> Shame, because nothing beats a custom build. The level of control and parts selection is just far greater if you DIY.
I expect my Intel i5K custom DIY gaming box to last me 10+ years, and even up to 15 years for basic surfing and video viewing.
> AI workloads benefit from the maximum number of threads, hence AMD is currently the rising star with Threadripper.
AI uses massively parallel matrix math and benefits from thousands to millions of parallel cores. Threadripper is nice on the desktop because it has a lot of PCIe lanes to push data between GPUs and memory/storage, but high-end desktop is a bit player.
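To make the "massively parallel matrix math" point concrete, here's a minimal NumPy sketch (nothing vendor-specific assumed): every element of a matrix product is an independent dot product, which is exactly the kind of work thousands of GPU cores can chew through at once.

```python
# Minimal sketch: C = A @ B decomposes into M * N independent dot products,
# so each output element could be computed on its own core in parallel.
import numpy as np

M, K, N = 128, 128, 128
A = np.random.rand(M, K).astype(np.float32)
B = np.random.rand(K, N).astype(np.float32)

# The "serial" view: every (i, j) pair is a completely independent piece of work.
C_loop = np.empty((M, N), dtype=np.float32)
for i in range(M):
    for j in range(N):
        C_loop[i, j] = np.dot(A[i, :], B[:, j])

# The vectorised view that a GPU (or a wide CPU) actually exploits.
C_fast = A @ B
assert np.allclose(C_loop, C_fast, rtol=1e-3, atol=1e-3)
```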
> The ARM instruction set is ARM IP; Apple, Qualcomm and others pay ARM for an Architectural license in order to build their own CPU designs using the ARM architecture.
That's true, the instruction set definition is IP (which you get via an agreement Arm calls an "Architecture License"), but it is very thin IP. AMD and Intel have the equivalent of a cross-licensed "Architecture License" for the variants of the x86 instruction set (meaning anything one company invents in the instruction set, the other company gets), but the CPU designs are vastly different.
> But what I was talking about is the fact that most of the world has an ARM CPU in their pocket and does most of their computing with it. Most of the world has already transitioned away from x86 (or was never there in the first place).
For mobile devices and tablets that's true, but for laptop and desktop computing you're incorrect.
> I could do most, if not all, of my personal computing on my tablet and phone.
Whew! Those that can do computing and make websites and such on a cel tel are godZ!
> Why make the distinction between device types? They all have CPUs in them. If I'm honest, I could do most, if not all, of my personal computing on my tablet and phone - I don't really need a 12-core Ryzen and 64GB of RAM under my desk.
Well, if you want to do multi-programming (run multiple apps simultaneously), OS virtualization, multi-threaded applications, or high-performance graphics, you'll probably want an OS like Windows or macOS, much larger real and virtual memories, and sophisticated memory protection. Apple has been putting M-series CPUs in iPads for a few years now, I have one, but the iPad can't replace what I do with a Mac.
> Apple has been putting M-series CPUs in iPads for a few years now, I have one, but the iPad can't replace what I do with a Mac.
In fairness, this is because the iPad runs on a battery and has a super-low thermal dissipation capability, not because of the chip or software architecture. The same is true of phones, generally.
> To date, the only real benefit that I've gotten from AI is as a natural language front-end to search engines. And that's being done in the cloud
...on Linux-based clusters packed full of Nvidia GPUs, and quite possibly with AMD processors.
> In fairness, this is because the iPad runs on a battery and has a super-low thermal dissipation capability, not because of the chip or software architecture. The same is true of phones, generally.
The software architecture you're actually referring to, the iOS operating system, is the cause of lesser capability even on iPads that use the M-series CPUs. The same is true with Android.
> I'd think that if you want to build an enthusiast-grade x86 PC, go right ahead and do it. Because Windows on ARM has been around for years, and only now is it being touted as the AI future. But the success of Copilot+ devices (as they are now called) isn't assured, and so far it seems limited to a few premium-priced portable devices aimed at early adopters. Not aware of reference designs for other classes of hardware being promoted yet.
I have a feeling AI is behind the recent increase in data breaches. Don't think I consider that a benefit.
To date, the only real benefit that I've gotten from AI is as a natural language front-end to search engines. And that's being done in the cloud, not on my 7th-gen i7 desktop.
I'd think that if you want to build an enthusiast-grade x86 PC, go right ahead and do it. Because Windows on ARM has been around for years, and only now is it being touted as the AI future. But the success of Copilot+ devices (as they are now called) isn't assured, and so far it seems limited to a few premium-priced portable devices aimed at early adopters.
> Nothing fazes it.
Try gaming on it.
> Try gaming on it.
Yes, reviews I've read say it is not great for DaVinci video work, Photoshop, or gaming. Not that it cannot be one day, but the software isn't there yet. That sort of thing is what the AMD guy had in mind in Berwhale's comments.
Sadly, the graphics drivers don't seem to be made to get the most out of the systems, at least not yet. Give it some time to let the drivers catch up and have the games patched to run better, and then gaming will also be halfway decent on these.
> I attended a small Financial Services gig in the City last week. Robert Hormuth from AMD was one of the panellists, and he had some interesting things to say about ARM vs x86 and the challenges of cloud, hybrid and polycloud deployment. One of his arguments was that x86 is ubiquitous and therefore workloads running on it can run on-prem and in the cloud without re-compilation. Whilst this is certainly a valid point, it does smell a bit of FUD directed at large enterprises with a high number of legacy systems, i.e. the argument is much less relevant to a start-up with no on-prem infrastructure or legacy systems to maintain or migrate. Even in the enterprise space, it's becoming less relevant as we consume more SaaS for core business functions. E.g. do I care what architecture Microsoft 365, Salesforce or ServiceNow is running on, as long as it's functional, reliable, performant and cheap?
Agreed. Hormuth used to work for Intel, and the software-optimization-for-x86 argument was invented there (not by him), though back then it was more specific and aimed at just Intel CPUs.
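To put the re-compilation point in concrete terms: native builds are per-architecture, so anything that ships prebuilt binaries across on-prem x86 and an ARM cloud instance has to pick the right one for each host. A minimal sketch, with made-up artifact names as placeholders:

```python
# Minimal sketch: native binaries are built per architecture, so a deploy
# script that ships prebuilt artifacts has to care whether the target is
# x86-64 or ARM. The artifact names below are hypothetical placeholders.
import platform

ARTIFACTS = {
    "x86_64":  "myservice-linux-amd64.tar.gz",   # Linux on x86-64
    "AMD64":   "myservice-linux-amd64.tar.gz",   # Windows reports AMD64
    "aarch64": "myservice-linux-arm64.tar.gz",   # Linux on 64-bit ARM
    "arm64":   "myservice-linux-arm64.tar.gz",   # macOS reports arm64
}

arch = platform.machine()
artifact = ARTIFACTS.get(arch)
if artifact is None:
    raise SystemExit(f"No prebuilt binary for {arch}; a rebuild is needed")
print(f"Host reports {arch}: would deploy {artifact}")
```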
And x86 object code compatibility means a lot less than it used to. On desktop and laptop CPUs, every vendor's NPU is proprietary, and you don't access them via instructions and compilers; you use libraries. Libraries mean programming changes. Intel, for example, now puts proprietary hardware accelerators in Xeon server CPUs, which are accessed in differing ways. AMD and Intel CPUs are growing further apart in terms of compatibility.
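As one concrete illustration of the "you use libraries" point, here's a minimal sketch using ONNX Runtime as an example of such a library. Which execution providers are actually available depends on the ONNX Runtime build you have installed rather than on the x86/ARM instruction set, and the model path is just a placeholder.

```python
# Sketch: reaching an NPU typically means asking a runtime for a
# vendor-specific execution provider, with a portable CPU fallback.
import os
import onnxruntime as ort

# Vendor-specific providers; which ones exist depends on the installed
# ONNX Runtime build, not on the CPU's instruction set.
preferred = [
    "QNNExecutionProvider",       # Qualcomm NPUs
    "OpenVINOExecutionProvider",  # Intel accelerators
    "DmlExecutionProvider",       # DirectML on Windows
    "CPUExecutionProvider",       # portable fallback, always present
]
available = ort.get_available_providers()
chosen = [p for p in preferred if p in available]
print("Using providers:", chosen)

# "model.onnx" is a placeholder path; any exported ONNX model would do.
if os.path.exists("model.onnx"):
    session = ort.InferenceSession("model.onnx", providers=chosen)
```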