
The Slow Death and Rebirth of Intel

I get the sense that companies are still trying to decide how to utilize the “neural cores” contained in new hardware, and by the time they come up with something really useful, newer hardware will be needed :p Early iterations of Microsoft’s bolder ideas have tended to seem more like demos than fully realized products.
 
That kind of proprietary lock-in is not new. My company is only getting off our old Netezza TwinFins (with their FPGA accelerators) because we're closing a DC, and you can't pay IBM (or anyone else) enough to move them with a guarantee that they will come back to life at the other end.

One of the slightly surprising takeaways from the round table was the unanimous advice to use open systems for everything. I wasn't surprised to hear this from the lady from Red Hat, but she had Microsoft and AMD on either side, and they said the same thing. I guess when your enemies are Broadcom/VMware and ARM, everyone else is a friend :).
Ah, Netezza, the original computational storage/database appliance. I'm surprised any of them are still in use. Well, I always underestimate resistance to migration in IT data centers. :facepalm:
 
Looks like desktop mode is likely to become a standard Android feature...


I've always found the OEM desktop implementations a little clunky. Perhaps things will improve if it becomes standard across all Android devices.
 
Looks like desktop mode is likely to become a standard Android feature...


I've always found the OEM desktop implementations a little clunky. Perhaps things will improve if it becomes standard across all Android devices.
I would have thought this would have happened long before now. The great majority of people don't really need a full-size computer. I could get by with that 99% of the time. What I do need for some tasks is a full-size screen, keyboard and such. My phone, already 3 years old, is more powerful than all the computers I've had except for two, and maybe only one. Phones are perfectly capable of being your only computing device if they had a good interface for forms and such online.
 
I would have thought this would have happened long before now. The great majority of people don't really need a full-size computer. I could get by with that 99% of the time. What I do need for some tasks is a full-size screen, keyboard and such. My phone, already 3 years old, is more powerful than all the computers I've had except for two, and maybe only one. Phones are perfectly capable of being your only computing device if they had a good interface for forms and such online.

My Lenovo P11 Pro Gen2 tablet has a nice keyboard case, a decent trackpad and a proprietary desktop mode that works OK. It functions well enough as a PC replacement when travelling, and I've also used it for work (accessing Outlook, Teams, reading PowerPoints, etc.).

However, I just tried to use the tablet with the cheap 10.8" screen I bought from AliExpress recently (I use it as an auxiliary 'now playing' screen on my PC). The screen works with the tablet; it will even charge the tablet whilst it's connected. In normal, non-desktop mode I get to choose whether the display is mirrored or extended to the extra screen. I switch to desktop mode and it only supports mirroring :facepalm:
 
More on the ongoing question of what is causing assorted crashes with certain 13th and 14th gen processors, including in data centres where they aren't pushing the power envelope:
It's starting to sound like a hardware issue rather than something that can be fixed in microcode, and it seems to be causing some data centre customers to switch to AMD for game server workloads.
 
Shame, because nothing beats a custom build. The level of control and parts selection is just far greater if you DIY.
I used to think that too, but after more than 1,000 computers and servers, I switched 20 years ago to servers and workstations from Supermicro and HP Business for work, and to business and workstation notebooks from HP. Personally, I switched to Mac in 2008.
Very few self-built PCs achieve the reliability of these devices, and even then they are usually not cheaper.
I understand the appeal of it; I held on to it for too long myself, but it only makes sense as a hobby or perhaps in the gaming area.
 
I used to think that too, but after more than 1,000 computers and servers, I switched 20 years ago to servers and workstations from Supermicro and HP Business for work, and to business and workstation notebooks from HP. Personally, I switched to Mac in 2008.
Very few self-built PCs achieve the reliability of these devices, and even then they are usually not cheaper.
I understand the appeal of it; I held on to it for too long myself, but it only makes sense as a hobby or perhaps in the gaming area.

I don't play games, but I have specific requirements that mass production seems unable to meet. I don't like to use the word "hobby" to describe what should be the most important thing of all, in my case, music. Work is the hobby! :p Reliability is far greater, IMO, if I put together pre-tested and selected parts. I'm not looking for the fastest and the cutting edge; I stay a few generations behind to make sure all issues have been addressed. And I dislike getting stuff I didn't ask for, while I don't want hardware that is essential to me removed just because someone else decided, or the industry moved in another direction.

As an example, in my desktop workstations I still have SCSI hardware that can work with my older but necessary gear. In my 10-year-old Lenovo laptop, I have two hardware RAID storage systems. Which pre-assembled commercial machine can do that these days? Which manufacturer will let me pick the brand and type of fan in the power supply?

That said, I did buy one pre-assembled laptop recently. It does what it is supposed to, but it seems hard to even open, and I still have not fully figured it out. It has a ton of features I don't need and will never use, and is, of course, missing a bunch that I do need. Oh, and W11 is a PITA.
 
I used to think that too, but after more than 1,000 computers and servers, I switched 20 years ago to servers and workstations from Supermicro and HP Business for work, and to business and workstation notebooks from HP. Personally,
Very few self-built PCs achieve the reliability of these devices, and even then they are usually not cheaper.
I learned the value of purchasing servers, switches and software from a single source (such as Dell) as a packaged solution: by purchasing in this manner, responsibility for ensuring that the various bits all worked together fell upon a single entity, and I was very grateful that, for once, that entity wasn't me! I wasn't much concerned about the specifics of the hardware, because for something like a cloud infrastructure installation, it's all treated as commodity units of computing and storage.
 
So in the end, we enthusiast PC builders will be affected, as they will concentrate on said AI and defence production, while normal boards and CPUs will be of secondary importance.
True that. I, too, have been building my own desktops and servers based on x86 Intel gear. We'll be getting AI-influenced designs, but at a slower rate and much lower priority.

Sadness ensues...
 
There is a hypothesis that AMD's recent success is based on securing TSMC's advanced capacity. With Intel apparently expanding its cooperation with TSMC since 2023, I thought AMD could be in trouble. However, I was wrong. Intel could be busy with their own things.
 
There is a hypothesis that AMD's recent success is based on securing TSMC's advanced capacity. With Intel apparently expanding its cooperation with TSMC since 2023, I thought AMD could be in trouble. However, I was wrong. Intel could be busy with their own things.
No, it's based on superior CPU design. Advanced capacity does nothing if your chips aren't up to par. AMD completely crushes Intel in terms of performance per watt and is about equal when you don't take power into account. Intel made the mistake of resting on its laurels for years and gave AMD time to catch up. AMD also tried to catch NVIDIA, but that company never slowed down development. That's the difference between having an engineer as a CEO and not having one.

I used to think that too, but after more than 1,000 computers and servers, I switched 20 years ago to servers and workstations from Supermicro and HP Business for work, and to business and workstation notebooks from HP. Personally, I switched to Mac in 2008.
Very few self-built PCs achieve the reliability of these devices, and even then they are usually not cheaper.
I understand the appeal of it; I held on to it for too long myself, but it only makes sense as a hobby or perhaps in the gaming area.
Only real workstations and true business notebooks have decent reliability. A random PC from a big brand at a major retailer will have absolute shit reliability. They will use the cheapest parts they can find. I don't know if you've ever opened them up, but what you find in them is pure trash. Most homemade builds will have decent motherboards and power supplies, and those are the parts that matter.
 
There is a hypothesis that AMD's recent success is based on securing TSMC's advanced capacity. With Intel apparently expanding its cooperation with TSMC since 2023, I thought AMD could be in trouble. However, I was wrong. Intel could be busy with their own things.
No, it's based on superior CPU design. Advanced capacity does nothing if your chips aren't up to par. AMD completely crushes Intel in terms of performance per watt and is about equal when you don't take power into account. Intel made the mistake of resting on its laurels for years and gave AMD time to catch up. AMD also tried to catch NVIDIA, but that company never slowed down development. That's the difference between having an engineer as a CEO and not having one.

I think you can trace it back to 2012 when the US Department of Energy gave AMD a ton of cash to develop their next supercomputer... https://www.nextplatform.com/2024/0...g-road-to-the-hybrid-cpu-gpu-instinct-mi300a/
 
AI workloads benefit from a maximum number of threads, hence AMD is currently the rising star with Threadripper.

My SME told me they were the company to follow. I purchased their stock at approx. $40, along with others she recommended. Needless to say, it worked out well.
 
None of it would have mattered if Intel had kept their development cycle up. But they didn't.

Then perhaps we can trace it back another 4 years to 2008, when AMD divested its fabrication business as GlobalFoundries to concentrate on chip design rather than the process-node race with Intel and TSMC.
 
Three years ago Apple switched from Intel x86 processors to their own ARM-based M-series chips. Last month Microsoft and its PC partners released the Copilot+ PC based on a Qualcomm ARM-based chip. ARM technology has much lower power usage for doing a given amount of work. Intel is transitioning its business away from proprietary x86 CPUs to a foundry manufacturing semiconductors based on other companies' designs, similar to the business model of TSMC. This is also driven by a US national-security imperative to manufacture vital semiconductors domestically, and by billions in subsidies to encourage this goal.
Yes. The Wintel cycle is coming to a close. Intel is well managed. Intel has led, and devoted significant resources to, open hardware and software standards benefiting the entire industry equally. They had a very good CFO, Andy Bryant, who retired. They have made mistakes, but not as bad as GE's. They are very strong on software supporting their customers. They have generally been managed by engineers, and as a former head of the Labs, Gelsinger, the current CEO, has deep knowledge of the business and the technology.

Intel made mistakes in the GPU business, was late to adopt ASML's extreme UV (EUV) lithography, failed in wireless, and wasted a lot of money on side businesses/internal startups they subsequently killed. They had big, unforgivable design-quality problems which were a contributing factor in Apple's rejection. Their sales group is fierce, for which they have been criticized, including for benchmarksmanship.

Intel used its process technology only for internal products, which meant that the business needed high gross margins to finance the capital investment in the fabs.

If you look at the foundry business, it is TSMC at about US$75B and Samsung at US$3.4B. TSMC is well managed, like most Taiwanese tech. TSMC has a lot of debt, but their gross margins are good. Intel likes to keep their balance sheet clean and pay dividends. I think Intel will be successful in foundry. I don't think TSMC's business is secure under Xi-thought. South Korea is not the most secure country in the long term.

As for ARM vs x86, those represent the reduced instruction set computing (RISC) and complex instruction set computing (CISC) philosophies.

In computing you will have dozens to hundreds of processes, more so with virtualization, containers, and the like. Processes wait in a priority queue and are assigned to run when they have the data they need, on an interrupt, or according to various priority measures as determined by the scheduler.
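
As a toy illustration of that run-queue idea, here is a minimal sketch in Python (the class and method names are made up for this example; real kernel schedulers are vastly more involved):

```python
import heapq

class ToyScheduler:
    """Toy run queue: ready processes wait, best (lowest) priority runs first."""

    def __init__(self):
        self._ready = []   # min-heap of (priority, seq, name)
        self._seq = 0      # tie-breaker so equal priorities stay FIFO

    def make_ready(self, priority, name):
        # Called when a process has the data it needs, or on an interrupt.
        heapq.heappush(self._ready, (priority, self._seq, name))
        self._seq += 1

    def pick_next(self):
        # The scheduler assigns the highest-priority ready process to run.
        if not self._ready:
            return None
        _, _, name = heapq.heappop(self._ready)
        return name

sched = ToyScheduler()
sched.make_ready(2, "container workload")
sched.make_ready(0, "interrupt handler")
sched.make_ready(1, "interactive shell")
print(sched.pick_next())   # -> interrupt handler
```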

When a process is scheduled, its saved registers are copied into the hardware CPU registers and all of the memory it needs is populated; this is called a context switch. The process then runs, and afterwards everything is copied back out to memory and the next process comes in. In code, system calls and blocking operations trigger context switches. All that copying of context into the CPU registers and cache, running, then saving it back out takes time.
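
To get a feel for that cost, here is a rough, Unix-only sketch that bounces one byte between two processes over pipes; each round trip forces the kernel to switch between them, so the per-round time is dominated by context-switch and pipe overhead (the same token-passing trick classic benchmarks like lmbench use):

```python
import os
import time

ROUNDS = 10_000
p2c_r, p2c_w = os.pipe()   # parent -> child
c2p_r, c2p_w = os.pipe()   # child  -> parent

if os.fork() == 0:                 # child: echo the token back
    for _ in range(ROUNDS):
        os.read(p2c_r, 1)
        os.write(c2p_w, b"x")
    os._exit(0)

start = time.perf_counter()
for _ in range(ROUNDS):           # parent: send token, wait for echo
    os.write(p2c_w, b"x")
    os.read(c2p_r, 1)
elapsed = time.perf_counter() - start

os.wait()                          # reap the child
# Each round trip includes at least two context switches plus pipe overhead.
print(f"~{elapsed / ROUNDS * 1e6:.1f} microseconds per round trip")
```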

In RISC one of the theories is that you will improve performance by having faster context switching.

In CISC, the theory is that you will improve performance by having floating-point and large vector instructions, plus instructions to speed up video, machine learning, and cryptographic processing.
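
As a small illustration of the vector-instruction side, compare a dot product done one scalar multiply at a time against NumPy's dot(), which dispatches to vectorized machine code (SSE/AVX on x86, NEON on ARM). Most of the gap in Python is interpreter overhead, to be fair, but in compiled code the vector units alone commonly give several-fold speedups:

```python
import time
import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

t0 = time.perf_counter()
s_scalar = sum(x * y for x, y in zip(a, b))   # one multiply-add at a time
t1 = time.perf_counter()
s_vector = np.dot(a, b)                        # many lanes per instruction
t2 = time.perf_counter()

print(f"scalar loop: {t1 - t0:.3f}s   vectorized dot: {t2 - t1:.5f}s")
print(f"results agree: {np.isclose(s_scalar, s_vector)}")
```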

Larger and faster on-chip caches benefit both architectures.
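
A sketch of why the caches matter so much, again illustrative only: both passes below pull roughly the same number of 64-byte cache lines from memory, so the strided pass is nowhere near 64x faster even though it does 1/64th of the additions (exact numbers vary a lot by machine):

```python
import time
import numpy as np

a = np.zeros(64 * 1024 * 1024, dtype=np.uint8)   # 64 MiB, far larger than cache

a.sum()            # warm-up touch so page faults don't skew the timing

t0 = time.perf_counter()
a.sum()            # sequential: every byte of each fetched cache line is used
t1 = time.perf_counter()
a[::64].sum()      # strided: one byte per 64-byte line, 1/64th the arithmetic
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.4f}s   strided: {t2 - t1:.4f}s")
```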

I'm not an expert today, but in general the ARM architecture has become more complex and x86 has added some RISC-like attributes. Intel is also the master of hardware accelerators off to the side, particularly on Xeon. So the ongoing discussion in chip design is: what do cloud data centers need, what are their evolution cycles, and what will they pay? That has led to cloud providers developing their own chip architectures.
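
For the curious, here is a quick way to see which of those instruction-set extensions a particular chip advertises; it reads /proc/cpuinfo, so it only works on Linux on x86 (other platforms expose this differently):

```python
# Linux/x86 only: list a few instruction-set extensions the CPU advertises.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for feat in ("sse4_2", "avx2", "avx512f", "aes", "sha_ni"):
    print(f"{feat:10s} {'yes' if feat in flags else 'no'}")
```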
 