The fact that an industry predominantly chose a particular technology says nothing about the technology as such; it only means that it turned out better economically. In tech specifically there are many examples where superior technologies were dropped, or pushed into a niche, in favor of inferior ones that were significantly cheaper (directly or indirectly).
The photo industry went with inferior CMOS sensors, while superior CCDs remained only in extremely expensive top-end cameras (and, by the way, full frame was initially dropped by Pentax and Contax because it was too expensive). Apple dropped the superior Power architecture in favor of cheaper but inferior Intel chips (only recently returning to RISC-style chips), FireWire was dropped for being too expensive for mass implementation, SGI went out of business, and so on.
Also, you have to keep in mind what is called path dependence. Once a certain arrangement is in place, switching to an alternative, or returning to a previously dropped one, may be too costly.
You're drawing conclusions from generalizations that aren't correct. Technologies usually don't become dominant because they turn out better economically; they dominate because of significantly greater R&D investment over time. You can pretty much always just follow the R&D money. To use your Intel example, Intel became the leading CPU provider because IBM chose its 8088 CPU for the IBM PC. Many thought the Motorola 68000 was a superior CPU design, including me, but it didn't matter: Motorola wasn't ready for large-scale production in time. The high volume of the emerging PC market gave Intel the opportunity to pour a superior level of R&D spending into chip fabrication and achieve a technological lead that lasted until very recently. Being a generation or two ahead in chip fabrication technology compensated for a lot of design weaknesses in the x86 architecture.
Was MS DOS a technically superior OS design? Of course not, but IBM also chose it for the PC, providing a huge addressable market, and DOS revenues begat Windows R&D spending, and PC Windows begat Windows Server, and a few decades later Microsoft used their R&D spending superiority to achieve a $2.2T market capitalization. Following the money worked for the OS, and enabled related applications (e.g. Office, databases, games, etc.) too. Even Apple can't keep up. (Excel beats Numbers hands-down, IMO, for spreadsheet functionality.)
Another example is Ethernet. In computer networking, Ethernet R&D spending dominates that of all other networks, client or server or wide-area, by a huge margin. Ethernet is not necessarily a superior wired networking technology, but it was an industry standard design (as opposed to proprietary alternatives) that allowed many companies to invest, compete, and have their products interoperate to create workable solutions. Industry R&D spending on Ethernet ballooned, eventually exceeding that for all other message-passing networks combined. How many open switched networks were there in the 1990s? One, that I'm aware of. [This is incorrect. See my correction below.]

Also, brilliantly IMO, Ethernet used algorithmic logical topology generation, which meant you could create a random physical topology (plug anything into anything), and if a physical path was possible, Ethernet would use the Spanning Tree Protocol to calculate a loop-free route. Ethernet, to this day, has some silliness in it from the days of its CSMA/CD bus origin, but huge R&D investment (and openness) made Ethernet the only LAN that matters, warts and all.

I was just reading that 800Gbps-per-port IP (the circuit designs that chip developers license to integrate into their higher-level designs) is becoming available for producing data center switch and server interface chips. Ethernet is also evolving beyond its original specification through massive investment by cloud computing vendors and their suppliers, displacing the Spanning Tree Protocol strategy. They use proprietary programmable switches and centralized network management to program the switches (enabling the creation of more scalable networks for cloud data centers), making standard Ethernet more of a link specification to the cloud computing companies than a full-on OSI Layer 2 network specification. But Ethernet links and PHYs are still dominant due to the highest level of R&D spending, and IMO will likely continue to be.
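To illustrate the "plug anything into anything" point: the essence of what STP computes can be sketched as a graph traversal that keeps one path to each switch and logically blocks the rest. This is just a toy BFS sketch, not the actual 802.1D algorithm, which elects ports by path cost, exchanges BPDUs, and reacts to link failures with timers; the switch IDs and link format here are made up for illustration.

```python
from collections import deque

def spanning_tree(links):
    """Pick a loop-free subset of physical links, in the spirit of STP.

    links: iterable of (switch_a, switch_b) pairs describing an
    arbitrary physical topology. The "root bridge" is the switch
    with the lowest ID, echoing 802.1D's root election; real STP
    also weighs path costs, which this sketch ignores.
    """
    # Build an adjacency list from the physical links.
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

    root = min(adj)          # lowest ID wins the root election
    visited = {root}
    active = []              # links kept in the logical tree
    queue = deque([root])
    while queue:
        sw = queue.popleft()
        for neighbor in sorted(adj[sw]):
            if neighbor not in visited:
                # First path to a switch wins; any further links to
                # it would form a loop, so they stay blocked.
                visited.add(neighbor)
                active.append((sw, neighbor))
                queue.append(neighbor)
    return active

# A triangle of three switches: the redundant 2-3 link gets blocked,
# leaving a loop-free tree of two active links.
print(spanning_tree([(1, 2), (2, 3), (1, 3)]))  # → [(1, 2), (1, 3)]
```

The payoff is the one the comment describes: wiring can be arbitrary and redundant, and the protocol, not the installer, decides which links carry traffic.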
I remember 20 years ago some people thought WiMAX should displace CDMA and GSM cellular technologies to make IEEE 802-based technology open and prevalent worldwide for data and voice communications, increasing innovation through openness and decreasing costs. Maybe it was superior, but it didn't matter: the world was investing massively in cellular technology, and now 5G and 6G are the future, while WiMAX is a "Huh? What's that?" part of history.
IMO, the same is true for audio and video. You can always follow the money, unless some disruptive innovation comes along. In computing, everyone thinks that disruptive technology is quantum computing, which probably has a long way to go to reach commercial practicality for general-purpose systems, but fear of being on the wrong side of that disruption is driving massive industry and research investment. I think streaming music is like this. Even Amazon and Apple are jockeying for position, for fear of being left out, and of the potential side effects on their other businesses.
[Edited to correct grammar, and insert a note to see my post below.]