Anything that has a GUI (Mac, Windows, or Linux) is geared towards the average consumer. You can build a serious workstation with a GUI, but real computational work gets done on servers.
The thing I'm particularly curious about is virtualization. My impression was that a hypervisor has to apportion control of the hardware amongst the clients. The issue is that if the hypervisor doesn't have good hardware drivers, it can't handle that well.
For example, say a Linux hypervisor has one Mac client that wants to access the sound card for music composition, and another Windows client that wants to access the sound card for gaming. The result won't be pretty, and if the hypervisor doesn't have a good driver it may not even be able to provide the DSP firmware to the sound card, and then what happens? DAWs are pretty much hardware-locked.
Or what if one client is running a TV tuner and recording OTA shows, but the Linux hypervisor doesn't understand how to access the tuner card because it doesn't have adequate drivers for the Windows-based card?
There are two approaches here: presenting a virtual audio interface to the client instances, or passthrough, giving one client direct control of the hardware. The first is the only option for shared access, while the latter allows the client to use devices that the host doesn't even have a driver for. AFAIK the virtualised sound cards are fairly basic, but someone _could_ write a more capable one, either emulating real hardware so the client could use an existing driver, or as a new interface requiring a new driver for the client OS. Passthrough just requires the client OS to have a driver for the hardware. I've used passthrough with USB devices, but I think it exists for PCI and PCIe too.
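To make the two approaches concrete, here is a rough sketch assuming a QEMU/KVM host. The `intel-hda`, `hda-duplex`, and `vfio-pci` device names are real QEMU options, but the disk image and PCI address below are placeholders, not anyone's actual setup:

```python
# Sketch: two ways to give a QEMU/KVM guest audio hardware.
# intel-hda / hda-duplex / vfio-pci are real QEMU device types;
# the disk image and PCI address are placeholders.

def emulated_audio_args():
    """Shared access: QEMU presents a virtual Intel HDA sound card.
    Several guests can each get one; the host mixes their output."""
    return [
        "-device", "intel-hda",   # emulated HDA controller
        "-device", "hda-duplex",  # playback + capture codec
    ]

def passthrough_audio_args(pci_addr):
    """Exclusive access: hand the real PCI sound card to one guest via
    VFIO. The guest needs its own driver; the host loses the device."""
    return ["-device", f"vfio-pci,host={pci_addr}"]

base = ["qemu-system-x86_64", "-enable-kvm", "-m", "4096",
        "-drive", "file=guest.qcow2,format=qcow2"]  # placeholder image

shared_cmd = base + emulated_audio_args()
exclusive_cmd = base + passthrough_audio_args("0000:00:1f.3")  # placeholder address

print(" ".join(shared_cmd))
print(" ".join(exclusive_cmd))
```

The tradeoff in the thread shows up directly: the first command works for any number of guests but only offers a basic virtual card, while the second needs the host to unbind the device and give it up entirely.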
We tried Steam over a year ago. He could only find one of his FOSS games for Linux, and it was a primitive one similar to Pac-Man. He uses a Windows PC.
When did your son try Steam on Linux? It has been a fast-moving target over the last few years with Valve's push to get more of the catalogue running on the Steam Deck.
Okay, thanks. So it's not just me being obtuse then; there is a hardware driver issue.

Giving a guest access to hardware running on the host is still very much in its infancy, and doesn't even work with all hardware yet.
Additionally, once the guest is granted access to hardware in the host, that access becomes exclusive so there are no issues with contention.
A simple example of this is an older (USB) Canon scanner I have, which has no drivers for Windows 8/10/11, so I use it in a Windows XP guest. The guest captures the scanner hardware which then becomes invisible to the host, so there can be no contention issues anyway (but it works in the same way for USB hardware that does have driver support in the later O/S).
I don't understand your point here - for this to work, the TV tuner would need to support access from multiple controllers (software running on the guest) and that's unlikely...?
The same thing applies to the sound card - you'd only want the sound from one stream at a time...
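On a Linux host using libvirt, capturing a USB device like that scanner for a guest comes down to a small `<hostdev>` XML fragment. Here is a sketch that generates one, assuming libvirt's domain XML format; the vendor/product IDs below are placeholders (`lsusb` on the host shows the real ones):

```python
# Sketch: generate the libvirt <hostdev> fragment that hands a USB
# device (e.g. an old scanner) to a guest. IDs are placeholders;
# run `lsusb` on the host to find the real vendor:product pair.
import xml.etree.ElementTree as ET

def usb_hostdev_xml(vendor_id: str, product_id: str) -> str:
    hostdev = ET.Element("hostdev", mode="subsystem", type="usb", managed="yes")
    source = ET.SubElement(hostdev, "source")
    ET.SubElement(source, "vendor", id=vendor_id)
    ET.SubElement(source, "product", id=product_id)
    return ET.tostring(hostdev, encoding="unicode")

# Placeholder IDs standing in for an old USB scanner:
print(usb_hostdev_xml("0x04a9", "0x1234"))
```

The fragment could then be attached with something like `virsh attach-device <vm> scanner.xml`; once attached, the device is captured by the guest and disappears from the host, which is the exclusive-access behaviour described in the scanner example above.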
Okay, so that's the enterprise version of VMware and requires a paid license. I assume that direct pass-through isn't supported in the free version, and that shared access via hardware drivers in the hypervisor slows things down some?

The answer is that hypervisors do not usually allow direct access to hardware. Usually, the VMs only have access to abstract virtual devices provided by the hypervisor. There is an obvious performance implication to this additional layer of abstraction, even though hypervisor vendors have worked hard to reduce it over the years...
In 2009 Intel released CPUs and NICs that supported I/O virtualization, which VMware supported with the VMDirectPath I/O feature of their ESXi hypervisor. This allowed a NIC to be presented to and controlled by a VM. The VM has exclusive control of the hardware device; the hypervisor has no control over the device and cannot provision virtual devices that make use of it.
I used VMDirectPath I/O in my home setup 8 years ago...
My old Dell T20 server ran ESXi 5.5 and contained a Xeon CPU, a quad Intel NIC, a fairly generic 4-port SATA adapter and a USB 3.0 adapter which were all supported by VMDirectPath I/O (IIRC the SATA adapter required a specific chipset, but that was more for DSM support - see below).
The server hosted a VM running Synology's DSM OS from their NAS (DSM is based on Linux with a proprietary boot loader which was hacked by the folks at Xpenology).
I used VMDirectPath I/O to pass through 2 NICs, the 4-port SATA adapter (with 4 HDDs attached) and the USB 3.0 card. The Synology recognized all of these devices natively, and DSM was able to create a RAID array (SHR) across the 4 HDDs attached to the SATA adapter.
This system was my main file server for over 6 years. The system underwent several iterations of both hardware (i.e. HDD) and software (ESXi and DSM) upgrades over this time.
18 months ago I replaced the system with a Synology DS920+ NAS (and a Dell Optiplex 7060 micro PC as the new ESXi host).
Synology distributes their OS and its configuration over the disks in the NAS array. You can take a set of disks from one Synology NAS and transplant them into another one, and the new NAS will become the old one without loss of configuration or data (DSM does an online upgrade if it's required to support the CPU architecture in the new hardware).
I was able to take the 4 drives attached to the SATA adapter in the T20 and put them straight into the DS920+. The online upgrade to DSM took 10 minutes, and then everything was as before, i.e. the NAS had the same hostname, users, data, etc.
https://www.protondb.com/ should help with what works and what doesn't.