Since 2017, I have been learning from NickSherlock.com about installing macOS High Sierra on Proxmox 5 using Clover. His work inspired my unified workstation concept, and his recent software guide and updated hardware specification set me on a path of gathering all the necessary components, one by one, over a year.

1. First Unified Workstation 2020

Figure 1 : Single Computer with 3 Hybrid VMs

Yes! Here is my first unified workstation. Each VM has a GPU and a USB controller passed through over PCIe. All 3 hybrid VMs can be controlled with the same keyboard and the same mouse using a software KVM (Keyboard, Video, and Mouse), Barrier.
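
For reference, the server side of Barrier is driven by a small text file that names each VM's screen and links them left to right. This is only a minimal sketch under my own naming assumptions (win10, macos and ubuntu are placeholder screen names), not my exact file:

    # barrier.conf on the Barrier server -- screen names are placeholders
    # and must match each client's configured screen name.
    section: screens
        win10:
        macos:
        ubuntu:
    end

    section: links
        win10:
            right = macos
        macos:
            left  = win10
            right = ubuntu
        ubuntu:
            left  = macos
    end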

2. NickSherlock Hardware

Here is the NickSherlock hardware specification:

The NickSherlock hardware can be a storage monster: it has 4 M.2 slots and 14 SATA ports.

Figure 2 : NickSherlock Hardware

Here is a possible placement of all the components:

Figure 3 : NickSherlock Hardware Components Placement

3. Previous Hardware 2019

Here is my previous hardware setup. The macOS Sierra on Proxmox 5 setup no longer works, and the machine is now used for a Windows10-20H2 VM with GPU passthrough.

Figure 4 : Previous Kenny 2019 Hardware
Figure 5 : Previous Kenny 2019 Hardware Components Placement

4. Current Hardware 2020

Here is my current hardware setup, and I call it a Unified Workstation. It can handle all kinds of workloads on any OS platform and with any GPU card.

Figure 6 : Current Kenny 2020 hardware

Here is the placement of all the components.

Figure 7 : Current Kenny 2020 Hardware Components Placement
Figure 8 : Kenny 2020 PCIe Passthrough as per VM

Attempting to PCIe-passthrough the onboard graphics (Intel HD) into any VM hangs the Proxmox hypervisor.
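
For anyone hitting a similar hang, a quick first check is to see which IOMMU group each device, including the Intel HD, lands in and whether it is isolated. This is only a generic diagnostic sketch run on the Proxmox host, not a fix:

    # List every PCI device together with its IOMMU group.
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=$(basename "$(dirname "$(dirname "$d")")")
        echo "IOMMU group $g: $(lspci -nns "${d##*/}")"
    done | sort -V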

I successfully passed both onboard USB controllers through over PCIe; they are working well in the Windows10-20H2 VM and the Ubuntu-20.10 VM.
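
As a rough sketch of how each controller gets attached (the PCI addresses and VM IDs below are placeholders, not my real values):

    # Find the onboard USB controllers on the Proxmox host.
    lspci -nn | grep -i 'usb controller'

    # Hand one controller to the Windows10-20H2 VM and the other to the
    # Ubuntu-20.10 VM (101/102 and the addresses are placeholders).
    qm set 101 -hostpci1 0000:00:14.0
    qm set 102 -hostpci1 0000:00:1a.0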

Special care is needed when doing PCIe passthrough for the macOS VM, as a Hackintosh with OpenCore/Clover only supports (see the conf sketch after this list):
– USB controllers with the FL1100 chipset, and
– certain Wi-Fi cards that support the AirDrop feature
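
As a rough illustration, the passthrough section of the macOS VM's config ends up looking something like the excerpt below. The PCI addresses are placeholders for my RX580, FL1100 USB card and Wi-Fi card, and the comment lines are only annotations:

    # /etc/pve/qemu-server/<VM_ID>.conf (excerpt) -- addresses are placeholders
    machine: q35
    # Radeon RX580, natively driven by macOS
    hostpci0: 0000:03:00,pcie=1,x-vga=1
    # Fresco Logic FL1100 USB 3.0 controller
    hostpci1: 0000:04:00,pcie=1
    # Wi-Fi/Bluetooth card for AirDrop
    hostpci2: 0000:05:00,pcie=1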

It was a challenge to find a 3rd PCIe graphics card, as there is no PCIe x16 slot left. The search ended with an Asus GeForce GT710 PCIe x1 for the Ubuntu-20.10 VM.

Passing a PCIe USB controller through to each VM is important. It avoids a lot of USB connectivity issues, especially when plugging in a mobile phone for mobile app development; device-level USB passthrough is not so suitable in this case.
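
The difference shows up clearly in the VM config. Device-level passthrough pins one vendor:product ID, while controller passthrough gives the VM the whole xHCI controller and every port wired to it (the values below are placeholders):

    # Device-level USB passthrough: one specific device, re-plugs can be flaky.
    usb0: host=1234:5678

    # Controller-level PCIe passthrough: the VM owns the entire USB controller,
    # so any phone plugged into its physical ports shows up natively.
    hostpci1: 0000:00:14.0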

5. Next Challenges

Here are my next challenges:

  • Challenge 1: Set up a 2-row by 3-column monitor arrangement.
  • Challenge 2: Fine-tune the macOS VM for better integration with all 3 PCIe-passthrough cards (RX580, USB-FL1100, Wi-Fi).
  • Challenge 3: Provide an on-table USB hub for each hybrid VM.
  • Challenge 4: Develop a Flutter mobile app to replace the Proxmox mobile WebGUI.
  • Challenge 5: Pass a supported and an unsupported GPU through at the same time into the same macOS VM.
  • Challenge 6: Find a unified speaker solution for all 3 VMs (Windows, macOS, Ubuntu).
  • Challenge 7: Share the Bluetooth pairing key across all 3 VMs for that same keyboard and mouse.

Figure 9 : Monitor Mounting Kits

The monitor mounting kits above will be part of the 2 x 3 monitor setup.

The above monitor arrangement diagram could become the UI of the Flutter mobile app. Imagine touching an OS icon and dragging it onto the selected graphics card; behind the scenes, Proxmox would automatically reconfigure the PCIe passthrough in the VM_ID.conf file. FYI, the default Proxmox mobile WebUI is not able to reconfigure PCIe passthrough.
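
Under the hood, the Flutter app would simply call the Proxmox REST API to rewrite the hostpci entries; the equivalent commands on the host look roughly like this (VM IDs and the PCI address are placeholders):

    # Take the GT710 away from the Ubuntu VM ...
    qm set 102 --delete hostpci0
    # ... and hand it to the Windows VM as an additional GPU.
    qm set 101 --hostpci2 0000:06:00,pcie=1
    # The affected VMs need a stop/start for the change to take effect.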

Here is the PCIe passthrough reconfiguration if Windows needs to take over all the monitors.

Figure 11 : Windows10-20H2, a hybrid VM occupying all the graphics cards

Here is the PCIe passthrough reconfiguration if Ubuntu-18.04-CUDA needs to take over all the monitors for neural network training.

Figure 12 : Ubuntu 18.04+CUDA, a hybrid VM occupying all the graphics cards
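
In config terms, both Figure 11 and Figure 12 boil down to moving all the hostpci GPU entries into a single VM's file, something like this sketch (the addresses are placeholders for the three graphics cards):

    # /etc/pve/qemu-server/<VM_ID>.conf (excerpt) -- one VM owns all three GPUs
    hostpci0: 0000:03:00,pcie=1,x-vga=1
    hostpci1: 0000:01:00,pcie=1
    hostpci2: 0000:06:00,pcie=1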

I will test passing a supported and an unsupported GPU through at the same time into a macOS VM.

I suspect that the macOS Metal engine will still work on the supported GPU, but I am not sure how the generic driver will behave on the unsupported one.

6. Dream Machine

My Dune Pro case is on the way, and I am dreaming of the setup below:

This possible setup requires a financial budget, and I still need to figure out some technical unknowns.

Technically, I wonder whether the on-processor USB controller can also be passed through over PCIe into a VM.

Technically, I also wonder whether the Proxmox hypervisor will occupy one of the PCIe GPUs by default, since this setup has no onboard graphics.
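
A quick way to answer the second question once the machine is built: the kernel marks the GPU that the firmware and host console grabbed at boot with a boot_vga flag, so checking it on the Proxmox host shows which card the hypervisor itself is holding on to (a diagnostic sketch only):

    # Print the PCI device that is currently the boot/host VGA adapter.
    for f in /sys/bus/pci/devices/*/boot_vga; do
        [ "$(cat "$f")" = "1" ] && lspci -s "$(basename "$(dirname "$f")")"
    done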

Figure 14 : Impossible Setup

I do not have high hopes for this impossible setup, because IOMMU passthrough has not yet been tested on, or is not yet supported by, the last 3 generations of GeForce (GTX 16xx, RTX 20xx, RTX 30xx) and the last 2 generations of Radeon (RX 5000, RX 6000).

The new Intel Xe graphics card is yet to be released, and I hope that in the future Intel will be willing to contribute the Xe driver source code.

Nowadays, it is very hard to find a motherboard that still has 2 spare PCIe x1 slots after installing 3 dual-slot GPU cards. This is mainly caused by the M.2 slots occupying the motherboard surface.

Another big dream is that the Virgil 3D project becomes the de facto way to use a GPU to accelerate graphics or GPGPU applications running in a virtual machine, regardless of the GPU hardware vendor.

If the Virgil 3D project becomes the de facto standard, then we would no longer need “PCIe passthrough”; we might only need a “mediated passthrough” that behaves like the current hardware vendors’ technologies:
