Gaming in a VM on Linux

So I did a thing! A long time ago I started upgrading hardware with the express purpose of running Windows in a VM, for gaming on. Buuuut, I never had the time or the inclination for the monumental effort of actually trying it out.

So, about a day and a half later, I have Windows 10 running in a VM, passing through a 1080 Ti graphics card to it, along with a keyboard, mouse and a USB headset for audio. Gave it half the CPU cores (8), with 16GB of memory to play with. Oh, and passed through an NVMe drive that I was using in any case for the native install of Windows 10. It has a 1080p @ 120Hz screen attached. All in all, seems to be working :scream:

3DMark originally gave me 4400. Then optimisations pushed that up to 7800. Native performance sits at around 9100, so that’s roughly a 15% deficit. All in all, not bad, since the VM only gets half the CPU resources (well, kind of), and the C: drive runs off of an encrypted Linux drive.

On to actual gaming. CODMW seems to run like shit. Let me rephrase that: CODMW runs normally. 115fps :joy:
Borderlands 2 runs like a champ. I capped it at 120fps.
Grim Dawn seemed fine too.
Far Cry 6 runs at 70-ish fps with a mixture of medium/low settings.

Telegram compression butchered my photos :upside_down_face:

There are some things I still want to work out:

  • cleaner switching between audio outputs, because I don’t always want to use headphones.
  • find some sort of drawer/desk furniture to “stack” 2 keyboards on top of each other.
  • buy another monitor mount, so that I can move the main 34" screen out of the way and bring the 24" in, front and centre, when gaming.
  • fix the only REAL issue I have at the moment: snapshotting isn’t working with the Windows VM, though it works for other VMs.
4 Likes

What are you using for your main OS’s display output? Last I looked into this stuff, I would have needed two dedicated cards.

Bought an Nvidia 1030, using that in the first PCIe slot, with the Ti in the second PCIe slot.

I know the Intel CPUs with onboard graphics work. It would be interesting to test an AMD APU like the 5600G.

1 Like

I might just try this over the festive season… I’ve got Win10 and Garuda Linux on dual boot right now, and literally the only reason I can’t ditch Win10 is BF2042, SolidWorks, and PC game pass.

What VM software did you use?
IIRC if you pass your gfx through, there’s only a black display in your host OS; is this still the case?

QEMU with KVM, libvirt and virt-manager. It sounds like you might have passed the GPU that your host was using through to your guest VM, but I can’t be sure. I never experienced this black screen.

Since you’re on an Arch-based Linux, you could try Pavol Elsig’s scripts:

Ultimately though, he’s just packaging the Arch wiki into scripts:
https://wiki.archlinux.org/title/PCI_passthrough_via_OVMF
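If you want to sanity-check your own board before running anyone’s scripts, the wiki’s usual first step is listing the IOMMU groups. A rough sketch of that check (the helper function name is mine; the sysfs path is the standard one on modern kernels):

```shell
#!/bin/sh
# Print every IOMMU group and the PCI devices inside it.
# On a real host, run this after enabling IOMMU in the BIOS and adding
# amd_iommu=on (or intel_iommu=on) to the kernel command line.
list_iommu_groups() {
    # $1 lets you point at a different tree; defaults to the real sysfs path.
    base="${1:-/sys/kernel/iommu_groups}"
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue
        group="${dev%/devices/*}"   # strip "/devices/<pci address>"
        group="${group##*/}"        # keep only the group number
        printf 'IOMMU group %s: %s\n' "$group" "${dev##*/}"
    done
}

list_iommu_groups
```

Your gaming GPU (and its HDMI audio function) should sit in a group of its own; if half the board shows up in one group, that’s when the ACS patch conversation starts.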

2 Likes

Where is the “woosh-over-my-head” emoji?

1 Like

Update: figured out how to snapshot Windows. You can’t do it with “pflash” (i.e. UEFI). You need to change it to “rom”, then snapshot. Obviously change it back before you attempt to boot.
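For anyone hunting for the same fix: in the domain XML (`virsh edit <guest>`), the change is just the loader type. A sketch; the guest name and OVMF path are the usual Arch ones and may differ on your distro:

```xml
<!-- UEFI boot: internal snapshots refuse to work with this -->
<os>
  <loader readonly='yes' type='pflash'>/usr/share/edk2-ovmf/x64/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win10_VARS.fd</nvram>
</os>

<!-- temporarily switch to this, take the snapshot, then switch back before booting -->
<os>
  <loader readonly='yes' type='rom'>/usr/share/edk2-ovmf/x64/OVMF_CODE.fd</loader>
</os>
```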

Also, I’m going to reinstall it, because I need to use the virtio Windows storage driver from Red Hat. That should eliminate the current C: drive bottleneck, and hopefully bring the performance difference down from 15% to closer to 5%, if not lower.
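For reference, the difference in the domain XML is just the disk bus. Something like this (the device path is illustrative), once the guest has the Red Hat virtio drivers installed so it can actually see the disk:

```xml
<!-- emulated SATA: works out of the box, but slow -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/disk/by-id/nvme-example-drive'/>
  <target dev='sda' bus='sata'/>
</disk>

<!-- virtio: needs the guest driver, much closer to native -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/nvme-example-drive'/>
  <target dev='vda' bus='virtio'/>
</disk>
```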

3 Likes

The last thing that was holding me back was that Oculus Quest 2 Air Link couldn’t see the gaming VM, because it wasn’t on the same subnet. But I realised I can solve that easily with a laptop Ethernet dongle, and simply connect it when I need it, which isn’t much. :crazy_face:

So… I’ve decided to go all in on this idea. The workstation computer has worked so damn well that I’m converting the sim rig computer into the same thing, albeit somewhat beefier: 5900X, RX 6600 for the host, 32GB RAM, and the existing RTX 3080. I wanted an AMD GPU for the host OS, because Nvidia’s @#$% drivers are the weakest link for Linux support. Honestly, I would’ve liked something older than the RX 6600 too, since I won’t be gaming on the host. But hey, whatever.

I will most likely keep the original NVMe that has Windows installed on it somewhere in a box. But I don’t expect to use it much, if ever.

Things to still sort out:

  • Might need another screen and/or a USB KVM switch
  • Need to figure out how I’m going to record with OBS. There are many options, I just need to choose one and do it
  • Automated snapshots of Windows, ideally before updates, would be great. I guess that would be every Monday? :man_shrugging:
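The Monday idea could be as simple as a cron entry on the host calling virsh. A sketch, assuming the guest is named win10 and the UEFI/pflash snapshot caveat from earlier in the thread has been dealt with:

```shell
# crontab -e on the host: shut the guest down and snapshot it every Monday at 04:00
# (% must be escaped in crontab entries)
0 4 * * 1  virsh shutdown win10 ; sleep 60 ; virsh snapshot-create-as win10 "pre-update-$(date +\%F)"
```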

The future is looking much, much rosier for me. This is about as close to a dream computer as I’ve ever aimed for. And it’s actually within reach :star_struck:

1 Like

I’ve added blur/spoiler tags, because what follows is a wall of information. That way you can read it in bite-sized chunks.

Update: the importance of reading before you buy things… :sweat_smile:

I am delirious from lack of sleep, but it works and it’s alive! On to what I ended up with. I had listed all the hardware, but actually it’s not that important to list everything; the configuration is probably more useful.

So, I didn’t read properly, and the previous mainboard had some serious IO limitations. I definitely wanted this to handle a work virtual machine and a Windows VM, with a mix of virtual storage layered on top of native storage, as well as storage with direct pass-through. I also didn’t want to hamstring my host GPU, even though I wasn’t going to be doing any gaming on that thing. This all added up to needing a much better mainboard with better IO, as well as a new, more powerful PSU, because I didn’t want to run the power draw too close to the cliff edge constantly.

HOST OS, GPUs and Storage

Click to expand

So, the host OS is much the same as in my previous proof of concept: Arch Linux. This time, I have pretty much nothing installed, in comparison to the previous iteration, which had the world of shit installed to handle every eventuality. It only needs to be the host, so all I need is virt-manager to manage the VMs, some password management tools, some hardware monitoring tools, and a browser. It is installed onto the 2TB NVMe, which sits in the 1st M.2 slot, which is directly CPU-connected. This is a given for any install on any mainboard that has at least one M.2 slot.

The host OS is using the RX 6600 GPU, plugged into the 2nd PCIe x16 4.0 slot. Gigabyte mainboards allow you to change the bootable PCIe slot for graphics, so this is set to slot 2. The RTX 3080 is passed through directly to the Windows guest, along with 1 USB controller that maps to 4x USB 3.x ports on the back (more on this later), and another NVMe for game installs that’s sitting in the 2nd M.2 slot. This is the other reason I upgraded the mainboard: it doesn’t disable or cut the speed of this slot if I use a GPU in the slot above it. There are 2 more SATA SSDs that are passed through to the guest, used as extra locations for video recordings and extra storage.
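For the curious, each passed-through PCI device (the GPU, its HDMI audio function, the USB controller, the NVMe) ends up as a hostdev block in the domain XML; virt-manager writes these for you via “Add Hardware → PCI Host Device”. The addresses below are made up, yours come from `lspci`:

```xml
<!-- the guest GPU: video function and its HDMI audio function -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```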

USB Port Management
To manage all the peripherals, I’m actually using the USB KVM switches in each of the 3 monitors I have installed. That’s how I get around the seeming lack of USB ports to support a whole sim rig that has a wheel, pedals, 2 shifters, a handbrake, a dash, its own mic, headphones, and a Stream Deck.

Click to expand

The 4 ports on the back of the mainboard (connected to the USB controller) that I’m passing through to the Windows guest are for the ultrawide monitor, the 2nd 27" monitor, a loud Ducky keyboard, and, last, the Oculus Link cable.

So from there, the ultrawide monitor’s KVM handles the sim rig peripherals. The 2nd monitor’s KVM handles mostly Windows guest peripherals: mouse, wireless headphone receiver, wireless keyboard receiver (for use when I’m sitting in the sim rig), gamepad receiver, and there are also some in-ear headphones plugged in too, because why not. The main 24" monitor’s KVM handles all the work peripherals: easy-typing keyboard, phone as webcam, and the main speakers, with the rest just going into the back of the PC, into ports that aren’t passed through. I am doubling up on video input on the 27" monitor, so I had to learn some shortcuts to manage a Linux window opening on the 2nd monitor when the Windows guest is running. But for the most part, windows open on the correct monitor. I suspect the buggers that don’t might be installed as Flatpaks.

Performance
Benchmarks

Click to expand

Performance seems better than the first time I did this, but I’ll have more conclusive proof once I’m done benchmarking the world of things. Also, this time around, the highest-impact performance tweaks are installed. Ran Cinebench R23, and the single-core test ran better than expected. Multi-core was lower than before I did the CPU pinning tweaks, so I think there’s some fiddling to do there that is specific to this configuration. Time Spy was flying, but I do need to check back with my old stats to see the margins.
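For anyone wanting to replicate the pinning side: on a 5900X the usual trick is keeping the guest’s vCPUs on one CCD, with each vCPU pair mapped to a physical core and its SMT sibling. A cut-down sketch of the cputune block; the exact host CPU numbers depend on your topology, so check `lscpu -e` rather than copying these:

```xml
<vcpu placement='static'>8</vcpu>
<cputune>
  <!-- pin vCPU pairs to a physical core + its SMT sibling on the same CCD -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='14'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='15'/>
  <vcpupin vcpu='4' cpuset='4'/>
  <vcpupin vcpu='5' cpuset='16'/>
  <vcpupin vcpu='6' cpuset='5'/>
  <vcpupin vcpu='7' cpuset='17'/>
  <!-- keep QEMU's own threads off the pinned cores -->
  <emulatorpin cpuset='0-1,12-13'/>
</cputune>
```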

Actual game

Click to expand

Benchmarks are only useful if you have existing ones to compare to. So I tested Dirt Rally 2.0. Because it has possibly the worst VR support available, and runs like a pig at the best of times, it’s my touchy-feely benchmark tool of choice. I loaded up the game on a stage I literally use for graphics/VR settings tweaking, and couldn’t tell any difference from native. No frame drops, no stutters. Did 3 runs, and managed to get within 3 tenths of my stage record. So I’d say it’s working well, but don’t AT ME.

What should I learn from this?

Click to expand

So, looking at this project that has kept me sane (though some say insane): IT DOESN’T HAVE TO BE THIS COMPLICATED. If you take one thing away from this, I hope it is: oh, so it’s possible to play games in a Windows virtual machine that’s running on top of Linux. Needless to say, I went balls to the wall on this. A computer with enough stuff to handle work and play at the same time, all the while keeping that dirty little Windows 10 privacy-disrespecting operating system in a padded room, alone, to contemplate what it’s done.

BUT KELVIN, I want to try this, but keep it simple. What’s the minimum I need?

Click to expand

A CPU and mainboard that support virtualization. This particular configuration uses 2 graphics devices, which can be 2 discrete GPUs, or one onboard GPU and one discrete one. (You can actually make this work with only one GPU, but I haven’t gone there yet, so I don’t know the specifics of that method.) You need a screen with 2 video ports. You will also need a mainboard that has good IOMMU support. That is, the hardware addresses of every port and controller on the mainboard need to be grouped separately. Or, at the very least, the ones you choose to pass through to the guest VM need to be in their own groups.

Most important, google “mainboard model name + IOMMU”, and I guarantee someone somewhere has already done the test to see if it’ll work.

That all said, you can make do without this, because there is an ACS patch which does the splitting in software just before the VM loads, though it takes a performance hit.
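In practice that boils down to a kernel command line along these lines (a GRUB example; the ACS override option only does anything on a kernel built with the patch, e.g. linux-zen on Arch, and the exact flags are the commonly documented ones, not gospel):

```shell
# /etc/default/grub on the host, then regenerate with:
#   grub-mkconfig -o /boot/grub/grub.cfg
# Drop the pcie_acs_override part if your board's IOMMU groups are already clean.
GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
```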

As far as brands that usually support this: Gigabyte and ASRock are fantastic. And do note that the only reason I went and bought a new mainboard is because I wanted to compromise less on features and performance. I could have totally made the previous mainboard work. But I’m a madman. ASUS has been seen to work too. Your mileage may vary.

Making it work with one screen is possible, though ideally you want 2 video ports. With both GPUs plugged into the monitor’s 2 video inputs, you can manually switch inputs. If you only have one video input on your monitor, you can actually just keep swapping cables. It’s not ideal, but it will work.

The same can be said of one keyboard, mouse and audio output device. Ideally, 2 separate sets are better, because of the wear and tear of plugging in cables if you only have 1 set. But it’s possible, and it’s also nothing a KVM switch can’t solve: you plug your one keyboard and mouse set into the KVM switch, and use the switcher to toggle between host USB and guest USB. (I should do that, but the wife is using it upstairs. Don’t poke the bear.)

I mostly followed Pavol Elsig’s guides. Though I didn’t just trust him at first: I went through documentation, written guides and video guides, and after watching his videos, his are by far the easiest way to get up and running. Google him; he has a YouTube channel, and keeps all his helper scripts on GitHub. He covers every major distro: Ubuntu, Fedora, Manjaro (which works perfectly well for Arch).

Little gotchas I met along the way:

Click to expand
  1. Gaming mice, or special devices that poll USB at 1000Hz or some other high frequency, should always be connected to a port that is passed through.
  2. Passing through hardware is best for native performance, but it doesn’t mean you always need it. You can also redirect hardware when you don’t need sub-millisecond response times. This is the main difference between the “gaming VM” and the “work VM”.
  3. Some PCIe slots will disable certain features on your mainboard when they are used. The same goes for some M.2 slots. Not every mainboard can use every single physical slot without auto-disabling, or cutting the speed of, an underlying and attached feature on the board. This is where you learn to read the manual’s circuit/block diagram. And even then, they don’t always explain it well.
  4. Pass a USB controller through, not a device attached to a physical port. That will allow you to plug and unplug stuff while it’s running. The reverse is also possible: you DON’T need to pass a whole USB controller through, you could just pass through an actual connected USB device. But if you do this, you can’t unplug it, or it disappears and you’ll need a reboot.
  5. Monitors’ built-in USB KVM switches are a great way to expand a USB 3.0 port to support a ton of devices. Use them. Heck, even if you’re not doing this, it’s a great way to connect things. A display will sleep from inactivity, which can power down connected devices, extending the life of yo’ shit yo.
  6. UEFI snapshots are possible, it’s just not ideal.
  7. TPM can be faked, for those that want Windows 11 :joy:
  8. If you have RGB on your mainboard, GPU or chassis, be prepared to fiddle with some open-source tools for controlling it. They mostly work, but sometimes they don’t.
  9. CPU and chassis fan management software may not work in Linux. If at all possible, set those fan curves and thresholds in the BIOS.
  10. Temperature monitoring is done on the OS that manages the hardware, duh. That means your gaming GPU temp you check in Windows, but your CPU and motherboard temps you check in Linux.
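On gotcha 7: you don’t even have to fake it by hand. If the swtpm package is installed on the host, libvirt can hand the guest an emulated TPM 2.0 with a couple of lines of domain XML, roughly:

```xml
<!-- inside <devices>; needs swtpm installed on the host -->
<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>
```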

If you actually made it this far, here’s a noddy badge :medal_sports:. Tell your mommy, the teacher says you’re fantastic.

3 Likes

Used the “Hide Details” option to edit your post. Hope that works for you.


Also, I didn’t actually read it all the way through, but you can keep your Noddy Badge anyway because your mommy already told me I’m fantastic last night. :smiley:

1 Like

OMG, I’m an idiot. Thank you @GregRedd

1 Like

:rofl:

1 Like

I’m not gonna try this anytime soon (who has a second gfx… not this guy!), but I thoroughly enjoyed your write up, thanks for this!

When I get another gfx some day, I’ll refer back to this post :+1:

2 Likes