Any reason NOT to run Linux in a VM all the time?
I've switched to using Arch Linux for most of my day-to-day work and don't need Windows for anything but gaming and the couple of apps that aren't ported to Linux, like OneNote. My Linux distribution runs in VirtualBox with Windows as the host, and I quite like it that way; snapshots are incredibly useful.
Let’s say I were to pretty much never care about the Windows host and spend 95% of the time in the guest, what would I be missing out on?
Are there serious downsides?
Is performance severely affected and will installing straight onto the machine make my life much more amazing?
Basically everything will work fine, from internet access to installing packages to initializing hardware; however, you will be paying the price for any failure of the Windows machine.
Assuming you can get everything working, and you don’t want to do resource intensive tasks such as playing games or doing large compiles, then I think you’ll be fine.
There are some basic issues you will probably encounter:
- guest time incorrect
- guest screen size or color depth incorrect
- can’t access USB devices (printers, phones, etc.)
To fix this, you should install VirtualBox guest additions. See the VirtualBox Arch Linux guests guide for details.
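On a current Arch guest, for example, that roughly comes down to the following (a minimal sketch; the package and service names have changed between VirtualBox versions, so check the Arch guide mentioned above):

    # Install the guest additions and enable the guest service
    sudo pacman -S virtualbox-guest-utils     # drivers, kernel modules, VBoxClient tools
    sudo systemctl enable --now vboxservice   # time sync, shared clipboard, etc.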
To get some extra features, such as USB 2.0 and Intel PXE support, you can also install the VirtualBox extension pack.
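The extension pack can also be installed from the host's command line; the VBoxManage syntax is the same on Windows and Linux hosts, and the filename below is just a placeholder for whatever version you actually download:

    # Register the extension pack with VirtualBox (run on the host)
    VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-<version>.vbox-extpack
    VBoxManage list extpacks   # confirm it shows up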
After that, there’s a few issues you should know about:
- can’t use USB 3.0
- can’t use IEEE1394/”FireWire”
- can’t use seamless mode in combination with dual-head
- time gets out of sync on 64-bit guests
Obviously your Linux VM will be affected if your Windows system crashes too. Issues I’ve had happen recently:
- Windows host crashes due to driver bug (blue screen)
- Windows host reboots due to security update
When running a virtual machine the biggest performance hit will be to your disk I/O. If at all possible, put your VM on a separate disk and/or use a solid-state drive. Using a virtual SATA drive instead of a virtual IDE drive can help too.
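Switching an existing VM over to a virtual SATA controller, for instance, can be done with VBoxManage while the VM is powered off (a hedged sketch; "Arch" and the image path are placeholders for your own VM name and disk file):

    # Add a SATA controller and reattach the existing disk image to it
    VBoxManage storagectl "Arch" --name "SATA" --add sata --controller IntelAhci
    VBoxManage storageattach "Arch" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "D:/VMs/arch.vdi"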
Don't forget that a VM adds a layer of emulation. Your Unix system will never be as fast in a VM as it is installed natively. Arch Linux is made to fit your tastes; it is a distribution you can customize to its maximum.
I used to run it in a VM, then decided to install it on my computer for good. Now my system boots in about 15 seconds, my builds are a lot faster, and everything works better.
Arch Linux is not that big; you can install it on a small partition. Just make sure you have enough space for your programs on your root partition (I had to reformat mine because it was too small). If you use Windows only for playing, you should consider that option =)
PS: Yes installing straight on your computer will make your life amazing. 😛
What graphical environment are you using in Linux? Most of the modern desktop environments (GNOME, KDE, Unity) are moving towards requiring hardware 3D acceleration support to work properly. Hardware acceleration support for graphics inside VMs is a relatively immature technology at the moment. VirtualBox has experimental support.
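If you want to experiment with that experimental 3D support, it is toggled per VM from the host (a minimal sketch; "Arch" is a placeholder VM name, and the guest additions must be installed inside the guest for it to have any effect):

    # Enable experimental 3D acceleration and give the virtual GPU more video memory
    VBoxManage modifyvm "Arch" --accelerate3d on --vram 128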
I run Ubuntu in a VirtualBox VM, and I think the only issue I hit is that the OpenGL acceleration pass-through to the host is ropey.
There are three ways you could set up the two OSs:
1. Windows host, Linux VM (as you have it).
2. Linux host, Windows VM.
3. Dual boot.
If you want to run Windows games I would not recommend option 2.
If you regularly want to use a Windows only program (that doesn’t run well under Wine) during your Linux session then option 3 won’t work well for you.
If you use non-game Windows stuff so rarely that rebooting isn’t much of a chore then option 3 is the most efficient.
So, the question is: does your current setup annoy you? Or is it good enough? The only real downsides I can think of are the extended boot time and lower memory availability.
BTW, it is possible to set up a dual-boot system where you can also boot the same Linux install inside a VM in Windows, but not the other way around (Linux detects hardware at boot time, but Windows has its drivers hardcoded once installed).
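If you want to try that arrangement, VirtualBox can wrap the real Linux partitions in a raw-disk VMDK that you then attach to a VM from the Windows side (a hedged sketch, run from an elevated prompt on the Windows host; the disk and partition numbers are placeholders for wherever your Linux install actually lives, and getting them wrong can destroy data):

    # Create a VMDK pointing at the physical Linux partitions, then attach it to a VM as its disk
    VBoxManage internalcommands createrawvmdk -filename C:\VMs\arch-raw.vmdk -rawdisk \\.\PhysicalDrive0 -partitions 3,4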
If you want a try-before-you-buy dual-boot setup then try out the Ubuntu "wubi" installer. (Yes, I know you're an Arch guy, but you're just trying it, right?) Wubi installs the Linux disk as an image file within Windows, just like a VM, but it boots it natively rather than under a hypervisor. There's no partition meddling, and you can uninstall it from the Windows Control Panel when you're done. The only downside is that disk I/O performance is slightly reduced.
If you are not using VMs for special purposes (e.g., you need to clone VMs, copy/move them between servers, or keep multiple different test environments), I'd suggest installing Linux as the primary OS for your 95% of activities and installing Windows in a VM from within Linux for your 5% of Windows activities (unless that 5% is extremely CPU/memory intensive, like using Photoshop or video editing). If Linux is your primary OS, it has full access to all your memory and all your CPU cores. If it's inside a VM, you can only assign it a fraction of your resources; generally, at best half the resources of the machine can sensibly be given to a VM. So if you have a quad-core machine with 8 GB of RAM but only assigned 1 core and 2 GB to your VM, performance in the VM, where you do 95% of your work, will suffer significantly.
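For what it's worth, resizing a VirtualBox VM's share of the machine is only a couple of commands while the VM is powered off (illustrative only; "Work" is a placeholder VM name and the numbers depend on your hardware):

    # Give the guest 4 GB of RAM and 2 CPU cores
    VBoxManage modifyvm "Work" --memory 4096 --cpus 2
    VBoxManage showvminfo "Work" | grep -Ei "memory|cpu"   # verify (on a Linux host)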
Using a VM will be slower. For most workloads, virtualization nowadays is very good and the difference will not be noticeable (other than the drop in CPUs/RAM available to the VM). However, if you need hardware acceleration (e.g., for graphics), the VM may not expose your card properly, so video and 3D rendering may suffer significantly inside the VM.
I teach a hands-on class on Linux, and unfortunately, by company policy I’m not allowed to reformat the class-provided laptops, so we’re going by the VirtualBox guest approach.
Ignoring all performance concerns, here are some notes / problems I noticed:
1) Bridged mode and Wireless
Some wireless cards apparently have difficulty having “dual identities”, which means that our routing / firewall / networking lessons go to hell. It’s a known issue – most wireless drivers do not support bridging.
bridge | The Linux Foundation – It doesn’t work with my Wireless card!
This means that if you’re using a wireless interface, you have to do some extra work for the guest to have a “public” IP.
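The usual workaround is to keep NAT networking and forward whatever ports you need into the guest (a hedged sketch; "Arch" and the port numbers are placeholders):

    # Forward host port 2222 to the guest's SSH port, and 8080 to its web server
    VBoxManage modifyvm "Arch" --natpf1 "ssh,tcp,,2222,,22"
    VBoxManage modifyvm "Arch" --natpf1 "web,tcp,,8080,,80"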
2) Desktop Integration
Save for a few wallbangers in design, the modern desktop environments are pretty well thought out, and they offer conveniences that are lost when they don't have full access to input/output or to device attach/detach. For instance, the VBox menu at the bottom gets really annoying if you have a window chooser or notifications there, and some host machines map Ctrl+Alt+arrow (the guest's workspace-switching shortcut) to flipping the display.
I mean, compare how easy it is to detach a USB device in GNOME versus the equivalent number of submenus and clicks in Windows; I know which one I'd prefer any day.
3) USB “stealing”
Sometimes Windows just doesn't want to let go of a USB drive; telling VirtualBox to attach it doesn't always work, most likely when Windows is reading its contents for some reason or other. And then there are USB devices that aren't straight-up storage devices but perform a mode-switch-like action to make their drives accessible; those are annoying to attach to the Linux machine.
4) Stability
It's typically easier to "break" Windows than Linux, which is why you generally want Linux "protecting" the Windows instances rather than vice versa. I have already lost a couple of work days to staff overwriting the wrong files and breaking both our VirtualBox installation and our Linux images.
5) Command Line Tools
On Linux at least, you have the option of mucking around with Vbox disk images using qemu-nbd and the network block device.
QEMU/Images – Wikibooks, open books for an open world – Mounting an image on the host
This lets you look at and modify the contents of the guest OS disk without having to boot it, for example, if you rendered it unbootable.
You could also script backups of VDIs (or just of their contents) or switch VirtualBox "profiles" via symlinks, all of which is a lot easier in bash.
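A minimal sketch of the qemu-nbd approach, assuming a Linux box with QEMU installed and the VM powered off (the device, image path, and mount point are placeholders):

    # Expose the VDI as a block device and mount its first partition
    sudo modprobe nbd max_part=16
    sudo qemu-nbd -c /dev/nbd0 guest-disk.vdi
    sudo mount /dev/nbd0p1 /mnt
    # ... inspect, back up, or repair files under /mnt ...
    sudo umount /mnt
    sudo qemu-nbd -d /dev/nbd0   # disconnect when done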
I can tell you that, in my experience, the opposite configuration is better: Linux as the host and Windows as the guest, because of both performance and stability. At the moment I am working in the office with a Windows host and a Linux guest (I need both of them), but on my personal laptop I have a Linux host with a Windows guest, and performance is better on my laptop even though it has fewer resources.
Anyway, I can't see any problem in your configuration that doesn't have a workaround; it's just a matter of taste.
I use a similar configuration, and I find it incredibly useful simply because I can copy and move my work Linux VM between machines.
I have only found two significant downsides to using a VM.
- If the host is using a wireless network connection, a VPN is very unreliable in the guest
- Multi monitor setups generally suck in a VM.
Point number 2 can effectively be overcome by using VMWare and Unity – Unity being a VMWare feature that runs applications in windows on the host’s desktop (not to be confused with Ubuntu Unity).
If you use this box mainly via SSH, there’s a good chance that you’re in the butter zone where it really doesn’t matter much whether it’s a VM or on real hardware. Many of the problems mentioned in other answers come up when you’re trying to use the guest OS as a GUI desktop. Linux servers are very happy inside VMs; a huge chunk of the web hosting market is Linux in VMs.
I’ve run into just a few cases where I was forced to run a Linux server on real hardware, instead of in a VM:
Real Hardware Access
Sometimes you need to use some PCI card that the VM system can’t virtualize. Say, a 4-channel MPEG-2 decoder. Some VM systems can give exclusive ownership of the card to the VM, such as via Intel’s VT-d technology, but that’s not without its problems:
- There's a speed hit. It might matter.
- Not all VM systems can do this, and you might not have the freedom to switch to one that can.
- There may be inessential consequences, as with VMware ESXi 5, where giving a VM ownership of a card requires rebooting the host and then prevents it from doing snapshots of that VM. (By inessential I mean that these problems could be solved; it just takes development time.)
Big Storage
Your VM system may not be able to create a virtual disk as large as the bare hardware allows for real disks. VMware ESXi 5, for instance, has a 2 TB virtual volume size limit. If you need a larger single volume inside the VM, you have to jump through hoops to work around the limitation:
- You can push a RAID controller through to the VM with VT-d, but again, it has problems.
- You can push a passel of 2 TB virtual volumes through to the VM and string them together with LVM (a rough sketch of that follows this list), but you've bought yourself a passel of problems, too. For one thing, when (not if!) one of the physical disks dies, on bare hardware you could diagnose and fix it using the vendor's management software, such as 3Ware's 3DM or tw_cli; try finding versions of those that will run on a VMware ESXi 5.0 host! Now you're forced to reboot so you can use the BIOS management interface. For another, the abstraction layer has disconnected the virtual volumes from the physical volumes, so the software RAID/LVM layer in the guest OS can't manage the disks efficiently. That layer may think it's being clever writing to the disks in round-robin fashion, but because the virtual volumes probably share some of the same physical disks, performance takes a hit when a disk gets back-to-back writes.
- You can create the volume on a real hardware system and export it to the VM via NFS, but there's a speed hit when you do that, too.
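The LVM concatenation mentioned above looks roughly like this inside the guest (a hypothetical sketch; the device names, volume group name, and mount point are placeholders):

    # Pool several 2 TB virtual disks into one big logical volume
    pvcreate /dev/sdb /dev/sdc /dev/sdd           # mark each virtual disk as an LVM physical volume
    vgcreate bigvg /dev/sdb /dev/sdc /dev/sdd     # combine them into one volume group
    lvcreate -l 100%FREE -n bigvol bigvg          # one logical volume spanning the whole pool
    mkfs.ext4 /dev/bigvg/bigvol
    mount /dev/bigvg/bigvol /srv/storage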
You may be able to counterbalance some of the above with virtualization advantages, such as the ability to pause a VM, move it to another host, and start it back up again seamlessly.
One thing you might want to check is whether your hardware has been tested with the Linux distro you are using. I ran into the issue that a distribution ran perfectly well in a VM but was horribly unstable natively, due to a graphics card that wasn't fully tested with the distribution. Fedora, for example, publishes a list of hardware that has been fully tested. The fact is, no Linux distribution is going to be 100% stable on state-of-the-art hardware with new, buggy drivers. I tried to do what you did but ended up deleting my Linux partition after several distributions would not run stably enough for me. In my opinion, unless you are doing something that requires hardware acceleration, there is absolutely no need to run Linux natively.
My power consumption rises drastically whenever I start VirtualBox.
In my case, I run Linux as both host and guest, and I don’t know if the host/guest OS makes a difference, or if this is inherent to either VirtualBox or the virtualization technique.
Using powertop I can see that the process “VBoxHeadless” is frequently the single largest consumer of power on my system.
If this is a desktop system, maybe this doesn’t matter to you, but on my laptop, I want to turn VirtualBox off whenever I have no need for the guest system.
I started off doing what you do, *nix in a virtual machine. This is great for trying it, but I suggest flipping it around. Windows can run surprisingly well in a VM. If you mainly use Linux, then why not make the host system use Linux?
Pros:
- More control over host issues (i.e., crashes and automatic reboots are less likely with Linux)
- Linux uses fewer resources than Windows when idle (resources you could instead allocate to a virtual machine)
- VirtualBox, in my opinion, runs better on Linux. I've tried it both ways.
- Easy to set up awesome speed boosts for a virtual machine on Linux. I use software RAID across two consumer hard drives to make a Windows XP VM boot into Firefox in 8 seconds flat (see the sketch after these lists).
Cons:
- Possible driver issues when running Linux as a host
- 3D acceleration might be difficult to get going on Linux
- If you use Windows for playing games, a virtual machine may not be fast enough
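The software-RAID trick from the pros list is roughly the following on the Linux host (a hypothetical sketch; /dev/sdb and /dev/sdc are placeholder drives, and RAID 0 trades redundancy for speed, so keep backups):

    # Stripe two drives together and keep the VM disk images on the array
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext4 /dev/md0
    mount /dev/md0 /srv/vms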
I also use this config (Ubuntu guest on Windows).
Pro:
- No changes to the original Windows install, so my company's IT department will still support any Windows-related problem or crash.
Con:
- Slow to start: you need to boot Windows, then start the virtual machine.
- No wireless bridging.
There is absolutely no reason not to do so, as long as everything you want to do in the host and client works as you want.
I used this setup on my Sony PCG-Z505 with VMware from early 2000 until mid-2003: Windows 98 as the host and SuSE Linux as the guest. The main reason for that setup was that I could use the IMAP server under Linux from Outlook Express on the host while being mobile (I had the same thing on my desktop Linux machine before that, with Win98 under Linux). The Linux guest also did spam filtering and so on in Linux. I could also log in to the servers at work using ssh from a more familiar environment.
The VM host nicely shielded Linux from hardware problems. IIRC there were some problems with wireless, but most of the time I was on a wired connection at home or in the office. If not, I would have Outlook Express pick up the mail and push it to the IMAP server, temporarily losing spam filtering while on wireless only.
I could not run this the other way around (as I had done on my desktop before that), because I normally ran Linux without graphics; otherwise things would not fit in memory. With Linux as the host I would have had to run it in graphics mode all the time, leaving too little memory for Windows 98 to run Word without swapping.
I’m adding a note to the already existing (and excellent) answers: it is also possible to run Linux and Windows side by side.
The project Cooperative Linux is aimed at this:
Cooperative Linux is the first working free and open source method for optimally running Linux on Microsoft Windows natively. More generally, Cooperative Linux (short-named coLinux) is a port of the Linux kernel that allows it to run cooperatively alongside another operating system on a single machine.
and there are even Linux distros that run on it: TopologiLinux and andLinux.
Unfortunately it seems that these projects have been abandoned; the latest release of coLinux is 3 years old, and the latest releases of both distros are 8 years old.