Is there any benefit in disabling kvm module on bare metal that won't ever run kvm guests?

Somewhere along the line, the kvm kernel module became enabled by default. As hard as it may be to believe in this day and age, not everyone needs to run KVM guests on every host that gets built. Is there any performance cost to leaving this default enabled even if you have zero intention of deploying KVM guests? All my googling on the subject has turned up no useful information on the pros and cons of having this module present – performance, security, or any other potential impact.

Or, is this just a case of "if it ain’t broke, don’t fix it"?

Asked By: guzzijason


Or, is this just a case of "if it ain’t broke, don’t fix it"?

I’d go with that 🙂

There’s no cost to the module merely being present. There might be a cost to the CPU having the virtualization feature enabled (on some older x86_64 systems you could disable it in the UEFI setup; I’m not sure that’s still the case on modern machines), namely that nested page tables add a layer of page-table redirection. But since a host that runs no guests only ever uses a single value for that redirection, the effect is negligible.
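Whether the CPU advertises those hardware virtualization extensions (and whether firmware left them visible) can be checked from userspace; a minimal sketch, assuming a Linux host with `/proc` mounted:

```shell
# Check whether the CPU advertises hardware virtualization extensions:
# vmx = Intel VT-x, svm = AMD-V. No output means the flag is absent
# or was disabled in the firmware setup.
grep -m1 -o -E 'vmx|svm' /proc/cpuinfo
```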

Things get more complicated once you consider features that only work when virtualization is enabled – mostly the IOMMU. (If you actually use IOMMU groups, there is a minor performance overhead – not enough for people with billions of dollars in servers to abandon it; draw your own conclusions.)
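Whether an IOMMU is actually active on a given host is visible in sysfs; a quick sanity check, assuming a Linux host with sysfs mounted:

```shell
# Count the IOMMU groups the kernel has set up. Zero (or a missing
# directory) means no IOMMU is active on this host.
ls /sys/kernel/iommu_groups/ 2>/dev/null | wc -l
```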

So, in a sense, it really doesn’t cost you anything (OK, maybe a couple of kilobytes of module memory) to keep it loaded. Then again, the kernel works just as well without it. If you’re building a slimmed-down kernel anyway, it’s fine to exclude it.
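And if you do want the modules gone without rebuilding the kernel, checking for them and blacklisting them is straightforward (the config file name below is just a common convention; adjust for your distribution):

```shell
# See whether any kvm modules are currently loaded
lsmod | grep '^kvm' || echo "no kvm modules loaded"

# To keep them from loading at boot, blacklist them in a modprobe
# config file, e.g. /etc/modprobe.d/disable-kvm.conf containing:
#   blacklist kvm_intel
#   blacklist kvm_amd
#   blacklist kvm
```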

Answered By: Marcus Müller