Comment Re:Fats are the key! (Score 1) 763
I was so incredibly disappointed by Ruth's Chris for exactly that reason. The steak wasn't just overcooked; it wasn't very high quality either.
The short answer is: Whatever.
It's a little more nuanced than that. To the extent that a long term session is more predictable than a short term session (or vice versa), it may matter. See "Timing Analysis of Keystrokes and Timing Attacks on SSH".
The Xen PV drivers have historically been closed source for Windows. Fortunately, a brave soul in the community stepped up and wrote a set of GPL drivers, but Citrix still maintains its proprietary drivers. In general, there's a great deal of fragmentation with Xen PV drivers because they haven't been open source from the start.
I think the fact that KVM is avoiding this is quite good.
I've always wondered how paravirtualizing some functions such as I/O or networking affects security.
Say a VM gets compromised and is able to do whatever it wants with its block devices; how tough would it be to get out of the VM? If malicious code is able to reach the host's block device driver, which runs in kernel mode, and start running code directly on the host's OS, it's game over.
Unlike Hyper-V and Xen, in KVM a paravirtual device looks an awful lot like an emulated device. For instance, virtio-net appears to the guest as a normal PCI device. It's quite conceivable that a hardware vendor could implement a physical virtio-net card if they were so inclined. In our backend, we implement virtio-net like any other emulated device.
This means from a security perspective, it's just as secure as an emulated driver. It's implemented in userspace and can be sandboxed as an unprivileged user or through SELinux.
VMware uses a similar model. Hyper-V and Xen prefer not to model hardware at all and instead use special hypervisor-specific paths. From a security perspective, the fact that these devices are on a different code path means they have different security characteristics than emulated devices. For instance, in Xen, a paravirtual network device is backed directly in the domain-0 kernel, so an exploit in the Xen PV network device is much more severe than an exploit in a Xen emulated network device (since the device emulation happens in an unprivileged stub domain).
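To make the "virtio devices are just PCI devices" point concrete, here's a rough C sketch (my own illustration, nothing from QEMU or KVM) that walks a Linux guest's sysfs and flags devices carrying the virtio PCI vendor ID. The sysfs paths and the 0x1af4 vendor ID are details I'm supplying from memory, so double-check them on your guest.

/* List PCI devices and mark the ones whose vendor ID matches virtio's
 * (0x1af4). In a KVM guest, virtio-net, virtio-blk, etc. show up here
 * exactly like any other PCI card would. */
#include <stdio.h>
#include <string.h>
#include <dirent.h>

int main(void)
{
    const char *base = "/sys/bus/pci/devices";
    DIR *dir = opendir(base);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    struct dirent *de;
    while ((de = readdir(dir)) != NULL) {
        if (de->d_name[0] == '.')
            continue;

        char path[512];
        snprintf(path, sizeof(path), "%s/%s/vendor", base, de->d_name);

        FILE *f = fopen(path, "r");
        if (!f)
            continue;

        char vendor[16] = "";
        if (fgets(vendor, sizeof(vendor), f))
            vendor[strcspn(vendor, "\n")] = '\0';
        fclose(f);

        printf("%s vendor=%s%s\n", de->d_name, vendor,
               strcmp(vendor, "0x1af4") == 0 ? "  <-- virtio" : "");
    }
    closedir(dir);
    return 0;
}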
CONFIG_HW_RANDOM_VIRTIO enables it. It's been there for quite a while.
We could easily support it in KVM but I've held back on it because to really solve the problem, you would need to use
lguest does support this backend device though.
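For what it's worth, on the guest side virtio-rng just registers with the kernel's hw_random framework, so once CONFIG_HW_RANDOM_VIRTIO is built and a backend is attached you can pull bytes straight from /dev/hwrng. A minimal sketch (mine; the device node name is an assumption about your guest's setup):

/* Read a few bytes from the guest's hardware RNG device and print them
 * as hex. With virtio-rng this data ultimately comes from the host's
 * entropy source via the backend. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    unsigned char buf[16];
    int fd = open("/dev/hwrng", O_RDONLY);
    if (fd < 0) {
        perror("open /dev/hwrng");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0) {
        perror("read");
        close(fd);
        return 1;
    }

    for (ssize_t i = 0; i < n; i++)
        printf("%02x", (unsigned)buf[i]);
    printf("\n");

    close(fd);
    return 0;
}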
No, SMM is loaded from ROM memory by the BIOS. You would have to reload the SMM code every time.
What's more, this only works while the SMM code stays resident in the CPU cache. You would need something running at the OS level that was constantly rewriting this memory to ensure it stayed in the CPU cache.
I expect it would actually be quite difficult to build a rootkit with this that wasn't just as easy to detect as any other rootkit.
Both Nehalem and Barcelona (Phenom) are quad-core and, most importantly, support EPT and NPT respectively. This feature has a significant impact on virtualization performance.
If you want to run 4 VMs, you'll probably want to have a fair bit of memory. 4GB would be good, 8GB would be better.
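If you want to check whether a box actually exposes these features, a reasonably recent Linux kernel puts the relevant bits in /proc/cpuinfo. Here's a rough C check for the vmx/svm and ept/npt flags; the flag names are what I remember the kernel using, so verify against your own /proc/cpuinfo.

/* Scan the first "flags" line of /proc/cpuinfo and report whether the
 * virtualization (vmx/svm) and nested paging (ept/npt) flags are set. */
#include <stdio.h>
#include <string.h>

/* Match "name" only as a whole space-delimited token. */
static int has_flag(const char *line, const char *name)
{
    size_t len = strlen(name);
    const char *p = line;

    while ((p = strstr(p, name)) != NULL) {
        int start_ok = (p == line) || (p[-1] == ' ');
        int end_ok = (p[len] == ' ' || p[len] == '\n' || p[len] == '\0');
        if (start_ok && end_ok)
            return 1;
        p += len;
    }
    return 0;
}

int main(void)
{
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[4096];
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "flags", 5) != 0)
            continue;

        printf("vmx: %d  svm: %d  ept: %d  npt: %d\n",
               has_flag(line, "vmx"), has_flag(line, "svm"),
               has_flag(line, "ept"), has_flag(line, "npt"));
        break;  /* the first CPU is enough for this check */
    }
    fclose(f);
    return 0;
}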
Is there any reason you couldn't keep a list of processor dependent memory locations and regenerate them for the current machine as part of the migration?
The halting problem?
The new Intel/AMD CPU features that allow masking of CPUID bits while running virtualized also make processors recent enough that most of the interesting features are present - MMX, SSE up to ~3. The "common subset" ends up looking like an early Core2 or a Barcelona (minus the VT/SVM feature bits, of course) - Intel and AMD run about a generation behind on adding each other's instructions. Run on anything older than the latest processors, and you have to trap-and-emulate every CPUID instruction. Enough code still uses CPUID as a serializing instruction that this has noticeable overhead.
Modern OSes do not use CPUID for serialization. We trap CPUID unconditionally in KVM and have not observed a performance problem because of it. Older OSes did this but I'm not aware of a modern one.
My understanding is that the recent CPUID "masking" support exists because, if you are not using VT/SVM (Xen PV or VMware JIT), there is no way to trap CPUID when it's executed from userspace. AMD just happened to have this feature already, so when Intel announced "FlexMigration", AMD was able to simply document it. I don't think it's really all that useful though.
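For reference, here's what CPUID looks like from userspace: a small C sketch using GCC/Clang's <cpuid.h> helper to read the vendor string and a couple of the feature bits being discussed. Under KVM this instruction traps to the hypervisor, which is exactly where the masking gets applied. The leaf and bit numbers are the standard ones, but treat this as a sketch rather than a feature-detection library.

/* Query CPUID leaf 0 (vendor string) and leaf 1 (feature bits). */
#include <stdio.h>
#include <string.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 0: the vendor string is packed into EBX, EDX, ECX (in that order). */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;

    char vendor[13];
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    vendor[12] = '\0';

    /* Leaf 1: SSE3 is ECX bit 0, SSSE3 is ECX bit 9. */
    __get_cpuid(1, &eax, &ebx, &ecx, &edx);

    printf("vendor: %s  sse3: %u  ssse3: %u\n",
           vendor, ecx & 1u, (ecx >> 9) & 1u);
    return 0;
}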
(As a side note to everyone reading, the reason Linux timekeeping is such a problem is that TSC issue. Intel long ago stated TSC was NOT supposed to be used as a timesource. Linux kernel folks ignored the warning, made non-virtualizable assumptions, and today are in a world of hurt for timekeeping in a VM. And only now, many years later, are patching the kernel to detect hypervisors to work around the problem.)
The TSC is often used as a secondary time source, even outside of Linux, but yes, Linux is the major problem. Windows is not without its own faults with respect to timekeeping, though. Dealing with missed timer ticks for Windows guests is a never-ending source of joy. Virtualization isn't the only source of problems here, either: certain hardware platforms have had overzealous SMM routines, and the result was really bad time drift when running Windows.
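To show what "calibration" means here, a deliberately crude sketch (mine, not anything a real kernel does verbatim) that estimates the TSC rate the way a guest OS conceptually does: count cycles across a known slice of wall-clock time. An OS that caches a number like this is exactly what breaks when the VM ends up on a host whose TSC ticks at a different rate. Assumes GCC/Clang on x86 Linux.

/* Estimate the TSC frequency by counting cycles across ~1 second of
 * sleep. Real calibration loops are more careful, but the cached
 * result goes stale the same way after a migration. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <x86intrin.h>

int main(void)
{
    struct timespec one_sec = { .tv_sec = 1, .tv_nsec = 0 };

    uint64_t start = __rdtsc();          /* read the time stamp counter */
    nanosleep(&one_sec, NULL);           /* wait ~1s of wall-clock time */
    uint64_t end = __rdtsc();

    printf("estimated TSC frequency: %llu Hz\n",
           (unsigned long long)(end - start));
    return 0;
}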
FWIW, KVM live migration has been capable of this for a long time now.
KVM actually supported live migration of Windows guests long before Xen did. If you haven't given KVM a try, you should!
Declaration: VMware support engineering here, but speaking strictly on my own behalf.
The stability issues are justified if you consider all types of VMs. Windows 2003, default RHEL5 kernels, etc. use more than the basic set of assembler instructions (disk I/O code uses MMX, SSE, etc.).
KVM goes to great lengths to mask out, by default, CPUID features that aren't supported across common platforms. You have to opt in to those features since they limit a machine's migratability.
However, I won't say this is always safe. In reality, you really don't want to live migrate between anything but identical platforms (including identical processor revisions).
x86 OSes often rely on the TSC for timekeeping. Even if you migrate between different steppings of the same processor, the TSC calibration that the OS has done is wrong, and your timekeeping will start to fail. You'll either get really bad drift or potentially see time go backwards (causing a deadlock).
If you're doing a one time migration, it probably won't matter but if you plan on migrating very rapidly (for load balancing or something), I would take a very conservative approach to platform compatibility.
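If you do want a quick sanity check before migrating between two boxes, something like this (my own suggestion, not a KVM tool) dumps the processor signature from CPUID leaf 1 so you can compare family/model/stepping on both hosts first.

/* Print the processor signature (family/model/stepping) from CPUID leaf 1. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return 1;

    unsigned int stepping   = eax & 0xf;
    unsigned int model      = (eax >> 4) & 0xf;
    unsigned int family     = (eax >> 8) & 0xf;
    unsigned int ext_model  = (eax >> 16) & 0xf;
    unsigned int ext_family = (eax >> 20) & 0xff;

    /* Standard x86 encoding: the extended fields extend family 0xf
     * (and extended model also applies to family 6 parts). */
    if (family == 0xf)
        family += ext_family;
    if (family == 0x6 || family >= 0xf)
        model += ext_model << 4;

    printf("family 0x%x  model 0x%x  stepping 0x%x  (raw eax 0x%08x)\n",
           family, model, stepping, eax);
    return 0;
}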