I recently decided to upgrade a couple of commodity hardware staging boxes I use for hosting Virtual Machines - they were pushed to capacity and had no viable upgrade path. After a bit of umming and ahing over what would be cost-effective, I ended up
opting for 2 x the following kit:
- Q9550 processor (2.83GHz, though they'll probably end up clocked around 3GHz or a little above, once I'm happy the boxes are stable).
- Asus P5Q-EM motherboard (cheap, on-board graphics, supports 16GB of DDR2).
- G.SKILL 16GB DDR2 kit.
As drop-in replacements these worked out really cost-effective, with the RAM being the most expensive part at just under $800 for the kit.
So for around $3K NZ I end up with 32GB of RAM and 8 reasonably fast cores - not bad, plenty of bang for buck compared to picking up equivalent server-spec hardware - and I avoid the more expensive FB-DIMMs (which would have worked out at least $3K for the RAM alone).
I decided to go with HyperV on these new machines, replacing my existing VMWare Server setup... it's actually been reasonably painless to port the various machines across, once I got a handle on a few little gotchas.
So for Windows 2K3 virtual servers I've had to:
- Uninstall the VMWare tools (via add/remove programs).
- Uninstall the VMWare video driver (from the device manager).
- Run the VMWare cleanup utility (I had to do this on a few machines to clean up a "VMWare memory service" that was failing to start after moving to HyperV) - you can grab the tool from the bottom of this KB article; it works on VMs as well as hosts. In one case I also had to do some registry spelunking to remove the errant service.
- Add an IDE drive (the size doesn't matter - I just set it to the smallest possible value) in the VMWare server - see here for details. Skipping this will result in a BSOD if your VMWare server is using virtual SCSI drives, as mine were.
- Shut down the virtual machine.
- Convert the VMDK VMWare hard drives to VHDs using this tool (ignoring the IDE drives I added).
- Create a new machine in HyperV and add the drives back in (as IDE drives).
- Boot it up.
- Insert the Integration Services disk (CTRL+I) - let it do its thing, and reboot when prompted.
- All done.
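As an aside, the "add an IDE drive" step above is just a couple of lines in the guest's .vmx file if you'd rather edit it by hand - a sketch, assuming the first IDE channel is free and a small dummy disk has already been created (the .vmdk filename here is a placeholder, not a real file from my setup):

```
ide0:0.present = "TRUE"
ide0:0.fileName = "dummy-ide.vmdk"
```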
For the few Ubuntu & Debian virtual machines I just:
- Uninstalled the VMWare tools, if they were installed (I don't always bother with Linux; most of the machines aren't disk or network intensive, so it never seems to make much difference).
- Converted the VMDK VMWare hard drives to VHDs using this tool.
- Created a new machine, adding the drives back in (as IDE drives).
- Removed the integrated network adaptor that's added by default and replaced it with the "Legacy network adaptor".
- Booted it up, quickly hit "e" and edited the grub boot line, replacing "root=/dev/sda1" with "root=/dev/hda1".
- At this point it should finish booting (otherwise it'll hang trying to boot and eventually dump you into an (initramfs) prompt).
- Logged in, located "menu.lst" (normally in /boot/grub/), changed the references to /dev/sda1 to /dev/hda1, then saved and rebooted (so the change of root filesystem device persists).
- Optionally, installed the Linux integration components, though it takes some trial and error to get them working (I'm still fiddling with this when I can be bothered) - once working, they should allow the virtual machine to drop the legacy network adaptor and provide improved disk I/O & network performance.
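The sda-to-hda fix in the last couple of steps can be scripted once you're logged in - a minimal sketch, assuming GRUB legacy with menu.lst (the function name is my own):

```shell
#!/bin/sh
# Swap /dev/sda* references to /dev/hda* in a config file,
# keeping a .bak copy of the original first.
fix_root_device() {
    cp "$1" "$1.bak"
    sed -i 's|/dev/sda|/dev/hda|g' "$1"
}

# After logging into the migrated VM:
# fix_root_device /boot/grub/menu.lst
```

If /etc/fstab mounts by device name rather than UUID, it's worth running the same substitution over it too.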
And for the Windows Server 2008 virtual machines I skipped adding the extra virtual IDE drive prior to conversion, as they just seemed to work (i.e. no BSOD).
So far it's been a reasonably painless process... the best part is I now have plenty of room for some additional build servers to target my own open source projects, host some examples of RESTful/OAuth services, etc.
Edit - Feb 1st 2008: It should be noted that I've since moved over to VMWare Server version 2, because I found HyperV's performance to be diabolical for CPU-intensive tasks, i.e. the server would utilize only about 10% of the available CPU resources when running continuous integration builds, resulting in builds taking well over 30 minutes instead of the usual 2 or 3 - at the time I found a few other users suffering from the same problem, but no solution.
At first I thought it might be a result of converting the VMWare machines to HyperV, but freshly paving new build servers didn't make any difference. Moving over to VMWare Server v2 yielded very high CPU utilization, with builds that used to take 3 minutes on my old build server now taking less than a minute on the new host machine :)
My medium-term plan is to migrate over to ESXi, once I've sourced some Intel server gigabit NICs, using iSCSI against my openfiler box for storage.