Why?
Ok, I think I did my due diligence. With all the entries I have in this blog as proof, I think I put up with the ESXi box long enough. It was not as bad as Xen, but I got tired of not being able to get it to behave as it should. And when I could not do PCI passthrough -- I am not even talking about the Netronome card, but every single PCI or PCIe card I had available, all of which I had no problem passing to a vm guest using KVM as the hypervisor -- it was time to move on. The writing was on the wall after almost a year of not getting an answer from VMware.
The Plan
- While the ESXi server, vmhost2, is still running, export the guests in .ovf format to a safe location (see the export sketch after this list). Just to be on the safe side, write down somewhere the specs for each guest (memory, cpu, OS, which network it is using, etc).
- Build the new vmhost2 using Debian or CentOS as the base OS and KVM as the hypervisor. Some things to watch out for:
- Set up the network trunk and bridges to replicate the old setup (see the bridge sketch after this list).
- Use the same IP as before since we are keeping the same hostname.
- Set up the logical volume manager so I can move things around later on (see the LVM sketch after this list).
- Configure ntp to use our internal server.
- Configure DNS to use our internal server.
- Home directories of users who need to access the vm host itself will be mounted through autofs. If that fails, I can log in as root using ssh keypair authentication. If that is down too (say, network issues), switch to the console.
- Like in the old vmhost2, the ISOs for the install images are available through NFS.
- Add whatever kernel options we might need (for instance, intel_iommu=on or amd_iommu=on if we want PCI passthrough). Remember we are building this from scratch, not just dropping in a prebuilt system like XenServer. Or ESXi.
- Import enough vm guests to validate the system (see the import sketch after this list). I might take the opportunity to do some guest housecleaning.
- Add any PCI/PCIe cards we want to pass through (see the IOMMU sketch after this list).
- Import the rest of the vm guests.
- (Future:) Set it up so it can move/load-balance vm guests with vmhost, the other KVM host (see the migration sketch after this list).
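For the export step, here is a rough Python sketch of how it could be scripted around VMware's ovftool. The host name, user, guest names, and destination path are all made up, and the exact vi:// inventory path can differ between ESXi boxes, so treat it as a starting point rather than a recipe:

    #!/usr/bin/env python3
    """Sketch: export ESXi guests as OVF with VMware's ovftool.

    Assumes ovftool is installed and the guests can be reached with a
    vi:// locator; the host, user, and guest names below are made up.
    """
    import subprocess

    ESXI_HOST = "vmhost2.example.com"     # hypothetical
    ESXI_USER = "root"                    # hypothetical
    GUESTS = ["mailserver", "webserver"]  # hypothetical guest names
    DEST = "/export/ovf"                  # the "safe location"

    for guest in GUESTS:
        # ovftool prompts for the password; the inventory path under
        # vi:// may need a datacenter/folder prefix on some setups.
        src = f"vi://{ESXI_USER}@{ESXI_HOST}/{guest}"
        dst = f"{DEST}/{guest}.ovf"
        subprocess.run(["ovftool", src, dst], check=True)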
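For the bridge item, this is a minimal sketch of recreating a trunk-plus-bridges layout with plain iproute2 commands, driven from Python. The NIC name, VLAN IDs, and bridge names are invented; on a real Debian box this would live in /etc/network/interfaces (or the distro's equivalent) so it survives reboots:

    #!/usr/bin/env python3
    """Sketch: tagged sub-interfaces on the trunk NIC, one bridge per VLAN.

    Interface names and VLAN IDs are made up.
    """
    import subprocess

    TRUNK_NIC = "eno1"                     # hypothetical NIC carrying the trunk
    VLANS = {100: "br100", 200: "br200"}   # hypothetical VLAN id -> bridge name

    def ip(*args):
        subprocess.run(["ip", *args], check=True)

    ip("link", "set", TRUNK_NIC, "up")
    for vlan_id, bridge in VLANS.items():
        vlan_if = f"{TRUNK_NIC}.{vlan_id}"
        # tagged sub-interface on the trunk
        ip("link", "add", "link", TRUNK_NIC, "name", vlan_if,
           "type", "vlan", "id", str(vlan_id))
        # bridge the vm guests will attach to
        ip("link", "add", "name", bridge, "type", "bridge")
        ip("link", "set", vlan_if, "master", bridge)
        ip("link", "set", vlan_if, "up")
        ip("link", "set", bridge, "up")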
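For the LVM item, a small sketch of the idea: put the OS partition under LVM and leave free extents in the volume group so volumes can be grown or added later. Device name, volume group name, and sizes are all placeholders:

    #!/usr/bin/env python3
    """Sketch: basic LVM layout with room left to move things around later.

    The partition, VG name, and sizes are made up.
    """
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    PV = "/dev/sda2"     # hypothetical partition set aside for LVM
    VG = "vg_vmhost2"    # hypothetical volume group name

    run("pvcreate", PV)
    run("vgcreate", VG, PV)
    # Deliberately do not allocate the whole VG; free extents are what
    # make "moving things around later on" painless.
    run("lvcreate", "-L", "20G", "-n", "root", VG)
    run("lvcreate", "-L", "8G",  "-n", "var",  VG)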
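For the import item, a sketch of one way to bring a guest over: convert the VMDK that came with the .ovf to qcow2 with qemu-img, then define the guest with virt-install using the specs written down earlier. Guest name, paths, specs, bridge, and the --os-variant value are assumptions:

    #!/usr/bin/env python3
    """Sketch: convert an exported guest's disk and import it into KVM.

    Name, paths, memory/vcpu values, and bridge are made up; they should
    come from the notes taken while the ESXi box was still running.
    """
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    NAME = "mailserver"                        # hypothetical guest
    VMDK = f"/export/ovf/{NAME}-disk1.vmdk"    # disk that came with the .ovf
    QCOW = f"/var/lib/libvirt/images/{NAME}.qcow2"

    # Convert the VMware disk image to qcow2.
    run("qemu-img", "convert", "-f", "vmdk", "-O", "qcow2", VMDK, QCOW)

    # Define and start the guest around the converted disk (no installer run).
    run("virt-install",
        "--name", NAME,
        "--memory", "2048",           # MB of RAM, from the notes
        "--vcpus", "2",
        "--disk", f"path={QCOW},format=qcow2",
        "--network", "bridge=br100",  # hypothetical bridge from the trunk setup
        "--import",
        "--os-variant", "debian11",   # assumption; match the guest's OS
        "--noautoconsole")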
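For the passthrough item, before handing a card to a guest it is worth checking that the IOMMU is actually on (the kernel options mentioned above) and seeing what else shares the card's IOMMU group, since the whole group generally has to go to the guest together. A small sketch that just walks sysfs:

    #!/usr/bin/env python3
    """Sketch: list IOMMU groups and the PCI devices in each.

    If /sys/kernel/iommu_groups is empty, passthrough will not work --
    check the BIOS (VT-d/AMD-Vi) and the kernel command line.
    """
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    if not groups.is_dir() or not any(groups.iterdir()):
        raise SystemExit("No IOMMU groups found -- check BIOS and kernel options.")

    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        for dev in sorted((group / "devices").iterdir()):
            # dev.name is the PCI address, e.g. 0000:01:00.0
            print(f"group {group.name}: {dev.name}")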
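And for the future migration item, the libvirt Python bindings can live-migrate a guest between the two KVM hosts. This is only a sketch, assuming both hosts see the guest's storage (the iSCSI fileserver, say) and have matching bridge names; the host and guest names are made up:

    #!/usr/bin/env python3
    """Sketch: live-migrate a guest to the other KVM host via libvirt.

    Assumes shared storage and matching bridges on both hosts.
    """
    import libvirt

    GUEST = "mailserver"                       # hypothetical guest
    SRC_URI = "qemu:///system"
    DST_URI = "qemu+ssh://root@vmhost/system"  # the other KVM host, over ssh

    src = libvirt.open(SRC_URI)
    dst = libvirt.open(DST_URI)
    dom = src.lookupByName(GUEST)

    # Live migration; keep the definition on the destination, drop it here.
    flags = (libvirt.VIR_MIGRATE_LIVE
             | libvirt.VIR_MIGRATE_PERSIST_DEST
             | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE)
    dom.migrate(dst, flags, None, None, 0)

    dst.close()
    src.close()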
Note: I did not wipe the original hard drive; instead I just bought a 111.8GB (128GB in fake numbers) SSD to run the OS from. I did not get a second drive and make it a RAID for now, since I plan on running the OS from that disk, which will be configured using Ansible so I can rebuild it quickly. Any vm guest running in this vm host will either run from the fileserver (iSCSI) or in a local RAID setup of sorts. Or I might simply deploy ZFS and be done with it. With that said, I might run a few vm guests from that drive to validate the system.
Note: This project will be broken down into many articles; otherwise it would be too long and boring to read in a single sitting (some of the steps apply to other projects besides going from ESXi to KVM). I will try to come back and add links to those articles, treating this post as the index.