Let's start by getting something straight: this is not something that is normally done in an environment that has a cluster of XenServer hosts. In those, you are probably using some kind of external storage like a SAN which feeds LUNs to the servers. But there are those who did not set up their clusters like that for whatever reason -- cost, space, resources -- and as a result built their XenServer hosts with local storage for the VM clients. Or maybe they just have a single XenServer. For those, this might be a bit useful.
Note: As usual, most of this will be done from the command line; if you have read my previous articles you would expect nothing less from me. I am just not that original; just deal with it already.
For this example, let's say you bought a server with lots of drive bays (say, 15) and a nice RAID controller that is supported by XenServer. You got it running by creating a RAID1 array out of two small drives and installing the OS on it, and those drives happen to have enough free disk space to build a few test VM clients. This server shall henceforth be called vmhost3.
Note: If you had built a Linux box and then installed Xen on top of it, like Red Hat used to recommend before they switched to KVM, this is much easier because you can use the usual LVM commands. Since we are using XenServer, things are conceptually the same but the implementation is slightly different.
So you built a few test vm clients and verified all is nice and peachy. I mean, even your VLAN trunks work properly. So, now it is time to go into production.
- You used 2 drive bays to put the OS on; you still have 13 left to play with. So, you grab, say, 6 SSDs and make a nice RAID6 out of them. I will not go into detail here because it is out of scope for this discussion, but I can write another article showing how to create the new RAID from the command line in XenServer. Let's just assume you did it and it works.
- XenServer by default creates local storage for VM clients using LVM: it allocates a logical volume (lv) as the drive(s) for those clients, which then format them to whatever filesystem they want. So, we need to grab the drive created by the RAID6 and configure it to do some LVMing. Every journey starts with the first step, and in this case that is to take a quick look at the drive we created when we did the RAID; I will save you some time and tell you it is called /dev/sdc:
[root@vmhost3 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 1999.8 GB, 1999844147200 bytes
255 heads, 63 sectors/track, 243133 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdc doesn't contain a valid partition table
[root@vmhost3 ~]#
Yes, our drive is not that large, but it will be good enough for this example.
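Incidentally, if you are curious about the LVM plumbing XenServer already set up for the existing local SR on the OS drives, the standard LVM query commands work fine from dom0. A quick peek (output omitted here; the volume group name, typically something like VG_XenStorage-<SR uuid>, will of course differ on your box):

[root@vmhost3 ~]# pvs
[root@vmhost3 ~]# vgs
[root@vmhost3 ~]# lvs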
- Before we start passing LVs out like farts we need an LVM physical volume (pv). We could create a single partition on sdc that is as big as the disk, or just use the entire drive as the pv. I will do the latter.
Do you remember my warning about XenServer vs Linux+Xen? This is where things diverge a bit. On Linux+Xen we would do something like
pvcreate /dev/sdc
vgcreate data /dev/sdc
to create the pv and a volume group (vg) called data. But since this is XenServer, we should use its command-line thingie, xe:
[root@vmhost3 ~]# xe sr-create name-label=data shared=false \
  device-config:device=/dev/sdc type=lvm
f70d9ff8-3567-ef23-42c2-7c7997a4abc6
[root@vmhost3 ~]#
which does the same thing, but also tells XenServer that it has a new local storage repository (SR); the UUID it prints out is the identifier of that new SR.
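If you want to double-check from the same shell before even opening XenCenter, xe can list the new SR by the name-label we just gave it (output omitted; yours will show the details of your own setup):

[root@vmhost3 ~]# xe sr-list name-label=data
[root@vmhost3 ~]# xe sr-param-list uuid=f70d9ff8-3567-ef23-42c2-7c7997a4abc6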
From here on, it is business as usual.
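For example -- just a sketch using the UUID from the sr-create above plus placeholders you would replace with your own values -- you could make the new SR the pool default so new VM disks land on it, or carve a virtual disk for a client right away:

# Find the pool UUID, then point the pool's default-SR at the new SR
[root@vmhost3 ~]# xe pool-list --minimal
[root@vmhost3 ~]# xe pool-param-set uuid=<pool-uuid> default-SR=f70d9ff8-3567-ef23-42c2-7c7997a4abc6

# Or create a virtual disk for a vm client directly on the new SR
[root@vmhost3 ~]# xe vdi-create sr-uuid=f70d9ff8-3567-ef23-42c2-7c7997a4abc6 \
  name-label=testvm-disk0 type=user virtual-size=20GiB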