Information Systems:Server Virtualization Planning
Revision as of 15:13, 20 February 2017
Overview
This article is a drawing board or "pinterest" for the server virtualization project. The phases loosely represent periods of concentrated effort on the project, i.e. ~2015-2016 for initial discussions prior to training, 2017 for discussion throughout training, etc.
Discussion
Phase 1 Planning
This was written before VMware training. -norwizzle (talk)
Windows Server licensing
- Datacenter is very expensive, but entitles us to unlimited virtual instances on a single machine with two processors.
- Standard edition allows for 2 virtual instances per license
- Worth checking to see if our Standard licenses can each convert to two virtual licenses.
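The licensing question above boils down to a break-even calculation. A minimal sketch, assuming placeholder prices (not actual quotes) and the entitlements noted above: Standard covers 2 virtual instances per license, Datacenter covers unlimited instances on one two-processor machine.

```python
import math

def standard_licenses_needed(num_vms, vms_per_license=2):
    """Standard edition covers 2 virtual instances per license."""
    return math.ceil(num_vms / vms_per_license)

def cheaper_edition(num_vms, standard_price, datacenter_price):
    """Pick the cheaper edition for a single 2-CPU host.

    Prices are hypothetical inputs, not quotes. Datacenter covers
    unlimited VMs on one two-processor machine, so its cost is flat.
    """
    standard_cost = standard_licenses_needed(num_vms) * standard_price
    if standard_cost < datacenter_price:
        return ("Standard", standard_cost)
    return ("Datacenter", datacenter_price)

# With placeholder prices, Standard wins at small VM counts and
# Datacenter wins once stacking Standard licenses costs more:
print(cheaper_edition(4, standard_price=900, datacenter_price=6200))
print(cheaper_edition(20, standard_price=900, datacenter_price=6200))
```

The crossover point moves with the real quoted prices, so the function is only useful once we have them.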
Hardware
- 2 hosts
- 10-14 core processors are ideal
- SAN - this will be a major decision
- Shared storage is not required for vMotion, which is the ability to migrate a running virtual machine to another host.
- Shared storage, however, is required for high availability i.e. seamless machine failover
- This would be a nice thing to have, but our servers don't need this kind of 100% uptime. Or rather, the ones that do will not be virtualized.
- SANs are very expensive, but if we do consider one, there is some relief in the fact that we won't need a huge amount of storage for each virtual machine.
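The two-host sizing question above can be sanity-checked with a rough capacity model. This is a sketch with illustrative numbers (the VM inventory and host specs are assumptions, not our actual servers); it checks both normal operation and the loss of one host.

```python
def hosts_can_fit(vms, host_cores, host_ram_gb, n_hosts=2, cpu_overcommit=3.0):
    """Rough capacity check for a small cluster.

    vms: list of (vcpus, ram_gb) tuples, one per virtual machine.
    CPU is allowed to overcommit (vCPU:pCPU ratio); RAM is not.
    Returns whether the inventory fits normally and with one host down.
    """
    total_vcpus = sum(v for v, _ in vms)
    total_ram = sum(r for _, r in vms)

    def fits(hosts):
        return (total_vcpus <= hosts * host_cores * cpu_overcommit
                and total_ram <= hosts * host_ram_gb)

    return {"normal": fits(n_hosts), "one_host_down": fits(n_hosts - 1)}

# Ten modest VMs (2 vCPU / 8 GB each) against two 12-core, 64 GB hosts:
inventory = [(2, 8)] * 10
print(hosts_can_fit(inventory, host_cores=12, host_ram_gb=64))
```

In this example the cluster fits comfortably in normal operation but RAM becomes the constraint when one host is down, which is exactly the kind of trade-off the failover discussion below cares about.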
VMware vSphere
For clarity:
- vSphere is the virtualization software package
- ESX/ESXi is the hypervisor
- vMotion, vCenter are features of vSphere
- There are two main licensing categories: Essentials and Operations Management
This is another major decision with significant impact going forward.
- Essentials entitles you to three hosts, or 6 CPUs, but you will not be able to expand beyond this without practically purchasing a new license altogether.
- Essentials Plus (which is what we would consider if we went the Essentials route) entitles us to vMotion.
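The Essentials cap described above (three hosts, 6 CPUs total) is easy to encode as a quick check for any cluster shape we consider. A small sketch, taking socket counts per host:

```python
def within_essentials_limits(host_sockets):
    """Check a planned cluster against the Essentials cap noted above:
    at most 3 hosts, each with at most 2 CPU sockets (6 CPUs total).

    host_sockets: list with one entry per host (its socket count).
    """
    return len(host_sockets) <= 3 and all(s <= 2 for s in host_sockets)

# Two dual-socket hosts fit; a fourth host of any size does not:
print(within_essentials_limits([2, 2]))        # True
print(within_essentials_limits([1, 1, 1, 1]))  # False
```

The important planning consequence is the one stated above: any growth past this shape effectively means buying a new license altogether.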
Phase 2 Planning
This section discusses ideas throughout VMware training.
Note: It is apparent that this project still carries the potential to be very expensive. That is my main concern halfway through the training - that the expense of implementation might exceed the value savings of virtualization. Scale needs to be a major factor during implementation planning, otherwise it will make more sense to not virtualize, and just replace physical servers at the current rate of replacement. -norwizzle (talk)
- Cost/features. The areas that need to be scrutinized for cost are:
- Storage. Shared or local storage? If shared, fibre channel can be ruled out - it makes sense with an existing FC infrastructure, but it's too expensive to build from scratch. Right now it's shared SAS or iSCSI. Being a networking guy, I'd love to do iSCSI. Then again, I'd hate to deal with the coupling of network and storage downtime, should either happen.
- There is actually a big case here for scrapping the idea of shared storage, and just using local storage i.e. redundant arrays on each host. The new-ish vSphere feature that makes local storage a feasible idea is cross-host vMotion. Along with cold-migration, this may be all we need to meet or even exceed the current level of service provided by our current infrastructure. Currently, I see shared storage as representing perhaps 1/3 of the project cost. While it would still be good to have, perhaps the acquisition of shared storage can be implemented as a separate and future phase of this project (i.e. migration to shared storage in year 20xx).
- vSphere license: Essentials Plus still appears to be the right flavor for us. No DRS, no FT, no Distributed Switch, but it does have vMotion (although no Storage vMotion). This license allows for 3 hosts.
- Host hardware configuration. We definitely want two hosts for redundancy, but with 12-core CPUs, I'm not sure we'll need dual CPUs per host. RAM can be scaled back too - perhaps 64GB per host. Overprovisioning resources to VMs seems to be standard practice anyway, and our compute needs are steady (not fluctuating).
- High availability/Disaster recovery. Deploying VMware vSphere (in fact, doing virtualization in general) raises the baseline for high availability/redundancy, i.e. virtualizing an entire infrastructure immediately makes it more redundant and better able to cope with downtime (planned or disaster-related). However, HA/DR has many faces, and it will be important to weed out the features that are either not feasible or not ideal in our infrastructure, so we can establish that "baseline". For example, the following features are likely not a good fit:
- vSphere HA: This feature is a maybe, or fits under "would be nice". But 'compute HA' is not something we currently have anyway. Requires shared storage.
- vSphere FT: Requires multiple copies of the virtual machine. High compute overhead. We do not require the level of uptime afforded by this feature.
- vSphere Replication/Data Protection: These are separate products, and may be better considered down the road, if we feel the need to backup virtual machines off-site.
- vCenter Deployment. vCenter deployment can be a Catch-22: we are trying to virtualize our entire infrastructure, yet need a host that is external to the entire infrastructure to manage it. There are several online resources (Google it) discussing vCenter as a VM within the cluster (how meta!). That is, vCenter as a VM managing the infrastructure on which it itself is virtualized.
I'm actually doing this in my home lab. It works very well, and I've even vMotion-ed it a bunch of times to perform maintenance on the other host. But the potential risks should be considered. -norwizzle (talk)
- An alternative could be to recycle the physical machine of one of the servers that gets virtualized. Consider that, while vCenter is a very important part of the vSphere infrastructure, connecting directly to the hosts to perform emergency tasks is possible through the vSphere Client. So even if vCenter is down, you can still access the hosts.
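The host hardware and overprovisioning points above can be made concrete with cluster-wide overcommit ratios. A sketch with illustrative numbers (the VM inventory and host specs are assumptions): steady workloads like ours tolerate higher CPU overcommit, while RAM is usually kept near 1:1.

```python
def overcommit_ratios(vms, n_hosts, cores_per_host, ram_gb_per_host):
    """Cluster-wide vCPU:pCPU and vRAM:pRAM ratios for a VM inventory.

    vms: list of (vcpus, ram_gb) tuples, one per virtual machine.
    A CPU ratio above 1.0 means vCPUs are overcommitted against cores.
    """
    total_vcpus = sum(v for v, _ in vms)
    total_vram = sum(r for _, r in vms)
    return {
        "cpu": total_vcpus / (n_hosts * cores_per_host),
        "ram": total_vram / (n_hosts * ram_gb_per_host),
    }

# Twelve 2-vCPU / 8 GB VMs on two single-socket 12-core, 64 GB hosts:
print(overcommit_ratios([(2, 8)] * 12, n_hosts=2, cores_per_host=12,
                        ram_gb_per_host=64))
```

With these placeholder numbers the CPU ratio is only 1.0 and RAM sits at 0.75, which supports the suggestion above that single-CPU hosts with 64GB each may be enough.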
Phase 3 Planning
Post-training discussion.
- Most of the topics discussed in the previous sections/phases are still relevant.
- Another point of discussion is the Smithers server. It is a dual-CPU, 12-core, 24-thread (6c/12t each) system with 32GB of DDR4 RAM. It has more than enough power to be one of the hosts. Though, from a business perspective, this machine has technically been approved and purchased for the uses that it currently serves today, it is still heavily under-utilized (CPU-wise). Seeing that the virtualization of our current physical servers will not be done all at once, there is potential value in using this machine as one of the hosts. Recall that the point of virtualization is to shift the emphasis/focus from the specifications of the physical machine. Server hosts should just be considered compute nodes, and as such, Smithers appears quite capable.
- A potential caveat to this endeavor is that the processor is, or soon will be, 2-3 generations old. That means little for compute capability, but vMotion requires compatible CPUs. There is a vSphere feature to work around this (EVC, Enhanced vMotion Compatibility, clustering), but it should be kept in mind nonetheless.