Information Systems:Server Virtualization Planning

From uniWIKI
Revision as of 11:42, 14 February 2017

Overview

This article is a drawing board or "Pinterest" for the server virtualization project. The phases loosely represent periods of concentrated effort on the project, i.e. ~2015-2016 for initial discussions prior to training, 2017 for discussion throughout training, etc.

Discussion

Phase 1 Planning

This was written before VMware training. -norwizzle (talk)

Windows Server licensing

  • Datacenter is very expensive, but entitles us to unlimited virtual instances on a single machine with two processors.
  • Standard edition allows for 2 virtual instances.
    • Worth checking to see if our Standard licenses can each convert to two virtual licenses.
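The Datacenter-vs-Standard question above is really a break-even calculation: Datacenter's flat price for unlimited instances beats buying Standard licenses (2 VMs each) only past a certain VM count per host. A minimal sketch, with placeholder prices (not real quotes):

```python
# Rough Windows Server edition comparison for one 2-CPU host.
# Prices are PLACEHOLDERS for illustration only -- substitute real quotes.
DATACENTER_PRICE = 6200   # hypothetical: unlimited VMs on one 2-CPU host
STANDARD_PRICE = 1000     # hypothetical: 2 VMs per Standard license

def cheapest_option(num_vms):
    """Return the cheaper edition and its cost for a given VM count on one host."""
    standard_licenses = -(-num_vms // 2)          # ceil(num_vms / 2)
    standard_cost = standard_licenses * STANDARD_PRICE
    if DATACENTER_PRICE < standard_cost:
        return ("Datacenter", DATACENTER_PRICE)
    return ("Standard", standard_cost)

for vms in (4, 8, 12, 16):
    print(vms, "VMs ->", cheapest_option(vms))
```

With these placeholder numbers the crossover sits around a dozen VMs per host; the point is only that the edition choice depends on planned VM density, not that either price is accurate.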

Hardware

  • 2 hosts
  • 10-14 core processors most ideal
  • SAN - this will be a major decision
    • Shared storage is not required for vMotion, which migrates a running virtual machine to another host.
    • Shared storage, however, is required for high availability, i.e. seamless machine failover.
      • This would be a nice thing to have, but our servers don't need this kind of 100% uptime. Or rather, the ones that do will not be virtualized.
    • SANs are very expensive, but if we do consider one, there is some relief in the fact that we won't need a huge amount of storage for each virtual machine.
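To make the "we won't need a huge amount of storage" point concrete, a back-of-the-envelope capacity pass might look like the sketch below. The VM names and disk sizes are assumptions for illustration, not an inventory:

```python
# Back-of-the-envelope SAN capacity estimate.
# All figures are ASSUMPTIONS for illustration, not a real inventory.
vms = [
    {"name": "dc01",   "disk_gb": 60},
    {"name": "file01", "disk_gb": 200},
    {"name": "app01",  "disk_gb": 80},
    {"name": "app02",  "disk_gb": 80},
]
overhead = 1.25   # ~25% headroom for snapshots, VM swap, and growth

needed_gb = sum(vm["disk_gb"] for vm in vms) * overhead
print(f"Provision at least {needed_gb:.0f} GB of shared storage")
```

Even with generous headroom, a server consolidation of this size lands in the hundreds of GB, not tens of TB, which supports the "small SAN" framing.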

VMware vSphere

For clarity:

  • vSphere is the virtualization software package
    • ESX/ESXi is the hypervisor
    • vMotion, vCenter are features of vSphere
  • There are two main licensing categories: Essentials and Operations Management

This is another major decision with a significant impact going forward.

    • Essentials entitles you to three hosts (6 CPUs), but you will not be able to expand beyond this without effectively purchasing a new license altogether.
    • Essentials Plus (which is what we would consider if we went the Essentials route) entitles us to vMotion.

Phase 2 Planning

Ideas throughout VMware training:

It is apparent that this project still carries the potential to be very expensive. That is my main concern halfway through the training - that the expense of implementation might exceed the value savings of virtualization. Scale needs to be a major factor during implementation planning, otherwise it will make more sense to not virtualize, and just replace physical servers at the current rate of replacement. -norwizzle (talk)
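The cost concern above can be framed as a simple break-even check: virtualization pays off only if its up-front cost is recovered through avoided physical-server replacements over the planning horizon. Every figure below is a hypothetical placeholder:

```python
# Break-even check: virtualize vs. keep replacing physical servers.
# Every figure here is a PLACEHOLDER -- plug in real quotes before deciding.
virtualization_cost = 40000   # hosts + storage + vSphere licensing (assumed)
replacements_per_year = 2     # current physical replacement rate (assumed)
cost_per_replacement = 5000   # typical physical server (assumed)
horizon_years = 5             # planning horizon

avoided = replacements_per_year * cost_per_replacement * horizon_years
print(f"Avoided replacement spend over {horizon_years}y: ${avoided}")
print("Virtualize" if avoided > virtualization_cost else "Keep replacing")
```

The useful output is not the verdict but the sensitivity: halving the replacement rate or adding shared storage to the project cost flips the answer, which is exactly the scale argument made above.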
  • Cost/features. The areas that need to be scrutinized for cost are:
  1. Storage: Shared or local storage? If shared, fibre channel can be ruled out - it makes sense with an existing FC infrastructure, but it's too expensive to get going from scratch. Right now it's shared SAS or iSCSI. Being a networking guy, I'd love to do iSCSI. Then again, I'd hate to deal with the coupling of network and storage downtime, should either happen.
    • There is actually a big case here for scrapping the idea of shared storage and just using local storage, i.e. redundant arrays on each host. The new-ish vSphere feature that makes local storage feasible is cross-host vMotion. Along with cold migration, this may be all we need to meet or even exceed the level of service provided by our current infrastructure. Currently, I see shared storage as representing perhaps 1/3 of the project cost. While it would still be good to have in the future, perhaps the acquisition of shared storage can be implemented as a separate, future phase of this project (i.e. migration to shared storage in year 20xx).
  2. vSphere license: Essentials Plus still appears to be the right flavor for us. No DRS, no FT, no Distributed Switch, but it does have vMotion (although no Storage vMotion). This license allows for 3 hosts.
  3. Host hardware configuration: We definitely want two hosts for redundancy, but with 12-core CPUs, I'm not sure we'll need dual CPUs per host. RAM can be scaled back too - perhaps 64GB per host. Overprovisioning resources to VMs seems to be standard practice anyway, and our compute needs are consistent (not fluctuating).
  • High availability/Disaster recovery. Deploying VMware vSphere (actually, doing virtualization in general) raises the baseline for high-availability/redundancy i.e. virtualizing an entire infrastructure immediately makes it more redundant and better able to cope with downtime (planned or disaster-related). However, HA/DR has many faces, and it will be important to weed out the features that either are not feasible or not ideal in our infrastructure, so we can establish that "baseline". For example, the following features are likely not a good fit:
  1. vSphere HA: This feature is a maybe, or fits under "would be nice". But 'compute HA' is not something we currently have anyway. Requires shared storage.
  2. vSphere FT: Requires a live second copy of the virtual machine. High compute overhead. We do not require the level of uptime afforded by this feature.
  3. vSphere Replication/Data Protection: These are separate products, and may be better considered down the road, if we feel the need to back up virtual machines off-site.
  • vCenter Deployment. vCenter deployment can be a Catch-22: we are trying to virtualize our entire infrastructure, yet need a host that is external to the entire infrastructure to manage it. There are several online resources (Google it) discussing vCenter as a VM within the cluster (how meta!). That is, vCenter as a VM managing the infrastructure on which it itself is virtualized!
I'm actually doing this in my home lab. It works very well, and I've even vMotion-ed it a bunch of times to perform maintenance on the other host. But the potential risks should be considered. -norwizzle (talk)
  • An alternative could be to recycle the physical machine of one of the servers that gets virtualized. Consider that, while vCenter is a very important part of the vSphere infrastructure, connecting to the hosts directly to perform emergency tasks is possible through vSphere Client. So even if vCenter is down, you can access the hosts.
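Whichever deployment route is chosen, the two-host redundancy argument above implies an N+1 constraint: either host must be able to run the entire VM load alone during maintenance or failure. A quick sanity check, with per-VM allocations that are assumptions rather than a real inventory:

```python
# N+1 check for a two-host cluster: one host must carry every VM.
# Per-host capacity and per-VM allocations are ASSUMPTIONS for illustration.
host_ram_gb = 64
host_cores = 12
vm_allocations = [  # (vRAM GB, vCPUs) per VM -- hypothetical inventory
    (8, 2), (8, 2), (4, 2), (4, 1), (8, 4), (4, 2),
]

total_ram = sum(ram for ram, _ in vm_allocations)
total_vcpu = sum(cpu for _, cpu in vm_allocations)
# vCPUs can be overcommitted against physical cores; RAM is the hard limit here.
fits = total_ram <= host_ram_gb
print(f"{total_ram} GB vRAM / {total_vcpu} vCPUs on one host -> "
      f"{'OK' if fits else 'exceeds capacity'}")
```

If the surviving host can hold the full RAM footprint, the 64GB-per-host sizing from the host configuration discussion holds up even with one host down.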