Requirements For Using Virtualization In Windows Server
As mentioned in the previous post, the basic mandatory requirement for using virtualization with Windows Server 2008 Hyper-V is the processor: it must have a 64-bit architecture and hardware support for DEP and for virtualization (Intel VT or AMD-V). The other requirements depend heavily on the planned workload. Processing power, memory, and disk space should be sized so that all the required virtual machines and their applications can run. Sizing a virtual environment is itself a large and interesting topic, and perhaps I will write about it soon.
Resiliency
There are two ways to increase reliability. The first is using more reliable components: for example, hard drives used in servers have a much longer MTBF than those used in home computers. The second is redundancy: all components, or at least the most critical ones, are duplicated. For example, hard drives can work in “mirror mode” (RAID 1): if one drive fails, the server keeps running on the second, and only the system administrator notices, not the users of the system. Note that these two approaches are not mutually exclusive; on the contrary, they complement each other. Any increase in resiliency automatically raises the price of the whole system, so it is important to find a middle ground. First and foremost, you must estimate, in monetary terms, the damage a system failure could cause, and increase resiliency in proportion to that amount. For example, the failure of a home computer’s hard drive, which stores only some photos and assorted “information”, is no great disaster; at most you pay for a new drive. Truly important information is better protected, for instance by copying it to another hard drive or a DVD-RW.
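The “invest in proportion to the damage” rule above can be turned into a back-of-the-envelope calculation. Here is a minimal sketch; every number in it (failure probabilities, costs) is a made-up assumption for illustration, not real hardware data:

```python
# Back-of-the-envelope resiliency cost check (all numbers are made-up assumptions).
# Idea: invest in redundancy only while its yearly cost stays below the
# expected yearly loss it prevents.

def expected_yearly_loss(failure_prob_per_year: float, loss_per_failure: float) -> float:
    """Expected loss per year = probability of a failure * cost of one failure."""
    return failure_prob_per_year * loss_per_failure

# Single disk: assume a 3% chance of failure per year and $50,000 damage per failure.
single = expected_yearly_loss(0.03, 50_000)

# RAID 1 mirror: data is lost only if BOTH disks fail, roughly 0.03 squared.
mirrored = expected_yearly_loss(0.03 ** 2, 50_000)

extra_disk_cost_per_year = 200  # amortized yearly price of the second disk (assumed)

# Mirroring pays off if the avoided loss exceeds the extra hardware cost.
worth_it = (single - mirrored) > extra_disk_cost_per_year
print(single, mirrored, worth_it)
```

With these assumed numbers the single disk puts about $1,500 per year at risk, the mirror only about $45, so the $200-per-year second disk clearly pays for itself; for the home-photos case the loss figure would be tiny and the same formula would tell you to skip the redundancy.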
Failover clustering
In addition to individual server components – hard drives, memory modules, etc. – whole servers can be made redundant. In this case two or more servers operate as a group but are presented to users as a single server that runs their applications and responds to them. Shared information about the cluster configuration is stored on a shared disk resource referred to as the quorum, and the cluster requires continuous access to this resource from all cluster nodes. A data storage system with an iSCSI, SAS, or Fibre Channel interface can serve as the quorum resource.
If one of the servers (they are called “cluster nodes”) fails, user applications are automatically restarted on the remaining working nodes, so an application either does not stop at all or stops for such a short time that no great losses are incurred. The process of moving applications from a failed node to a working one is called failover.
To detect failed nodes in time, all nodes in the cluster periodically exchange messages called “heartbeats”. If a node stops sending heartbeats, it is considered to have failed, and the failover process starts.
Failover: moving services off the failed node
In some cases, depending on the settings, once the failed node has recovered, its applications can be moved back to it; this process is called failback:
Failback: moving services back after the node recovers
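The heartbeat, failover, and failback steps described above can be sketched in a few lines. This is a conceptual toy model only, not the actual Windows failover-cluster implementation; the class names, timeout value, and service names are all invented for illustration:

```python
import time

# Conceptual sketch of heartbeat-based failover and failback (NOT the real
# Windows cluster service; names and the timeout value are made up).

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node is declared dead

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.services = []

    def heartbeat(self):
        """Called periodically by a healthy node."""
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now):
        return now - self.last_heartbeat < HEARTBEAT_TIMEOUT

def failover(failed, healthy):
    """Move every service off the failed node onto a healthy one."""
    healthy.services.extend(failed.services)
    failed.services.clear()

def failback(recovered, current, services):
    """After recovery, move the listed services back to their original node."""
    for svc in services:
        current.services.remove(svc)
        recovered.services.append(svc)

# Usage: node1 goes silent, its VM fails over to node2, then fails back.
node1, node2 = Node("node1"), Node("node2")
node1.services = ["vm-accounting"]

node1.last_heartbeat -= 10          # simulate 10 seconds without a heartbeat
if not node1.is_alive(time.monotonic()):
    failover(node1, node2)
print(node2.services)               # the VM now runs on node2

failback(node1, node2, ["vm-accounting"])
print(node1.services)               # and is back on node1 after recovery
```

The real cluster service additionally has to agree among the surviving nodes (via the quorum) on who takes over, which is exactly why continuous access to the quorum resource matters.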
Requirements for creating a failover cluster in Windows Server 2008
So, what do we need to create a failover cluster in Windows Server 2008? First, we need a shared disk resource that will be used as the quorum, as well as for data storage. It can be any storage system (SAN) supporting the iSCSI, SAS, or Fibre Channel protocols, and of course all cluster nodes must have the appropriate adapters to connect to it. Ideally, all servers that operate as cluster nodes should have completely identical hardware; if not, they should at least use processors from the same manufacturer. It is also highly desirable that all cluster nodes communicate with each other over more than one network interface: the extra link serves as an additional channel for heartbeat exchange, and in some cases for more than that (for example, when using Live Migration). If we are going to use virtualization, all nodes must also meet the system requirements for Hyper-V (in particular, the processor requirements).
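The requirements listed above lend themselves to a pre-flight checklist. The sketch below is purely illustrative (the field names and the check function are hypothetical, not any real Windows API); it simply encodes the rules from this section:

```python
# Hypothetical pre-flight check for the cluster-node requirements described
# above; field and function names are illustrative, not a real Windows API.

from dataclasses import dataclass

SHARED_STORAGE_PROTOCOLS = {"iscsi", "sas", "fibrechannel"}

@dataclass
class NodeSpec:
    name: str
    arch_64bit: bool          # 64-bit processor
    dep_support: bool         # hardware DEP
    vt_support: bool          # Intel VT or AMD-V
    cpu_vendor: str           # all nodes should at least share a vendor
    storage_adapters: set     # e.g. {"iscsi"} - must reach the quorum storage
    network_interfaces: int   # >= 2 is recommended (heartbeat, Live Migration)

def check_cluster(nodes):
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    vendors = {n.cpu_vendor for n in nodes}
    if len(vendors) > 1:
        problems.append("nodes mix CPU vendors: %s" % sorted(vendors))
    for n in nodes:
        if not (n.arch_64bit and n.dep_support and n.vt_support):
            problems.append(f"{n.name}: does not meet Hyper-V CPU requirements")
        if not (n.storage_adapters & SHARED_STORAGE_PROTOCOLS):
            problems.append(f"{n.name}: no adapter for the shared quorum storage")
        if n.network_interfaces < 2:
            problems.append(f"{n.name}: fewer than two network interfaces")
    return problems
```

For example, a node with no iSCSI/SAS/Fibre Channel adapter, or one whose CPU lacks Intel VT/AMD-V, would show up in the returned list before you spend time building the cluster.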
Complex solution: virtualization + failover cluster
Thus, as already mentioned, a virtualization-based solution can be deployed on top of a failover cluster. What does this give us? It is simple: we get all the advantages of virtualization while getting rid of its main drawback, the hardware server as a single point of failure. If one of the servers fails, or during any planned outage (hardware replacement, reboots after installing OS updates, etc.), the virtual machines can be moved to a working node quickly enough, or even invisibly to users. Recovery time after a system failure is thus measured in minutes, and users will not notice planned server shutdowns at all. There is only one disadvantage: a more expensive system. First, you will probably need to buy a storage system, which costs a certain, and sometimes considerable, amount of money. Second, at least one more server. Third, working in a cluster requires a more expensive edition of the OS: Enterprise or Datacenter. In principle, this is compensated by the right to run a certain number of guest operating systems free of charge (up to 4 per server in Enterprise, without restrictions in Datacenter), or you can use the free product Microsoft Hyper-V Server 2008 R2, which supports clusters as of the R2 release.
Ways to move virtual machines between nodes in a cluster
So, let’s say we have a cluster running virtual machines. Recently, users started complaining that system performance is not sufficient. Performance analysis showed that the applications do not have enough RAM and, at times, CPU power. It was decided to add several RAM modules and an additional processor to the server. Once the processor and memory modules arrived from the vendor, the question arose of how to actually install them: as we know, this requires shutting the server down for a while. But users need to work, and even 10 minutes of downtime is fraught with losses. Fortunately, we set up the cluster earlier, so no after-hours maintenance is needed; we just have to move the running virtual machines to another server.
How can this be done? There are three ways:
1) Move – simply moving a virtual machine from one host to another. The virtual machine is first taken offline (either by shutting it down or by saving its state) and then started on another node. The simplest way, but the slowest and the most “noticeable” to users: before moving, you must notify all users so they can save their data and exit the application.
2) Quick Migration – the entire contents of the virtual machine’s RAM are saved to disk, and the virtual machine is then started on the target host with its memory contents restored from disk.
3) Live Migration – one of the most exciting new technologies in Windows Server 2008 R2. Live Migration copies the contents of the virtual machine’s memory directly over the network from one host to another, bypassing the disks. The process is somewhat similar to creating shadow copies of open files (VSS). The final switchover takes less than a second, shorter than a TCP connection timeout, so users notice nothing at all. Thus all scheduled maintenance work that requires shutting down the system can be carried out during normal working hours without distracting users from their work.
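The reason Live Migration can keep the pause under a second is the pre-copy idea: memory is copied while the VM keeps running, pages the VM dirties in the meantime are re-copied in further passes, and only a small remainder is transferred during the brief pause. A toy model (made-up page counts and dirty rate, not Hyper-V’s actual algorithm) shows how the remainder shrinks:

```python
# Toy model of Live Migration's pre-copy phase (illustrative numbers only,
# not Hyper-V's actual algorithm or parameters).

def live_migrate(pages=1000, dirty_rate=0.1, pause_threshold=20):
    """Return (copy passes while the VM runs, pages copied during the pause)."""
    to_copy = pages              # pass 1 copies all memory while the VM runs
    passes = 0
    while to_copy > pause_threshold:
        passes += 1
        # While this pass was on the wire, the running VM re-dirtied a
        # fraction of the pages just sent, so they must be copied again.
        to_copy = int(to_copy * dirty_rate)
    # Now pause the VM very briefly, copy the last few pages, resume on target.
    return passes, to_copy

print(live_migrate())
```

With these numbers, two passes while the VM is running leave only 10 pages to send during the pause; contrast this with Quick Migration, which saves and restores all 1000 pages via disk while the VM is stopped, which is why its outage is noticeable.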
It should also be noted that all of these methods, Live Migration included, are standard features of Windows Server 2008 R2 and do not require buying any additional software or licenses (although each OS running in a virtual machine still requires its own license).
Conclusion
Server virtualization allows you to use a single server where before you had to use ten. Of course, this saves on virtually everything: hardware, software licenses, and overhead. However, virtualization on its own sharply reduces overall system reliability. In this article we saw how to restore that reliability by using failover clusters. The next article is devoted entirely to one of the “highlights” of Windows Server 2008 R2: Live Migration.