Technical Report
NetApp Deployment Guidelines and Storage Best Practices for Windows Server 2016 Best Practice Guide
Brahmanna Chowdary Kodavali and Shashanka SR, NetApp
November 2016 | TR-4568
Abstract
This technical report discusses the value of NetApp® storage for Windows Server 2016. This
report also provides best practices and deployment guidance for NetApp storage in Windows
Server 2016 environments. Important features offered by NetApp to complement Windows
Server 2016 are also covered.
1 NetApp Deployment Guidelines and Storage Best Practices for Windows Server 2016 © 2016 NetApp, Inc. All Rights Reserved.
TABLE OF CONTENTS
1 Overview
1.1 Purpose and Scope
1.2 Intended Audience
2 NetApp Storage and Windows Server 2016 Environment
2.1 ONTAP 9.0 Overview
2.2 Storage Virtual Machines
3 Windows Server 2016 Enhancements
3.1 Hyper-V Improvements
3.2 Guarded Fabric and Shielded Virtual Machines
3.3 Network Controller
3.4 Failover Clustering Improvements
3.5 Nano Server
3.6 Containers
4 Provisioning NetApp Storage for Windows Server 2016
4.1 Managing NetApp Storage
4.2 NetApp PowerShell Toolkit
4.3 NetApp SMI-S Provider
4.4 Networking Best Practices
5 Provisioning in SAN Environments
5.1 Provisioning NetApp LUN on Windows Server 2016
5.2 Provisioning NetApp LUNs on Nano Server
5.3 Boot from SAN
6 Provisioning in SMB Environments
6.1 Provisioning SMB Share on Windows Server 2016
6.2 Provisioning SMB Share on Nano Server
7 Hyper-V Storage Infrastructure on NetApp
7.1 Hyper-V Clustering: High Availability and Scalability for Virtual Machines
7.2 Hyper-V Live Migration: Migration of VMs
7.3 Hyper-V Replica: Disaster Recovery for Virtual Machines
7.4 Hyper-V Centralized Management: Microsoft System Center Virtual Machine Manager
7.5 Azure Site Recovery: Cloud Orchestrated Disaster Recovery for Hyper-V Assets
8 Storage Efficiency
8.1 NetApp Deduplication
8.2 Thin Provisioning
8.3 Quality of Service
9 Security
9.1 Windows Defender
9.2 BitLocker
Appendix A: Deploy Nested Virtualization
Prerequisites
Deployment
Appendix B: Deploy Nano Server
Deployment
Connect to Nano Server
Appendix C: Deploy Hyper-V Cluster
Prerequisites
Deployment
Appendix D: Deploy Hyper-V Live Migration in a Clustered Environment
Prerequisites
Deployment
Appendix E: Deploy Hyper-V Live Migration Outside a Clustered Environment
Prerequisites
Deployment
Appendix F: Deploy Hyper-V Storage Live Migration
Prerequisites
Deployment
Appendix G: Deploy Hyper-V Replica Outside a Clustered Environment
Prerequisites
Deployment
Replication
Appendix H: Deploy Hyper-V Replica in a Clustered Environment
Prerequisites
Deployment
Replication
References
Version History
LIST OF TABLES
Table 1) Virtual machine file types.
Table 2) Virtual machine configuration versions.
LIST OF FIGURES
Figure 1) NetApp storage deployment in Windows Server 2016 environment.
Figure 2) ONTAP storage virtual machine.
Figure 3) Hyper-V nested virtualization and NetApp.
Figure 4) Guarded fabric with shielded virtual machines.
Figure 5) Network controller.
Figure 6) Containers.
Figure 7) Multiple paths in SAN environment.
Figure 8) Boot LUNs using NetApp FlexClone.
Figure 9) Hyper-V storage infrastructure on NetApp.
Figure 10) Hyper-V failover cluster and NetApp.
Figure 11) Live migration in a clustered environment.
Figure 12) Shared live migration in a nonclustered environment.
Figure 13) Shared nothing live migration in a nonclustered environment to SMB shares.
Figure 14) Shared nothing live migration in a nonclustered environment to LUNs.
Figure 15) Hyper-V storage live migration.
Figure 16) Hyper-V Replica.
Figure 17) Azure Site Recovery.
Figure 18) Storage virtual machine with its own QoS policy.
1 Overview
Microsoft Windows Server 2016 is an enterprise-class operating system (OS) that covers networking,
security, virtualization, private cloud, hybrid cloud, virtual desktop infrastructure, access protection,
information protection, web services, application platform infrastructure, and much more. This OS also
introduces many new features, including a minimal-footprint headless version called Nano Server,
guarded fabric, shielded virtual machines (VMs), containers, and improvements in Hyper-V. Additional
enhancements span the failover clustering, identity and access, management and automation,
networking, security, and storage areas.
NetApp ONTAP® 9.0 management software runs on NetApp storage controllers. It is a unified architecture
supporting both file and block protocols, which enables the storage controllers to act as both NAS and
SAN devices. ONTAP 9.0 provides NetApp storage efficiency features such as NetApp Snapshot®
technology, cloning, deduplication, thin provisioning, thin replication, compression, virtual storage tiering,
and much more with enhanced performance and efficiency.
Together, Windows Server 2016 and ONTAP 9.0 can operate in large environments and bring immense
value to data center consolidation and private or hybrid cloud deployments. This combination also
enables nondisruptive operations and supports seamless scalability.
1.1 Purpose and Scope
This document provides technical insight into the NetApp storage value proposition for Windows Server
2016. The document discusses best practices and deployment guidance for NetApp storage in Windows
Server 2016 environments. It also discusses important features provided by NetApp to complement
Windows Server 2016 and to help reduce costs and increase efficiency, storage utilization, and fault
tolerance.
1.2 Intended Audience
This document is intended for system and storage architects who design NetApp storage solutions for the
Windows Server 2016 OS.
We make the following assumptions in this document:
The reader has general knowledge of NetApp hardware and software solutions. See the System Administration Guide for Cluster Administrators for details.
The reader has general knowledge of block-access protocols, such as iSCSI, FC, and FCoE, and the file-access protocol SMB/CIFS. See the Clustered Data ONTAP SAN Administration Guide and the Clustered Data ONTAP SAN Configuration Guide for SAN-related information. See the Best Practices Guide for Windows File Services and the CIFS/SMB Configuration Express Guide for CIFS/SMB-related information.
The reader has general knowledge of the Windows Server 2016 OS and Hyper-V.
For a complete, regularly updated matrix of tested and supported SAN and NAS configurations, see the
Interoperability Matrix Tool (IMT) on the NetApp Support site. With the IMT, you can determine the exact
product and feature versions that are supported for your specific environment. The NetApp IMT defines
the product components and versions that are compatible with NetApp supported configurations. Specific
results depend on each customer's installation in accordance with published specifications.
2 NetApp Storage and Windows Server 2016 Environment
NetApp storage controllers provide a truly unified architecture that supports both file and block protocols,
including CIFS, iSCSI, FC, FCoE, and NFS, and they create unified client and host access. The same
storage controller can concurrently deliver block storage service in the form of SAN LUNs and file service
as NFS and SMB/CIFS. A NetApp storage controller running ONTAP software can support the following
workloads in a Windows Server 2016 environment:
VMs hosted on continuously available SMB 3.0 shares
VMs hosted on Cluster Shared Volume (CSV) LUNs running on iSCSI or FC
SQL Server databases on SMB 3.0 shares
SQL Server databases on iSCSI or FC
Other application workloads
In addition, NetApp storage efficiency features such as deduplication, NetApp FlexClone® copies, NetApp
Snapshot technology, thin provisioning, compression, and storage tiering provide significant value for
workloads running on Windows Server 2016.
2.1 ONTAP 9.0 Overview
ONTAP 9.0 is management software that runs on a NetApp storage controller. Referred to as a node, a
NetApp storage controller is a hardware device with a processor, RAM, and NVRAM. The node can be
connected to SATA, SAS, or SSD disk drives or a combination of those drives.
Multiple nodes are aggregated into a clustered system. The nodes in the cluster communicate with each
other continuously to coordinate cluster activities. The nodes can also move data transparently from node
to node by using redundant paths to a dedicated cluster network consisting of two 10Gb Ethernet
switches. The nodes in the cluster can take over for one another to provide high availability during
failover scenarios. Clusters are administered on a whole-cluster rather than a per-node basis, and data is
served from one or more storage virtual machines (SVMs). A cluster must have at least one SVM to serve
data.
The basic unit of a cluster is the node, and nodes are added to the cluster as part of a high-availability
(HA) pair. HA pairs enable high availability by communicating with each other over an HA interconnect
(separate from the dedicated cluster network) and by maintaining redundant connections to the HA pair’s
disks. Disks are not shared between HA pairs, although shelves might contain disks that belong to either
member of an HA pair. Figure 1 depicts a NetApp storage deployment in a Windows Server 2016
environment.
Figure 1) NetApp storage deployment in Windows Server 2016 environment.
2.2 Storage Virtual Machines
An ONTAP SVM (formerly known as a Vserver) is a logical storage server that provides data access to
LUNs and/or a NAS namespace from one or more logical interfaces (LIFs). Each SVM is configured to
own storage volumes provisioned from a physical aggregate and LIFs assigned either to a physical
Ethernet network or to FC target ports.
Logical disks (LUNs) or CIFS shares are created inside an SVM’s volumes and are mapped to Windows
hosts and clusters to provide them with storage space, as shown in Figure 2. SVMs are node
independent and cluster based; they can use physical resources such as volumes or network ports
anywhere in the cluster.
Figure 2) ONTAP storage virtual machine.
Best Practice
NetApp recommends creating at least four LIFs per SVM: two data LIFs, one management LIF,
and one intercluster LIF (for intercluster replication) per node.
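As a minimal sketch, the data LIFs can be created with the NetApp PowerShell Toolkit (covered in section 4.2). The SVM, node, port, and address values below are placeholders for your environment, and the exact parameter set depends on your toolkit version:
# Create a data LIF on each node of the HA pair (example values)
New-NcNetInterface -Name svm1_data1 -Vserver svm1 -Role data -Node cluster1-01 -Port e0c -Address 192.168.10.11 -Netmask 255.255.255.0
New-NcNetInterface -Name svm1_data2 -Vserver svm1 -Role data -Node cluster1-02 -Port e0c -Address 192.168.10.12 -Netmask 255.255.255.0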
Further Reading
For information about SVMs, see the ONTAP System Administration Guide.
3 Windows Server 2016 Enhancements
3.1 Hyper-V Improvements
Connected Standby
Connected standby mode provides a connected standby power state for Hyper-V servers by using the
Always On/Always Connected power model.
Discrete Device Assignment
Discrete device assignment provides a VM with direct access to some PCIe hardware devices, bypassing
the Hyper-V virtualization stack, which results in faster access to the devices.
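As a sketch, a PCIe device can be dismounted from the host and assigned to a VM with the following cmdlets. The location path shown is a placeholder; obtain the actual value from Device Manager or Get-PnpDevice:
# Placeholder location path for the PCIe device
$locationPath = "PCIROOT(0)#PCI(0300)#PCI(0000)"
# Dismount the device from the host, then assign it to the VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName <VM name>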
Host Resource Protection
Host resource protection prevents a VM from using more than its share of system resources by
monitoring the VM for excessive activity. Doing so helps prevent performance degradation for the host or
other VMs. This feature is turned off by default. To enable this feature for a VM, run the following
PowerShell cmdlet on a Hyper-V server:
Set-VMProcessor -EnableHostResourceProtection $true
Hot Add/Remove Network Adapters and Memory
This feature allows you to add or remove a network adapter to the VM while the VM is running. This
feature is applicable only to generation-2 VMs. The feature also enables you to adjust the amount of
memory assigned to the VM while the VM is running, which is applicable to both generation-1 and
generation-2 VMs.
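For example, assuming a running generation-2 VM and an existing virtual switch, a network adapter can be hot-added and the memory adjusted as follows:
# Hot add a network adapter to a running generation-2 VM
Add-VMNetworkAdapter -VMName <VM name> -SwitchName <virtual switch name>
# Adjust the memory assigned to a running VM
Set-VMMemory -VMName <VM name> -StartupBytes 4GB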
Hyper-V Manager Improvements
These improvements enable you to use an alternate set of credentials to connect to other Hyper-V hosts.
You can also manage Hyper-V servers running Windows Server 2012, Windows Server 2012 R2, and
Windows 8 and 8.1. In addition, these improvements allow the Web Services Management
(WS-Management) protocol to communicate with a remote Hyper-V server.
Linux Secure Boot
This feature enables Linux VMs to boot by using the Secure Boot option. This feature is applicable only to
generation-2 VMs. To enable secure boot for a Linux VM, run the following PowerShell cmdlet on a
Hyper-V server:
Set-VMFirmware vmname -SecureBootTemplate MicrosoftUEFICertificateAuthority
Nested Virtualization
Nested virtualization enables a VM to act as a virtualized Hyper-V host on top of which other VMs can be
hosted. Storage infrastructure for the Hyper-V physical host and the virtualized hosts can be hosted on
NetApp storage systems. Storage for the VM’s files and disks can be provided by NetApp LUNs or
NetApp CIFS shares, as shown in Figure 3. Configuring NetApp storage infrastructure for nested Hyper-V
hosts is similar to configuration on a physical host.
Further Reading
For information about deploying nested virtualization, see Appendix A: “Deploy Nested Virtualization.”
For information on provisioning storage for a Hyper-V infrastructure, refer to the section “Hyper-V Storage Infrastructure on NetApp.”
For further details and instructions, see the Microsoft Nested Virtualization page.
Figure 3) Hyper-V nested virtualization and NetApp.
Production Checkpoints
Production checkpoints provide point-in-time images of a VM based on backup technology inside the
guest VM instead of a saved state. Windows VMs use the Volume Shadow Copy Service (VSS) to create a
checkpoint, whereas Linux VMs flush their file system buffers.
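For example, a VM can be configured to use production checkpoints, and a checkpoint can then be taken, with the following cmdlets:
# Configure the VM to use production checkpoints
Set-VM -Name <VM name> -CheckpointType Production
# Create a checkpoint of the VM
Checkpoint-VM -Name <VM name> -SnapshotName <checkpoint name>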
Shielded Virtual Machines
This feature protects VMs from unauthorized access by encrypting the virtual disks. Protection is
extended even to the Hyper-V administrators. This feature is provided by a new role called the host
guardian service in Windows Server 2016.
Further Reading
For further information, refer to the section “Guarded Fabric and Shielded Virtual Machines.”
Storage Quality of Service
This feature allows you to monitor and manage storage performance for VMs using Hyper-V and the
scale-out file server role. The feature improves storage-resource balance between multiple VMs that are
sharing storage. It also allows policy-based minimum and maximum IOPS for the VMs. Storage quality of
service (QoS) supports the following two deployment scenarios:
Hyper-V using a scale-out file server. This scenario is beyond the scope of this document. For more information about this scenario, see Storage Quality of Service.
Hyper-V using a CSV. This feature requires a Hyper-V failover cluster with CSV as the shared storage. When a new failover cluster and a CSV are configured, the storage QoS feature is set up automatically.
The storage QoS resource can be verified by using Failover Cluster Manager. Click the cluster and verify that the status of the storage QoS resource is shown as Online under Cluster Core Resources. The resource can also be verified by running the following PowerShell cmdlet:
Get-ClusterResource -Name "Storage Qos Resource"
To view the storage performance metrics, use the Get-StorageQosFlow and Get-StorageQosVolume
cmdlets. To create and monitor the storage QoS policies, use the New-StorageQosPolicy cmdlet.
To learn more about viewing performance metrics and creating storage QoS policies using these cmdlets, see the Microsoft Storage Quality of Service site.
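As a minimal sketch, a policy with minimum and maximum IOPS can be created and applied to a VM's virtual hard disk as follows; the policy name and IOPS limits are example values:
# Create a storage QoS policy with example IOPS limits
$policy = New-StorageQosPolicy -Name SilverPolicy -MinimumIops 100 -MaximumIops 500
# Apply the policy to the VM's virtual hard disks
Get-VM -Name <VM name> | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId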
Virtual Machine Configuration File Format
VM configuration files use a new format in Windows Server 2016. VM configuration data files use the
.vmcx file name extension and the VM run-time state data files use the .vmrs file extension. These new
file formats read and write the configuration data more efficiently. Also, these file formats are binary in
nature and cannot be edited. Table 1 shows the files used by a Hyper-V VM.
Table 1) Virtual machine file types.
VM File Type | Description | File Name Extension
Configuration | VM configuration information stored in binary format | .vmcx
Run-time state | VM run-time state information stored in binary format | .vmrs
Virtual hard disk | Virtual hard disks of the VM | .vhd or .vhdx
Automatic virtual hard disk | Differencing disk files of the VM | .avhdx
Checkpoint | Each checkpoint creates a configuration file and a run-time state file | .vmcx and .vmrs
VM Configuration Version
The VM configuration version provides compatibility information for VMs with other versions of Hyper-V.
The configuration version for the VMs created on Hyper-V 2016 is 7.1. Table 2 shows the supported VM
configuration versions for the various Hyper-V hosts.
Table 2) Virtual machine configuration versions.
Hyper-V Host | Supported VM Configuration Versions
Windows Server 2016 | 5.0, 6.2, 7.0, 7.1
Windows 10 build 10565 or later | 5.0, 6.2, 7.0
Windows 10 builds earlier than 10565 | 5.0, 6.2
Windows Server 2012 R2 | 5.0
Windows 8.1 | 5.0
To query the VM configuration versions supported by the Hyper-V host, run the following PowerShell
cmdlet:
Get-VMHostSupportedVersion
To query the configuration version of all the VMs on a Hyper-V host, run the following PowerShell cmdlet:
Get-VM * | Format-Table Name, Version
As shown in Table 2, VMs created on Windows Server 2016 cannot be run on earlier versions of
Hyper-V, whereas VMs created on earlier versions can run on Windows Server 2016. When a VM from an earlier Hyper-V version is moved or imported to
Hyper-V 2016, the VM retains its old VM configuration version and is not eligible to use the new features
of Hyper-V 2016. For the VM to use the new features, it must be manually upgraded to the new VM
configuration version 7.1 in Hyper-V Manager. Right-click the VM and click Upgrade Configuration
Version. The VM configuration version can be updated but cannot be downgraded.
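The same upgrade can also be performed with PowerShell:
# Upgrade the VM to the highest configuration version supported by the host
Update-VMVersion -Name <VM name>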
PowerShell Direct
PowerShell Direct enables a Hyper-V host to run PowerShell commands on a VM without a remote
management configuration or networking or firewall exceptions. A PowerShell direct session can be
established from the Hyper-V host to its VM using the VM name alone and the Enter-PSSession or
Invoke-Command cmdlets.
Enter-PSSession -VMName <VM name>
#or
Enter-PSSession -VMGUID <VMGUID>
Invoke-Command -VMName <VM name> -FilePath <script path>
#or
Invoke-Command -VMName <VM name> -ScriptBlock {<script>}
Rolling Hyper-V Cluster Upgrade
This function allows you to upgrade the OS of all the nodes in a cluster from Windows Server 2012 R2 to
Windows Server 2016 without disrupting services. This function provides a seamless upgrade of a cluster
without downtime.
Further Reading
For further information, see the Microsoft Cluster Operating System Rolling Upgrade page.
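During the rolling upgrade, the cluster continues to operate at the Windows Server 2012 R2 functional level. After all the nodes have been upgraded to Windows Server 2016, raise the cluster functional level by running the following PowerShell cmdlet; note that this step cannot be reversed:
Update-ClusterFunctionalLevel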
Storage Resiliency
Storage resiliency is improved for VMs in Windows Server 2016. In the event of storage disruption, the
VM state is preserved. When a VM disconnects from its underlying storage, the VM is paused and waits
for the storage to recover. When the VM’s connection is restored to its underlying storage, the VM returns
to its running state.
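The pause behavior and its timeout can be tuned per VM; for example:
# Pause the VM on a critical storage error, waiting up to 120 minutes for recovery
Set-VM -Name <VM name> -AutomaticCriticalErrorAction Pause -AutomaticCriticalErrorActionTimeout 120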
3.2 Guarded Fabric and Shielded Virtual Machines
Windows Server 2016 offers a new role called host guardian service (HGS). This service introduces the
concept of a guarded fabric, which provides a safe hosting environment for VMs. The service protects the
VMs from compromised storage, network attacks, rogue host administrators, and malware attacks. The
VMs protected by the guarded fabric are known as shielded VMs. This service is supported only for
generation-2 VMs.
Host Guardian Service (HGS)
HGS is a new role in Windows Server 2016 that manages and authorizes the release of the encryption
keys used to shield VMs. The encryption keys are never revealed to anyone or anything other than the
VM they protect. These keys are needed when powering on or live migrating shielded VMs. HGS can be
managed from System Center Virtual Machine Manager (SCVMM) 2016. HGS uses two services:
Attestation service. HGS ensures that only known and healthy hosts can run shielded VMs. When a host requests attestation, HGS evaluates the validity of the host and, on successful validation, sends the attestation certificate to the host.
Key protection service. HGS securely releases the keys for shielded VMs. After receiving the attestation certificate from the HGS, a host can request a key for unlocking a VM. Upon receiving the key request, HGS determines whether to release a key to the host to access the shielded VM.
Guarded Fabric
Guarded fabric describes the fabric of Hyper-V hosts and their HGS that can manage and run shielded
VMs. Guarded fabric consists of these components:
One HGS (typically a cluster of three nodes)
One or more guarded hosts (Hyper-V hosts on which shielded VMs can run)
Shielded VMs
Shielded Virtual Machines
Shielded VMs can run only on guarded hosts and are protected from inspection, tampering, and theft. A
shielded VM provides these benefits:
BitLocker encrypted disks
A hardened VM worker process that helps prevent inspection and tampering
Automatically encrypted live migration traffic and encryption of the shielded VM’s run-time state file, saved state, checkpoints, and Hyper-V Replica files
Blocking of all services that provide paths from a user or process with administrative privileges to the VM, such as console access, PowerShell Direct, guest file copy integration components, and so on
Figure 4) Guarded fabric with shielded virtual machines.
Further Reading
For further information, see the Guarded Fabric Deployment Guide.
3.3 Network Controller
Windows Server 2016 offers a new role called network controller that provides a centralized,
programmable point of automation to manage, configure, monitor, and troubleshoot virtual and physical
network infrastructure in a data center. Network controller allows you to automate the configuration of
network infrastructure. Network controller can be deployed both in domain and in nondomain
environments.
Network controller provides the following APIs:
Southbound API. Network controller communicates with network devices, services, and components to gather all network information.
Northbound API. Network controller provides the user with the gathered information for monitoring and configuring the network.
You can manage a data center network with network controller by using the following management
applications to configure, monitor, program, and troubleshoot the network infrastructure:
SCVMM
System Center Operations Manager (SCOM)
Figure 5) Network controller.
Further Reading
For further information, see the Microsoft Network Controller page.
3.4 Failover Clustering Improvements
Cluster Operating System Rolling Upgrade
This feature allows you to upgrade the OS of all the nodes in a cluster from Windows Server 2012 R2 to
Windows Server 2016 without stopping Hyper-V or scale-out file server workloads.
In Windows Server 2012 R2, the migration of clusters from the previous version was cumbersome and
required downtime. The process involved taking the cluster offline and reinstalling the new OS for each
node and then bringing the cluster back online.
However, in Windows Server 2016, you do not need to take the cluster offline during the upgrading
process. The following steps are necessary to complete this process for each node in the cluster:
1. The node is paused and drained of all VMs running on it.
2. All of the VMs are migrated to another node in the cluster.
3. The node is evicted from the cluster and reformatted and Windows Server 2016 is installed.
4. The node is added back to the cluster. Now the cluster is running in a mixed mode, because it has nodes running either Windows Server 2012 R2 or Windows Server 2016.
5. Eventually all nodes are upgraded to Windows Server 2016.
6. Even though all the nodes are upgraded, the cluster continues to operate at the Windows Server 2012 R2 functional level until the cluster functional level is upgraded to Windows Server 2016. To upgrade the cluster functional level and use the Windows Server 2016 features, run the Update-ClusterFunctionalLevel cmdlet.
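The per-node workflow above can be sketched with the failover clustering cmdlets (NODE1 is a placeholder node name):

```powershell
# Pause the node and drain its VMs to other nodes in the cluster
Suspend-ClusterNode -Name NODE1 -Drain -Wait

# Evict the node; reinstall Windows Server 2016 on it out of band
Remove-ClusterNode -Name NODE1 -Force

# After the reinstall, join the node back into the (now mixed-mode) cluster
Add-ClusterNode -Name NODE1

# Check the functional level (8 = Windows Server 2012 R2, 9 = Windows Server 2016)
Get-Cluster | Select-Object Name, ClusterFunctionalLevel

# When every node runs Windows Server 2016, commit the upgrade (irreversible)
Update-ClusterFunctionalLevel
```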
Further Reading
For more information, see the Microsoft Cluster Operating System Rolling Upgrade page.
Virtual Machine Compute Resiliency
This feature increases VM compute resiliency to help reduce intracluster communication issues within a
cluster. The feature enables you to configure the VM resiliency options that define the behavior of the
VMs during transient failures.
Further Reading
For further information, see the Microsoft Virtual Machine Compute Resiliency in Windows
Server 2016 page.
Site-Aware Failover Clusters
These clusters allow you to group the nodes in a stretched cluster based on their physical location (site).
Each node can be configured with a site location so that similar site nodes can be grouped and efficiently
managed for failover, placement policies, heartbeat check, and quorum behavior.
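Site assignment is configured through cluster fault domains; a minimal sketch, assuming two sites and placeholder node names:

```powershell
# Define the sites as fault domains
New-ClusterFaultDomain -Name 'Site-A' -Type Site -Location 'Primary data center'
New-ClusterFaultDomain -Name 'Site-B' -Type Site -Location 'DR data center'

# Assign each node to its site
Set-ClusterFaultDomain -Name NODE1 -Parent 'Site-A'
Set-ClusterFaultDomain -Name NODE2 -Parent 'Site-B'

# Optionally prefer one site for VM placement and quorum tie-breaking
(Get-Cluster).PreferredSite = 'Site-A'
```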
Further Reading
For further information, see the Microsoft Site-Aware Failover Clusters in Windows Server 2016
page.
Workgroup and Multidomain Clusters
This feature creates a failover cluster without Active Directory dependencies. In Windows Server 2012 R2
and previous versions, a cluster can be created only between nodes joined to the same domain. In
Windows Server 2016, failover clusters can be created with the following configurations:
Single-domain clusters. All nodes in the cluster are joined to the same domain.
Multidomain clusters. Nodes in the cluster can be members of different domains.
Workgroup clusters. Nodes in the cluster are not joined to any domain.
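A workgroup or multidomain cluster is created with a DNS administrative access point instead of an Active Directory one. A sketch with placeholder names and address (each node also needs a common local administrator account and the LocalAccountTokenFilterPolicy registry value set):

```powershell
# Create the cluster with a DNS access point (no Active Directory computer object)
New-Cluster -Name Cluster01 -Node Node1, Node2 `
    -AdministrativeAccessPoint DNS -StaticAddress 192.168.1.50
```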
Further Reading
For further information, see the Workgroup and Multi-Domain Clusters in Windows Server 2016
page.
Virtual Machine Node Fairness
This feature provides seamless load balancing of VMs across the nodes in a cluster. Overcommitted
nodes are identified and their VMs are moved (live migrated) to less committed nodes in the cluster.
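Node fairness is controlled through two cluster common properties; the load thresholds in the comments are approximate:

```powershell
# 0 = disabled, 1 = balance when a node joins,
# 2 = balance on join and every 30 minutes (default)
(Get-Cluster).AutoBalancerMode = 2

# Aggressiveness: 1 = low (move VMs when a node is over ~80% loaded),
# 2 = medium (~70%), 3 = high (~60%)
(Get-Cluster).AutoBalancerLevel = 2
```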
Simplified SMB Multichannel and Multi-NIC Cluster Networks
This feature allows a failover cluster to span multiple NICs per subnet or network. The network is
configured automatically and every NIC on the subnet can be used for cluster and workload traffic,
allowing maximum network throughput for the Hyper-V failover cluster and other SMB workloads.
3.5 Nano Server
Windows Server 2016 offers a new installation option called Nano Server for the Standard and
Datacenter editions. A Nano Server has the following characteristics:
A remotely administered server OS
Optimization for private clouds and data centers
Support for only 64-bit applications, tools, and agents
A requirement for far less disk space and fewer updates
Significantly faster setup and restart
A Nano Server is ideal for the following scenarios:
As a compute host for Hyper-V VMs
As a storage host for a scale-out file server
As a Domain Name System (DNS) server
As a web server running Internet Information Services
As a host for containers
This document discusses only Nano Server as a Hyper-V host. To learn more about Nano Server and its
deployment, see the Microsoft Getting Started with Nano Server page.
Hyper-V on Nano Server
Hyper-V on Nano Server works the same way as Hyper-V on Windows Server in server core mode,
except for the following differences:
Hyper-V on Nano Server is always managed through remote management.
The remotely managing computer must run the same build of Windows Server as the Nano Server.
RemoteFX is not available.
Nano Server Management
A Nano Server can be managed in the following ways:
You can create a direct connection through the Recovery Console, which has a full-screen text-mode logon prompt. The logon credentials are the ones provided in Step 3 of deployment. This connection exposes only a limited set of settings, such as firewall rules, network adapters, and TCP/IP settings.
You can create a remote connection from other Windows Server 2016 hosts by using Windows PowerShell, Windows Management Instrumentation, Windows Remote Management, or Emergency Management Services. Doing so requires the IP address of the Nano Server.
You can create a remote connection from other Windows Server 2016 hosts by using GUI management tools such as Server Manager, Services, Hyper-V Manager, and Failover Cluster Manager.
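A remote PowerShell connection to a non-domain-joined Nano Server can be sketched as follows (the IP address is a placeholder):

```powershell
# Trust the Nano Server for WinRM from this management host
Set-Item WSMan:\localhost\Client\TrustedHosts -Value '192.168.1.100' -Force

# Open an interactive remote session with the Nano Server administrator account
$cred = Get-Credential -Message 'Nano Server administrator credentials'
Enter-PSSession -ComputerName 192.168.1.100 -Credential $cred
```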
Further Reading
For information about deploying Nano Server, see the section “Deploy Nano Server.”
For information about provisioning LUNs on Nano Server, see the section “Provisioning NetApp LUN on Nano Server.”
For information about provisioning an SMB share on Nano Server, see the section “Provisioning SMB Share on Nano Server.”
3.6 Containers
Windows Server 2016 introduces containers, which provide OS-level virtualization that enables multiple
isolated applications to be run on a single system. A container is an isolated, resource-controlled, and
portable operating environment in which an application can run without affecting the rest of the system
and without being affected by the system. There are two types of containers in Windows Server 2016:
Windows Server containers. These containers provide isolation through namespace and process isolation. The containers share the kernel with the host and other containers running on the host.
Hyper-V containers. These containers encapsulate each container in a lightweight, optimized VM. The containers do not share the kernel with the host.
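With Docker installed on the host, the difference between the two container types is just an isolation flag. A sketch, using the Windows Server Core base image name as Microsoft published it at the time:

```powershell
# Pull the Windows Server Core base image
docker pull microsoft/windowsservercore

# Windows Server container: shares the host kernel
docker run -it microsoft/windowsservercore cmd

# Hyper-V container: same image, wrapped in a lightweight utility VM
docker run -it --isolation=hyperv microsoft/windowsservercore cmd
```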
Further Reading
For further information, see the Windows Containers Documentation and Containers: Docker,
Windows and Trends.
Figure 6) Containers.
4 Provisioning NetApp Storage for Windows Server 2016
Storage can be provisioned to Windows Server 2016 in both SAN and NAS environments. In a SAN
environment, the storage is provided as disks from LUNs on NetApp volumes (block storage). In a NAS
environment, the storage is provided as CIFS/SMB shares on NetApp volumes (file storage). These
disks and shares can be applied in Windows Server 2016 as follows:
Storage for Windows Server 2016 hosts for application workloads
Storage for Nano Server and containers
Storage for individual Hyper-V hosts to store VMs
Shared storage for Hyper-V clusters in the form of CSVs to store VMs
Storage for SQL Server databases
4.1 Managing NetApp Storage
To connect, configure, and manage NetApp storage from Windows Server 2016, use one of the following
methods:
Secure Shell (SSH). Use any SSH client on Windows Server to run NetApp CLI commands.
OnCommand System Manager. This is NetApp’s GUI-based manageability product.
NetApp PowerShell Toolkit. This is the NetApp PowerShell Toolkit for automating and implementing custom scripts and workflows.
NetApp Storage Management Initiative Specification (SMI-S) Provider. This tool is a command-based interface that detects and manages platforms that run ONTAP.
4.2 NetApp PowerShell Toolkit
NetApp PowerShell Toolkit (PSTK) is a PowerShell module that provides end-to-end automation and
enables storage administration of NetApp storage controllers. PSTK is a unified package containing the
PowerShell modules for ONTAP and NetApp SANtricity® software. The ONTAP module contains over
2,000 cmdlets and helps with the administration of FAS, NetApp All Flash FAS (AFF), commodity
hardware, and cloud resources. The SANtricity module contains over 250 cmdlets and helps with the
administration of E-Series and EF-Series storage systems.
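A minimal PSTK session against an ONTAP cluster might look like this (the management LIF address is a placeholder):

```powershell
Import-Module DataONTAP

# Connect to the cluster management LIF
Connect-NcController -Name 192.168.1.10 -Credential (Get-Credential)

# Inventory SVMs, aggregates, and volumes
Get-NcVserver
Get-NcAggr
Get-NcVol | Select-Object Name, State, TotalSize, Available
```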
Things to Remember
NetApp does not support Windows Server 2016 storage spaces. Storage spaces are used only for JBOD (just a bunch of disks) and do not work with any type of RAID (direct-attached storage [DAS] or SAN). ONTAP instead implements NetApp RAID DP® technology or RAID 4. Also, storage spaces do not support boot volumes and cannot replicate data.
Clustered storage pools in Windows Server 2016 are not supported by ONTAP.
NetApp supports the shared virtual hard disk (VHDX) format for guest clustering in Windows SAN environments.
Windows Server 2016 does not support creating storage pools using iSCSI or FC LUNs.
Further Reading
For more information about ONTAP administration, visit the NetApp Support Site.
For more information about NetApp OnCommand System Manager, visit the NetApp Support Site.
For a first-steps guide on NetApp OnCommand System Manager, see TR-4214: OnCommand System Manager Workflow Guide.
For more information about the NetApp PowerShell Toolkit, visit the NetApp Support Site.
For a first-steps guide on the NetApp PowerShell Toolkit, see the ONTAP PowerShell Toolkit Primer.
For information about NetApp PowerShell Toolkit best practices, see TR-4475: NetApp PowerShell Toolkit Best Practices Guide.
4.3 NetApp SMI-S Provider
NetApp SMI-S Provider allows you to manage and monitor storage systems and their LUNs and
volumes. NetApp SMI-S Provider is a command-based interface that detects and manages
platforms that run ONTAP. The interface uses Web-Based Enterprise Management protocols to manage,
monitor, and report about storage elements by type. SMI-S Agent follows schemas standardized by two
organizations: the Distributed Management Task Force (DMTF) and the Storage Networking Industry
Association (SNIA). NetApp SMI-S Provider replaces using multiple managed-object models, protocols,
and transports with a single object-oriented model for all components in a storage network.
Further Reading
For more information about NetApp SMI-S Provider, see the NetApp Support Site.
For information on NetApp SMI-S Provider best practices and implementation details, see TR-4271: NetApp SMI-S Provider Best Practices and Implementation Guide.
For information about DMTF, see the Distributed Management Task Force home page.
For information about SNIA, see the SNIA home page.
4.4 Networking Best Practices
Ethernet networks can be broadly segregated into the following groups:
A client network for the VMs
A storage network (iSCSI or SMB 3.0 connecting to the storage systems)
A cluster communication network (heartbeat and other communication between the nodes of the cluster)
A management network (to monitor and troubleshoot the system)
A migration network (for host live migration)
A VM replication network (for Hyper-V Replica)
Best Practices
NetApp recommends having dedicated physical ports for each of the preceding functionalities for network isolation and performance.
For each of the preceding network requirements (except for the storage requirements), multiple physical network ports can be aggregated to distribute load or provide fault tolerance.
NetApp recommends having a dedicated virtual switch created on the Hyper-V host for guest storage connection within the VM.
Make sure that the Hyper-V host and guest iSCSI data paths use different physical ports and virtual switches for secure isolation between the guest and the host.
NetApp recommends avoiding NIC teaming for iSCSI NICs.
NetApp recommends using ONTAP multipath input/output (MPIO) configured on the host for storage purposes. For more information, see TR-3441: Windows Multipathing Options with ONTAP – Fibre Channel and iSCSI.
NetApp recommends using MPIO within a guest VM if using guest iSCSI initiators. MPIO usage must be avoided within the guest if you use pass-through disks. In this case, installing MPIO on the host should suffice.
When using third-party extensible switch plug-ins such as Cisco 1000V in Windows Server 2016, see the best practices from Cisco to configure the IP storage traffic.
NetApp recommends not applying QoS policies to the virtual switch assigned for the storage network.
NetApp recommends not using automatic private IP addressing (APIPA) on physical NICs because APIPA is nonroutable and not registered in the DNS.
NetApp recommends turning on jumbo frames for CSV, iSCSI, and live migration networks to increase the throughput and reduce CPU cycles.
NetApp recommends unchecking the option Allow Management Operating System to Share This Network Adapter for the Hyper-V virtual switch to create a dedicated network for the VMs.
NetApp recommends creating redundant network paths (multiple switches) for live migration and the iSCSI network to provide resiliency and QoS.
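Two of the recommendations above — a dedicated VM switch not shared with the management OS, and jumbo frames on the storage NICs — can be sketched as follows. The adapter names are placeholders, and the exact "Jumbo Packet" display value varies by NIC vendor:

```powershell
# Dedicated virtual switch for VM traffic, not shared with the management OS
New-VMSwitch -Name 'VMNetwork' -NetAdapterName 'NIC3' -AllowManagementOS $false

# Enable jumbo frames on the iSCSI adapter; verify the MTU end to end
# (host, switches, and storage) before relying on it
Set-NetAdapterAdvancedProperty -Name 'iSCSI-NIC' `
    -DisplayName 'Jumbo Packet' -DisplayValue '9014 Bytes'
```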
5 Provisioning in SAN Environments
ONTAP SVMs support the block protocols iSCSI and FC/FCoE. When an SVM is created with block
protocol iSCSI or FC/FCoE, the SVM gets either an iSCSI Qualified Name (IQN) or an FC worldwide
name (WWN), respectively. This identifier presents a SCSI target to hosts that access NetApp block
storage.
5.1 Provisioning NetApp LUN on Windows Server 2016
Prerequisites
Using NetApp storage in SAN environments in Windows Server 2016 has the following requirements:
A NetApp cluster is configured with one or more NetApp storage controllers.
The NetApp cluster or storage controllers have a valid iSCSI license.
iSCSI and/or FC/FCoE configured ports are available.
FC zoning is performed on an FC switch for FC/FCoE.
At least one aggregate is created.
At least one data LIF per cluster node is created and the data LIF must be configured for iSCSI or FC/FCoE.
Deployment
1. Create a new SVM with block protocol iSCSI and/or FC/FCoE enabled. A new SVM can be created with any of the following methods:
CLI commands on NetApp storage
NetApp OnCommand® System Manager
NetApp PowerShell Toolkit
2. Configure the iSCSI and/or FC/FCoE protocol.
3. Assign the SVM with LIFs on each cluster node.
4. Start the iSCSI and/or FC/FCoE service on the SVM.
5. Create a volume from the aggregate.
6. Create iSCSI and/or FC/FCoE port sets using the SVM LIFs.
7. Create an iSCSI and/or FC/FCoE initiator group for Windows using the port set created.
8. Add an initiator to the initiator group. The initiator is the IQN for iSCSI and WWPN for FC/FCoE. They
can be queried from Windows Server 2016 by running the PowerShell cmdlet Get-InitiatorPort.
# Get the IQN for iSCSI
Get-InitiatorPort | Where {$_.ConnectionType -eq 'iSCSI'} | Select-Object -Property NodeAddress
# Get the WWPN for FC/FCoE
Get-InitiatorPort | Where {$_.ConnectionType -eq 'Fibre Channel'} | Select-Object -Property PortAddress
# When adding the initiator to the initiator group for FC/FCoE, make sure to provide the
# initiator (PortAddress) in the standard WWPN format
The IQN for iSCSI on Windows Server 2016 can also be checked in the configuration of the iSCSI initiator properties.
9. Create a LUN on the volume and associate it with the initiator group created.
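Steps 5 through 9 can also be scripted with the NetApp PowerShell Toolkit. A sketch with placeholder SVM, aggregate, volume, and initiator names, assuming an existing Connect-NcController session:

```powershell
# Volume to host the LUN
New-NcVol -Name vol_san01 -Aggregate aggr1 -JunctionPath $null -Size 500g -VserverContext svm_san

# Windows igroup for iSCSI, plus the host IQN (placeholder value)
New-NcIgroup -Name win2016_ig -Protocol iscsi -Type windows -VserverContext svm_san
Add-NcIgroupInitiator -Name win2016_ig -Initiator 'iqn.1991-05.com.microsoft:host01' -VserverContext svm_san

# Create a thin-provisioned LUN and map it to the igroup
New-NcLun -Path /vol/vol_san01/lun01 -Size 250g -OsType windows_2008 -Unreserved -VserverContext svm_san
Add-NcLunMap -Path /vol/vol_san01/lun01 -InitiatorGroup win2016_ig -VserverContext svm_san
```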
Host Integration
Windows Server 2016 uses the Asymmetric Logical Unit Access (ALUA) extension of MPIO to determine
direct and indirect paths to LUNs. Even though every LIF owned by an SVM accepts read/write requests
for its LUNs, only one of the cluster nodes actually owns the disks backing that LUN at any given
moment. This divides the available paths to a LUN into two types, direct and indirect, as shown in Figure 7.
A direct path for a LUN is a path on which an SVM’s LIFs and the LUN being accessed reside on the
same node. To go from a physical target port to disk, it is not necessary to traverse the cluster network.
Indirect paths are data paths on which an SVM’s LIFs and the LUN being accessed reside on different
nodes. Data must traverse the cluster network to go from a physical target port to disk.
Figure 7) Multiple paths in SAN environment.
MPIO
NetApp storage appliances provide highly available storage in which multiple paths from the storage
controller to the Windows Server 2016 can exist. Multipathing is the ability to have multiple data paths
from a server to a storage array. Multipathing protects against hardware failures (cable cuts, switch and
host bus adapter [HBA] failure, and so on), and it can provide higher performance limits by using the
aggregate performance of multiple connections. When one path or connection becomes unavailable, the
multipathing software automatically shifts the load to one of the other available paths. The MPIO feature
combines the multiple physical paths to the storage as a single logical path that is used for data access to
provide storage resiliency and load balancing. To use this feature, the MPIO feature must be enabled on
Windows Server 2016.
Enable MPIO
To enable MPIO on Windows Server 2016, complete the following steps:
1. Log in to Windows Server 2016 as a member of the administrator group.
2. Start Server Manager.
3. In the Manage section, click Add Roles and Features.
4. In the Select Features page, select Multipath I/O.
Configure MPIO
When using the iSCSI protocol, you must tell Windows Server 2016 to apply multipath support to iSCSI
devices in the MPIO properties.
To configure MPIO on Windows Server 2016, complete the following steps:
1. Log on to Windows Server 2016 as a member of the administrator group.
2. Start Server Manager.
3. In the Tools section, click MPIO.
4. In MPIO Properties on Discover Multi-Paths, select Add Support for iSCSI Devices and click Add. A prompt then asks you to restart the computer.
5. Reboot Windows Server 2016 to see the MPIO device listed in the MPIO Devices section of MPIO Properties.
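The same enable-and-claim sequence can be performed in PowerShell instead of Server Manager; a sketch (a reboot is still required):

```powershell
# Install the MPIO feature
Install-WindowsFeature -Name Multipath-IO

# Claim iSCSI devices for the Microsoft DSM (equivalent to
# "Add Support for iSCSI Devices" in the MPIO Properties dialog)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Optionally set the global default load-balance policy to round robin
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

Restart-Computer
```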
Configure iSCSI
To detect iSCSI block storage on Windows Server 2016, complete the following steps:
1. Log on to Windows Server 2016 as a member of the administrator group.
2. Start Server Manager.
3. In the Tools section, click iSCSI Initiator.
4. Under the Discovery tab, click Discover Portal.
5. Provide the IP address of the LIFs associated with the SVM created for the NetApp storage for SAN protocol. Click Advanced, configure the information in the General tab, and click OK.
6. The iSCSI initiator automatically detects the iSCSI target and lists it in the Targets tab.
7. Select the iSCSI target in Discovered Targets. Click Connect to open the Connect To Target window.
8. You must create multiple sessions from the Windows Server 2016 host to the target iSCSI LIFs on the NetApp storage cluster. To do so, complete the following steps:
a. In the Connect to Target window, select Enable MPIO and click Advanced.
b. In Advanced Settings under the General tab, select the local adapter as the Microsoft iSCSI initiator and select the Initiator IP and Target Portal IP.
c. You must also connect by using the second path. Therefore, repeat step 7 and substeps a and b, but this time select the Initiator IP and Target Portal IP for the second path.
d. Select the iSCSI target in Discovered Targets on the iSCSI Properties main window and click Properties.
e. The Properties window shows that multiple sessions have been detected. Select the session, click Devices, and then click the MPIO to configure the load balancing policy. All the paths configured for the device are displayed and all load balancing policies are supported. NetApp generally recommends round robin with subset, and this setting is the default for arrays with ALUA enabled. Round robin is the default for active-active arrays that do not support ALUA.
Detect Block Storage
To detect iSCSI or FC/FCoE block storage on Windows Server 2016, complete the following steps:
1. Click Computer Management in the Tools section of the Server Manager.
2. In Computer Management, click Disk Management in the Storage section and then click More Actions and Rescan Disks. Doing so displays the raw iSCSI LUNs.
3. Right-click the discovered LUN and bring it online. Then select Initialize Disk and choose the MBR or GPT partition style. Create a new simple volume by providing the volume size and drive letter, and format it using FAT, FAT32, NTFS, or the Resilient File System (ReFS).
Best Practices
NetApp recommends enabling thin provisioning on the volumes hosting the LUNs.
NetApp recommends enabling offloaded data transfer (ODX) on the storage system. Doing so significantly increases the copy and provisioning performance in Windows Server 2016 environments.
To avoid multipathing problems, NetApp recommends using either all 10Gb sessions or all 1Gb sessions to a given LUN.
NetApp recommends that you confirm that ALUA is enabled on the storage system. ALUA is enabled by default with Data ONTAP systems. However, ALUA must be enabled manually for the FC-based igroups on systems operating in 7-Mode.
On the Windows Server 2016 host where the NetApp LUN is mapped, enable iSCSI Service (TCP-In) for inbound and iSCSI Service (TCP-Out) for outbound traffic in the firewall settings. These settings allow iSCSI traffic to pass to and from the Hyper-V host and the NetApp controller.
Further Reading
For information about Fibre Channel SAN best practices, see TR-4017: Fibre Channel SAN
Best Practices.
5.2 Provisioning NetApp LUNs on Nano Server
Prerequisites
In addition to the prerequisites mentioned in the previous section, the storage role must be enabled on
the Nano Server; that is, Nano Server must be deployed by using the -Storage option. To
deploy Nano Server, see the section “Deploy Nano Server.”
Deployment
To provision NetApp LUNs on a Nano Server, complete the following steps:
1. Connect to the Nano Server remotely using instructions in the section “Connect to Nano Server.”
2. To configure iSCSI, run the following PowerShell cmdlets on the Nano Server:
# Start iSCSI service, if it is not already running
Start-Service msiscsi
# Create a new iSCSI target portal
New-IscsiTargetPortal -TargetPortalAddress <SVM LIF>
# View the available iSCSI targets and their node address
Get-IscsiTarget
# Connect to iSCSI target
Connect-IscsiTarget -NodeAddress <NodeAddress>
# NodeAddress is retrieved by the above cmdlet Get-IscsiTarget
# OR
Get-IscsiTarget | Connect-IscsiTarget
# View the established iSCSI session
Get-IscsiSession
# Note the InitiatorNodeAddress retrieved by the above cmdlet Get-IscsiSession. This is the IQN
# for the Nano Server, and it must be added to the initiator group on the NetApp storage
# Rescan the disks
Update-HostStorageCache
3. Add an initiator to the initiator group.
Add the InitiatorNodeAddress retrieved from the cmdlet Get-IscsiSession to the initiator group on the NetApp controller.
4. Configure MPIO.
# Enable MPIO Feature
Enable-WindowsOptionalFeature -Online -FeatureName MultipathIo
# Get the network adapters and their IPs
Get-NetIPAddress -AddressFamily IPv4 -PrefixOrigin <Dhcp or Manual>
# Create one MPIO-enabled iSCSI connection per network adapter
Connect-IscsiTarget -NodeAddress <NodeAddress> -IsPersistent $True -IsMultipathEnabled $True -InitiatorPortalAddress <IP Address of ethernet adapter>
# NodeAddress is retrieved from the cmdlet Get-IscsiTarget
# IPs are retrieved by the above cmdlet Get-NetIPAddress
# View the connections
Get-IscsiConnection
5. Detect block storage.
# Rescan disks
Update-HostStorageCache
# Get details of disks
Get-Disk
# Initialize disk
Initialize-Disk -Number <DiskNumber> -PartitionStyle <GPT or MBR>
# DiskNumber is retrieved by the above cmdlet Get-Disk
# Bring the disk online
Set-Disk -Number <DiskNumber> -IsOffline $false
# Create a volume with maximum size and default drive letter
New-Partition -DiskNumber <DiskNumber> -UseMaximumSize -AssignDriveLetter
# To choose the size and drive letter use -Size and -DriveLetter parameters
# Format the volume
Format-Volume -DriveLetter <DriveLetter> -FileSystem <FAT32 or NTFS or REFS>
5.3 Boot from SAN
A physical host (server) or a Hyper-V VM can boot the Windows Server 2016 OS directly from a NetApp
LUN instead of its internal hard disk. In the boot-from-SAN approach, the OS image to boot from resides
on a NetApp LUN that is attached to a physical host or VM. For a physical host, the HBA of the physical
host is configured to use the NetApp LUN for booting. For a VM, the NetApp LUN is attached as a
pass-through disk for booting.
NetApp FlexClone Approach
Using NetApp FlexClone technology, boot LUNs with an OS image can be cloned instantly and attached
to the servers and VMs to rapidly provide clean OS images, as shown in Figure 8.
Figure 8) Boot LUNs using NetApp FlexClone.
Boot from SAN for Physical Host
Prerequisites
The physical host (server) has a proper iSCSI or FC HBA.
You have downloaded a suitable HBA device driver for the server supporting Windows Server 2016.
The server has a suitable CD/DVD drive or virtual media for mounting the Windows Server 2016 ISO image and the downloaded HBA device driver.
A NetApp iSCSI or FC/FCoE LUN is provisioned on the NetApp storage controller. See the section “Provisioning in SAN Environments” for more information.
Deployment
To configure booting from SAN for a physical host, complete the following steps:
1. Enable BootBIOS on the server HBA.
2. For iSCSI HBAs, configure the Initiator IP, iSCSI node name, and adapter boot mode in the boot BIOS settings.
3. When creating an initiator group for iSCSI and/or FC/FCoE on a NetApp storage controller, add the server HBA initiator to the group. The HBA initiator of the server is the WWPN for the FC HBA or iSCSI node name for iSCSI HBA.
4. Create a LUN on the NetApp storage controller with a LUN ID of 0 and associate it with the initiator group created in the previous step. This LUN serves as a boot LUN.
5. Restrict the HBA to a single path to the boot LUN. Additional paths can be added after Windows Server 2016 is installed on the boot LUN to exploit the multipathing feature.
6. Use the HBA’s BootBIOS utility to configure the LUN as a boot device.
7. Reboot the host and enter the host BIOS utility.
8. Configure the host BIOS to make the boot LUN the first device in the boot order.
9. From the Windows Server 2016 ISO, launch the installation setup.
10. When the installation asks, “Where Do You Want to Install Windows?,” click Load Driver at the bottom of the installation screen to launch the Select Driver to Install page. Provide the path of the HBA device driver downloaded earlier and finish the installation of the driver.
11. Now the boot LUN created previously must be visible on the Windows installation page. Select the boot LUN for installation of Windows Server 2016 on the boot LUN and finish the installation.
Boot from SAN for Virtual Machine
Deployment
To configure booting from SAN for a VM, complete the following steps:
1. When creating an initiator group for iSCSI or FC/FCoE on a NetApp storage controller, add the IQN for iSCSI or the WWN for FC/FCoE of the Hyper-V server to the controller. Review the section “Provisioning NetApp LUN on Windows Server 2016” for more details on provisioning LUNs for Windows Server 2016.
2. Create LUNs or LUN clones on the NetApp storage controller and associate them with the initiator group created in the previous step. These LUNs serve as boot LUNs for the VMs.
3. Detect the LUNs on the Hyper-V server, bring them online, and initialize them.
4. Bring the LUNs offline.
5. Create VMs with the option Attach a Virtual Hard Disk Later on the Connect Virtual Hard Disk page.
6. Add a LUN as a pass-through disk to a VM.
a. Open the VM settings.
b. Click IDE Controller 0, select Hard Drive, and click Add. Selecting IDE Controller 0 makes this disk the first boot device for the VM.
c. Select Physical Hard Disk in the Hard Disk options and select a disk from the list as a pass-through disk. The disks are the LUNs configured in the previous steps.
7. Install Windows Server 2016 on the pass-through disk.
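Steps 4 through 6 can also be scripted with the Hyper-V PowerShell module. This is a sketch; the VM name and disk number are assumptions, and the disk number must match the LUN's number in Disk Management:

```powershell
$vmName = "BootFromSanVM"   # hypothetical VM created with no virtual hard disk attached
$diskNumber = 2             # disk number of the boot LUN in Disk Management (assumption)

# The LUN must be offline on the host before it can be attached as a pass-through disk
Set-Disk -Number $diskNumber -IsOffline $true

# Attach the physical disk to IDE controller 0 so that it becomes the VM's first boot device
Add-VMHardDiskDrive -VMName $vmName -ControllerType IDE -ControllerNumber 0 -DiskNumber $diskNumber
```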
Best Practices
Make sure that the LUNs are offline. Otherwise, the disk cannot be added as a pass-through disk to a VM.
When multiple LUNs exist, be sure to note the disk number of the LUN in Disk Management. Doing so is necessary because the disks presented to the VM are listed by disk number, and the disk is selected as a pass-through disk based on that number.
NetApp recommends avoiding NIC teaming for iSCSI NICs.
NetApp recommends using ONTAP MPIO configured on the host for storage purposes. For more information, see TR-3441: Windows Multipathing Options with ONTAP – Fibre Channel and iSCSI.
Further Reading
For more information about booting from SAN, see TR-3922: NetApp and Microsoft Hyper-V in
Software Development and Testing Environments.
6 Provisioning in SMB Environments
NetApp ONTAP SVMs support the Microsoft SMB protocol (historically referred to as CIFS). When an SVM is created with the CIFS protocol enabled, a CIFS server runs on top of the SVM and joins the Windows Active Directory domain. SMB shares can be used for home directories and to host Hyper-V and SQL Server workloads. The following SMB 3.0 features are supported in ONTAP:
Persistent handles (continuously available file shares)
Witness protocol
Clustered client failover
Scale-out awareness
ODX
Remote VSS
6.1 Provisioning SMB Share on Windows Server 2016
Prerequisites
Using NetApp storage in NAS environments in Windows Server 2016 has the following requirements:
A NetApp cluster is configured with one or more NetApp storage controllers.
The NetApp cluster or storage controllers have a valid CIFS license.
At least one aggregate is created.
At least one data logical interface (LIF) per cluster node is created and the data LIF must be configured for CIFS.
A DNS-configured Windows Active Directory domain server and domain administrator credentials are present.
Each node in the NetApp cluster is time synchronized with the Windows domain controller.
Active Directory Domain Controller
A NetApp storage controller can be joined to and operate within an Active Directory similar to a Windows
Server. During the creation of the SVM, you can configure the DNS by providing the domain name and
name server details. The SVM attempts to search for an Active Directory domain controller by querying
the DNS for an Active Directory/Lightweight Directory Access Protocol (LDAP) server in a manner similar
to Windows Server.
For the CIFS setup to work properly, the NetApp storage controllers must be time synchronized with the
Windows domain controller. NetApp recommends having a time skew between the Windows domain
controller and the NetApp storage controller of not more than five minutes. It is a best practice to
configure the Network Time Protocol (NTP) server for the ONTAP cluster to synchronize with an external
time source. To configure the Windows domain controller as the NTP server, run the following command
on your ONTAP cluster:
$domainControllerIP = "<input IP Address of windows domain controller>"
cluster::> system services ntp server create –server $domainControllerIP
Deployment
1. Create a new SVM with the NAS protocol CIFS enabled. A new SVM can be created with any of the following methods:
CLI commands on NetApp storage
NetApp OnCommand System Manager
The NetApp PowerShell Toolkit
2. Configure the CIFS protocol.
a. Provide the CIFS server name.
b. Provide the Active Directory to which the CIFS server must be joined. You must have the domain administrator credentials to join the CIFS server to the Active Directory.
3. Assign the SVM with LIFs on each cluster node.
4. Start the CIFS service on the SVM.
5. Create a volume with the NTFS security style from the aggregate.
6. Create a qtree on the volume (optional).
7. Create shares that correspond to the volume or qtree directory so that they can be accessed from Windows Server 2016. Select Enable Continuous Availability for Hyper-V during the creation of the share if the share is used for Hyper-V storage. Doing so enables high availability for file shares.
8. Edit the share created and modify the permissions as required for accessing the share. The permissions for the SMB share must be configured to grant access for the computer accounts of all the servers accessing this share.
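As a sketch, steps 5 through 8 can also be performed with the ONTAP CLI; the SVM, aggregate, volume, share, and computer account names below are placeholders:

```
# create an NTFS security-style volume on the aggregate
cluster::> volume create -vserver svm_smb -volume hyperv_vol -aggregate aggr1 -size 500GB -security-style ntfs -junction-path /hyperv_vol
# create a continuously available share for Hyper-V on the volume
cluster::> vserver cifs share create -vserver svm_smb -share-name hyperv_share -path /hyperv_vol -share-properties oplocks,browsable,continuously-available
# grant the Hyper-V host's computer account full control of the share
cluster::> vserver cifs share access-control create -vserver svm_smb -share hyperv_share -user-or-group "DOMAIN\HV-HOST1$" -permission Full_Control
```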
Host Integration
The NAS protocol CIFS is natively integrated into ONTAP. Therefore, Windows Server 2016 does not
require any additional client software to access data on NetApp storage controllers. A NetApp storage
controller appears on the network as a native file server and supports Microsoft Active Directory
authentication.
To detect the CIFS share created previously with Windows Server 2016, complete the following steps:
1. Log in to Windows Server 2016 as a member of the administrator group.
2. Open the Run dialog (run.exe) and enter the complete path of the CIFS share to access the share.
3. To permanently map the share onto the Windows Server, right-click This PC, click Map Network Drive, and provide the path of the CIFS share.
4. Certain CIFS management tasks can be performed using Microsoft Management Console (MMC). Before performing these tasks, you must connect the MMC to the NetApp storage using the MMC menu commands.
a. To open the MMC in Windows Server 2016, click Computer Management in the Tools section of Server Manager.
b. Click More Actions and Connect to Another Computer, which opens the Select Computer dialog.
c. Enter the name of the CIFS server or the IP address of the SVM LIF to connect to the CIFS server.
d. Expand System Tools and Shared Folders to view and manage open files, sessions, and shares.
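The mapping in step 3 can also be made from PowerShell; the server and share names are placeholders:

```powershell
# Map the CIFS share to drive Z: and persist the mapping across reboots
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\cifs_server\hyperv_share" -Persist
```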
Best Practices
To ensure that there is no downtime when a volume is moved from one node to another, or in the case of a node failure, NetApp recommends that you enable the continuous availability option on the file share.
When provisioning VMs for a Hyper-V-over-SMB environment, NetApp recommends that you enable copy offload on the storage system. Doing so reduces the VMs’ provisioning time.
If the storage cluster hosts multiple SMB workloads such as SQL Server, Hyper-V, and CIFS servers, NetApp recommends hosting different SMB workloads on separate SVMs on separate aggregates. This configuration is beneficial because each of these workloads warrants unique storage networking and volume layouts.
NetApp recommends connecting Hyper-V hosts and the NetApp array with a 10Gb Ethernet network if one is available. In the case of 1Gb network connectivity, NetApp recommends creating an interface group consisting of multiple 1Gb ports.
For noncritical SMB 3.0 deployments, NetApp recommends using SVMs that have aggregates created with SATA drives. This approach is not possible with Storage Spaces, which depends on shared SAS drives.
When migrating VMs from one SMB 3.0 share to another, NetApp recommends enabling the CIFS copy offload functionality on the storage system so that migration is faster.
Things to Remember
When you provision volumes for SMB environments, the volumes must be created with the NTFS security style.
Time settings on nodes in the cluster should be set up accordingly. Use the NTP if the NetApp CIFS server must participate in the Windows Active Directory domain.
Persistent handles work only between nodes in an HA pair.
The witness protocol works only between nodes in an HA pair.
Continuously available file shares are supported only for Hyper-V workloads.
The SMB 3.0 features SMB Multichannel, SMB Direct, and SMB encryption are not supported.
RDMA is not supported.
ReFS is not supported.
Further Reading
For information about Windows File Services best practices, see TR-4191: Best Practices
Guide for ONTAP 8.2.x and 8.3.x Windows File Services.
6.2 Provisioning SMB Share on Nano Server
Nano Server does not require additional client software to access data on the CIFS share on a NetApp
storage controller.
To copy files from Nano Server to a CIFS share, run the following cmdlets on the remote server:
$ip = "<input IP Address of the Nano Server>"
# Create a new PS session to the Nano Server
$session = New-PSSession -ComputerName $ip -Credential ~\Administrator
Copy-Item -FromSession $session -Path C:\Windows\Logs\DISM\dism.log -Destination \\cifsshare
cifsshare is the CIFS share on the NetApp storage controller.
To copy files to Nano Server, run the following cmdlet:
Copy-Item -ToSession $session -Path \\cifsshare\<file> -Destination C:\
To copy the entire contents of a folder, specify the folder name and use the -Recurse parameter at the
end of the cmdlet.
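For example, to copy the entire DISM log folder from the Nano Server session created earlier (folder paths are illustrative):

```powershell
# Copy a whole folder and its contents from the Nano Server to the CIFS share
Copy-Item -FromSession $session -Path C:\Windows\Logs\DISM -Destination \\cifsshare -Recurse
```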
7 Hyper-V Storage Infrastructure on NetApp
A Hyper-V storage infrastructure can be hosted on NetApp storage systems. Storage for Hyper-V to store
the VM files and its disks can be provided using NetApp LUNs or NetApp CIFS shares, as shown in
Figure 9.
Figure 9) Hyper-V storage infrastructure on NetApp.
Hyper-V Storage on NetApp LUNs
To configure Hyper-V storage on a NetApp LUN, complete the following steps:
1. Provision a NetApp LUN on the Hyper-V server machine. For more information, see the section “Provisioning in SAN Environments.”
2. Open Hyper-V Manager from the Tools section of Server Manager.
3. Select the Hyper-V server and click Hyper-V Settings.
4. Specify the default folder to store the VM and its disk as the LUN. Doing so sets the default path as the LUN for the Hyper-V storage. If you want to specify the path explicitly for a VM, then you can do so during VM creation.
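The default paths can also be set with PowerShell; the drive letter and folders below are assumptions for a LUN mounted as V::

```powershell
# Point the Hyper-V default VM and virtual hard disk locations at folders on the LUN
Set-VMHost -VirtualMachinePath "V:\VMs" -VirtualHardDiskPath "V:\VHDs"
```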
Hyper-V Storage on NetApp CIFS Share
Before beginning the steps listed in this section, review the section “Provisioning in SMB Environments.”
To configure Hyper-V storage on the NetApp CIFS share, complete the following steps:
1. Open Hyper-V Manager from the Tools section of Server Manager.
2. Select the Hyper-V server and click Hyper-V Settings.
3. Specify the default folder to store the VM and its disk as the CIFS share. Doing so sets the default path as the CIFS share for the Hyper-V storage. If you want to specify the path explicitly for a VM, then you can do so during VM creation.
Each VM in Hyper-V can in turn be provided with the NetApp LUNs and CIFS shares that were provided
to the physical host. This procedure is the same as for any physical host. The following methods can be
used to provision storage to a VM:
Adding a storage LUN by using the FC initiator within the VM
Adding a storage LUN by using the iSCSI initiator within the VM
Adding a pass-through physical disk to a VM
Adding VHD/VHDX to a VM from the host
Best Practices
When a VM and its data are stored on NetApp storage, NetApp recommends running NetApp deduplication at the volume level at regular intervals. This practice results in significant space savings when identical VMs are hosted on a CSV or SMB share. Deduplication runs on the storage controller and it does not affect the host system and VM performance.
When using iSCSI LUNs for Hyper-V, make sure to enable iSCSI Service (TCP-In)
for Inbound and iSCSI Service (TCP-Out) for Outbound in the firewall settings
on the Hyper-V host. Doing so allows iSCSI traffic to pass to and from the Hyper-V host and the NetApp controller.
NetApp recommends unchecking the option Allow Management Operating System to Share This Network Adapter for the Hyper-V virtual switch. Doing so creates a dedicated network for the VMs.
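The iSCSI firewall rules mentioned above can be enabled from PowerShell on the Hyper-V host:

```powershell
# Allow iSCSI traffic to and from the Hyper-V host through the Windows firewall
Enable-NetFirewallRule -DisplayName "iSCSI Service (TCP-In)"
Enable-NetFirewallRule -DisplayName "iSCSI Service (TCP-Out)"
```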
Things to Remember
Provisioning to a VM by using virtual Fibre Channel requires an N_Port ID Virtualization–enabled FC HBA. A maximum of four FC ports is supported.
If the host system is configured with multiple FC ports and presented to the VM, then MPIO must be installed in the VM to enable multipathing.
Pass-through disks cannot be provisioned to the host if MPIO is being used on that host, because pass-through disks do not support MPIO.
Disks used for VHD/VHDX files should be formatted with a 64K allocation unit size.
Further Reading
For information about FC HBAs, see the NetApp Interoperability Matrix.
For more information about virtual Fibre Channel, see the Microsoft Hyper-V Virtual Fibre Channel Overview page.
For more information about Hyper-V over SMB 3.0 best practices, see the guide Microsoft Hyper-V over SMB 3.0 with Clustered Data ONTAP: Best Practices.
For more information about Hyper-V over SMB 3.0 requirements, see the Hyper-V over SMB 3.0 Requirements.
Offloaded Data Transfer
Microsoft ODX, also known as copy offload, enables direct data transfers within a storage device or
between compatible storage devices without transferring the data through the host computer. NetApp
ONTAP supports the ODX feature for both CIFS and SAN protocols. ODX increases performance
significantly with faster copying, reduced utilization of CPU and memory on the client, and reduced
network I/O bandwidth utilization.
With ODX, copying files within SMB shares, within LUNs, and between SMB shares and LUNs is faster and more efficient. This capability is especially helpful when multiple copies of a golden OS image (VHD/VHDX) are required: several copies of the same golden image can be made in significantly less time. ODX is also used by Hyper-V storage live migration to move VM storage.
To enable the ODX feature, run the following CLI commands on the NetApp storage controller:
1. Enable ODX for CIFS.
#set the privilege level to diagnostic
cluster::> set -privilege diagnostic
#enable the odx feature
cluster::> vserver cifs options modify -vserver <vserver_name> -copy-offload-enabled true
#return to admin privilege level
cluster::> set -privilege admin
2. Enable ODX for SAN.
#set the privilege level to diagnostic
cluster::> set -privilege diagnostic
#enable the odx feature
cluster::> copy-offload modify -vserver <vserver_name> iscsi enabled
#return to admin privilege level
cluster::> set -privilege admin
Things to Remember
For CIFS, ODX is available only when both the client and the storage server support SMB 3.0 and the ODX feature.
For SAN environments, ODX is available only when both the client and the storage server support the ODX feature.
Further Reading
For information about ODX, see Improving Microsoft Remote Copy Performance and Windows Offloaded Data Transfers Overview.
7.1 Hyper-V Clustering: High Availability and Scalability for Virtual Machines
Failover clusters provide high availability and scalability to Hyper-V servers. A failover cluster is a group of
independent Hyper-V servers that work together to increase availability and scalability for the VMs.
Hyper-V clustered servers (called nodes) are connected by the physical network and by cluster software.
These nodes use shared storage to store the VM files, which include configuration, virtual hard disk
(VHD) files, and Snapshot copies. The shared storage can be a NetApp SMB/CIFS share or a CSV on
top of a NetApp LUN, as shown in Figure 10. This shared storage provides a consistent and distributed
namespace that can be accessed simultaneously by all the nodes in the cluster. Therefore, if one node
fails in the cluster, the other node provides service by a process called failover. Failover clusters can be
managed by using the Failover Cluster Manager snap-in and the failover clustering Windows PowerShell
cmdlets.
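As a sketch, the cluster and a CSV can be created with those cmdlets (the node names, cluster name, IP address, and disk name are placeholders):

```powershell
# Create a two-node Hyper-V failover cluster
New-Cluster -Name HVCluster -Node HV-Node1,HV-Node2 -StaticAddress 192.168.1.50

# Add an available clustered disk (a NetApp LUN visible to both nodes) and convert it to a CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```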
Cluster Shared Volumes
CSVs enable multiple nodes in a failover cluster to simultaneously have read/write access to the same
NetApp LUN that is provisioned as an NTFS or ReFS volume. With CSVs, clustered roles can fail over
quickly from one node to another without requiring a change in drive ownership or dismounting and
remounting a volume. CSVs also simplify the management of a potentially large number of LUNs in a
failover cluster. CSVs provide a general-purpose clustered file system that is layered above NTFS or
ReFS.
Figure 10) Hyper-V failover cluster and NetApp.
Best Practices
NetApp recommends turning off cluster communication on the iSCSI network to prevent internal cluster communication and CSV traffic from flowing over the same network.
NetApp recommends having redundant network paths (multiple switches) to provide resiliency and QoS.
Things to Remember
Disks used for CSV must be partitioned with NTFS or ReFS. Disks formatted with FAT or FAT32 cannot be used for a CSV.
Disks used for CSVs should be formatted with a 64K allocation unit size.
Further Reading
For information about deploying a Hyper-V cluster, see Appendix C: Deploy Hyper-V Cluster.
7.2 Hyper-V Live Migration: Migration of VMs
It is sometimes necessary during the lifetime of VMs to move them to a different host on the Windows
cluster. Doing so might be required if the host is running out of system resources or if the host is required
to reboot for maintenance reasons. Similarly, it might be necessary to move a VM to a different LUN or
SMB share. This might be required if the present LUN or share is running out of space or yielding lower
than expected performance. Hyper-V live migration moves running VMs from one physical Hyper-V server
to another with no effect on VM availability to users. You can live migrate VMs between Hyper-V servers
that are part of a failover cluster or between independent Hyper-V servers that are not part of any cluster.
Live Migration in a Clustered Environment
VMs can be moved seamlessly between the nodes of a cluster. VM migration is instantaneous because
all the nodes in the cluster share the same storage and have access to the VM and its disk. Figure 11
depicts live migration in a clustered environment.
Figure 11) Live migration in a clustered environment.
Best Practices
Have a dedicated port for live migration traffic.
Have a dedicated host live migration network to avoid network-related issues during migration.
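As a sketch, a clustered live migration can be triggered with the failover clustering cmdlets (the VM and node names are placeholders):

```powershell
# Live migrate the clustered VM role "VM1" to node HV-Node2 with no downtime
Move-ClusterVirtualMachineRole -Name VM1 -Node HV-Node2 -MigrationType Live
```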
Further Reading
For information about deploying live migration in a clustered environment, see Appendix D:
Deploy Hyper-V Live Migration in a Clustered Environment.
Live Migration Outside a Clustered Environment
You can live migrate a VM between two nonclustered, independent Hyper-V servers. This process can
use either shared or shared nothing live migration.
In shared live migration, the VM is stored on an SMB share. Therefore, when you live migrate a VM, the VM’s storage remains on the central SMB share for instant access by the other node, as shown in Figure 12.
Figure 12) Shared live migration in a nonclustered environment.
In shared nothing live migration, each Hyper-V server has its own local storage (it can be an SMB share, a LUN, or DAS), and the VM’s storage is local to its Hyper-V server. When a VM is live migrated, the VM’s storage is mirrored to the destination server over the client network and then the VM is migrated. The VM stored on DAS, a LUN, or an SMB/CIFS share can be moved to an SMB/CIFS share on the other Hyper-V server, as shown in Figure 13. It can also be moved to a LUN, as shown in Figure 14. Shared nothing live migration applies the copy offload feature if the source and destination volumes are on the same NetApp storage cluster, accelerating the overall process.
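As a sketch, a shared nothing live migration can be started with the Hyper-V cmdlets (the VM, host, and path names are placeholders):

```powershell
# Move the VM and its storage to another standalone Hyper-V host
Move-VM -Name VM1 -DestinationHost HV-Host2 -IncludeStorage -DestinationStoragePath "D:\VMs\VM1"
```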
Figure 13) Shared nothing live migration in a nonclustered environment to SMB shares.
Figure 14) Shared nothing live migration in a nonclustered environment to LUNs.
Further Reading
For information about deploying live migration outside a clustered environment, see Appendix E: Deploy Hyper-V Live Migration Outside of a Clustered Environment.
Hyper-V Storage Live Migration
During the lifetime of a VM, you might need to move the VM storage (VHD/VHDX) to a different LUN or
SMB share. This might be required if the present LUN or share is running out of space or yielding lower
than expected performance.
The LUN or the share that currently hosts the VM can run out of space, be repurposed, or provide
reduced performance. Under these circumstances, the VM can be moved without downtime to another
LUN or share on a different volume, aggregate, or cluster. This process is faster if the storage system has
copy-offload capabilities. NetApp storage systems are copy-offload enabled by default for CIFS and SAN
environments.
The ODX feature performs full-file or sub-file copies between two directories residing on remote servers.
A copy is created by copying data between the servers (or the same server if both the source and the
destination files are on the same server). The copy is created without the client reading the data from the
source or writing to the destination. This process reduces processor and memory use for the client or
server and minimizes network I/O bandwidth. Before proceeding with a copy operation on the host,
confirm that the copy offload settings are configured on the storage system.
When VM storage live migration is initiated from a host, the source and the destination are identified and
the copy activity is offloaded to the storage system. Because the activity is performed by the storage
system, there is negligible use of the host CPU, memory, or network.
NetApp storage controllers support the following different ODX scenarios:
IntraSVM. The data is owned by the same SVM:
Intravolume, intranode. The source and destination files or LUNs reside within the same volume. The copy is performed with FlexClone file technology, which provides additional remote copy performance benefits.
Intervolume, intranode. The source and destination files or LUNs are on different volumes that are on the same node.
Intervolume, internodes. The source and destination files or LUNs are on different volumes that are located on different nodes.
InterSVM. The data is owned by different SVMs.
Intervolume, intranode. The source and destination files or LUNs are on different volumes that are on the same node.
Intervolume, internodes. The source and destination files or LUNs are on different volumes that are on different nodes.
Intercluster. Beginning with ONTAP 9.0, ODX is also supported for intercluster LUN transfers in SAN environments. Intercluster ODX is supported for SAN protocols only, not for SMB.
After the migration is complete, the backup and replication policies must be reconfigured to reflect the
new volume holding the VMs. Any previous backups that were taken cannot be used.
VM storage (VHD/VHDX) can be migrated between the following storage types:
DAS and the SMB share
DAS and LUN
An SMB share and a LUN
Between LUNs
Between SMB shares
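As a sketch, a storage live migration to one of these targets can be started with the Hyper-V cmdlets (the VM and share names are placeholders):

```powershell
# Move only the VM's storage (configuration and VHD/VHDX files) to an SMB share, with no downtime
Move-VMStorage -VMName VM1 -DestinationStoragePath "\\cifs_server\hyperv_share\VM1"
```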
Figure 15) Hyper-V storage live migration.
Further Reading
For information about deploying storage live migration, see Appendix F: Deploy Hyper-V
Storage Live Migration.
7.3 Hyper-V Replica: Disaster Recovery for Virtual Machines
Hyper-V Replica replicates the Hyper-V VMs from a primary site to replica VMs on a secondary site,
asynchronously providing disaster recovery for the VMs. The Hyper-V server at the primary site hosting
the VMs is known as the primary server; the Hyper-V server at the secondary site that receives replicated
VMs is known as the replica server. A Hyper-V Replica example scenario is shown in Figure 16. You can
use Hyper-V Replica for VMs between Hyper-V servers that are part of a failover cluster or between
independent Hyper-V servers that are not part of any cluster.
Figure 16) Hyper-V Replica.
Replication
After Hyper-V Replica is enabled for a VM on the primary server, initial replication creates an identical VM
on the replica server. After the initial replication, Hyper-V Replica maintains a log file for the VHDs of the
VM. The log file is replayed in reverse order to the replica VHD in accordance with the replication
frequency. This log and the use of reverse order make sure that the latest changes are stored and
replicated asynchronously. If replication does not occur in line with the expected frequency, an alert is
issued.
Extended Replication
Hyper-V Replica supports extended replication in which a secondary replica server can be configured for
disaster recovery. A secondary replica server can be configured for the replica server to receive the
changes on the replica VMs. In an extended replication scenario, the changes on the primary VMs on the
primary server are replicated to the replica server. Then the changes are replicated to the extended
replica server. The VMs can be failed over to the extended replica server only when both primary and
replica servers go down.
Failover
Failover is not automatic; the process must be manually triggered. There are three types of failover:
Test failover. This type is used to verify that a replica VM can start successfully on the replica server and is initiated on the replica VM. This process creates a duplicate test VM during failover and does not affect regular production replication.
Planned failover. This type is used to fail over VMs during planned downtime or expected outages. This process is initiated on the primary VM, which must be turned off on the primary server before a planned failover is run. After the machine fails over, Hyper-V Replica starts the replica VM on the replica server.
Unplanned failover. This type is used when unexpected outages occur. This process is initiated on the replica VM and should be used only if the primary machine fails.
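As a sketch, replication can be enabled and a test failover exercised with the Hyper-V cmdlets (the VM name, replica server name, port, and authentication type are assumptions):

```powershell
# Enable replication of VM1 to the replica server and start the initial replication
Enable-VMReplication -VMName VM1 -ReplicaServerName replica.contoso.local -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName VM1

# Test failover: run on the replica server; creates a duplicate test VM without disturbing replication
Start-VMFailover -VMName VM1 -AsTest
```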
Recovery
When you configure replication for a VM, you can specify the number of recovery points. Recovery points
represent points in time from which data can be recovered from a replicated machine.
Further Reading
For information about deploying Hyper-V Replica outside a clustered environment, see the section “Deploy Hyper-V Replica Outside of a Clustered Environment.”
For information about deploying Hyper-V Replica in a clustered environment, see the section “Deploy Hyper-V Replica in a Clustered Environment.”
7.4 Hyper-V Centralized Management: Microsoft System Center Virtual Machine Manager
System Center Virtual Machine Manager, part of Microsoft’s System Center suite, provides centralized
management of Hyper-V servers and their VMs. SCVMM supports NetApp SMI-S Provider to provide
NetApp storage integration. SCVMM uses SMI-S to communicate with NetApp storage for deploying
LUNs and SMB shares on a NetApp storage controller. ODX is supported with NetApp SMI-S Provider
and SCVMM. This capability allows you to deploy new VMs from templates significantly faster.
Further Reading
For information about NetApp SMI-S Provider best practices and implementation, see TR-4271: NetApp SMI-S Provider Best Practices and Implementation Guide.
For more information about applying the ODX feature using NetApp SMI-S Provider and SCVMM for provisioning and migration of VMs, see the Community Blog.
7.5 Azure Site Recovery: Cloud Orchestrated Disaster Recovery for Hyper-V Assets
From System Center VMM 2012 R2 Update Rollup 5.0 and later, SCVMM offloads the actual data
movement to storage devices using a technique called Azure Site Recovery SAN replication. This
process is essentially Azure Site Recovery–orchestrated disaster recovery, failing over VMs from a primary private cloud site to a secondary private cloud site. SCVMM uses SMI-S Provider to discover the capabilities of
NetApp storage arrays and to initiate a mirror operation using NetApp SnapMirror® data replication
technology. This process protects VMs without asking the VM host to do any heavy lifting. Using Azure
Site Recovery on NetApp SVM requires NetApp SMI-S Provider 5.2 or later. Azure Site Recovery
requires no host-based software (or configuration) to protect a VM because all the configuration occurs in
SCVMM.
Figure 17) Azure Site Recovery.
Further Reading
For information about Azure Site Recovery best practices, see TR-4413: Azure Site Recovery Best Practices Guide.
For more information about Azure Site Recovery SAN replication with SMI-S, see the Community Blog.
8 Storage Efficiency
8.1 NetApp Deduplication
NetApp deduplication works by removing duplicate blocks at the storage volume level, storing only one
physical copy, regardless of how many logical copies are present. Therefore, deduplication creates the
illusion that there are numerous copies of that block. Deduplication automatically removes duplicate data
blocks on a 4KB block level across an entire volume. This process reclaims storage to achieve space and
potential performance savings by reducing the number of physical writes to the disk. Deduplication can
provide more than 70% space savings in Hyper-V environments.
NetApp recommends enabling NetApp deduplication at the time of volume creation. Using the native
deduplication feature with Windows Server 2012 R2 on NetApp storage can create issues with NetApp
cloning capabilities.
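As a sketch, deduplication can be enabled and scheduled on a volume from the ONTAP CLI (the SVM and volume names are placeholders):

```
# enable deduplication on the volume
cluster::> volume efficiency on -vserver svm1 -volume vol1
# run deduplication every night at 1 a.m.
cluster::> volume efficiency modify -vserver svm1 -volume vol1 -schedule sun-sat@1
```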
8.2 Thin Provisioning
Thin provisioning is an efficient way to provision storage because space is not preallocated up front. When a volume or LUN is created with thin provisioning, no space on the storage system is consumed. Space is consumed only when data is written to the LUN or volume, and only as much space as is necessary to store the data is used. NetApp recommends enabling thin provisioning on the volume and disabling LUN space reservation.
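A sketch of this recommendation using the ONTAP CLI (the names and sizes are placeholders):

```
# create a thin-provisioned volume (no space guarantee)
cluster::> volume create -vserver svm1 -volume thin_vol -aggregate aggr1 -size 1TB -space-guarantee none
# create a LUN with space reservation disabled
cluster::> lun create -vserver svm1 -path /vol/thin_vol/lun1 -size 500GB -ostype windows_2008 -space-reserve disabled
```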
8.3 Quality of Service
Storage QoS in clustered ONTAP enables you to group storage objects and set throughput limits on the
group. Storage QoS can be used to limit the throughput to workloads and to monitor workload
performance. With this ability, a storage administrator can separate workloads by organization,
application, business unit, or production or development environments.
In enterprise environments, storage QoS helps to achieve the following:
Prevents user workloads from affecting each other
Protects critical applications that have specific response times that must be met in IT-as-a-service (ITaaS) environments
Prevents tenants from affecting each other
Avoids performance degradation with the addition of each new tenant
QoS allows you to limit the amount of I/O sent to an SVM, a flexible volume, a LUN, or a file. I/O can be
limited by the number of operations or the raw throughput.
Figure 18 illustrates SVM with its own QoS policy enforcing a maximum throughput limit.
Figure 18) Storage virtual machine with its own QoS policy.
To configure an SVM with its own QoS policy and monitor policy group, run the following commands on
your ONTAP cluster:
# create a new policy group pg1 with a maximum throughput of 5,000 IOPS
cluster::> qos policy-group create pg1 -vserver vs1 -max-throughput 5000iops
# create a new policy group pg2 without a maximum throughput
cluster::> qos policy-group create pg2 -vserver vs2
# monitor policy group performance
cluster::> qos statistics performance show
# monitor workload performance
cluster::> qos statistics workload performance show
9 Security
9.1 Windows Defender
Windows Defender is antimalware software installed and enabled on Windows Server 2016 by default.
This software actively protects Windows Server 2016 against known malware and can regularly update
antimalware definitions through Windows Update. NetApp LUNs and SMB shares can be scanned using
Windows Defender.
Further Reading
For further information, see the Windows Defender Overview.
9.2 BitLocker
BitLocker drive encryption is a data protection feature continued from Windows Server 2012. This
encryption protects physical disks, LUNs, and CSVs.
Best Practice
Before enabling BitLocker, the CSV must be put into maintenance mode. Therefore, NetApp recommends making decisions about BitLocker-based security before creating VMs on the CSV to avoid downtime.
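The maintenance-mode requirement can be sketched in PowerShell as follows. The cluster disk name and CSV mount point are hypothetical and depend on your environment:

```powershell
# Put the CSV into maintenance mode (disk name is hypothetical)
Get-ClusterSharedVolume -Name "Cluster Disk 1" | Suspend-ClusterResource

# Enable BitLocker on the CSV mount point with a recovery password protector
Enable-BitLocker -MountPoint "C:\ClusterStorage\Volume1" -RecoveryPasswordProtector

# Return the CSV to normal operation after encryption is configured
Get-ClusterSharedVolume -Name "Cluster Disk 1" | Resume-ClusterResource
```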
Further Reading
For more information, see the BitLocker Drive Encryption Overview and Protecting Cluster
Shared Volumes and Storage Area Networks with BitLocker.
Appendix A: Deploy Nested Virtualization
This appendix describes the deployment of nested virtualization on a Hyper-V 2016 host.
Prerequisites
You must have a Hyper-V 2016 host with an Intel processor equipped with Intel VT-x and EPT
technology.
Deployment
To deploy nested virtualization, complete the following steps:
1. Create a VM on the Hyper-V 2016 host with the same Windows Server 2016 build as that of the Hyper-V host.
2. Turn off the VM.
3. To enable nested virtualization, run the following PowerShell commands on the Hyper-V host:
#Enable Nested Virtualization for the virtual machine
Set-VMProcessor -VMName <VM Name> -ExposeVirtualizationExtensions $true
#Disable Dynamic memory for the virtual machine
Set-VMMemory -VMName <VM Name> -DynamicMemoryEnabled $false
#Enable MAC address spoofing on the first level of virtual switch for the virtual machine
Get-VMNetworkAdapter -VMName <VM Name> | Set-VMNetworkAdapter -MacAddressSpoofing On
4. Power on the VM and install the Hyper-V role on it.
Things to Remember
Only Hyper-V is supported within Hyper-V nested virtualization. Other virtualization technologies are not supported in nested virtualization.
Nested virtualization is supported only on Intel systems.
Hosts with Device Guard enabled cannot support nested virtualization.
VMs with virtualization-based security enabled cannot support nested virtualization.
A VM with nested virtualization enabled cannot support:
Run-time memory resize
Dynamic memory
Checkpoints
Live migration
Appendix B: Deploy Nano Server
Deployment
To deploy a Nano Server as a Hyper-V host, complete the following steps:
1. Log in to Windows Server 2016 as a member of the administrator group.
2. Copy the NanoServerImageGenerator folder from the \NanoServer folder in the Windows
Server 2016 ISO to the local hard drive.
3. To create a Nano Server VHD/VHDX, complete the following steps:
a. Start Windows PowerShell as an administrator, navigate to the copied
NanoServerImageGenerator folder on the local hard drive, and run the following cmdlet:
Set-ExecutionPolicy RemoteSigned
Import-Module .\NanoServerImageGenerator -Verbose
b. Create a VHD for the Nano Server as a Hyper-V host by running the following PowerShell cmdlet. This command prompts you for an administrator password for the new VHD.
New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath <"input the path to the
root of the contents of Windows Server 2016 ISO"> -TargetPath <"input the path, including the
filename and extension where the resulting VHD/VHDX will be created"> -ComputerName <"input the
name of the nano server computer you are about to create"> -Compute
The following example creates a Nano Server VHD configured as a Hyper-V host with failover clustering enabled, from an ISO mounted at f:\. The newly created VHD is placed in a folder named NanoServer in the folder from which the cmdlet is run. The computer name is NanoServer, and the resulting VHD contains the Standard edition of Windows Server 2016.
New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath f:\ -TargetPath
.\NanoServer.vhd -ComputerName NanoServer -Compute -Clustering
The New-NanoServerImage cmdlet also accepts parameters that set the IP address, the subnet mask, the default gateway, the DNS server, the domain name, and so on.
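For example, a static network configuration might be supplied as follows. The addresses are placeholders, and the parameter names are those exposed by the NanoServerImageGenerator module:

```powershell
New-NanoServerImage -Edition Standard -DeploymentType Guest -MediaPath f:\ `
    -TargetPath .\NanoServer.vhd -ComputerName NanoServer -Compute -Clustering `
    -InterfaceNameOrIndex Ethernet -Ipv4Address 192.168.1.50 `
    -Ipv4SubnetMask 255.255.255.0 -Ipv4Gateway 192.168.1.1 -Ipv4Dns 192.168.1.10
```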
4. Use the VHD in a VM or physical host to deploy Nano Server as a Hyper-V host:
a. For deployment on a VM, create a new VM in Hyper-V Manager and use the VHD created in Step 3.
b. For deployment on a physical host, copy the VHD to the physical computer and configure it to
boot from this new VHD. First, mount the VHD, run bcdboot e:\windows (where the VHD is
mounted under E:\), unmount the VHD, restart the physical computer, and boot to the Nano
Server.
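The physical-host boot steps can be sketched as follows, assuming the VHD is stored at C:\NanoServer\NanoServer.vhd and mounts as E: (both are example values):

```powershell
# Mount the Nano Server VHD (assumes it is assigned drive letter E:)
Mount-VHD -Path C:\NanoServer\NanoServer.vhd

# Copy boot files from the mounted Windows image to the system partition
bcdboot E:\Windows

# Unmount the VHD and reboot into the Nano Server image
Dismount-VHD -Path C:\NanoServer\NanoServer.vhd
Restart-Computer
```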
5. Join the Nano Server to a domain (optional):
a. Log in to any computer in the domain and create a data blob by running the following PowerShell cmdlet:
$domain = "<input the domain to which the Nano Server is to be joined>"
$nanoserver = "<input name of the Nano Server>"
djoin.exe /provision /domain $domain /machine $nanoserver /savefile C:\temp\odjblob /reuse
b. Copy the odjblob file to the Nano Server by running the following PowerShell cmdlets on a
remote machine:
$nanoserver = "<input name of the Nano Server>"
$nanouname = "<input username of the Nano Server>"
$nanopwd = "<input password of the Nano Server>"
$filePath = 'c:\temp\odjblob'
$fileContents = Get-Content -Path $filePath -Encoding Unicode
$securenanopwd = ConvertTo-SecureString -AsPlainText -Force $nanopwd
$nanosecurecred = new-object management.automation.pscredential $nanouname, $securenanopwd
Invoke-Command -VMName $nanoserver -Credential $nanosecurecred -ArgumentList
@($filePath,$fileContents) -ScriptBlock {
param($filePath,$data)
New-Item -ItemType directory -Path c:\temp
Set-Content -Path $filePath -Value $data -Encoding Unicode
cd C:\temp
djoin /requestodj /loadfile c:\temp\odjblob /windowspath c:\windows /localos
}
c. Reboot the Nano Server.
Connect to Nano Server
To connect to the Nano Server remotely using PowerShell, complete the following steps:
1. Add the Nano Server as a trusted host on the remote computer by running the following cmdlet on the remote server:
Set-Item WSMan:\LocalHost\Client\TrustedHosts "<input IP Address of the Nano Server>"
2. If the environment is safe and if you want to set all the hosts to be added as trusted hosts on a server, run the following command:
Set-Item WSMan:\LocalHost\Client\TrustedHosts *
3. Start the remote session by running the following cmdlet on the remote server. Provide the password for the Nano Server when prompted.
Enter-PSSession -ComputerName "<input IP Address of the Nano Server>" -Credential ~\Administrator
To connect to the Nano Server remotely using GUI management tools from a remote Windows Server 2016 machine, complete the following steps:
1. Log in to the Windows Server 2016 as a member of the administrator group.
2. Start Server Manager.
3. To manage a Nano Server remotely from Server Manager, right-click All Servers, click Add Servers, provide the Nano Server’s information, and add it. You can now see the Nano Server listed in the server list. Select the Nano Server, right-click it, and start managing it with the various options provided.
4. To manage services on a Nano Server remotely, complete the following steps:
a. Open Services from the Tools section of Server Manager.
b. Right-click Services (Local).
c. Click Connect to Server.
d. Provide the Nano Server details to view and manage the services on the Nano Server.
5. If the Hyper-V role is enabled on the Nano Server, complete the following steps to manage it remotely from Hyper-V Manager:
a. Open Hyper-V Manager from the Tools section of Server Manager.
b. Right-click Hyper-V Manager.
c. Click Connect to Server and provide the Nano Server details. Now the Nano Server can be managed as a Hyper-V server to create and manage VMs on top of it.
6. If the failover clustering role is enabled on the Nano Server, complete the following steps to manage it remotely from the failover cluster manager:
a. Open Failover Cluster Manager from the Tools section of Server Manager.
b. Perform clustering-related operations with the Nano Server.
Appendix C: Deploy Hyper-V Cluster
This appendix describes deploying a Hyper-V cluster.
Prerequisites
At least two Hyper-V servers are connected to each other.
At least one virtual switch is configured on each Hyper-V server.
The failover cluster feature is enabled on each Hyper-V server.
SMB shares or CSVs are used as shared storage to store VMs and their disks for Hyper-V clustering.
Storage should not be shared between different clusters. You should have only one CSV/CIFS share per cluster.
If the SMB share is used as shared storage, then permissions on the SMB share must be configured to grant access to the computer accounts of all the Hyper-V servers in the cluster.
Deployment
1. Log in to one of the Windows 2016 Hyper-V servers as a member of the administrator group.
2. Start Server Manager.
3. In the Tools section, click Failover Cluster Manager.
4. Click Create Cluster in the Actions menu.
5. Provide details for the Hyper-V servers that are part of this cluster.
6. Validate the cluster configuration. Select Yes when prompted for cluster configuration validation and select the tests required to validate whether the Hyper-V servers pass the prerequisites to be part of the cluster.
7. After validation succeeds, the Create Cluster wizard is started. In the wizard, provide the cluster name and the cluster IP address for the new cluster. A new failover cluster is then created for the Hyper-V server.
8. Click the newly created cluster in Failover Cluster Manager and manage it.
9. Define shared storage for the cluster to use. It can be either an SMB share or a CSV.
10. To use an SMB share as shared storage, no special cluster-side steps are required. Simply configure a CIFS share on a NetApp storage controller. To do so, see the section "Provisioning in SMB Environments."
11. To use a CSV as shared storage, complete the following steps:
a. Configure LUNs on a NetApp storage controller. To do so, see the section “Provisioning in SAN Environments.”
b. Make sure that all the Hyper-V servers in the failover cluster can see the NetApp LUNs. To do this for all the Hyper-V servers that are part of the failover cluster, make sure that their initiators are added to the initiator group on NetApp storage. Also be sure that their LUNs are discovered and make sure that MPIO is enabled.
c. On any one of the Hyper-V servers in the cluster, complete the following steps:
i. Take the LUN online, initialize the disk, create a new simple volume, and format it using NTFS or ReFS.
ii. In Failover Cluster Manager, expand the cluster, expand Storage, right-click Disks, and then click Add Disks. Doing so opens the Add Disks to a Cluster wizard showing the LUN as a disk. Click OK to add the LUN as a disk.
iii. Now the LUN is named Clustered Disk and is shown as Available Storage under Disks.
d. Right-click the LUN (Clustered Disk) and click Add to Cluster Shared Volumes. Now the LUN is shown as a CSV.
e. The CSV is simultaneously visible and accessible from all the Hyper-V servers of the failover
cluster at its local location C:\ClusterStorage\.
12. Create a highly available VM:
a. In Failover Cluster Manager, select and expand the cluster created previously.
b. Click Roles, click Virtual Machines in Actions, and then click New Virtual Machine.
c. Select the node from the cluster where the VM should reside.
d. In the Virtual Machine Creation wizard, provide the shared storage (SMB share or CSV) as the path to store the VM and its disks.
e. Use Hyper-V Manager to set the shared storage (SMB share or CSV) as the default path to store the VM and its disks for a Hyper-V server.
13. Test planned failover. Move VMs to another node using live migration, quick migration, or storage migration (move). Review the section “Live Migration in a Clustered Environment” for more details.
14. Test unplanned failover. Stop cluster service on the server owning the VM.
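The same deployment can also be scripted with the FailoverClusters PowerShell module. The node names, cluster name, disk name, and IP address below are hypothetical:

```powershell
# Validate the candidate nodes against the clustering prerequisites
Test-Cluster -Node HV1,HV2

# Create the failover cluster with a name and static IP address
New-Cluster -Name HVCluster -Node HV1,HV2 -StaticAddress 192.168.1.100

# Add an available NetApp LUN to the cluster and convert it to a CSV
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```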
Appendix D: Deploy Hyper-V Live Migration in a Clustered
Environment
This appendix describes deploying live migration in a clustered environment.
Prerequisites
To deploy live migration, you need to have Hyper-V servers configured in a failover cluster with shared
storage. Review the section “Hyper-V Clustering” for more details.
Deployment
To use live migration in a clustered environment, complete the following steps:
1. In Failover Cluster Manager, select and expand the cluster. If the cluster is not visible, then click Failover Cluster Manager, click Connect to Cluster, and provide the cluster name.
2. Click Roles, which lists all the VMs available in a cluster.
3. Right-click the VM and click Move. Doing so provides you with three options:
Live migration. You can select a node manually or allow the cluster to select the best node. In live migration, the cluster copies the memory used by the VM from the current node to another node. Therefore, when the VM is migrated to another node, the memory and state information needed by the VM are already in place for the VM. This migration method is nearly instantaneous, but the number of VMs that can be live migrated simultaneously is limited by the host configuration.
Quick migration. You can select a node manually or allow the cluster to select the best node. In quick migration, the cluster copies the memory used by a VM to a disk in storage. Therefore, when the VM is migrated to another node, the memory and state information needed by the VM can be quickly read from the disk by the other node. With quick migration, multiple VMs can be migrated simultaneously.
Virtual machine storage migration. This method uses the Move Virtual Machine Storage wizard. With this wizard, you can select the VM disk along with other files to be moved to another location, which can be a CSV or an SMB share.
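The first two move types can also be performed with PowerShell. The VM and node names below are hypothetical:

```powershell
# Live migration of a clustered VM to another node
Move-ClusterVirtualMachineRole -Name "VM1" -Node HV2 -MigrationType Live

# Quick migration of the same VM
Move-ClusterVirtualMachineRole -Name "VM1" -Node HV2 -MigrationType Quick
```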
Appendix E: Deploy Hyper-V Live Migration Outside a Clustered
Environment
This section describes the deployment of Hyper-V live migration outside a clustered environment.
Prerequisites
Standalone Hyper-V servers with independent storage or shared SMB storage.
The Hyper-V role installed on both the source and destination servers.
Both Hyper-V servers belong to the same domain or to domains that trust each other.
Deployment
To perform live migration in a nonclustered environment, configure source and destination Hyper-V
servers so that they can send and receive live migration operations. On both Hyper-V servers, complete
the following steps:
1. Open Hyper-V Manager from the Tools section of Server Manager.
2. In Actions, click Hyper-V Settings.
3. Click Live Migrations and select Enable Incoming and Outgoing Live Migrations.
4. Choose whether to allow live migration traffic on any available network or only on specific networks.
5. Optionally, you can configure the authentication protocol and performance options from the Advanced section of Live Migrations.
6. If CredSSP is used as the authentication protocol, then make sure to log in to the source Hyper-V server from the destination Hyper-V server before moving the VM.
7. If Kerberos is used as the authentication protocol, then configure the constrained delegation. Doing so requires Active Directory domain controller access. To configure the delegation, complete the following steps:
a. Log in to the Active Directory domain controller as the administrator.
b. Start Server Manager.
c. In the Tools section, click Active Directory Users and Computers.
d. Expand the domain and click Computers.
e. Select the source Hyper-V server from the list, right-click it, and click Properties.
f. In the Delegation tab, select Trust This Computer for Delegation to Specified Services Only.
g. Select Use Kerberos Only.
h. Click Add, which opens the Add Services wizard.
i. In Add Services, click Users and Computers, which opens Select Users or Computers.
j. Provide the destination Hyper-V server name and click OK.
To move VM storage, select CIFS.
To move VMs, select the Microsoft Virtual System Migration service.
k. In the Delegation tab, click OK.
l. From the Computers folder, select the destination Hyper-V server from the list and repeat the process. In Select Users or Computers, provide the source Hyper-V server name.
8. Move the VM.
a. Open Hyper-V Manager.
b. Right-click a VM and click Move.
c. Choose Move the Virtual Machine.
d. Specify the destination Hyper-V server for the VM.
e. Choose the move options. For shared live migration, choose Move Only the Virtual Machine. For shared nothing live migration, choose either of the other two options based on your preferences.
f. Provide the location for the VM on the destination Hyper-V server based on your preferences.
g. Review the summary and click OK to move the VM.
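The configuration and move steps above can be sketched in PowerShell as follows. The host names, VM name, and storage path are hypothetical:

```powershell
# Enable incoming and outgoing live migrations on each Hyper-V server
Enable-VMMigration

# Use Kerberos authentication and compress migration traffic (optional)
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos `
    -VirtualMachineMigrationPerformanceOption Compression

# Shared nothing live migration of a VM and its storage to the destination host
Move-VM -Name "VM1" -DestinationHost HV2 -IncludeStorage `
    -DestinationStoragePath "D:\VMs\VM1"
```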
Appendix F: Deploy Hyper-V Storage Live Migration
Prerequisites
You must have a standalone Hyper-V server with independent storage (DAS or LUN) or SMB storage (local or shared among other Hyper-V servers).
The Hyper-V server must be configured for live migration. Review the section on deployment in “Live Migration Outside of a Clustered Environment.”
Deployment
1. Open Hyper-V Manager.
2. Right-click a VM and click Move.
3. Select Move the Virtual Machine’s Storage.
4. Select options for moving the storage based on your preferences.
5. Provide the new location for the VM’s items.
6. Review the summary and click OK to move the VM’s storage.
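Equivalently, a storage-only move can be sketched with a single cmdlet. The VM name and destination path are examples:

```powershell
# Move all of the VM's storage (VHDs, checkpoints, smart paging files)
# to a new location; paths are examples
Move-VMStorage -VMName "VM1" -DestinationStoragePath "C:\ClusterStorage\Volume1\VM1"
```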
Appendix G: Deploy Hyper-V Replica Outside a Clustered
Environment
This appendix describes deploying Hyper-V Replica outside a clustered environment.
Prerequisites
You need standalone Hyper-V servers located in the same or separate geographical locations serving as primary and replica servers.
If separate sites are used, then the firewall at each site must be configured to allow communication between the primary and replica servers.
The replica server must have enough space to store the replicated workloads.
Deployment
1. Configure the replica server.
a. To allow incoming replication traffic through the inbound firewall rules, run the following PowerShell cmdlet:
Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"
b. Open Hyper-V Manager from the Tools section of Server Manager.
c. Click Hyper-V Settings from Actions.
d. Click Replication Configuration and select Enable This Computer as a Replica Server.
e. In the Authentication and Ports section, select the authentication method and port.
f. In the Authorization and Storage section, specify the location to store the replicated VMs and files.
2. Enable VM replication for VMs on the primary server. VM replication is enabled on a per-VM basis and not for the entire Hyper-V server.
a. In Hyper-V Manager, right-click a VM and click Enable Replication to open the Enable Replication wizard.
b. Provide the name of the replica server where the VM must be replicated.
c. Provide the authentication type and the replica server port that was configured to receive replication traffic on the replica server.
d. Select the VHDs to be replicated.
e. Choose the frequency (duration) at which the changes are sent to the replica server.
f. Configure recovery points to specify the number of recovery points to maintain on the replica server.
g. Choose Initial Replication Method to specify the method to transfer the initial copy of the VM data to the replica server.
h. Review the summary and click Finish.
i. This process creates a VM replica on the replica server.
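These replica-server and per-VM settings can be sketched in PowerShell as follows. The server names, storage location, frequency, and recovery-point count are hypothetical:

```powershell
# On the replica server: accept replication over Kerberos/HTTP (port 80)
Set-VMReplicationServer -ReplicationEnabled $true -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true -DefaultStorageLocation "D:\Replica"

# On the primary server: replicate one VM every 5 minutes, keep 4 recovery
# points, and start the initial copy
Enable-VMReplication -VMName "VM1" -ReplicaServerName Replica01 -ReplicaServerPort 80 `
    -AuthenticationType Kerberos -ReplicationFrequencySec 300 -RecoveryHistory 4
Start-VMInitialReplication -VMName "VM1"
```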
Replication
1. Run a test failover to make sure that the replica VM functions properly on the replica server. The test creates a temporary VM on the replica server.
a. Log in to the replica server.
b. In Hyper-V Manager, right-click a replica VM, click Replication, and click Test Failover.
c. Choose the recovery point to use.
d. This process creates a VM of the same name appended with -Test.
e. Verify the VM to make sure that everything works well.
f. After verification, select Stop Test Failover to delete the replica test VM.
2. Run a planned failover to replicate the latest changes on the primary VM to the replica VM.
a. Log in to the primary server.
b. Turn off the VM to be failed over.
c. In Hyper-V Manager, right-click the turned-off VM, click Replication, and click Planned Failover.
d. Click Failover to transfer the latest VM changes to the replica server.
3. Run an unplanned failover in the case of primary VM failure.
a. Log in to the replica server.
b. In Hyper-V Manager, right-click a replica VM, click Replication, and click Failover.
c. Choose the recovery point to use.
d. Click Failover to fail over the VM.
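The test and planned failovers can be sketched in PowerShell as follows. The VM name is hypothetical, and each cmdlet must run on the server indicated in the comments:

```powershell
# On the replica server: run and then stop a test failover
Start-VMFailover -VMName "VM1" -AsTest
Stop-VMFailover -VMName "VM1"

# Planned failover: prepare on the primary server, then fail over and
# complete on the replica server
Start-VMFailover -VMName "VM1" -Prepare      # run on the primary server
Start-VMFailover -VMName "VM1"               # run on the replica server
Complete-VMFailover -VMName "VM1"            # run on the replica server
```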
Appendix H: Deploy Hyper-V Replica in a Clustered Environment
Prerequisites
You need to have Hyper-V clusters located in the same or in separate geographical locations serving as primary and replica clusters. Review the section “Hyper-V Clustering” for more details.
If separate sites are used, then the firewall at each site must be configured to allow communication between the primary and replica clusters.
The replica cluster must have enough space to store the replicated workloads.
Deployment
1. Enable firewall rules on all the nodes of a cluster. Run the following PowerShell cmdlet with admin privileges on all the nodes in both the primary and replica clusters.
# For Kerberos authentication
Get-ClusterNode | ForEach-Object {Invoke-Command -ComputerName $_.Name -ScriptBlock {Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTP Listener (TCP-In)"}}
# For certificate authentication
Get-ClusterNode | ForEach-Object {Invoke-Command -ComputerName $_.Name -ScriptBlock {Enable-NetFirewallRule -DisplayName "Hyper-V Replica HTTPS Listener (TCP-In)"}}
2. Configure the replica cluster.
a. Configure the Hyper-V Replica broker with a NetBIOS name and IP address to use as the connection point to the cluster that is used as the replica cluster.
i. Open Failover Cluster Manager.
ii. Expand the cluster, click Roles, and click Configure Role in the Actions pane.
iii. Select Hyper-V Replica Broker in the Select Role page.
iv. Provide the NetBIOS name and IP address to be used as the connection point to the cluster (client access point).
v. This process creates a Hyper-V Replica broker role. Verify that it comes online successfully.
b. Configure replication settings.
i. Right-click the replica broker created in the previous steps and click Replication Settings.
ii. Select Enable This Cluster as a Replica Server.
iii. In the Authentication and Ports section, select the authentication method and port.
iv. In the Authorization and Storage section, select the servers allowed to replicate VMs to this cluster. Also, specify the default location where the replicated VMs are stored.
Replication
Replication is similar to the process described in the section “Replica Outside a Clustered Environment.”
References
NetApp Clustered Data ONTAP 8.3.x and 8.2.x: An Introduction http://www.netapp.com/us/media/tr-3982.pdf
Fibre Channel and iSCSI Configuration Guide for ONTAP https://library.netapp.com/ecm/ecm_get_file/ECMM1280845
Scalable SAN Best Practices in ONTAP https://fieldportal.netapp.com/content/196096
Scalable SAN in ONTAP Technical FAQ https://fieldportal.netapp.com/content/310608
CIFS Technical FAQ https://fieldportal.netapp.com/content/201705
Deployment and Best Practices Guide for ONTAP 8.1 Windows File Services http://www.netapp.com/us/media/tr-3967.pdf
Best Practices Guide for ONTAP 8.2.x and 8.3.x Windows File Services http://www.netapp.com/us/media/tr-4191.pdf
CIFS/SMB Configuration Express Guide https://library.netapp.com/ecm/ecm_get_file/ECMP1547457
File Access Management Guide for CIFS https://library.netapp.com/ecm/ecm_get_file/ECMP1610207
Getting Started with Nano Server https://technet.microsoft.com/library/mt126167.aspx
What’s New in Hyper-V on Windows Server 2016 https://technet.microsoft.com/windows-server-docs/compute/hyper-v/what-s-new-in-hyper-v-on-windows
What’s New in Failover Clustering in Windows Server 2016 https://technet.microsoft.com/windows-server-docs/compute/failover-clustering/whats-new-failover-clustering-windows-server
Version History
Version Date Author Document Version History
Version 1.0 November 2016
Brahmanna Chowdary Kodavali, Shashanka SR
Initial release
Refer to the Interoperability Matrix Tool (IMT) on the NetApp Support site to validate that the exact product and feature versions described in this document are supported for your specific environment. The NetApp IMT defines the product components and versions that can be used to construct configurations that are supported by NetApp. Specific results depend on each customer's installation in accordance with published specifications.
Copyright Information
Copyright © 1994–2016 NetApp, Inc. All rights reserved. Printed in the U.S. No part of this document covered by copyright may be reproduced in any form or by any means—graphic, electronic, or mechanical, including photocopying, recording, taping, or storage in an electronic retrieval system—without prior written permission of the copyright owner.
Software derived from copyrighted NetApp material is subject to the following license and disclaimer:
THIS SOFTWARE IS PROVIDED BY NETAPP "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WHICH ARE HEREBY DISCLAIMED. IN NO EVENT SHALL NETAPP BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
NetApp reserves the right to change any products described herein at any time, and without notice. NetApp assumes no responsibility or liability arising from the use of products described herein, except as expressly agreed to in writing by NetApp. The use or purchase of this product does not convey a license under any patent rights, trademark rights, or any other intellectual property rights of NetApp.
The product described in this manual may be protected by one or more U.S. patents, foreign patents, or pending applications.
RESTRICTED RIGHTS LEGEND: Use, duplication, or disclosure by the government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFARS 252.277-7103 (October 1988) and FAR 52-227-19 (June 1987).
Trademark Information
NETAPP, the NETAPP logo, and the marks listed at http://www.netapp.com/TM are trademarks of
NetApp, Inc. Other company and product names may be trademarks of their respective owners.
TR-4568-1116