
Microsoft®

Implementing an End-User Data Centralization SolutionFolder Redirection and Offline Files Technology Validation and Deployment


Copyright information

This is a preliminary document and may be changed substantially prior to final commercial release of the software described herein.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This White Paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

Unless otherwise noted, the companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted in examples herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

© 2009 Microsoft Corporation. All rights reserved.

Microsoft, Active Directory, BitLocker, Forefront, SQL Server, Windows, Windows Server, and Windows Vista are trademarks of the Microsoft group of companies.

All other trademarks are property of their respective owners.


Contents

Executive Summary......................................................................................................................................................6

Target Organizational Needs...................................................................................................................................7

Target Group Overview...........................................................................................................................................7

Business Needs........................................................................................................................................................7

Deployment Requirements and Service Level Agreements.........................................................................................8

Data Centralization and Management SLA..............................................................................................................8

Data Availability and Mobility SLA...........................................................................................................................9

Data Protection and Portability SLA.........................................................................................................................9

Technology Overview...................................................................................................................................................9

SMB.......................................................................................................................................................................10

Folder Redirection.................................................................................................................................................11

Offline Files............................................................................................................................................................11

File Server Resource Manager...............................................................................................................................12

Failover Clustering.................................................................................................................................................13

Shadow Copy for Shared Folders...........................................................................................................................13

Data Protection Manager 2007 SP1.......................................................................................................................13

Planning.....................................................................................................................................................................14

Data Centralization and Management...................................................................................................................14

Server location and network infrastructure......................................................................................................14

Server characteristics........................................................................................................................................16

Planning for Folder Redirection deployment.....................................................................................................20

Planning for quotas deployment.......................................................................................................................23

Data Availability and Mobility................................................................................................................................24

Deploying Offline Files.......................................................................................................................................24

Deploying Failover Clustering............................................................................................................................28

Data Protection and Portability.............................................................................................................................30

Understanding the challenges...........................................................................................................................31

Backup methodologies......................................................................................................................................31

Data Protection Manager..................................................................................................................................32

User-enabled data recovery..............................................................................................................................33



Data recovery planning.....................................................................................................................................34

Protecting the Data Protection Manager infrastructure...................................................................................34

Data Security and Privacy Considerations..............................................................................................................35

Back-end infrastructure security.......................................................................................................................35

Client computer security...................................................................................................................................37

Network security...............................................................................................................................................39

Implementation.........................................................................................................................................................41

Server and Storage Hardware................................................................................................................................41

Domain Representation.........................................................................................................................................41

Failover cluster hardware..................................................................................................................................42

Stand-alone server hardware............................................................................................................................42

Operating system configuration........................................................................................................................43

File-Server-1 (Folder Redirection and Script Content – Head Office).................................................................45

File-Server-2 (Folder Redirection and Script Content – Far Branch Office)........................................................46

File Server Resource Manager...........................................................................................................................46

Shadow Copy for Shared Folders.......................................................................................................................50

Group Policy settings.........................................................................................................................................50

Data Protection Manager 2007 Configuration.......................................................................................................58

Network throttling.............................................................................................................................................59

On-the-wire compression..................................................................................................................................60

Consistency checks............................................................................................................................................60

Operational Procedures.............................................................................................................................................62

Adding Users to the Service...................................................................................................................................62

Stage one..........................................................................................................................................................62

Stage two..........................................................................................................................................................63

Removing Users from the Service..........................................................................................................................63

Conclusion................................................................................................................................................................. 64

Index of Tables...........................................................................................................................................................65

Index of Figures..........................................................................................................................................................66

Appendix A – Scripts..................................................................................................................................................67

Computer Specific Folder Redirection...................................................................................................................67

Computer Specific Folder Redirection logon script...........................................................................................67



Computer Specific WMI Namespace Installation startup script........................................................................68

Computer Specific (MOF) support file...............................................................................................................68

Windows Vista Background Synchronization.........................................................................................................68

Windows Vista Offline Rename Delete..................................................................................................................74



Executive Summary

Audience:

This white paper is intended for technical decision makers and operation managers who are evaluating ways to achieve effective management, centralization, and protection for their organization’s user file data. Working knowledge of Windows Server® and Windows® client operating systems is assumed.

Objective:

The objective of this white paper is to show through a case study how to use different Microsoft® products and technologies to put in place a comprehensive solution satisfying the needs of a mid-sized organization around users’ file data management.

The study was conducted by the Quality Assurance group of the Storage Solutions Division (SSD) at Microsoft, a division that focuses on enabling customers of all sizes to store, manage, and reliably access their file data.

As part of a widespread effort within Microsoft to internally deploy products and solutions before releasing them to customers, the Quality Assurance group assembled a deployment team that was tasked with deploying a client data centralization service for a select group of users (the target group) representing a mid-sized organization.

The primary goal of this deployment was to validate that the technologies being developed by SSD enabled our customers to put in place a client data centralization service that satisfied a set of typical business needs.

Learning:

The white paper is structured in a format that will take the reader through the different phases of a deployment project. We will first assess the organizational business needs and requirements, defining the Service Level Agreements. We will then go through the planning considerations and recommended best practices to build a service that satisfies the objective. Lastly, we will provide the reader with the implementation details in a section that is mainly targeted towards IT administrators.


Solution

• Leverage Microsoft Windows Server 2008 R2 and Windows client technologies to ensure that your users’ files are always available. Log on to an alternate computer or replace a computer and your users’ data will follow them.

• Provide a common view from any computer to your users’ data.

• Reduce the cost and complexity of your users’ computer replacement.

• Achieve centralized management and protection for your users’ data.

• Lower WAN utilization and cost with faster file access across geographical locations.

• Reduce IT overhead for selective file restores by enabling your users to access previous versions of their files.

Products & Technologies

• Windows Server 2008 R2

• Windows 7

• Windows Vista

• Server Message Block (SMB)

• Folder Redirection

• Offline Files (Client Side Caching)

• File Server Resource Manager

• Microsoft Failover Clustering

• Shadow Copy for Shared Folders

• Data Protection Manager 2007 SP1 (DPM)

• File Server Capacity Tool (FSCT)


Target Organizational Needs

Target Group Overview

When selecting the target group for the internal deployment, the main criterion was to ensure that its structure and needs were representative of a generic mid-sized organization. Several criteria were considered when choosing such a group. For instance, the number of users, their geographical location, and their mobility needed to be typical of a modern medium-sized business. The target group chosen by the deployment team had approximately 340 people geographically dispersed across two continents and four different sites, as depicted in Figure 1 below:

Figure 1 - Geographic distribution

The employees within this group had unique characteristics that needed to be considered when defining the solution:

• Users were geographically dispersed, some located in branch offices connected to the main office using slow wide area network (WAN) links.

• Users had various operating systems such as Windows Vista® or Windows 7 installed on each of their client computers.

• In addition to using their own corporate computers, some users had the need to use shared computers.

• Some users were mobile, needing to travel inside and outside corporate facilities with their laptops.

• Some users were frequently working from non-corporate computers with Internet connectivity (working from home).

Business Needs

For this project, the business needs of the IT organization and the target group can be categorized as follows:



1. Data Centralization and Management – User data typically stored on client computers should be centralized for IT administrators to monitor and manage. Users should also be able to access their data from any corporate domain-joined computer.

2. Data Mobility and Availability – Users should have access to their data at any time regardless of their network connectivity or bandwidth/latency constraints. For example, users should be able to take their laptop home and access their data without connectivity to the corporate network.

3. Data Protection and Portability – Data stored on client computers should be protected against possible data loss. Additionally, data needs to be portable so that users can easily replace their computer and maintain access to their data after domain logon.

The deployment team also had to consider the data privacy and security aspects of the solution. All user data should be stored in a secure and safe location and should be accessible only by the authorized user. Some of the important considerations are defined in a specific security section later in this document.

Additionally, while fulfilling the business needs above, the solution needed to be implemented and operated within a limited budget. In this deployment, reducing IT personnel (administration and support) and hardware costs were primary goals as we wanted to ensure the cost effectiveness of the solution that we proposed to our customers.

Deployment Requirements and Service Level Agreements

The service was implemented by using pre-released versions of Windows Server 2008 R2 and Windows 7. Service outages and data loss due to code instability were not anticipated because of the rigorous product testing and validation process in place at Microsoft, but planning for such incidents had to be considered based upon the use of pre-release software. Independent of the quality of the early Windows Server 2008 R2 versions being deployed, the team was held to a high standard to maintain system availability and prevent data loss or corruption.

Defining precise Service Level Agreements (SLAs) was instrumental in making the appropriate design decisions. Going through this definition process is an important step as the SLAs have an effect on planning considerations such as server placement, server configuration, backup technology selection, and methodology. We will highlight many of the decisions that were made and how they were handled to ensure that the implemented solution met the business objectives.

Data Centralization and Management SLA

One deployment objective was to implement a controlled platform to achieve data centralization for the organization’s user data. In our case, this data was maintained in the user profile folders which are composed of well-known Windows folders such as Documents, Videos, Pictures, and Music.

Prior to implementing the solution for the target organization, user profile folders were maintained on individual client computers. The data was not synchronized between the different computers (laptop and desktop, for example), which was not convenient for our users who wanted a consistent view of their data on every computer.

The goal was to provide the ability for users to access a consistent view of their user profile folders from a centralized location based on the specific SLA principles below:

• Support client computers that are domain-joined and running Windows Server 2008 R2, Windows Server 2008, Windows 7, and Windows Vista.

• Provide the following storage capabilities to users:



o 10 gigabytes (GB) of storage space to support individual user profile folders.

o Warn users when their capacity utilization reaches 90%.

o Warn users when their capacity utilization reaches 100%, and block the ability to add new content to the centralized data store.

o Provide increased capacity as required, given relevant business justification.

• Solution should have a negligible performance impact on user experience independent of location or network connectivity during enrollment or daily usage.

• Users should be able to efficiently search the data stored in their user profile folders.

• The IT administrator should be able to monitor and control the data stored for the target group for compliance and space management reasons.
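The storage thresholds in this SLA can be sketched as a simple utilization check. This is a hypothetical illustration of the policy only; in the actual deployment these limits were enforced with File Server Resource Manager quotas:

```python
# Sketch of the quota SLA: 10 GB per user, warn at 90%, block at 100%.
# A hypothetical helper for illustration; FSRM enforces this in practice.

QUOTA_BYTES = 10 * 1024**3  # 10 GB per-user limit

def quota_action(used_bytes: int) -> str:
    """Return the SLA action for a user's current utilization."""
    utilization = used_bytes / QUOTA_BYTES
    if utilization >= 1.0:
        return "block"  # 100%: block new content in the centralized store
    if utilization >= 0.9:
        return "warn"   # 90%: warn the user
    return "ok"

print(quota_action(9 * 1024**3))   # → warn (exactly 90% of quota)
print(quota_action(10 * 1024**3))  # → block
```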

Data Availability and Mobility SLA

Achieving high availability of user data was a key business objective because users were relying on always being able to access their files to perform their work.

The goal was to ensure that users maintain fast access to their data independent of their network connectivity or quality of the link leveraged to connect to the centralized storage location. The SLAs for data availability and mobility were:

• Achieve 99.99% data availability to user profile folders from client computers.

• Allow users fast access to their user profile folders, independent of:

o How infrequently the user connects to the corporate network.

o The throughput and latency of the network connecting the user’s computer to the centralized storage location.

Data Protection and Portability SLA

Protecting the user from a partial or full data loss incident was another key requirement. We wanted to protect user data in case of various events such as accidental file modification or deletion, laptop loss, or client or server hardware failure.

To reduce the service support costs, it was required to provide end users with the ability to recover previous versions of their files without assistance from any IT organization’s staff member.

The SLAs for data protection and portability were:

• Service should achieve a Recovery Point Objective (RPO) of zero data loss for centralized data.

• Service should make computer replacement easy, allowing users to maintain access to their data without the need for data migration.

• Employ a single backup server located in the head office to protect all file servers independently of their geographic location.

• Mitigate any performance impact on the end users during backup operations.

• Minimize WAN traffic during backup operations.

• Allow end users to perform selective file and folder content recovery without assistance from IT staff.



Technology Overview

To implement the solution, a wide variety of technologies were needed. Many of the technologies that were required have been available in the Windows Server and Windows client operating systems for a long time. However, a few technologies key to this project have been either greatly enhanced or are new for the Windows Server 2008 R2 and Windows 7 operating systems. These technologies will be introduced with special attention given to the features relevant to this deployment.

Note: The following is not an exhaustive discussion of each technology’s feature set.

SMB

Server Message Block (SMB) is the primary remote file protocol used by Windows clients and servers and dates back to the 1980s. When it was first introduced, local area network (LAN) speeds were typically 10 megabits per second (Mbps) or less, WAN use was very limited, and wireless LANs did not exist. Since then, the world of network communication has changed considerably. The original SMB protocol, also known as CIFS (Common Internet File System), evolved incrementally over time until the release of Windows Vista and Windows Server 2008, which introduced a redesigned version of the protocol known as SMB 2.0. Notably, this version brought a number of performance improvements over the former SMB 1 implementation.

Two important limitations of SMB 1 were its “chattiness” and lack of concern for network latency. To accomplish many of the most common tasks, a series of synchronous round trips were required. The protocol was not created with WAN or high-latency networks in mind, and there was limited use of compounding (combining multiple commands in a single network packet) or pipelining (sending additional commands before the answer to a previous command arrives). There were also limitations regarding the number of files that could be opened concurrently, the number of shares, and the number of concurrent users supported.

The SMB 2.0 design addressed those shortcomings, bringing a number of significant improvements, including but not limited to:

• General improvements to allow for better utilization of the network.

• Request compounding, which allows multiple SMB 2.0 requests to be sent as a single network request.

• Larger reads and writes to make better use of faster networks, even those with high latency.

• Caching of folder and file properties, where clients keep local copies of folders and files.

• Durable handles to allow an SMB 2.0 connection to transparently reconnect to the server in the event of a temporary disconnection.

• Improved message signing (HMAC SHA-256 replaces MD5 as hashing algorithm) with improved configuration and interoperability.

• Improved scalability for file sharing (number of users, shares, and open files per server greatly increased).

• Support for symbolic links.
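To illustrate why compounding matters on high-latency links, here is a back-of-the-envelope latency model. The numbers are purely illustrative assumptions, not measured SMB figures:

```python
# Illustrative model: each synchronous request costs one network round trip.
# A chatty protocol issues N requests serially; a compounding protocol can
# carry all N requests in a single network round trip.
# The RTT value below is an assumed example, not a measurement.

def serial_cost_ms(requests: int, rtt_ms: float) -> float:
    """Chatty protocol: one full round trip per request."""
    return requests * rtt_ms

def compounded_cost_ms(requests: int, rtt_ms: float) -> float:
    """Compounded: all requests travel together in one round trip."""
    return rtt_ms

rtt = 100.0  # assumed WAN round-trip time, in milliseconds
print(serial_cost_ms(8, rtt))      # → 800.0 ms for 8 serial round trips
print(compounded_cost_ms(8, rtt))  # → 100.0 ms when the 8 are compounded
```

The gap grows linearly with latency, which is why compounding helps most on WAN links rather than on a fast LAN.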

SMB 2.1, a minor revision, brought an important performance enhancement to the protocol in Windows Server 2008 R2 and Windows 7 with the introduction of a new client opportunistic lock (oplock) leasing model. Oplocks are extensively used by SMB to allow the client to cache data and file handles. This brings major performance benefits, especially on slower networks, by limiting the amount of data that needs to be transferred between the client and server. The new leasing model in SMB 2.1 allows greater file and handle caching opportunities for an SMB 2.1 client, while preserving data integrity, and requires no application changes to take advantage of this capability.

The benefits of this change are:

• Reduced network bandwidth consumption

• Greater file server scalability

• Better application response time when accessing files over a network

Another important enhancement brought in Windows 7 is improved energy efficiency for workstations. Windows Vista computers may enter a sleep power state in a number of scenarios, and Windows 7 computers allow a greater range of scenarios where they may enter a sleep power state.

For backwards compatibility, Windows Server 2008 R2 and Windows 7 support SMB 1, SMB 2.0, and SMB 2.1 and will automatically use the version most appropriate for communication, as shown in Table 1.

SMB Version Negotiation

Client \ Server    | Pre-Windows Server 2008 | Windows Server 2008 | Windows Server 2008 R2
Pre-Windows Vista  | SMB 1                   | SMB 1               | SMB 1
Windows Vista      | SMB 1                   | SMB 2.0             | SMB 2.0
Windows 7          | SMB 1                   | SMB 2.0             | SMB 2.1

Table 1 - SMB version negotiation
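The negotiation summarized in Table 1 amounts to selecting the highest dialect that both sides support. A sketch, simplifying the real SMB negotiate exchange:

```python
# Sketch of Table 1: client and server settle on the highest SMB version
# both support. A simplification of the real SMB negotiate exchange.

SUPPORTED = {
    "Pre-Windows Vista": {"SMB 1"},
    "Windows Vista": {"SMB 1", "SMB 2.0"},
    "Windows 7": {"SMB 1", "SMB 2.0", "SMB 2.1"},
    "Pre-Windows Server 2008": {"SMB 1"},
    "Windows Server 2008": {"SMB 1", "SMB 2.0"},
    "Windows Server 2008 R2": {"SMB 1", "SMB 2.0", "SMB 2.1"},
}

def negotiate(client: str, server: str) -> str:
    """Return the highest SMB version common to client and server."""
    common = SUPPORTED[client] & SUPPORTED[server]
    # Lexicographic max happens to match version order for these labels.
    return max(common)

print(negotiate("Windows 7", "Windows Server 2008 R2"))      # → SMB 2.1
print(negotiate("Windows Vista", "Windows Server 2008 R2"))  # → SMB 2.0
```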

Folder Redirection

Folder Redirection is a feature that allows users and administrators to redirect the path of a user profile folder to a new location. The new location can be a folder on the local computer or a folder on a network share. Folder Redirection provides users with a centralized view of select user profile folders from any domain-joined computer. Users then have the ability to work with documents located on a server as if they were located on the local drive. For example, with this technology, it is possible to redirect the Documents folder, which is usually stored on the computer's local hard disk drive, to a network location.

Folder Redirection offers many benefits to users and administrators, including having their data stored on a server which can be easily backed up as part of routine system administration tasks. It also allows a user to log on to different physical computers while automatically maintaining access to their data.

One drawback of Folder Redirection prior to Windows 7 was the user’s first logon experience when Folder Redirection was initially deployed. With Windows Vista, the user could, in certain conditions, experience delays during their first logon while all of their local data was copied over the network to the server. Windows 7 improves this experience: when Folder Redirection is deployed to a user with Offline Files enabled (it is on by default), first logon is significantly faster because the data is moved from the local drive into the local Offline Files cache rather than copied over the network. After the initial move is complete, the user can use their desktop normally while the locally cached data is synchronized to the server as a background task.
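Conceptually, Folder Redirection remaps well-known profile folders to a per-user network path. A minimal sketch, with a hypothetical server and share name:

```python
# Conceptual sketch of Folder Redirection: selected known folders resolve
# to a per-user network path instead of the local profile. The server and
# share names below are hypothetical, not the deployment's configuration.

REDIRECTED = {"Documents", "Pictures", "Music", "Videos"}
FILE_SERVER_SHARE = r"\\File-Server-1\Users"  # assumed share

def resolve_folder(user: str, folder: str) -> str:
    """Return the effective path of a user profile folder."""
    if folder in REDIRECTED:
        return rf"{FILE_SERVER_SHARE}\{user}\{folder}"
    return rf"C:\Users\{user}\{folder}"

print(resolve_folder("alice", "Documents"))  # → \\File-Server-1\Users\alice\Documents
print(resolve_folder("alice", "AppData"))    # → C:\Users\alice\AppData
```

Because the mapping is resolved per user at logon, the same data follows the user to any domain-joined computer.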

Offline Files

Offline Files (also known as Client Side Caching or CSC) makes network files available to an end user with near local access performance when a network connection to the server is unavailable or slow.



Offline Files maintains a local cached copy of network files and folders on the client computer, so that the cached files are available when there is no network connection to the file servers. When the connection to the file server is restored, changes made to the files while working offline are automatically synchronized to the file server. If a user modified the same file from multiple computers when working offline, options exist to resolve the conflict.

Many improvements relevant to the solution discussed in this document have been made to Offline Files in Windows 7. These changes include:

• Introduction of the Usually Offline concept

• Ability to transition online automatically from Slow Link mode

• Support for exclusion list in Offline Files based on file extensions

• Support for offline directory rename and delete

Usually Offline provides the ability for users to always work from the cached copy while maintaining a synchronized view of the data between the client and the server. This concept was introduced to enable users to experience local file access performance even when the network between the client and the server is slow. When a client network connection to a server is slow, Offline Files will automatically transition the client into an “Offline (slow connection)” mode. The user then works from the locally cached version of the files so all reads and writes are satisfied by the cached copy. While this feature is also available in Windows Vista, Windows 7 adds the ability to automatically synchronize in the background at regular intervals to reconcile any changes between the client and the server. With this feature, users do not have to worry about manually synchronizing their data with the server when they are working offline.

Additionally, prior to Windows 7, there was no mechanism to automatically transition to an online state from an offline state that was caused by a slow link condition. Now, the system can detect when network conditions improve and transition back to online mode. With Windows Vista, the user needed to explicitly transition back online by using the user interface. It is also possible for administrators to programmatically transition online at certain key times such as user logon.
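The transition logic described above can be pictured with a small sketch. This is an illustrative model only, not the actual Windows implementation; the 80 ms latency threshold is an assumed placeholder (the real threshold is configurable through Group Policy):

```python
# Illustrative model of Offline Files slow-link transitions (not the
# actual Windows implementation; the threshold is an assumed value).

LATENCY_THRESHOLD_MS = 80  # assumed slow-link latency threshold

def next_mode(current_mode: str, measured_latency_ms: float) -> str:
    """Decide the Offline Files mode for a network share.

    'online'            -> reads and writes go to the server
    'offline_slow_link' -> reads and writes are satisfied from the
                           local cache and reconciled by background sync
    """
    if measured_latency_ms > LATENCY_THRESHOLD_MS:
        return "offline_slow_link"
    # Windows 7 behavior: automatically transition back online when
    # network conditions improve; Windows Vista required a manual step.
    return "online"

print(next_mode("online", 120))            # slow WAN: work from cache
print(next_mode("offline_slow_link", 15))  # fast LAN: back online
```

The key difference from Windows Vista is the second branch: the transition back to online happens automatically rather than through explicit user action.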

The Offline Files Exclusion List feature allows administrators to block files of selected types from being created on client computers for all cached network locations. The list of file types is configured by the IT administrator through Group Policy. In the solution described in this document, and for compliance reasons, users are prevented by policy from creating files of specific types, independent of whether they are working in an online or offline mode.
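The effect of the exclusion list reduces to an extension check at file-creation time. A minimal sketch follows; the blocked extensions are example values, not the policy used in this deployment:

```python
# Sketch of an Offline Files exclusion-list check. The blocked
# extensions below are example values; administrators configure the
# real list through Group Policy.
from pathlib import Path

BLOCKED_EXTENSIONS = {".mp3", ".avi", ".iso"}  # assumed example policy

def creation_allowed(filename: str) -> bool:
    """Return False if the file type is blocked on cached network paths."""
    return Path(filename).suffix.lower() not in BLOCKED_EXTENSIONS

print(creation_allowed("budget.xlsx"))  # allowed
print(creation_allowed("movie.AVI"))    # blocked, regardless of case
```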

By default, Windows 7 enables users to rename and delete directories while in offline mode. In Windows Vista Service Pack 1 (SP1), adding a registry key was required to enable this functionality. Prior to Windows Vista SP1, it was not possible while offline to rename or delete directories that were created online.

File Server Resource Manager

File Server Resource Manager is a suite of tools that allows administrators to understand, control, and manage the quantity and type of data stored on their computers that are running Windows Server. By using File Server Resource Manager, administrators can place quotas on volumes, actively screen files and folders, and generate comprehensive storage reports. This set of advanced instruments not only helps the administrator to efficiently monitor existing storage resources, but it also aids in the planning and implementation of future policy changes. In Windows Server 2008 R2, File Server Resource Manager is supported on all server installation options, including Server Core.

File Server Resource Manager enables the following tasks:

• Create quotas to limit the space allowed for a volume or folder and generate notifications when the quota limits are approached or exceeded.

• Automatically generate and apply quotas to all existing folders and any new subfolders in a volume or folder.

• Create file screens to control the type of files that users can save and send notifications when users attempt to save blocked files.

• Define quota and file screening templates that can be easily applied to new volumes or folders and reused across an organization.

• Schedule periodic or on demand storage reports that help identify trends in disk usage.

• Monitor attempts to save unauthorized files for all users or for a selected group of users.

Failover Clustering

Failover clusters in Windows Server 2008 R2 provide high availability for mission-critical applications such as databases, messaging systems, file and print services, and virtualized workloads. Failover clusters can scale to include sixteen servers (nodes) in a single cluster by using a shared storage backend with support for Serial Attached SCSI (SAS), Internet SCSI (iSCSI), or Fibre Channel interconnects. The nodes maintain constant communication with each other to ensure service availability; if one of the nodes in a cluster becomes unavailable due to an unscheduled or scheduled failure, another node immediately begins providing service. Users who are accessing a service that has moved from one cluster node to another due to failure or another service-impacting outage will typically not notice any service impact and will continue to work without issue.

Shadow Copy for Shared Folders

Shadow Copy for Shared Folders is a feature in Windows Server that transparently maintains previous versions of files on selected volumes by using shadow copies. It works by taking snapshots of an entire volume at particular points in time. Shadow Copy for Shared Folders is enabled on a per-volume basis.

By default, two schedules are applied to take snapshots at 7:00 A.M. and 12:00 P.M. every weekday. Shadow Copy for Shared Folders allows individual users to selectively restore files or folders from previous versions without IT assistance.

This feature helps reduce IT operational costs by eliminating the need for administrator intervention: users can restore deleted, modified, or corrupted files themselves from a snapshot of the volume.

Data Protection Manager 2007 SP1

Microsoft Data Protection Manager 2007 (DPM) is a full-featured data protection product designed to protect compatible applications and the Windows Server operating system. DPM delivers continuous data protection for compatible applications and file servers by using seamlessly integrated disk, tape, or cloud storage as a backup target. DPM invokes the Volume Shadow Copy Service (VSS) to create a one-time full replica of the data to be protected, followed by incremental synchronizations (recovery points) that by default are scheduled to occur every fifteen minutes. DPM is intended to provide "zero data loss" recovery when protecting applications such as Microsoft Exchange and Microsoft SQL Server®, and a "near Continuous Data Protection (CDP)" model for file servers, where content can be protected on a fifteen-minute schedule.

DPM provides the following key file protection features related to the needs of our solution:

• File-based protection for stand-alone and clustered file servers with support for Windows Server 2008 R2.

• Backup to disk with the option to also back up to tape and/or the cloud.

• Flexible exclusion for files and folders.

• Ability to protect open files.

• Elimination of the need to repeat Full backups. The initial replica is created once and incrementally synchronized.

• Only changed blocks within files are moved to the DPM server during incremental synchronization.

• Minimization of data loss through the ability to restore from selected recovery points or from the last incremental synchronization.

• Support for protection of remote servers with minimum impact on WAN links (requires 512 kilobits per second [Kbps] minimum link).

• Advanced bandwidth throttling for WAN backup scenarios.

• Self-service ability for End-User Recovery of files directly from Windows Explorer or Microsoft Office.

• Self-healing capability: if a number of backups fail, DPM automatically checks to ensure data consistency.

Planning

This section focuses on the planning required to meet the previously defined business objectives. Planning is an important stage of the deployment process and is critical to delivering the solution on time and on budget.

Data Centralization and Management

Centralizing data from user profile folders was a key requirement of this solution. The main technology used to achieve centralization of user profile folders was Folder Redirection. To control and monitor the data stored at the central location both in terms of size and content, we used the functionalities offered by File Server Resource Manager.

Server location and network infrastructure

To define our server infrastructure, we first needed to understand how client computers would access the servers. The geographical location and mobility of users have a deep impact on any data centralization project because both affect the characteristics of the network link between the clients and the server. The relevant network characteristics are link latency, throughput, and availability.

SMB 2.0, introduced in Windows Vista, dramatically increased the performance of file access over slow networks (high latency/low bandwidth). Nevertheless, the better the performance characteristics of the underlying network, the better the end user’s experience will be when accessing files directly from the server.

Table 2 represents our user base classification for user location and mobility.

Facility Classification

Local user
Description: Users located at the same location as the main office/data center. This implies a LAN connection between the client computers and the datacenter.
Average network latency (RTT) to the data facility: < 3 milliseconds (ms)

Near branch office users
Description: Users located in branch offices on the same continent as the main office/data center location. This implies a relatively fast (depending on the infrastructure) WAN connection between the client computers and the datacenter.
Average network latency (RTT) to the data facility: 80 to 100 ms

Far branch office users
Description: Users located in branch offices on a different continent from the main office/data center location. This implies a relatively slow WAN connection between the client computers and the datacenter.
Average network latency (RTT) to the data facility: 250 to 300 ms

Mobile users
Description: Users using laptop computers, having sporadic access to the network, and connecting to the corporate network by using Remote Access or DirectAccess (new in Windows 7). This implies a transient WAN connection between the client computers and the datacenter, with latency and speed varying depending on where the user connects from.
Average network latency (RTT) to the data facility: Variable

Table 2 - Facility classification

Based on our testing, the initial logon performance for all the client operating systems we planned to support was acceptable for local users and near branch users with a file server located in our main datacenter. Offline Files allowed us to meet our business performance requirements for the near branch user experience.

For mobile users, we also looked at Offline Files to allow us to meet our data access performance and availability needs because the characteristics of the underlying network can vary widely.

For far branch users, we chose to deploy a separate file server physically located on the same site as the users. Although Offline Files would have allowed acceptable file access performance, the initial folder redirection process could have been unacceptably long for users who have a large data set and use Windows Vista as their client operating system. When Folder Redirection first applies, the content of the local folders to redirect is moved to the server, and on Windows Vista this operation occurs during the initial logon. For users with several gigabytes of data stored in their local folders and a server located on another continent with an average latency of 250 ms to 300 ms, this would have meant an initial logon time on the order of hours, which was not acceptable. Users with Windows 7 would not have experienced this issue, but because our solution stipulated that we also support Windows Vista and Windows Server 2008 as client operating systems, we decided to put a file server in the far branch office.
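A back-of-the-envelope calculation shows why the initial move could take hours. The data size and effective throughput below are illustrative assumptions, not measurements from this deployment; real SMB throughput on a 250-300 ms link depends on many factors:

```python
# Rough estimate of the initial Folder Redirection move time over a
# high-latency WAN link. All figures below are illustrative
# assumptions, not measurements from this deployment.

data_gb = 5            # assumed user data set to move
effective_mbps = 4     # assumed effective SMB throughput on a
                       # ~250-300 ms link; latency, not raw bandwidth,
                       # is often the limiting factor

seconds = (data_gb * 8 * 1024) / effective_mbps  # GB -> megabits, then / Mbps
print(f"~{seconds / 3600:.1f} hours")  # roughly 2.8 hours at these assumptions
```

Even modest data sets therefore push the first logon well past acceptable limits when the copy happens synchronously, as it does on Windows Vista.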

Figure 2 summarizes the pre-existing network infrastructure and the choices that were made for server placement.

Figure 2 - Network infrastructure and server locations

Server characteristics

File server capacity planning

Correctly sizing the different elements of our file server back-end infrastructure was critical to our ability to scale the service cost-efficiently for years to come. The purpose of this planning phase was to determine what hardware needed to be purchased to provide the service in a way that meets our needs while accounting for future growth. Understanding the following is important to achieve this goal without overspending:

• Characteristics of the workload

• Current and future needs

• Existing company infrastructure and how it can be best leveraged

The type of workload applied to the servers is a major variable in properly sizing the different components of the system. Its characteristics define the relative needs in terms of CPU, memory, storage, and network. Several broad workload categories are commonly referred to in the industry. These workloads include Database, File, E-Mail, Web, and High Performance Computing (HPC). Each workload stresses the system resources differently.

In our case, we knew that we would be applying a File workload to our servers where multiple clients would access files on a server through the SMB protocol. Understanding the broad category in which our workload belongs is a good first step. However, characterizing the resource requirements based only on this data point is very challenging. There are multiple sub-categories within the File workload, each having very different characteristics. For our deployment, we defined our workload as a home folder workload. Some of the interesting characteristics for this type of workload are the following:

• Very little sharing of files between clients (each user has a dedicated folder mainly accessed by one computer at a time)

• User folders are mainly composed of Microsoft Office documents, pictures, and a few videos

• Mix of mostly random read and write operations (approximately 70% read, 30% write)

• Relatively light load from each client with sporadic access over the course of a work day

• Potentially large number of clients and network connections

From a hardware standpoint, this translated into the following generic characteristics:

• More stress on the storage subsystem with more performance needed for small/medium random I/Os

• Storage subsystem using RAID 5 or RAID 6 needed for data availability

• More stress on the networking subsystem, which must efficiently handle a large number of concurrent connections and transfer a large number of small/medium packets

• Less stress on the CPU, which is used mainly to move data rather than perform complex data transformations

• Less stress on memory, which is used mainly for file system caching

To define our hardware needs more precisely, we needed to quantify our current and future scalability requirements. We knew that we would be deploying two file servers: one in our head office supporting approximately 220 users and one in the far branch location supporting approximately 100 users.

Our requirements specified the need to provide 10 GB of storage space per user for this service. This goal was established from business needs and cost. Before deploying the service, our users had various amounts of data in their local user profile folders: some had a few megabytes, while others had many gigabytes. To provision the right amount of storage, it was important to know before the deployment how much data each user had in the folders we wanted to redirect. For this purpose, the team developed a simple script, run on the client computers to be deployed, that calculated the amount of data stored in the user profile folders to be redirected.
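The sizing script itself is not included in this document; a minimal sketch of the idea follows. The folder list mirrors the one chosen later in this deployment, and the traversal details are assumptions:

```python
# Minimal sketch of a pre-deployment sizing script: total the data in
# the user profile folders that will be redirected. The folder list
# mirrors this deployment's choices; traversal details are assumptions.
import os
from pathlib import Path

FOLDERS_TO_REDIRECT = ["Desktop", "Documents", "Pictures", "Music",
                       "Videos", "Favorites", "Downloads"]

def folder_size_bytes(path: Path) -> int:
    """Recursively sum file sizes under path, skipping unreadable files."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += (Path(root) / name).stat().st_size
            except OSError:
                pass  # locked or inaccessible file: ignore
    return total

def profile_usage_gb(profile_root: Path) -> float:
    """Total GB in the to-be-redirected folders under a profile root."""
    total = sum(folder_size_bytes(profile_root / f)
                for f in FOLDERS_TO_REDIRECT)
    return total / (1024 ** 3)

# Example usage (hypothetical):
# print(f"{Path.home()}: {profile_usage_gb(Path.home()):.2f} GB to redirect")
```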

In our case, we determined that the average amount of space consumed by each user in their local user profile folders was 3 GB with wide variations between all users. With this data point, we did not provision 10 GB of storage per user since it would have resulted in a large amount of unused space.

As far as future needs are concerned, we planned for a 5% year-over-year increase in users and a 10% increase in data. We will also adjust our requirements and the amount of storage space we offer as business needs change.

Tables 3 and 4 represent the forecasted needs for both servers over a period of 5 years.

Head Office Server Capacity Sizing

Year   | Number of users | Predicted per-user storage usage (GB) | Per-user storage offered per SLA (GB) | Total storage provisioned (GB)
Year 0 | 220             | 3                                     | 10                                    | 1100
Year 1 | 231             | 3                                     | 10                                    | 1155
Year 2 | 243             | 4                                     | 10                                    | 1455
Year 3 | 255             | 4                                     | 15                                    | 1783
Year 4 | 267             | 4                                     | 15                                    | 1872

Table 3 - Head office server capacity sizing

Far Branch Server Capacity Sizing

Year   | Number of users | Predicted per-user storage usage (GB) | Per-user storage offered per SLA (GB) | Total storage provisioned (GB)
Year 0 | 100             | 3                                     | 10                                    | 500
Year 1 | 105             | 3                                     | 10                                    | 525
Year 2 | 110             | 4                                     | 10                                    | 662
Year 3 | 116             | 4                                     | 15                                    | 810
Year 4 | 122             | 4                                     | 15                                    | 851

Table 4 - Far branch server capacity sizing

Based on these projections, we provisioned 2 terabytes of usable storage for our head office server and 1 terabyte for our far branch server.
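The year-over-year figures follow directly from the stated growth assumptions (5% more users, 10% more data per user). A sketch of the calculation follows; the rounding conventions are an assumption and may differ slightly from the published tables:

```python
# Sketch of the growth projection behind Tables 3 and 4: 5% more users
# and 10% more data per user each year. Rounding conventions are an
# assumption and may differ slightly from the published tables.

USER_GROWTH = 0.05
DATA_GROWTH = 0.10

def project(users, per_user_gb, years):
    """Yield (year, projected users, projected per-user GB)."""
    for year in range(years + 1):
        yield (year,
               round(users * (1 + USER_GROWTH) ** year),
               per_user_gb * (1 + DATA_GROWTH) ** year)

for year, n, gb in project(220, 3.0, 4):  # head office server
    print(f"Year {year}: {n} users, ~{gb:.1f} GB per user")
```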

Based on our testing, and given the relatively low number of users per server and the characteristics of the workload, an entry-level server (dual-core system, 4 GB of RAM) with direct attached storage and one 1-Gbps network adapter would be sufficient to support the load. Having additional RAM, however, would enhance the availability of the solution, as we will discuss in the next section related to check disk (chkdsk).

To determine how many users a given configuration can support, we recommend using the Microsoft File Server Capacity Tool (FSCT). For more information, see File Server Capacity Tool - (32 bit) in the Microsoft Download Center (http://go.microsoft.com/fwlink/?LinkId=166651). This tool simulates a home folder file workload on a set of client computers and computes the maximum number of users a server can support based on the response time of simulated scenarios. The scenarios include common operations such as browsing a directory, copying files, and modifying Microsoft Office files. For a given number of users accessing data on a file server, FSCT computes a throughput number corresponding to the average scenarios per second that the server is able to sustain. The tool also provides the ability to collect performance counters such as CPU, memory, network, and disk subsystem utilization details to help identify bottlenecks.

Figure 3 shows an example of the data FSCT provides when run on a setup composed of a target server under test and several load simulation client computers. The server system used in the test had the following specifications:

• Dual-socket, quad-core 2.33-gigahertz (GHz) CPU

• 8 GB of RAM

• 3x1-Gbps network adapters

• 12x146-GB 15-KRPM SAS drives

FSCT profiled the server in this example as having the capability to sustain 4,400 file server users before it reached an overload condition at 4,600 users.

Figure 3 - FSCT report output

With this tool it is easy to predict the evolution of server scenario throughput based on the number of simulated users, as shown in Figure 4.

[Figure 4 plots the average scenario throughput (scenarios per second) and CPU utilization (percent) against the number of simulated users, ranging from 2,000 to 4,800.]

Figure 4 – FSCT server scenarios throughput

In our deployment, we leveraged servers and storage made available from a pool of existing hardware. The specifications were higher than required, which resulted in a heavily underutilized system for our solution. The implementation section contains the details of the specifications.

File Server storage configuration

After we established the amount of storage needed and its type, we defined how it would be configured and presented to the operating system.

In the past and with earlier versions of Windows Server such as Windows 2000 Server, it was usually not recommended to create large NTFS volumes (> 2 terabytes) in a system requiring high availability of data. Although NTFS has been fully capable of handling very large volumes (up to 256 terabytes with a 64-KB cluster size by using a GPT disk type), the time needed to run check disk (chkdsk) was often considered as a barrier to achieve high availability. Since then, substantial improvements have been made to reduce check disk run time. Specifically, in Windows Server 2008 R2 a new feature called block caching was introduced to address this problem. The feature makes better use of available RAM on the system to reduce check disk run times. For this reason and given the current price of RAM, we recommend adding extra memory capacity to the system. The time check disk needs to complete depends on many parameters, such as number of files, size of files, speed of the storage subsystem, volume of data, and level of fragmentation of the volume. Because of all these parameters, it is difficult to give absolute performance numbers, and the best way to evaluate the time to completion is to perform a test on a representative system and data set. In our solution, we decided to implement a single data volume on each server for ease of management.

Planning for Folder Redirection deployment

Determining which user profile folders to redirect

The first step was to determine which user profile folders we wanted to redirect. Our focus in this deployment was mainly to centralize user files and to lower the total cost of ownership in the scenario when a user needs to configure a new computer. Folder Redirection is an effective technology to enable this scenario as it can restore user data on a computer upon logon. To achieve our goal, we redirected the following user profile folders: Desktop, Documents, Pictures, Music, Videos, Favorites, Contacts, Downloads, Links, Searches, and Saved Games.

Determining which user computers need redirection

Because the Folder Redirection Group Policy is a per-user setting, the redirection of user profile folders is tied to a user account independent of the computer used. This means that if Folder Redirection is configured for a given user, the user's folders are redirected on every computer they log on to. While this is desirable in some scenarios, it can be an unwanted behavior in others. That was the case in our deployment, where users log on to a variety of different computers, including shared computers and computers belonging to other users. When Folder Redirection is deployed in conjunction with Offline Files, as in our solution, every user logging on to a computer has a copy of their redirected data stored on the local hard drive. When several different users log on to the same computer, this can cause the local hard drive to quickly run out of space.

As of today, there is no built-in mechanism in Windows through which an administrator can specify on which computers Folder Redirection should apply. However, it is easy to put in place a custom solution that achieves this result with the help of a Windows Management Instrumentation (WMI) filter and a combination of a computer startup script and a logon script. The implementation section in this document provides the necessary information to put in place this computer-specific Folder Redirection technique.

In our deployment, the administrator of the solution is made aware of the computers on which Folder Redirection should apply for given users. This information is then stored in a custom database and, by using the aforementioned technique, only specific computers get the redirection policy applied.
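The gating logic of that technique can be sketched as follows. The in-memory mapping below stands in for the custom database, and all names are hypothetical; the real mechanism combines a WMI filter with startup and logon scripts:

```python
# Sketch of computer-specific Folder Redirection gating. The mapping of
# users to approved computers stands in for the custom database; all
# names are hypothetical.

APPROVED_COMPUTERS = {  # user -> computers where redirection applies
    "alice": {"ALICE-DESKTOP"},
    "bob":   {"BOB-LAPTOP", "BOB-DESKTOP"},
}

def should_redirect(user: str, computer: str) -> bool:
    """Apply Folder Redirection only on this user's approved computers,
    so shared machines do not fill up with every visitor's cached data."""
    return computer in APPROVED_COMPUTERS.get(user, set())

print(should_redirect("alice", "ALICE-DESKTOP"))  # approved computer
print(should_redirect("alice", "SHARED-PC-01"))   # shared PC: no redirection
```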

Determining the type of client operating system

The type of operating system installed on end user computers affects their Folder Redirection experience. Folder Redirection and Offline Files are technologies that were introduced in Windows 2000. Since then, many improvements have been made and new features introduced, especially to handle the behavior on slow networks.

Our users mainly use Windows Server 2008 R2, Windows Server 2008, Windows 7, and Windows Vista as their operating systems. Scoping our deployment to those operating systems simplified our deployment and allowed us to take advantage of the latest features available. Including older versions of client operating systems such as Windows XP is possible but adds complexity and does not provide the same functionalities as Windows 7 and Windows Vista. For instance, the lack of APIs for Offline Files makes the implementation of any automatic background synchronization mechanism challenging. The Offline Files feature was completely redesigned for Windows Vista. For more information, see What's New in Offline Files for Windows Vista (http://go.microsoft.com/fwlink/?LinkId=166654).

Planning around network characteristics

As shown earlier, the location of the users and the characteristics of the network link used to connect to the corporate network have a deep impact on how we implemented the solution and the technologies we put in place. For the Folder Redirection technology, there are two important factors to consider, both having an impact on the initial redirection experience:

• The end-to-end throughput of the link between the client computer and the domain controller used to authenticate the user logging on to the client computer

• The end-to-end throughput and latency of the link between the client computer and the folder redirection server

When the user logs on to their client computer after the administrator has applied the Folder Redirection policy setting to their account, the Group Policy engine first evaluates the throughput of the link between the client computer and the domain controller used to authenticate the user. If the speed of the link is lower than a certain threshold (a configurable value that is 500 Kbps by default), the Group Policy engine determines that a slow-link logon has occurred and that policy settings, including the Folder Redirection policy setting, should not be applied. This mechanism was put in place to avoid long logon times for users attempting to log on to the corporate network through a slow Remote Access Service (RAS) connection. In all cases, it is important to note that for the Folder Redirection policy setting to apply, the user needs to be connected to the corporate network at the time of logon, either through RAS or a direct connection, because the Folder Redirection policy setting is only applied at logon and not during a Group Policy background refresh. Similarly, the policy setting will not apply if the user performs a logon while disconnected from the corporate network (cached logon) and subsequently uses RAS to connect to the corporate network.

The characteristics of the network link between the client and the Folder Redirection server also have an impact on the Folder Redirection process. On Windows Vista, when Folder Redirection applies, the content of the local folders is moved over to the server during the logon operation. In extreme cases with a very large amount of data to move over a slow network, it could take up to an hour for the first logon to complete (by default, the Group Policy application process will stop after one hour and let the logon process continue). For this reason, knowing the amount of data each user has in their local folders prior to applying the policy setting (by using the script mentioned previously), along with the characteristics of the networks between the file server and the clients, are two important data points when planning a deployment.
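The slow-link decision described above reduces to a simple comparison at logon time. This sketch models the decision only, not the actual measurement mechanics; 500 Kbps is the documented default threshold and is configurable:

```python
# Simplified model of the Group Policy slow-link check at logon.
# 500 Kbps is the documented default threshold; it is configurable.
# The measurement mechanics are deliberately simplified here.

SLOW_LINK_THRESHOLD_KBPS = 500

def folder_redirection_applies(link_kbps: float, connected: bool) -> bool:
    """Folder Redirection applies only at logon, only on the corporate
    network, and only if the link to the DC is not classified as slow."""
    if not connected:  # cached logon: policy does not apply
        return False
    return link_kbps >= SLOW_LINK_THRESHOLD_KBPS

print(folder_redirection_applies(10_000, True))   # fast LAN logon
print(folder_redirection_applies(256, True))      # slow RAS link: skipped
print(folder_redirection_applies(10_000, False))  # cached logon: skipped
```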

These considerations have different implications from a planning point of view depending on user categories as defined in Table 5.

Network Characteristics Impact

Local users
Clients to domain controller network impact: Users have a fast (>100 Mbps) LAN connection to the domain controller authenticating them. The throughput is greater than the slow-link Group Policy default threshold, so the Folder Redirection policy setting applies when the user logs on after the administrator has set it.
Clients to file server network impact: The connection between the clients and the file server is also a fast LAN. The file copy time on Windows Vista during Folder Redirection policy application is not a concern.

Near branch office users
Clients to domain controller network impact: Users also have a fast LAN connection to the domain controller authenticating them. Although there is a WAN link between the branch offices and the main office, each branch office has a local domain controller performing user authentication. The behavior is equivalent to the local users case.
Clients to file server network impact: The connection between the clients and the file server is a WAN link with 80 ms to 110 ms latency. Our testing showed that the copy time on Windows Vista was still acceptable. Before rolling out the deployment, it is useful to communicate the potential first-logon delay to these users.

Far branch office users
Clients to domain controller network impact: Same behavior as the near branch office users.
Clients to file server network impact: Users in the far branch office have a local file server dedicated to them, so the behavior is equivalent to the local users case.

Mobile users
Clients to domain controller network impact: The link between the client and the domain controller can be slow for this category of users and fall below the default 500 Kbps threshold. If the user does a RAS logon on such a slow network, or does a cached logon, the Folder Redirection policy setting will not be applied. Effectively, by default the policy applies only with a RAS logon on a faster connection (through high-speed Internet, for instance) or when the user is directly connected to the corporate network (a traveling user back in the office).
Clients to file server network impact: The link between the client and the file server can also be slow, with latency and throughput varying widely depending on the underlying network; the copy time on Windows Vista can vary just as widely. Before rolling out the deployment, we communicated the experience to our mobile user base to set expectations about the potentially long RAS logon time right after the policy takes effect. Users can then decide whether to do a RAS logon or wait until they are back in the office on a direct LAN connection.

Table 5 - Network characteristics impact

It is important to emphasize that the folder redirection process described above happens only once after the policy setting has been deployed by the administrator. After the folders are successfully redirected, none of the above occurs during subsequent logons, provided that the Folder Redirection policy setting has not been changed by the administrator.

It is also important to emphasize that the connection from the clients to the file server matters in the folder redirection process mainly when you plan to deploy Windows Vista clients or earlier operating systems. If the Folder Redirection policy setting is deployed on Windows 7 clients, the delay experienced during the first logon is limited even when the network connection is slow.

Planning for quota deployment

We wanted to control the amount of space consumed on the server by each user. We used File Server Resource Manager directory quotas to enforce our space management policy. As a general rule, we planned to enforce a 10 GB quota for every user in the deployment. We made exceptions and provided more space upon request with appropriate business justification.

Before deploying quotas, it is important to understand how the technology interoperates with Folder Redirection to avoid issues. The main problem to avoid is a user running out of quota during the initial file merge process of the Folder Redirection operation. Typically, this can happen if a user has more data in their local folders to be redirected than the quota limit set on the server (10 GB in our case). Here also, the behavior differs between Windows Vista and Windows 7.

In Windows Vista, the merge process starts between the client and the server regardless of the amount of data to merge and regardless of the quota available on the server. During that process, the content of the local folders not already present on the server is copied over to the server. For each user profile folder, if the copy succeeds, the folder is redirected and the local content is deleted. If the copy fails (because of a quota limit hit, for instance), the merge process stops, leaving content in both the server and the local client folders, which results in the failure of the redirection operation. Because Folder Redirection did not apply correctly, the same process happens again at each subsequent logon until the problem is resolved and Folder Redirection succeeds. To recover from this situation, the quota needs to be increased, or the user must remove some of the data that failed to be redirected both from the client computer's local user profile folders and from the server.

In Windows 7, the merge process is handled differently. When Folder Redirection applies for a specific user profile folder, the system first checks if there is enough space or quota on the server to perform the redirection operation. If there is enough space, the content of the user profile folder to redirect is copied into the Offline Files cache. This operation being only a local file copy, no data is transferred over the network at this time, making the logon process faster. After the user has successfully logged on, Offline Files will perform an initial synchronization operation, merging the content on the local cache with the one on the server. In the scenario where not enough quota is available on the server, there would still be space issues to be resolved for Folder Redirection to apply successfully, but the process is more streamlined and efficient.
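The Windows 7 pre-check described above can be sketched in a few lines. This is an illustrative model only, not the actual Folder Redirection implementation; the function name and parameters are invented:

```python
def can_redirect(folder_size_bytes, quota_limit_bytes, quota_used_bytes):
    """Model of the Windows 7 pre-check: redirection of a user profile
    folder proceeds only if its data fits in the remaining server quota."""
    remaining = quota_limit_bytes - quota_used_bytes
    return folder_size_bytes <= remaining

GB = 1024 ** 3
# A 3 GB Documents folder fits within a fresh 10 GB quota...
assert can_redirect(3 * GB, 10 * GB, 0)
# ...but a 12 GB folder does not, so redirection is not attempted.
assert not can_redirect(12 * GB, 10 * GB, 0)
```

On Windows Vista, by contrast, no such check runs and the failure is only discovered partway through the copy.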


To avoid space issues during the deployment, the use of the soft quota feature of File Server Resource Manager is recommended. This feature allows administrators to define quotas that are not strictly enforced. The user will be allowed to store data past the quota limit but the administrator and the end user can be notified by e-mail if a certain percentage of the quota is reached. This allows the administrator and the user to be aware of the space issue while allowing the Folder Redirection process to proceed uninterrupted. After Folder Redirection is deployed, the IT organization can address the cases of users breaching their quota according to its space management policy and subsequently enforce hard quotas if required.
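The soft-quota behavior can be modeled with a small sketch. The notification thresholds below are illustrative choices, not FSRM defaults, and the function name is invented:

```python
def soft_quota_notifications(used_bytes, limit_bytes, thresholds=(0.85, 1.0)):
    """Model of FSRM soft-quota behavior: writes are never blocked, but a
    notification fires for each configured percentage threshold reached."""
    usage = used_bytes / limit_bytes
    return [t for t in thresholds if usage >= t]

GB = 1024 ** 3
assert soft_quota_notifications(5 * GB, 10 * GB) == []
assert soft_quota_notifications(9 * GB, 10 * GB) == [0.85]
# Past the limit the user can still write; both notifications have fired.
assert soft_quota_notifications(11 * GB, 10 * GB) == [0.85, 1.0]
```

A hard quota would instead reject the write once `used_bytes` reached `limit_bytes`, which is exactly what can break the initial merge.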

Data Availability and Mobility

Our SLA stipulates that users must be able to access their data 99.99% of the time independently of the connectivity they have to the file server. This includes being able to access redirected folders from the client computer when there is no network connectivity or when the server is unavailable. Furthermore, performance when working with the redirected folders must be comparable to the experience when working with documents stored on the local computer’s hard drive. Particularly, accessing a document in the redirected folders when connected to the corporate network over a slow link must be as fast as accessing a document stored locally.

Meeting those stringent requirements would not be possible by only redirecting folders to a remote share and always performing file accesses over the network. It is, however, possible to achieve with a technology capable of caching the files on the local computer and accessing the cached copy in an intelligent way. The Offline Files technology in Windows provides this functionality and has been designed to work hand in hand with Folder Redirection to provide anytime access to data with a high level of performance.

Deploying Offline Files

When deploying the Folder Redirection Group Policy setting for specific folders, those folders are also automatically cached by default on the client computer by using the Offline Files technology. This allows users to transparently access files from the cache when no network or a slow network is present with the possibility of synchronizing the changes between the client and the server in the background. When deploying Offline Files with Folder Redirection, the following needs to be considered:

• Versions of the operating system installed on users’ computers

• Size of the Offline Files cache on users' computers

• Possible folder states and optimum settings depending on user category

Offline Files and operating system versions

The Offline Files feature is enabled by default on the following client operating systems:

• Windows 7 Professional, Windows 7 Enterprise, Windows 7 Ultimate

• Windows Vista Business, Windows Vista Enterprise, Windows Vista Ultimate

This feature is turned off by default on Windows Server operating systems. For us, this includes Windows Server 2008 and Windows Server 2008 R2. To enable Offline Files on server operating systems, the Desktop Experience feature must be installed and enabled.

Without Offline Files enabled, access to the redirected folders happens online without any assistance from local caching. This means that the content of the redirected folders is not available when the server cannot be reached, and access is slow when the server can only be reached over a slow network.


While it is possible to administratively enable Offline Files on server operating systems, we did not consider this option in our deployment. In our case, the majority of users run Windows client operating systems, and we allow users of server operating systems (typically power users) to install the feature themselves if they so desire.

Offline Files cache size

As we learned previously, redirecting the content of a folder when using the Offline Files technology involves copying data into the local Offline Files cache. For this operation to succeed, the client computer must have enough disk space dedicated to the cache. By default, the size of the cache is 25% of the system volume's free space at the time the cache is initialized. The maximum amount of disk space allocated to the Offline Files cache is configurable through Group Policy for Windows Server 2008 R2, Windows Server 2008, Windows 7, and Windows Vista.

Depending on the amount of data stored in the redirected folders, the size of the Offline Files cache, and the disk space available on the client computer, the local computer may not have enough disk space to store all content. This is especially true for users of netbooks equipped with solid-state drives, which at the time of writing offer limited storage capacity, or when users have a small separate system partition. When not enough space is available to cache the whole content of the redirected folders, the cache fills to capacity and an error stating that the cache quota has been exceeded is raised to the user during synchronization operations. In this state, files that have not been cached are only available when there is network connectivity between the client and the server.

In our deployment, we maintained the default maximum size for the Offline Files cache. According to our hardware inventory, the client computers deployed had enough disk space to hold the maximum amount of data allowed (10 GB) with the default cache size.
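A quick back-of-the-envelope check shows why the default cache size was sufficient for this deployment. The function names are ours; the 25% figure is the documented default:

```python
def default_cache_size(free_space_bytes, fraction=0.25):
    """Default Offline Files cache size: 25% of system-volume free space
    at cache initialization."""
    return int(free_space_bytes * fraction)

def fits_in_cache(redirected_data_bytes, free_space_bytes):
    return redirected_data_bytes <= default_cache_size(free_space_bytes)

GB = 1024 ** 3
# With the 10 GB per-user quota used in this deployment, a system volume
# needs at least 40 GB free at cache initialization to hold it all:
assert fits_in_cache(10 * GB, 40 * GB)
assert not fits_in_cache(10 * GB, 30 * GB)
```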


Possible folder states and optimum settings

Folder states

After the redirected folders are cached, they can be available to the user in different states, depending on the connection with the server, the settings deployed by the administrator, and the user intent. Table 6 represents the different states and the impact that they have on where file operations happen and how the changes are kept synchronized with the server.

Working states

Online
Condition: The client has a fast connection (above the slow-link threshold conditions) with the server.
File operations: Read operations are serviced from the cache. Write operations go to both the server and the cache; all other operations go to the server.
Client-server relationship: Highly coupled. Any change goes directly to both the server copy and the cached copy.

Offline (Not Connected)
Condition: The client has no connectivity to the server, or the server is unavailable.
File operations: The user operates on the files from the cache only.
Client-server relationship: Decoupled. Any change is made only to the cache and is subsequently synchronized to the server when it becomes available.

Offline (Slow Connection)
Condition: The client has a slow connection (below the slow-link threshold conditions) with the server.
File operations: The user operates on the files from the cache only.
Client-server relationship: Loosely coupled. Any change is made only to the cache and needs to be synchronized with the server, either in the background or upon explicit user request.

Offline (Working Offline)
Condition: The user has put the folders into the Offline (Working Offline) state through the user interface.
File operations: The user operates on the files from the cache only.
Client-server relationship: Loosely coupled. Any change is made only to the cache and, by default, is only synchronized with the server upon explicit user request. The administrator can set a policy to cause changes to be automatically synchronized with the server in the background.

Table 6 - Working states


State transitions

Figure 5 describes the different possible states and how to transition from one to another.

Figure 5 – Offline Files state transition

1. Online to Offline (Not Connected): This transition happens whenever the server cannot be reached during a file access operation.

2. Offline (Not Connected) to Online: This transition happens when server availability has been detected. The Offline Files engine attempts to reach the server every 5 minutes, or whenever a new network has been detected (such as a connection through RAS, plugging in a network cable, or connecting to a wireless network).

3. Online to Offline (Slow Connection): This transition happens when the conditions of the network link between the client computer and the server are deemed slow. The detection happens when files are accessed. The administrator can configure the definition of a slow link through Group Policy settings by using a throughput threshold and/or a latency threshold. In Windows Vista, there is no default latency threshold for slow-link mode, whereas in Windows 7 the default value is 80 ms.

4. Offline (Slow Connection) to Online: This transition happens when the user expresses the intent of working online through the user interface. On Windows 7, the Offline Files engine also detects when the network connection between the client and the server ceases to be slow and transitions back to Online automatically; this detection happens during synchronization between client and server. Windows Vista and earlier operating systems do not automatically transition from Offline (Slow Connection) mode to Online mode.

5. Online to Offline (Working Offline): This transition happens when a user clicks the "Work Offline" button in Windows Explorer.

6. Offline (Working Offline) to Online: This transition happens when a user clicks the "Work Online" button in Windows Explorer.
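The six transitions above can be captured in a small state machine. This is an illustrative model, not the actual Windows Offline Files implementation; the event names are invented:

```python
# Working states, named as in Table 6.
ONLINE = "Online"
NOT_CONNECTED = "Offline (Not Connected)"
SLOW = "Offline (Slow Connection)"
WORKING_OFFLINE = "Offline (Working Offline)"

# Transition table; the comment numbers match the list above.
TRANSITIONS = {
    (ONLINE, "server_unreachable"): NOT_CONNECTED,    # 1
    (NOT_CONNECTED, "server_detected"): ONLINE,       # 2
    (ONLINE, "slow_link_detected"): SLOW,             # 3
    (SLOW, "user_works_online"): ONLINE,              # 4 (automatic on Windows 7)
    (ONLINE, "user_works_offline"): WORKING_OFFLINE,  # 5
    (WORKING_OFFLINE, "user_works_online"): ONLINE,   # 6
}

def next_state(state, event):
    # Events with no defined transition leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

def is_slow_link(latency_ms, threshold_ms=80):
    """Windows 7 ships a default slow-link latency threshold of 80 ms;
    Windows Vista has none unless one is configured through Group Policy."""
    return latency_ms > threshold_ms

assert next_state(ONLINE, "server_unreachable") == NOT_CONNECTED
assert next_state(NOT_CONNECTED, "user_works_offline") == NOT_CONNECTED
assert is_slow_link(120) and not is_slow_link(20)
```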


Background synchronization

When the redirected folders are in the Offline (Slow Connection) mode, it is possible to maintain consistency between the client and the server by using manual (user-initiated) synchronizations or automatic background synchronizations. While manual synchronization is available by default on both Windows Vista and Windows 7, automatic background synchronization is only available on Windows 7. The background synchronization feature allows the administrator to centrally configure the synchronization frequency and synchronization window. In addition, on Windows 7, the administrator can set a policy to enable automatic background synchronization when in Offline (Working Offline) mode. Although an automatic background synchronization feature is not built into Windows Vista, it is simple to write a small application by using the Offline Files APIs launched via a scheduled task to perform the same functionality. The implementation section will describe in more detail how this can be achieved.
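The scheduling decision behind background synchronization can be sketched as follows. This models the Windows 7 policy knobs (synchronization frequency plus an allowed time window); the function and parameter names are ours, and on Windows Vista a small scheduled-task application calling the Offline Files API could apply the same logic before triggering a synchronization:

```python
from datetime import datetime, timedelta

def should_sync(now, last_sync, interval, window_start_hour, window_end_hour):
    """Return True when a background synchronization pass is due: the
    configured interval has elapsed and we are inside the sync window."""
    in_window = window_start_hour <= now.hour < window_end_hour
    due = (now - last_sync) >= interval
    return in_window and due

now = datetime(2009, 6, 1, 10, 30)
# Interval elapsed, inside an 08:00-18:00 window: sync runs.
assert should_sync(now, now - timedelta(hours=3), timedelta(hours=2), 8, 18)
# Interval not yet elapsed: nothing runs.
assert not should_sync(now, now - timedelta(minutes=30), timedelta(hours=2), 8, 18)
# Outside the window, nothing runs even if the interval has elapsed:
assert not should_sync(datetime(2009, 6, 1, 22, 0),
                       datetime(2009, 6, 1, 7, 0), timedelta(hours=2), 8, 18)
```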

Optimizing folder states based on user categories

To guarantee the best user experience, we needed to correctly optimize the Offline Files settings based on the characteristics of the underlying network used by clients to connect to the server. Table 7 represents the user experience we expected based on our various user profiles.

Local users
Desired predominant state: Online
Desired user experience: Users are connected to the file server through a fast LAN link. We expect them to access their files online, directly on the server. On rare occasions, if the server is unavailable, users temporarily work offline, automatically transitioning back online when the server becomes available again.

Near branch office users
Desired predominant state: Usually offline
Desired user experience: Users are connected to the file server through a WAN link. To maintain a level of performance satisfying our requirements, we configured the system to have users predominantly working in Offline (Slow Connection) mode, with background synchronization happening at regular intervals. Users can also sporadically work offline if the server is unavailable.

Far branch office users
Desired predominant state: Online
Desired user experience: Because the users in this branch have a local file server, the experience is the same as that of the local users.

Mobile users
Desired predominant state: Variable
Desired user experience: For these users, we wanted the state of the folders to adapt to the various network conditions encountered. For example, the state should be Online if the user has a direct LAN connection to the file server, Offline (Slow Connection) if the user connects to the corporate server through Remote Access Services, and Offline (Not Connected) if the user has no connection to the file server.

Table 7 - Optimizing folder states based on user categories
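For scripted configuration checks, Table 7 can be reduced to a simple lookup. The category keys and structure below are ours, not part of any Windows API:

```python
# Desired predominant Offline Files state per user category (Table 7).
DESIRED_PREDOMINANT_STATE = {
    "local": "Online",
    "near branch office": "Offline (Slow Connection)",
    "far branch office": "Online",
    "mobile": "Variable",
}

def desired_state(category):
    """Look up the state we expect a user category to spend most time in."""
    return DESIRED_PREDOMINANT_STATE[category]

assert desired_state("near branch office").startswith("Offline")
assert desired_state("far branch office") == desired_state("local")
```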


Deploying Failover Clustering

The Offline Files technology diminishes the need for a highly available file server because users still maintain access to their data through their local cache if the file server is down. However, an unavailable file server will provide a degraded service because at least the following operations will not be possible during an outage:

• Adding and removing users from the deployment

• Synchronizing files between client and server

• Accessing previous versions of files by using Shadow Copy for Shared Folders

• Recovering files from backup and placing the files in their original location

To maintain the highest level of service at all times, we decided to implement a highly available file server by using the Windows Server Failover Clustering technology. We implemented a two-node cluster functioning in active-passive mode.

To build an effective failover cluster, consider the following recommendations:

1. Servers: We recommend that all servers in a cluster are of the same make and model with similar system specifications. Processor, memory, network adapters, and BIOS versions should be consistent across each computer.

The deployment team implemented a dual server failover cluster configuration; the computer specifications are defined in the Implementation section later in this white paper.

2. Network adapters: The network infrastructure that connects your cluster nodes is important to maintaining effective communications. It is advised to avoid having single points of failure where possible and to ensure that there are at least two networks, one to support application or file access and another for internal cluster communications. The use of teamed network adapters for application or file access is typically supported by solution providers. It is also advised to avoid mixed negotiation between the network adapters and the network switch ports. If the network switch port is set to auto negotiate, it is not recommended to force negotiation on the server network adapters.

The deployment team implemented two network adapters per cluster node, one for file access and the other for internal cluster communication. The network adapters allocated for file access are connected to corporate 1-Gbps switch ports, with both the switch ports and the network adapters set to auto-negotiate. The second network adapter, used for cluster communication, is connected through a crossover cable between the two nodes. These adapters are set to auto-negotiate and use automatically assigned IPv6 link-local IP addresses; no further configuration is required on this network.

3. Storage controllers and arrays for failover clusters:

Serial Attached SCSI (SAS) or Fibre Channel: When using SAS or Fibre Channel, the mass-storage device controllers that are dedicated to the cluster storage should be identical in all clustered servers. It is not recommended to mix and match vendor adapters. Additionally, consistent driver and firmware versions should be used.

iSCSI: When using iSCSI, each clustered server must have one or more network adapters or host bus adapters that are dedicated to the clustered storage. The network used for iSCSI should not be used for cross-node cluster communication. We also recommend using a Gigabit Ethernet network or faster. Additionally, teamed network adapters should not be used because they are not supported with iSCSI.


For more information about iSCSI, see iSCSI Cluster Support: Frequently Asked Questions (http://go.microsoft.com/fwlink/?LinkId=61375).

Storage array: We recommend using shared storage that is compatible with Windows Server 2008 failover clusters, which requires support for SCSI Primary Commands-3 (SPC-3). The storage should also support Persistent Reservations (PR) as specified in the SPC-3 standard. You can verify that the storage supports PR, natively or with an appropriate firmware upgrade, by consulting the storage vendor or by running the Failover Cluster Validation Wizard included with the cluster feature.

For a two-node failover cluster, the storage should contain at least two separate volumes (LUNs) configured at the hardware level. One volume functions as a witness disk; the other contains the files that are shared to users. The witness disk is a disk in the cluster storage that is designated to hold a copy of the cluster configuration database. For a two-node cluster, the quorum configuration is Node and Disk Majority, the default for a cluster with an even number of nodes.
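The Node and Disk Majority quorum model can be illustrated with a simple vote count. This is a sketch of the voting arithmetic only, with invented function names, not a model of the full cluster service:

```python
def cluster_has_quorum(nodes_up, total_nodes, witness_disk_online):
    """Node and Disk Majority: each node and the witness disk carry one
    vote; the cluster keeps running while a majority of votes remain."""
    total_votes = total_nodes + 1                     # nodes + witness disk
    votes = nodes_up + (1 if witness_disk_online else 0)
    return votes > total_votes / 2

# Two-node cluster with a witness disk (3 votes total):
assert cluster_has_quorum(1, 2, True)       # one node down: 2 of 3 votes, still up
assert cluster_has_quorum(2, 2, False)      # witness down only: 2 of 3, still up
assert not cluster_has_quorum(1, 2, False)  # node and witness down: 1 of 3, stops
```

This is why the witness disk lets a two-node cluster survive the loss of either single component.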

Multipath I/O software: In a highly available storage fabric, it is possible to deploy failover clusters with multiple host bus adapters by using multipath I/O software. This provides the highest level of redundancy and availability. For Windows Server 2008, the multipath solution must be based on Microsoft Multipath I/O (MPIO). Hardware vendors usually supply MPIO device-specific modules for their devices. Windows Server 2008 includes several device-specific modules as part of the operating system.

The deployment team chose to use an existing shared Fibre Channel storage array, which was used to provide two volumes, one for a 10-GB witness volume, the other for a 2-terabyte volume to support user data. Each host was connected to the fabric via a single host bus adapter without the use of MPIO. Although this could be seen as a Single Point of Failure (SPOF), it was not the team’s objective to eliminate all SPOFs within the deployment because of the added resiliency provided on the clients through Offline Files.

4. Software requirements for a two-node failover cluster: The servers for a two-node failover cluster must run the same version of Windows Server. They should also have the same software updates (patches) and service packs. The deployment team implemented Windows Server 2008 R2 Enterprise on both nodes. No hotfixes or service packs were available at the time of writing this white paper because the version used for the solution was a pre-release build.

Data Protection and Portability

Prior to implementing the solution, users maintained their own user profile folders and corresponding unstructured data on their various client computers, which were a mix of laptops and desktops. The content on these computers was typically unprotected. Users were infrequently impacted by random data loss due to hardware failure, or even accidental data loss due to users wiping and reinstalling their operating system. By implementing data centralization, the end users were able to realize the benefits of a single representation of their user profile folders across all their domain-joined client computers along with an effective data protection model.

The deployment team was tasked with implementing a data protection model that strived to achieve a Recovery Point Objective (RPO) of near zero data loss. The goal was to use a backup infrastructure located in the head office with a near Continuous Data Protection (CDP) model for the unstructured data stored on the file servers. Another goal was to mitigate the performance impact on users while also minimizing the WAN traffic that results from backing up the file server located in the branch offices. The final goal was to provide a service that allowed users to perform their own data recovery operations for previous versions of files and folders without involving IT support staff.

Understanding the challenges

The first challenge for the deployment team was to achieve an effective yet consistent backup window against the servers independent of their geographical location. This has become more of a challenge for many IT administrators who are being put under increased pressure to reduce or even eliminate backup windows. However, as data volumes continue to grow, so too does the required backup window. Data growth is exploding and the constant need to have data accessible to everyone is also increasing dramatically. Often companies are left with little to no backup window and need to continually look at ways to adjust their backup methodologies.

The second challenge was to mitigate the impact of backup itself on the WAN links, which is sometimes difficult to achieve during Full, Incremental, or even Differential backups. With a goal to implement a centralized backup solution that also provided a near CDP capability, it was clear to the team that the typical mid-market backup product would make this challenge hard to achieve.

The third challenge was to limit any sustained, noticeable performance impact during Full, Incremental, or Differential backups.

Backup methodologies

This section provides an overview of the differences between Full, Incremental, and Differential backups.

Full backup

The most complete type of backup is typically referred to as a Full backup. As the name implies, this type of backup makes a copy of all data to another set of media, which can be tape, disk, or alternate media types. The primary advantage of performing a Full backup during every operation is that a complete copy of all data is available on a single set of media. However, the disadvantage is that it takes longer to perform a Full backup than other types, as all the data on the file server to be protected (Source) has to be copied to the backup server (Target). The time to complete the backup operation is dependent on the amount of data and the type of connectivity between the Source and the Target. When using a centralized backup solution where content is to be moved over a WAN link, completing a Full backup can be time consuming and costly especially when the WAN link is limited in bandwidth and charged based on its utilization.

The recovery process from a full data loss event on the Source server requires a single-stage restore operation: restoring the last Full backup completes the restoration.

Many customers, however, choose to employ a weekly Full backup and complement it with an Incremental or Differential backup on a daily or more frequent basis.

Incremental backups

An Incremental backup operation results in copying only the data that has changed since the last Full or the previous Incremental backup operation.

Because an Incremental backup will only copy data since the last backup of any type, it may be run as often as desired, with only the most recent changes stored. The benefit of an Incremental backup is that it copies a smaller amount of data than a Full backup. Thus, these operations will complete faster and require less storage to store the backup.

However, the recovery process from a full data loss event on the Source server may require a multi-stage recovery process. Restoration will require the last Full and all subsequent Incremental backups to be restored.


Differential backups

A Differential backup operation is similar to an Incremental backup the first time it is performed, in that it copies all data changed since the previous backup. However, each time it runs afterwards, it continues to copy all data changed since the previous Full backup. Thus, it consumes more storage than an Incremental backup on subsequent operations, although typically far less than a Full backup.

The recovery process from a full data loss event on the Source server may require a multi-stage recovery process. Restoration will require the last Full and the last Differential backup to be restored.
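The restore chains for the three methodologies can be compared with a short sketch. This assumes a schedule that uses either Incrementals or Differentials after a Full (as in the schemes described above); the function name is invented:

```python
def restore_chain(backups):
    """Given a chronological backup history (each entry is 'full',
    'incremental', or 'differential'), return the indices of the backup
    sets needed to recover from a full data loss on the Source server."""
    last_full = max(i for i, b in enumerate(backups) if b == "full")
    later = range(last_full + 1, len(backups))
    incs = [i for i in later if backups[i] == "incremental"]
    diffs = [i for i in later if backups[i] == "differential"]
    chain = [last_full]
    if diffs:
        chain.append(diffs[-1])   # Full + only the LAST Differential
    else:
        chain += incs             # Full + EVERY subsequent Incremental
    return chain

# Incremental scheme: every set since the last Full must be restored.
assert restore_chain(["full", "incremental", "incremental"]) == [0, 1, 2]
# Differential scheme: only the last Differential is needed.
assert restore_chain(["full", "differential", "differential"]) == [0, 2]
# Full-only scheme: a single-stage restore.
assert restore_chain(["full"]) == [0]
```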

Data Protection Manager

The deployment team’s challenge to implement a backup solution that achieved a near Continuous Data Protection (CDP) model with minimum impact on WAN links was realized with the implementation of Microsoft’s own Data Protection Manager (DPM) 2007 SP1.

The far branch office file server had, at the time of initial deployment, approximately 118 GB of consolidated data. Moving this data on a typical Full backup schedule would have taken more than six hours to complete based on WAN bandwidth and utilization. This backup window was validated by using various mid-market backup engines with both tape and disk as a backup target. Moving this amount of data on a daily basis was not viable due to the impact on the WAN link between the head office and the far branch office. Implementing subsequent Incremental or Differential backups was viable, but neither met the objective of achieving a near CDP model.

The team tested DPM to validate its ability to meet their data protection challenges. The centralized DPM server was used to protect the file servers in the head office and the far branch office. Enabling centralized data protection for the server in the far branch office was the team's biggest concern; these concerns, however, were mitigated by the features offered by DPM.

The team created two DPM protection groups, one for the far branch office server and another for the head office server. This resulted in the creation of an initial replica of the data for each server on a storage pool on the DPM server; the amount of data transferred over the network during this operation is comparable to a standard Full backup performed by other backup products. The time to complete the initial replica of approximately 118 GB of data for the far branch office server over the corporate WAN link was similar to the time to complete the initial replica of approximately 800 GB of data for the head office server over the LAN. Both initial replica jobs completed in under seven hours. See Table 8 for reference.

Time to Complete Initial Full Replica with DPM

Head office: data size ~800 GB; full replica time 6 hours 40 minutes; network type LAN

Far branch office: data size ~118 GB; full replica time 6 hours 27 minutes; network type WAN

Table 8 - Time to complete initial full replica with DPM

After the initial replica was completed, there would be no need to repeat the operation because the replica would be updated with periodic incremental synchronization operations according to a set schedule. This removed the need to repeat scheduled Full backup operations and effectively allowed the team to remove 118 GB worth of network traffic to protect the server located in the far branch office, which otherwise would have impacted the WAN link on a daily or weekly basis depending on data protection needs.
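The WAN traffic avoided by not repeating Full backups can be estimated with a rough model. The figures below use the ~118 GB far branch office replica, the ~50 MB average synchronization from Table 9, and the two-hour synchronization schedule; the function name and the comparison itself are ours:

```python
def weekly_traffic_mb(full_mb, avg_sync_mb, syncs_per_week, repeat_full=True):
    """Rough weekly WAN traffic: a conventional schedule re-sends the Full
    backup every week, while DPM transfers the replica once and afterwards
    ships only the changed data."""
    base = full_mb if repeat_full else 0
    return base + avg_sync_mb * syncs_per_week

# Far branch office: ~118 GB (118 * 1024 MB) of data, synchronized every
# two hours (12 times a day), averaging ~50 MB per synchronization.
syncs = 12 * 7
conventional = weekly_traffic_mb(118 * 1024, 50, syncs)                    # weekly Full + syncs
steady_state = weekly_traffic_mb(118 * 1024, 50, syncs, repeat_full=False)  # DPM after initial replica
assert conventional - steady_state == 118 * 1024  # the repeated Full backup avoided
```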

The deployment team maintained the default incremental synchronization policy, which runs every fifteen minutes for the head office server and every two hours for the far branch office server. This synchronization ensures that any data changes since the last synchronization are transferred and applied to the replica. The team was able to complete synchronization operations on average within a two-minute period. However, one large data change of five gigabytes that occurred between synchronization intervals on the far branch office server took over 3 hours and 30 minutes to complete. Synchronization times vary throughout the day and are impacted by the rate of change in data on the protected servers and by WAN link utilization. Table 9 provides some statistics from the deployment.

Time to Complete Scheduled Synchronizations with DPM

Head office: maximum 221 MB in 1 minute 36 seconds; average 10 MB in 1 minute 7 seconds

Far branch office: maximum 5,258 MB in 3 hours 30 minutes; average 50 MB in 2 minutes 26 seconds

Table 9 - Time to complete scheduled synchronizations with DPM

DPM’s ability to enable an initial replica without a requirement to repeat the operation, along with subsequent incremental synchronization capabilities, allowed the team to achieve most of their data protection requirements.

User-enabled data recovery

The final goal of data protection was to implement a solution that allowed end users to recover previous versions of user profile folder content without the need for any IT involvement.

Shadow Copies for Shared Folders (also known as Previous Versions) enabled the team to achieve a point-in-time copy of files that are located on the file servers. The solution allows end users to recover files that are accidentally deleted or overwritten, or to compare versions of files if and when required. To complete these operations, a user must be working in an Online mode from an Offline Files point of view. If connected to the network, this can be easily achieved by manually transitioning online by clicking the “Work Online” button in Windows Explorer.

Implementing Shadow Copies for Shared Folders is relatively easy, requiring less than four mouse clicks. Most administrators maintain the default snapshot schedule, which is two snapshots per day (7:00 A.M. and 12:00 P.M.), five days per week (Monday to Friday). The frequency can be adjusted and extended throughout the weekend, with a maximum of 64 snapshots per volume. Shadow Copies for Shared Folders can have a slight performance impact due to copy-on-write semantics; on a lightly loaded server, no noticeable performance impact should be detected. However, if the server is expected to be heavily loaded, we suggest allocating dedicated disks to offload I/O activity. The default is to allocate 10% additional capacity for snapshot space.

The deployment team chose to maintain two snapshots per day but extended the operation to occur on Saturday and Sunday. The server load was not expected to be too heavy, so storage was not dedicated specifically for the snapshots. Snapshot allocation was maintained on the same volume that stored the data, with the default space allocation of 10%.
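The 64-snapshot-per-volume limit determines how far back users can reach, which can be computed directly. The function name is ours; the 64-snapshot limit is the documented per-volume maximum:

```python
def retention_days(snapshots_per_day, days_per_week, max_snapshots=64):
    """How far back Previous Versions can reach: Windows keeps at most 64
    snapshots per volume, so the oldest snapshots age out beyond that."""
    snapshots_per_week = snapshots_per_day * days_per_week
    return max_snapshots * 7 / snapshots_per_week

# Default schedule (2 per day, Monday-Friday): ~45 calendar days of history.
assert retention_days(2, 5) == 44.8
# The team's schedule (2 per day, 7 days a week): 32 days of history.
assert retention_days(2, 7) == 32.0
```

Extending the schedule to weekends therefore trades retention depth for weekend coverage.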

DPM provides a capability similar to Shadow Copies for Shared Folders, known as End-User Recovery, which is an effective solution. DPM can increase the number of snapshots that an end user can recover from, and it does not require the 10% storage allocation or have any performance impact on the file servers. The deployment team chose not to implement End-User Recovery because some of the dependencies were outside the group's control, specifically the requirement to complete an Active Directory® schema update on corporate domains.

For more information, see Planning for End-User Recovery (http://go.microsoft.com/fwlink/?LinkId=166655).


Data recovery planning

Some organizations may consider that using Offline Files will eliminate the need to maintain normal backup activities. They incorrectly believe that having their users’ unstructured data located on all client computers and subsequently on the file servers will meet their data protection needs. This is not the case, and backing up the file servers is a key requirement and the only way to ensure a full recovery of data in the unlikely event that the integrity of the data is compromised on the file servers.

Consider the scenario where the back-end storage array suffers a multi-drive failure that compromises a RAID group, resulting in the loss of the NTFS volume storing all users’ data. In this case an IT administrator must repair the RAID group and create a new volume to enable recovery of the data. Note that the content of the Offline Files cache cannot be resynchronized back to the new volume as a mechanism to recreate the data set. The Offline Files synchronization algorithm interprets files and folders that are missing from the share as having been intentionally deleted, and it deletes them from the client’s local cache on the next synchronization operation. Therefore, if the IT administrator creates the new volume and then creates a share without restoring the data first, the content of the Offline Files cache can be deleted.
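The hazard can be modeled with a deliberately simplified sketch of the synchronization rule (illustrative only, not the actual Offline Files algorithm; it ignores uploads of locally modified files):

```python
# Illustrative sketch (NOT the real Offline Files implementation): model why
# recreating an empty share before restoring the data destroys the cache.
def synchronize(server_share: set, client_cache: set) -> set:
    """Return the client cache after a sync pass. Files in the cache that
    no longer exist on the share are interpreted as intentional deletions
    and are purged from the cache."""
    return {f for f in client_cache if f in server_share}

cache = {"resume.docx", "budget.xlsx"}

# Wrong order: the share is recreated empty before the volume restore,
# so every cached file looks deleted on the server and the cache is wiped.
assert synchronize(set(), cache) == set()

# Correct order: restore from backup first, then recreate the share;
# the cache survives and client changes can sync back to the server.
assert synchronize({"resume.docx", "budget.xlsx"}, cache) == cache
```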

The correct procedure in the failure scenario described above is for the IT administrator to repair the RAID group, create a new volume, and then perform a full volume restore from the last known good backup, without creating a share for users to access the data. Only when the restore is complete should the original share be recreated, allowing users to reconnect and synchronize the changes in their Offline Files cache back to the file server. This allows for a complete recovery that achieves a zero-data-loss Recovery Point Objective.

Having a well-documented, step-by-step restoration procedure is a key requirement for preventing data loss in any Folder Redirection and Offline Files deployment.

Protecting the Data Protection Manager infrastructure

Microsoft Data Protection Manager 2007 (DPM) enables protection of file and application data and provides fast and efficient recovery. Like any other infrastructure server, the DPM server needs to be protected from data loss or corruption.

For more information about protecting the DPM server, see Preparing for Disaster Recovery (http://go.microsoft.com/fwlink/?LinkId=166657).

Data Security and Privacy Considerations

Before implementing the solution detailed in the planning section, it is important to discuss the security and privacy considerations. This section covers the relevant security threats, vulnerabilities, and possible mitigation options.

Figure 6 shows the architecture of the planned solution.

Figure 6 - Solution architecture

From a security point of view, there are three areas of concern:

1. Back-end infrastructure (folder redirection and backup servers)
2. Client computers (desktops and laptops inside and outside corporate facilities)
3. Network (corporate and remote access)

For each of these areas, we will list the potential security risks and then discuss the mitigation options available.

Back-end infrastructure security

Threat #1 - Physical access to back-end infrastructure

An attacker with physical access to the back-end server infrastructure may be able to access user files and other data, resulting in data theft, tampering, or destruction. For example, an attacker with physical access may use any of the following methods to access or tamper with the server’s content:

• Remove the storage and attach it to another computer to access its content.
• Perform a parallel operating system installation, or boot the server to an alternate operating system, to enable access to content on the volumes.
• Use third-party utilities to change the administrator password.

Threat #1 mitigation

The simplest way to mitigate this threat is to place the back-end infrastructure in a secure data center where access is limited to a set of known, authorized personnel. In case an attacker still manages to gain physical access to the system, data security can be enhanced by using encryption technologies such as Encrypting File System (EFS) and BitLocker™ Drive Encryption.

Threat #2 - Unauthorized data access by other users

An attacker tries to gain access to data belonging to other users residing on the same server.

Threat #2 mitigation

Folder Redirection, by default, grants users exclusive rights to their redirected content by setting appropriate access control lists (ACLs) on the directories.

Threat #3 - Unauthorized data access by server administrator

A rogue administrator of the server tries to read, tamper with, or delete files belonging to users.

Threat #3 mitigation

Folder Redirection ensures that the administrator of the server does not have NTFS permissions to users’ redirected folders. However, the administrator could still gain access by taking ownership of the folders. This kind of operation can be tracked by implementing effective event monitoring using security auditing tools such as Microsoft System Center Operations Manager.

Data on the server can also be secured by using Encrypting File System (EFS), a powerful feature for encrypting files and folders on client computers and file servers. It enables users to help protect their data from unauthorized access by other users or rogue administrators. For more information, see Encrypting File System (http://go.microsoft.com/fwlink/?LinkId=166658).

The measures described above do not provide foolproof mitigation against a rogue administrator trying to access the content, because any computer administrator can tamper with the operating system.

For more information, see The Administrator Accounts Security Planning Guide in the Microsoft Download Center (http://go.microsoft.com/fwlink/?LinkId=166653).

Threat #4 - Host security

Attackers try to exploit known security flaws in the operating system. The potential impact of these attacks can include server downtime, loss of data integrity, and stolen intellectual property. For more information about security vulnerabilities, see The STRIDE Threat Model on MSDN (http://go.microsoft.com/fwlink/?LinkId=166659).

Threat #4 mitigation

Host security can be enhanced by proper patching and update management of all the servers. Update management is the process of controlling the deployment and maintenance of interim software releases including patches into production environments. For more information, see the Update Management Process (http://go.microsoft.com/fwlink/?LinkId=166661).

Client computer security

Threat #5 - Stolen laptops

The mobility of laptops demands additional protection of the data they store. A stolen laptop can mean loss of sensitive data stored on that computer.

Threat #5 mitigation

The simplest mitigation is to prevent the theft in the first place: keep a laptop containing confidential information with you at all times when in a public environment. However, thefts do occur, and Windows provides mechanisms that make the data on a stolen laptop difficult for a thief to read:

Use of password: If your laptop is joined to a domain, each time you start it—even when you aren’t physically on the corporate network—you still have to enter your password.

Encrypting files: The use of encryption techniques like EFS and BitLocker ensures that the data is not readily readable by the thief.

For more information, see The Case of the Stolen Laptop: Mitigating the Threats of Equipment Theft (http://go.microsoft.com/fwlink/?LinkId=166662).

Threat #6 - User data accessible to client computer administrators

This threat is different from the earlier one about server administrators accessing the contents of redirected folders on the server. The Offline Files solution adds a risk element because the user’s data is downloaded to every domain-joined computer to which the user logs on. The most common scenario is a user logging on to a shared computer or another person’s computer: the user’s documents are downloaded into the local cache of that computer (provided Offline Files is enabled on it) and remain there after the user logs off. The computer administrator can then access this data, for example by taking ownership of the Offline Files cache folder.

Threat #6 mitigation

Offline Files cache encryption allows for cached copies of redirected content to be encrypted by using EFS. When this option is enabled, each file in the Offline Files cache is encrypted with a public key from the user who cached the file. Thus, only that user has access to the file, and even local administrators cannot read the file without having access to the user's private keys. If multiple users share a computer and more than one user tries to use an encrypted, cached copy of a particular file, only the first user to cache the file can access the offline copy of the file. Other users will be able to access that file only when working online.

EFS protection policies can be centrally controlled and configured for the entire enterprise by using Group Policy. When the Encrypt the Offline Files cache Group Policy setting is enabled, the system attempts to encrypt all files in the Offline Files cache. The cached copy on the local computer is affected, but the associated network copy is not. The user cannot decrypt Offline Files through the user interface. Note that the cache is sometimes in a partially encrypted state (for example, when some files marked for encryption are open); these files are encrypted during subsequent logons. After all files are encrypted, the cache enters a “fully encrypted state.”

When this Group Policy setting is disabled, all files in the Offline Files cache are decrypted during the next logon. Because the encryption/decryption task runs in the context of a given user, it attempts to process any file to which that user has access.

When the Encrypt the Offline Files cache Group Policy setting is not configured, encryption of the Offline Files cache can also be controlled by the user through the user interface.

An important point to note is that even though the data is encrypted on the client computer, it is transferred as clear text over the network during synchronization or file access operations. We will discuss this further in the network security section.

Administrators can enable Folder Redirection only on specific computers belonging to a user, instead of on all computers (the default behavior). This can be implemented by using Group Policy and some custom scripts, which are discussed in detail in the implementation section of this white paper. Once enabled, the administrator can specify on which computers Folder Redirection should be enabled for each user, so that content from the user’s profile folders is not cached on every computer to which the user logs on.

Note that the computer-specific Folder Redirection feature was not designed as a security mechanism and can easily be overridden by the local administrator of a computer. In summary, a best practice is not to log on to untrusted computers or to computers with untrusted administrators.

Threat #7 - Re-use or decommissioning of computers

Computers can be reused by people other than the initial owner or user. The original owner’s data that is stored on these computers is susceptible to being accessed by the new owners.

Similarly, computers that are decommissioned by the enterprise might contain data that can be stolen in the process.

Threat #7 mitigation

To guard against threats mentioned above and to safeguard data stored on a stolen laptop, a data protection feature named BitLocker Drive Encryption is available for both Windows Server and Windows client operating systems. BitLocker Drive Encryption provides a seamless way to encrypt all data on an entire hard disk volume. When BitLocker is configured, it works transparently in the background and does not affect typical use of the computer or its applications. BitLocker encrypts the entire volume, so it can prevent many attacks that try to circumvent the security protections in Windows that cannot be enforced before Windows has started.

BitLocker is a full-volume encryption mechanism that encrypts all sectors on the system volume on a per-computer basis, including operating system, applications, and data files. BitLocker helps ensure that data stored on a computer is not revealed if the computer is tampered with when the installed operating system is offline. It is designed for systems that have a compatible Trusted Platform Module (TPM) microchip and BIOS. If these components are present, BitLocker uses them to provide enhanced data protection and to help ensure early boot component integrity.

BitLocker provides the following two primary capabilities:

Per-computer encryption by encrypting the content of the operating system volume. Attackers who remove the volume will not be able to read it unless they also obtain the keys, which in turn requires attacking the recovery infrastructure or the TPM on the original computer.

Full-volume encryption by encrypting the entire contents of protected volumes, including the files used by Windows, the boot sector, and slack space formerly allocated to files in use. An attacker is therefore prevented from recovering useful information by analyzing any portion of the disk without recovering the key.

BitLocker can help create a simple, cost-effective decommissioning process. By leaving data encrypted by BitLocker and then removing the keys, an enterprise can permanently reduce the risk of exposing this data. It becomes nearly impossible to access BitLocker-encrypted data after removing all BitLocker keys because this would require cracking 128-bit or 256-bit AES encryption.

For more information about EFS and BitLocker, see the Data Encryption Toolkit for Mobile PCs (http://go.microsoft.com/fwlink/?LinkId=166663).

Network security

Threat #8 - Network security

The measures discussed in the previous sections secure the data on the client and server sides. But the network, if not properly secured, is susceptible to unauthorized monitoring and data access: an attacker monitoring the network could obtain, and possibly tamper with, sensitive data transferred across it.

Other network threats include interference and impersonation attacks. Examples of interference attacks are viruses and worms, which are self-propagating malicious code that executes unauthorized computer instructions.

Impersonation attacks include IP address spoofing, in which a rogue site or attacker intercepts authenticated communications between legitimate users and presents altered content as legitimate; man-in-the-middle attacks, in which captured packets are tampered with and reinserted into an active session; and Trojan horses, which are inserted to reconfigure network settings or to grant administrator access and permissions to an unauthorized person. All of these attacks may cause unauthorized data access, data tampering, or data loss.

Threat #8 mitigation

Before discussing network security, it is important to understand the possible access scenarios. For our file server deployments, there are three scenarios, based on the clients’ geographical location and connectivity to the corporate network:

• Client computers that connect to the server through the company’s local area network (LAN)

• Client computers that connect to the server through the company’s wide area network (WAN)

• Client computers that are mobile and connect to the server through remote access methods such as Remote Access Service (RAS) and DirectAccess (new in Windows 7)

Using features present in the Windows operating system, it is possible to isolate the domain and server resources to limit access to authenticated and authorized computers. For example, a logical network consisting of computers that share a common Windows-based security framework can be created with a set of requirements for secure communication. Each computer on the logically isolated network can provide authentication credentials to the other computers on this network to prove membership. Requests for communication that originate from computers that are not part of the isolated network are ignored.

Network isolation is based on the Internet Protocol security (IPsec) and Windows Firewall with Advanced Security suite of security protocols. Windows-based isolated networks do not require ongoing maintenance based on changes in network topology or computers moving to different switch ports. The result is an isolated network that leverages your existing Windows infrastructure with no additional costs associated with network reconfiguration or hardware upgrades.

By leveraging Active Directory Domain Services (AD DS) domain membership and Group Policy settings, IPsec settings can be configured at the domain, site, organizational unit, or security group level.
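As an illustration only (this is not a command taken from the deployment, and the rule name is hypothetical), a connection security rule of the kind used for domain isolation can be created with the built-in netsh tool, requiring authentication for inbound connections and requesting it for outbound connections using computer Kerberos credentials:

```
netsh advfirewall consec add rule name="Domain Isolation" ^
      endpoint1=any endpoint2=any ^
      action=requireinrequestout auth1=computerkerb
```

In practice, such rules are typically distributed through the Windows Firewall with Advanced Security node of a Group Policy object rather than created on each computer individually.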

The computers on the isolated network are members of an Active Directory domain. They include computers locally connected to the organization network (through wired or wireless LAN connections) and computers remotely connected to it (through a remote access connection) by using a virtual private network (VPN) connection across the Internet.

The IPsec security mechanisms use cryptography-based protection services, security protocols, and dynamic key management. IPsec transport mode is based on an end-to-end security model, establishing trust and security from a source IP to a destination IP address (peer-to-peer). The only computers that must be aware of IPsec are the sending and receiving computers. Each handles security at its respective end and assumes that the medium over which the communication takes place is not secure.

For the home/remote access scenario, data security is implemented by using a VPN, enabling home-based or mobile users to access the server on the corporate network. A VPN encapsulates the data and uses a tunneling protocol to transmit it. This document does not discuss VPN security methods in detail. For more information about VPNs, see Virtual Private Networks (http://go.microsoft.com/fwlink/?LinkId=166664).

For more information about network security using server and domain isolations, see Introduction to Server and Domain Isolation (http://go.microsoft.com/fwlink/?LinkId=166666).

Implementation

The implementation part of this paper is not intended to be a step-by-step deployment guide. This section will focus on providing guidance on the various customizations and hardware platforms that the deployment team used to provide the Offline Files and Folder Redirection solution.

Server and Storage Hardware

The team did not acquire hardware specifically for the deployment. The servers and storage were allocated from a pool of hardware available to the deployment team. The server hardware specifications are more than required to support the deployment load and should not be considered as base requirements. The choice to use Failover Clustering required the use of a shared storage array which was satisfied by an available Fibre Channel storage area network (SAN) with enough capacity to cater to the team’s objective.

Domain Representation

The domain topology below is provided as a representation of the internal Active Directory forest. The actual names of the domains are not representative of the internal deployment and are used here to highlight the location of users and resources within the namespace as it relates to the Folder Redirection and Offline Files deployment.

Figure 7 - Domain representation

Figure 7 shows four Active Directory forests: domain1, domain5, domain6, and domain8. A two-way transitive trust, represented by the dotted lines, is established between each pair of forests. Domain1 has three child domains: domain2, domain3, and domain4; the parent-child trust relationships are represented by the solid lines. The diagram is not intended to represent a required domain topology for enabling the solution, but rather illustrates the complexity that the deployment team had to take into consideration when planning the deployment.

The Folder Redirection and Offline Files server resources are members of domain5. The servers maintain user content, and meet data protection needs, for users distributed across the various forests within the organization.

Failover cluster hardware

The failover cluster is a dual node configuration that uses identical server specifications, BIOS, and driver revisions across both computers.

The servers are configured with 32 GB of system memory (RAM), which is more than the deployment needs but was already present in the hardware allocated from the pool. 4 GB to 8 GB of RAM would have sufficed, but the team chose to keep the 32 GB to leverage a new feature in Windows Server 2008 R2, known as block caching, that reduces the time needed to complete a check disk (chkdsk) operation.

The operating system is installed on a pair of mirrored 146-GB drives connected to an internal SAS PCI RAID Controller. The failover cluster has two Logical Unit Numbers (LUNs) allocated, a 10-GB RAID5 LUN for Quorum, and a 2-terabyte RAID5 LUN to support a file server for Folder Redirection content and a script repository.

Table 10 defines clustered server specifications for reference.

Clustered Server Specifications

Processor 2 x Quad-Core - Processors (3.00 GHz)

Memory 32 GB

Network adapter 2 x 1 Gbps (1 for file traffic, 1 for cluster communication connected at 1 Gbps)

Internal controller Embedded RAID Controller (read cache only)

Internal drives 2 x 146-GB SAS drives

OS LUN / volume 146-GB Mirror for C: Drive (operating system)

External controller Single 4-Gbps host bus adapter – default parameters

External drives (FC array) 9 x 300-GB FC

Data LUN / volume 2-terabyte RAID5 (Data and Scripts); 10-GB RAID5 (Quorum)

Table 10 - Clustered server specifications

Stand-alone server hardware

The stand-alone server configuration maintained in the far branch office location is a relatively basic specification. The server uses a single 1-Gbps network adapter albeit connected into a 100-Mbps switch. The 100-Mbps setting provides adequate throughput to support the local user community hosted on the server. The server uses direct-attached storage (DAS) configured into a single RAID5 set across ten 146-GB drives with two logical volumes. The first volume is 146 GB in size for the operating system; the second is a 1-terabyte volume for Folder Redirection content and a script repository.

Table 11 defines stand-alone server specifications for reference.

Stand-alone Server Specifications

Processor 1 x Dual Core Processor (3.00 GHz)

Memory 4 GB

Network adapter 1 x 1 Gbps (file traffic at 100 Mbps)

Internal controller Embedded SAS Array Controller (read cache only)

Internal drives 10 x 146-GB RAID5 SCSI drives

OS LUN / volume 146 GB (operating system)

Data LUN / volume 1 terabyte (data & scripts)

Table 11 - Stand-alone server specifications

Operating system configuration

The operating system used for both the failover cluster and stand-alone server is Windows Server 2008 R2. The initial deployment started on an early pre-released milestone build and rolled to various builds as they became available, including Beta and Release Candidate. At the time of writing this paper, the final Release to Manufacturing (RTM) version of Windows Server 2008 R2 was not available, so no hotfixes or service packs are applicable.

More details about roles, role services, and additional features installed on the servers can be seen in Table 12.

Details Failover Cluster Servers Stand-alone Servers

Operating system version Windows Server 2008 R2 Enterprise Windows Server 2008 R2 Enterprise

Install drive C:\ C:\

Partition size 146 GB 72 GB

Domain membership Domain5 Domain4

Windows Update Download updates but let me choose whether to install them

Download updates but let me choose whether to install them

Hotfixes Not applicable at time of deployment

Not applicable at time of deployment

Roles installed File Services File Services

Role services installed File Server Resource Manager File Server Resource Manager

Features installed Windows Server Backup, Failover Clustering

Windows Server Backup

Backup DPM Agent DPM Agent

Virus protection Microsoft Forefront™ Microsoft Forefront™

Table 12 - Operating system configuration

Figure 8 - Failover cluster representation

The failover cluster represented in Figure 8 is a relatively standard dual-node configuration, whose nodes we will refer to as Node 1 and Node 2. There is a single highly available (HA) file server instance, which is active on only one node at a time; this model is typically classed as an active-passive configuration. The file server runs, for example, on Node 1 and can be moved to Node 2 as part of a controlled move to address maintenance needs on Node 1. In the unlikely event that Node 1 terminates due to a hardware or software failure, the HA file server fails over to Node 2 automatically, restoring service to users.

There are two network adapters per cluster node: one for client file and cluster management traffic, which we will refer to as Network 1, and one for internal cluster heartbeat communications, which we will refer to as Network 2. Network 1 is connected to the corporate network and uses DHCP-assigned IPv4 and IPv6 addresses. Network 2 uses a crossover cable, which is ideal for a two-node cluster configuration, and an IPv6 link-local address. All network adapter properties are left at their defaults.

Node 1 and Node 2 are connected via a single 4-Gbps host bus adapter to a Fibre Channel storage array that is used to provision two LUNs as required for the deployment. The first LUN is a 10-GB LUN for the Quorum / Witness resource; the second is a 2-terabyte LUN to support a highly available file server called “File-Server-1,” which is used to maintain all user folder redirected content and a series of scripts required for the deployment.

The single host bus adapter connection could be seen as a possible single point of failure and ideally should be eliminated by adding a second host bus adapter to Node 1 and Node 2. This would require a second fibre switch and the installation of Microsoft Multipath I/O (MPIO) or an appropriate third-party device-specific module. The team did not have access to a second switch at the time of deployment and moved forward understanding that the single host bus adapter was not a best practice configuration.

No other modifications were made to the cluster or default parameters for resource restart. Failover and failback policies were maintained.

File-Server-1 (Folder Redirection and Script Content – Head Office)

The highly available file server supporting “File-Server-1” maintains a number of resource types, including a network name with corresponding IP addresses (both IPv4 and IPv6), a file server, a disk drive, and eight shares.

Figure 9 shows the Failover Cluster Manager snap-in layout for reference with the appropriate resources.

Figure 9 - Failover Cluster Manager MMC representation

The Logon scripts$ share is used to support a number of scripts required for the deployment with the Data$ share used as a location to store information collected by the scripts. The scripts will be covered in more detail in the following Group Policy Object (GPO) section.

The team chose to use six folder redirection shares to provide granular access to users from the various resource domains where their accounts are maintained. This allowed the team to view the data structure and assign file screening and quota settings, while also achieving a granular Storage Report view within File Server Resource Manager on a per-domain basis.

Figure 10 highlights a Windows Explorer view of the share and directory structure for reference for File-Server-1.

Figure 10 - Windows Explorer share representation

File-Server-2 (Folder Redirection and Script Content – Far Branch Office)

File-Server-2 uses the same directory structure to support the far branch office. It is a stand-alone (non-clustered) configuration, but the same principles apply with regard to directory structures and related settings.

File Server Resource Manager

File Server Resource Manager is implemented to enable quotas and file screening. During the initial stage of deployment, the team used soft quotas and passive file screening so that capacity limits or unsupported file types would not block folder redirection and impact users.

Implementing Quota

The team created a quota template and set it to auto-apply to each of the six folders corresponding to the shares created on the file servers. The template initially enables a soft quota that generates an event log entry and an e-mail notification to the administrator and the user when the user breaches 90% of the 10-GB quota, followed by another series of notifications at 100%.

Users are not impacted if they exceed the quota during enrollment, but they are advised to clean up unneeded content or to request an increase in the 10-GB quota allocation with appropriate justification.
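The threshold arithmetic is simple; the following sketch mirrors the 10-GB limit and 90%/100% thresholds described above (the function and the notification wording are ours, not part of the FSRM configuration):

```python
# Sketch of the soft-quota notification logic described above. The 10-GB
# limit and the 90%/100% thresholds come from the deployment; in FSRM the
# actions at each threshold are an event log entry and e-mail notifications.
GB = 1024 ** 3
QUOTA_LIMIT = 10 * GB
THRESHOLDS = (0.90, 1.00)   # warn at 90%, notify again at 100%

def breached_thresholds(usage_bytes: int) -> list:
    """Return the quota thresholds a user's current usage has crossed."""
    return [t for t in THRESHOLDS if usage_bytes >= t * QUOTA_LIMIT]

print(breached_thresholds(5 * GB))          # -> []
print(breached_thresholds(9 * GB))          # -> [0.9]   (90% of 10 GB)
print(breached_thresholds(int(10.5 * GB)))  # -> [0.9, 1.0]
```

With a soft quota, crossing a threshold only triggers the notifications; with the hard quota applied later, writes beyond the limit are refused.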

Below are representative figures of the Quota template and the File Server Resource Manager MMC, showing a custom template with a 10-GB threshold, a soft quota, and threshold warnings delivered by e-mail and the event log.

Figure 11 - File Server Resource Manager Quota template properties

Figure 12 - File Server Resource Manager MMC – Custom Quota template (Redirection)

Figure 13 - File Server Resource Manager MMC – Quotas

After the users have been enrolled and running within the deployment for a period of time, the team changes the individual quotas to enforce a hard quota threshold, preventing users from exceeding their allocated limits. The process for enrolling users is covered in the Operational Procedures section of this white paper.

Implementing file screens

The team created a file screening template and applied it to each of the six folders corresponding to the shares created on the file servers. The template initially enables a passive screen that generates an event log entry and an e-mail notification to the administrator and to any user who attempts to save a file with a screened extension. The team chose to screen PST file types.

PST files are used by the Microsoft Outlook e-mail client to store content outside of the online mailbox. The deployment team did not want to maintain PST files on the file servers, to ensure that users’ e-mail lifecycle is managed within their online mail store.

During enrollment, users are notified that they are not allowed to store PST files on the file servers. They are warned that these files will not be backed up as part of the data protection model and that, after a transition period, an active screen will be applied to prevent PST files from being stored on the server. Users who have a PST file stored on the server are advised to move the file to an alternate storage location.

After an active screen is enforced, the user will be prevented from storing PST files in the future.
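The passive-to-active transition can be sketched as follows. This is an illustrative model only, not the FSRM file screening implementation; the notification string is an assumption standing in for the e-mail and event log actions described above.

```python
# Illustrative sketch (not the FSRM implementation) of passive vs. active
# file screening for *.pst files.

SCREENED_EXTENSIONS = {".pst"}

def screen_file(filename, active=False):
    """Return (allowed, notifications) for an attempt to save a file.

    A passive screen allows the save but notifies the user and the
    administrator and writes an event log entry; an active screen
    blocks the save outright.
    """
    notifications = []
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in SCREENED_EXTENSIONS:
        notifications.append("e-mail user and administrator; write event log entry")
        if active:
            return False, notifications   # active screen: save is blocked
    return True, notifications            # passive screen: save succeeds

# Passive phase: the PST is saved, but user and administrator are notified.
ok_passive, _ = screen_file("archive.pst", active=False)
# Active phase: the same save attempt is rejected.
ok_active, _ = screen_file("archive.pst", active=True)
```

Unscreened file types are unaffected in either phase.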

Below are sample representations of the File Server Resource Manager MMC showing File Group Properties, File Screen Template Properties, File Screens, and File Screen Properties (passive screening).

Figure 14 - File Server Resource Manager MMC – Custom File Group Properties for *.pst files

Figure 15 - File Server Resource Manager MMC – File Screen Template Properties


Figure 16 - File Server Resource Manager MMC – File Screens

Figure 17 - File Server Resource Manager MMC – File Screen Properties (Passive screening)


Shadow Copy for Shared Folders

Shadow Copy for Shared Folders is enabled on the volume supporting Folder Redirection content. The default schedule of two snapshots per day is maintained, although the team extended the schedule to also run snapshots on Saturday and Sunday to capture changes that occur over the weekend. As noted in the Planning section of this white paper, the team expected a relatively light I/O load and did not allocate dedicated storage for shadow copies.

Figure 18 - Shadow Copy for Shared Folders and schedule

Group Policy settings

The deployment team implemented a number of Group Policy objects (GPOs) linked to the root of each domain to enable Folder Redirection and to enhance the user experience based on location and client type. The GPOs are scoped to users and computers via global security group membership, with further scoping by WMI filters whose queries detect operating system versions and computer classes (desktop or laptop) where appropriate.


The following tables (Tables 13 and 14) describe the GPOs and how they are scoped to security groups.

Policy Name Description

Policy0 Computer Specific Enablement

Policy1 Computer Specific Folder Redirection for Head and Near Branch Offices

Policy2 Computer Specific Folder Redirection – Far Branch Offices

Policy3 Slow Link and Background Synchronization for Near Branch Office Desktops

Policy4 Slow Link and Background Synchronization for Windows Vista Laptops

Policy5 Background Synchronization for Windows 7 Laptops

Policy6 Exclusion List

Policy7 Offline Directory Rename Delete – Windows Vista

Table 13 - Policy setting definitions

User Location and Security Groups and Object

Domains by location: Headquarters – Domain2, Domain5, Domain7, Domain9; Near branch – Domain3; Far branch – Domain4

Security group | Locations where the group is used | Object added
Policy0_SG | All locations (every domain) | User & Computer
Policy1_SG | All locations (every domain) | User
Policy2_SG | Far branch only (Domain4) | User
Policy3_SG | Near branch only (Domain3) | User & Computer
Policy4_SG | All locations (every domain) | User & Computer
Policy5_SG | All locations (every domain) | Computer
Policy6_SG | All locations (every domain) | Computer
Policy7_SG | All locations (every domain) | Computer

Table 14 - User location and security groups and object

51

Page 52: Implementing an End-User Data Centralization Solution

The GPO identified as Policy1, for example, enables Computer-Specific Folder Redirection for head offices and near branch offices for users that are made members of a global security group called Policy1_SG within Domain2. The same GPO, which is duplicated across the other domains, is applied to users that are made members of the corresponding global security group called Policy1_SG in Domain5, Domain7, Domain9, Domain3, and Domain4.

The GPO identified as Policy6, as another example, enables an Exclusion List against all computers that are made members of a global security group called Policy6_SG within Domain2. The same GPO, which is duplicated across the other domains, is applied to computers that are made members of the corresponding global security group called Policy6_SG in Domain5, Domain7, Domain9, Domain3, and Domain4.

Further scoping is achieved with the use of WMI filters that are discussed within the following GPOs.

Computer-specific Folder Redirection GPOs

The team implemented a custom solution requiring two GPOs to enable computer-specific folder redirection for all users enrolled in the service. For the purpose of documentation we will refer to the GPOs as “Computer-Specific Enablement” and “Computer-Specific Folder Redirection.”

Computer-specific enablement GPO

The first GPO, “Computer-Specific Enablement,” defined in the table below, invokes both user configuration and computer configuration parameters to call logon and startup scripts. This GPO prepares the client computer for Folder Redirection by creating a custom WMI namespace, creating a registry key, and querying a database table to determine if a user has Folder Redirection enabled on a specific client computer. This GPO does not perform Folder Redirection, but establishes a series of prerequisites.

Computer-Specific Enablement

General policy settings

Links: Applied to: See the “User Location and Security Groups and Object” table
Security filtering: Applied to: Policy0_SG
Delegation: Deployment team delegated Edit settings permission

User Configuration
Policies\Windows Settings\Scripts\Logon: See Appendix A (Computer Specific Folder Redirection Logon Script)

Computer Configuration
Policies\Windows Settings\Scripts\Startup: See Appendix A (Computer Specific WMI Namespace Installation)

Table 15 - Computer specific enablement

The startup script, which we identify as the “Computer-Specific WMI Namespace Installation” (outlined in Appendix A), is used to create a WMI namespace called “MachineBasedFR” on all client computers enrolled in the service. The namespace is created by using the “Computer Specific MOF,” also outlined in Appendix A. Because the policy is applied at startup, the client computer must be restarted to create the WMI namespace.

The logon script, which we identify as the “Computer-Specific Folder Redirection Logon Script” (outlined in Appendix A), runs during user logon and determines whether Folder Redirection has been enabled for the specific user on the client computer. The logon script verifies that the client operating system is Windows Vista or later, creates a temporary working location on the client computer, copies two custom files (MachineBasedFR.exe and MachineBasedFR.config), and executes MachineBasedFR.exe.


The executable performs two tasks.

1. The first task uses the MachineBasedFR.config file (a standard .NET application configuration file) to query a database table and determine if the user has the client computer enabled for redirection.

2. The second creates a registry key called MachineSpecificFR within the HKEY_CURRENT_USER hive, containing a REG_DWORD named ApplyFR.

The sample table below maps the users to specific client computers on which redirection should be enabled:

<User1>:<Machine1>,<Machine2>,<Machine3>
<User2>:<Machine2>
<User3>:<Machine4>,<Machine5>

If Folder Redirection is enabled for the client computer based on the table query, MachineBasedFR.exe sets the ApplyFR registry value to 1; otherwise, the default value of 0 is maintained.
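The decision MachineBasedFR.exe makes can be sketched as follows. This is a hedged illustration only: the real tool queries a database table via MachineBasedFR.config, whereas here the table is represented as the user-to-machine mapping text shown above, with hypothetical user and machine names substituted for the placeholders.

```python
# Hedged sketch of the MachineBasedFR.exe decision logic. The real tool
# queries a database table; here the table is modeled as text in the
# <User>:<Machine1>,<Machine2> format from the sample above.

SAMPLE_TABLE = """
User1:Machine1,Machine2,Machine3
User2:Machine2
User3:Machine4,Machine5
"""

def parse_mapping(table_text):
    """Parse 'user:machine1,machine2' lines into a dict of user -> set of machines."""
    mapping = {}
    for line in table_text.strip().splitlines():
        user, machines = line.split(":", 1)
        mapping[user.strip()] = {m.strip() for m in machines.split(",")}
    return mapping

def apply_fr_value(user, computer, mapping):
    """Return 1 if Folder Redirection is enabled for this user on this
    computer (the value written to the ApplyFR REG_DWORD under the
    MachineSpecificFR key in HKEY_CURRENT_USER), otherwise the default 0."""
    return 1 if computer in mapping.get(user, set()) else 0

mapping = parse_mapping(SAMPLE_TABLE)
```

The subsequent WMI filter then simply tests whether ApplyFR equals 1 before the redirection GPO applies.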

Computer-specific Folder Redirection GPO

The second GPO, “Computer-Specific Folder Redirection,” defined in the table below, specifies User Configuration parameters and is scoped to users via a WMI filter that evaluates the following query to be true: “SELECT ApplyFR FROM MachineBasedFR WHERE ApplyFR = 1”.

The Folder Redirection setting defines the path “\\File-Server-1\UserData_Domain(“X”)\%username%\documents\”, which is the target for the user's Documents folder. Other folders redirected by the policy setting but not listed in the table are Contacts, Desktop, Downloads, Favorites, Links, My Music, My Pictures, My Videos, Saved Games, and Searches.

The default settings are maintained to grant users exclusive rights to their redirected content and to move the contents of Documents to the new location. The team also enabled a non-default option to redirect content back to the local user profile location if a user is removed from the deployment.

Computer-Specific Folder Redirection – Head and Near Branch Offices

General policy settings

Links: Applied to: See the “User Location and Security Groups and Object” table
Security filtering: Applied to: Policy1_SG
WMI filtering: WMI filter to identify:

SELECT ApplyFR FROM MachineBasedFR WHERE ApplyFR = 1

Note: MachineBasedFR refers to a custom WMI namespace
Delegation: Deployment team delegated Edit settings permission

User Configuration

Windows Settings\Folder Redirection: Documents folder used for reference in this policy setting example

Target tab
Setting: Advanced – Specify location for various user groups
Security Group Membership:
Group: Domain X Advanced Folder Redirection Global Security Group
Path: \\File-Server-1\UserData_domain(“X”)\%username%\documents\


Note: “X” represents the user’s domain name for redirecting content to the appropriate directory/folder on the file server

Settings tab
Enabled – Grant user exclusive rights to Documents
Enabled – Move the contents of Documents to the new location
Enabled – Redirect the folder back to the local user profile location when policy setting is removed

Table 16 – Computer-specific folder redirection – head and near branch offices

The “Computer-Specific Enablement” and “Computer-Specific Folder Redirection” GPOs are duplicated for users in the far branch office to support redirection of their folders with a different path and an alternate script location.

Background Sync and Slow Link GPO Definition

The following table provides an overview of the policy settings used to enable Slow Link and Background Synchronization for the different categories of users (local, near branch office, far branch office, and mobile) and client types (Windows Vista and Windows 7). Three applicable GPOs are detailed below: Policy 3, Policy 4, and Policy 5.

Slow Link and Background Synchronization Group Policy Assignments

User type | Applied to | Slow Link (Windows Vista / Windows 7) | Background Synchronization (Windows Vista / Windows 7)
Local | Desktop | Default / Default | N/A / Default
Near branch | Desktop | 1 ms (Policy 3) / 1 ms (Policy 3) | Custom script (Policy 3) / 48 hours (Policy 3)
Far branch | Desktop | Default / Default | N/A / Default
Mobile | Laptop | 35 ms (Policy 4) / Default | Custom script (Policy 4) / 48 hours (Policy 5)

Table 17 - Slow Link and Background Synchronization Group Policy assignments

The Policy 3 GPO is targeted at users located in near branch offices; these are locations connected via a WAN link within the same geographical region as the head office. Round-trip time (RTT) during typical daily usage ranges between 80 and 120 ms.

The GPO has two elements that affect both Windows Vista and Windows 7 clients. The first element configures Slow Link mode with a latency value of 1 ms (0 ms is not a supported value). This forces clients to work in offline mode so that all read and write activity is satisfied from the Offline Files cache on the local computer, which provides a better experience than accessing file content on the head office file server over a WAN link.

The second element sets Background Synchronization for both Windows Vista and Windows 7 clients. The synchronization interval for both Windows Vista (via a custom executable calling the Offline Files APIs) and Windows 7 is set to six hours. Windows 7 clients that have not synchronized within the default period are forcibly synchronized after 48 hours; forced synchronization is not available for Windows Vista clients.
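The two policy elements can be sketched as simple decision rules. This is an assumed simplification of the actual Offline Files client behavior, shown only to make the thresholds concrete.

```python
# Illustrative sketch (assumed simplification) of the decisions driven by
# the Slow Link and Background Sync policy settings described above.

def work_offline(measured_latency_ms, slow_link_threshold_ms):
    """Slow Link mode: treat the link as slow (serve reads and writes from
    the local Offline Files cache) when measured latency meets or exceeds
    the configured value. Policy 3 sets the threshold to 1 ms, so any real
    WAN latency (80-120 ms RTT to the near branch) forces offline mode."""
    return measured_latency_ms >= slow_link_threshold_ms

def must_force_sync(minutes_since_last_sync, max_minutes=2880):
    """Background Sync (Windows 7): force synchronization once a client has
    gone longer than the configured maximum without a successful sync
    (2880 minutes = 48 hours)."""
    return minutes_since_last_sync > max_minutes
```

With a 1 ms threshold, even a fast LAN-like link at the branch registers as slow, which is exactly the intent: clients stay in offline mode and synchronize in the background instead.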


The Policy 4 GPO is specific to Windows Vista-based mobile users and has two elements. The first element sets Slow Link mode with a latency value of 35 ms. This enables mobile users to work in offline mode when connected to the corporate network via RAS or from alternate locations that offer only high-latency network access to their file server. When the user reconnects via a LAN link with little or no latency, they can revert to working in online mode.

Note: Windows Vista will not revert to an online mode from an offline slow connection state unless the user selects the “Work Online” button in Windows Explorer. This behavior has changed in Windows 7 in that the client will automatically switch between online and offline mode based on network latency detection without user interaction.

The second element of the GPO enables synchronization every six hours if the Windows Vista client remains in an offline state. This again is achieved via a custom script and executable.

Policy 5 GPO sets Background Synchronization for Windows 7 mobile users. If the client has not synchronized within the default interval of six hours, it will be forcibly synchronized after 48 hours.

Slow Link and Background Synchronization for Near Branch Office Desktops

This GPO is applied to all desktop users in the near branch office location.

Scoping is enabled via a WMI filter that evaluates the query “Win32_ComputerSystem WHERE PCSystemType <> 2” to be true. PCSystemType 2 identifies a laptop-class computer, so this filter matches desktop-class computers.

The GPO sets the Slow Link latency to 1 ms to ensure that users are maintained in offline mode, and sets the Background Sync to 48 hours to force synchronization if the client has not successfully synchronized by using Computer Configuration and User Configuration settings.

Slow Link and Background Synchronization for Near Branch Office Desktops

General GPO settings

Links: Applied to: See the “User Location and Security Groups and Object” table
Security filtering: Applied to: Policy3_SG
WMI filtering: WMI filter to identify:

Win32_ComputerSystem WHERE PCSystemType <> 2

Delegation: Deployment team delegated Edit settings permission

Computer Configuration

Policies\Administrative Templates\Network\Offline Files\Setting - Configure Slow Link mode

Enabled
Options: UNC Paths:
Value name: \\FileServer\Share_UserData_Domain1\*
Value: Latency = 1

Policies\Administrative Templates\Network\Offline Files\Setting - Configure Background Sync

Enabled
Maximum Allowed Time Without A Sync (minutes): 2880

User Configuration

Policies\Windows Settings\Scripts (Logon/Logoff) See Appendix A (Windows Vista Background Sync logon script and associated files)

Table 18 - Slow Link and Background Synchronization for near branch office desktops


Slow Link and Background Synchronization for Windows Vista Laptops

This GPO is applied to all laptops running Windows Vista.

Scoping is enabled via a WMI filter that evaluates two WMI queries to be true: Win32_ComputerSystem WHERE PCSystemType = 2, which identifies a laptop-class computer, and Win32_OperatingSystem WHERE (BuildNumber >= 6001) and (BuildNumber < 7000), which includes all Windows Vista clients with a service pack and excludes Windows Vista RTM and any Windows 7 client build.
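The combined scoping logic the two WMI queries express (both must hold for the GPO to apply) can be sketched as follows; this is a hedged mirror of the filter semantics, not WMI itself.

```python
# Hedged sketch of the WMI filter scoping: the GPO applies only when both
# conditions are true (laptop class AND a serviced Windows Vista build).

def gpo_applies(pc_system_type, build_number):
    """Win32_ComputerSystem.PCSystemType = 2 selects laptop-class
    computers; 6001 <= BuildNumber < 7000 selects Windows Vista with a
    service pack (SP1 = 6001, SP2 = 6002) while excluding Vista RTM
    (build 6000) and any Windows 7 build (7000 and above)."""
    is_laptop = pc_system_type == 2
    is_serviced_vista = 6001 <= build_number < 7000
    return is_laptop and is_serviced_vista
```

For example, a Vista SP2 laptop (PCSystemType 2, build 6002) is in scope, while a Vista SP2 desktop or a Windows 7 laptop is not.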

The GPO sets the Slow Link latency to 35 ms to ensure that Windows Vista mobile users are maintained in offline mode by using Computer Configuration and User Configuration settings.

Slow Link and Background Synchronization for Windows Vista Laptops

General GPO settings

Links: Applies to: See the “User Location and Security Groups and Object” table
Security filtering: Applies to: Policy4_SG
WMI filtering: WMI filter to identify:

Win32_ComputerSystem WHERE PCSystemType = 2
Win32_OperatingSystem WHERE (BuildNumber >= 6001) and (BuildNumber < 7000)

Delegation: Deployment team delegated Edit settings permission

Computer Configuration

Policies\Administrative Templates\Network\Offline Files\Setting - Configure Slow Link mode

Enabled
Options: UNC Paths:
Value name: \\FileServer\UserData_Domain(“X”)\*
Value: Latency = 35

User Configuration

Policies\Windows Settings\Scripts (Logon/Logoff) See Appendix A (Windows Vista Background Sync logon script and associated files)

Table 19 - Slow Link and Background Synchronization for Windows Vista laptops

Background Synchronization for Windows 7 Laptops

This GPO is applied to all laptops running Windows 7.

Scoping is enabled via a WMI filter that evaluates two WMI queries to be true: Win32_ComputerSystem WHERE PCSystemType = 2, which defines a laptop class computer, and Win32_OperatingSystem WHERE (BuildNumber >= 7000), which includes any Windows 7 build and above.


The GPO uses Computer Configuration settings to set Background Sync so that clients that have not synchronized within the default interval are forced to synchronize their offline cache after 48 hours.

Background Synchronization for Windows 7 Laptops

General GPO settings

Links: Applies to: See the “User Location and Security Groups and Object” table
Security filtering: Applies to: Policy5_SG
WMI filtering: WMI filter to identify:

Win32_ComputerSystem WHERE PCSystemType = 2
Win32_OperatingSystem WHERE (BuildNumber >= 7000)

Delegation: Deployment team delegated Edit settings permission

Computer Configuration

Policies\Administrative Templates\Network\Offline Files\Setting - Configure Background Sync

Enabled
Maximum Allowed Time Without A Sync (minutes): 2880

Table 20 - Background Synchronization for Windows 7 laptops

Exclusion List

This GPO is applied to all Windows 7 clients.

Scoping is enabled via a WMI filter that evaluates a WMI query to be true: Win32_OperatingSystem WHERE BuildNumber >= 7000, which includes any Windows 7 build and above.

The Exclusion List GPO uses Computer Configuration settings to prevent Windows 7 clients from creating excluded file types, whether in online or offline mode.

Exclusion List

General GPO settings

Links: Applies to: See the “User Location and Security Groups and Object” table
Security filtering: Applies to: Policy6_SG
WMI filtering: WMI filter to identify:

Win32_OperatingSystem WHERE BuildNumber >= 7000

Delegation: Deployment team delegated Edit settings permission

Computer Configuration

Policies\Administrative Templates\Network\Offline Files\Setting - Exclude files from being cached

Enabled
Options: Extensions: *.PST

Table 21 - Exclusion List

Offline Directory Rename Delete – Windows Vista

This GPO is applied to all Windows Vista client computers.

Scoping is enabled via a WMI filter that evaluates the query Win32_OperatingSystem WHERE (BuildNumber >= 6001) and (BuildNumber < 7000) to be true, which includes all Windows Vista clients with a service pack and excludes Windows Vista RTM and any Windows 7 client build.


The GPO enables users of Windows Vista SP1 and later to delete or rename folders that were cached by Offline Files while working in offline mode. The GPO creates a registry key and sets two DWORD values within the HKEY_LOCAL_MACHINE registry hive by using a script invoked via the Computer Configuration settings.

Note: The registry key is read when the Offline Files service starts. Computer startup scripts are run after services such as Offline Files are started. Therefore, the client computer must be restarted after the registry key is set to enforce the change.

Offline Directory Rename Delete – Windows Vista

General GPO settings

Links: Applies to: See the “User Location and Security Groups and Object” table
Security filtering: Applies to: Policy7_SG
WMI filtering: WMI filter to identify:

Win32_OperatingSystem WHERE (BuildNumber >= 6001) and (BuildNumber < 7000)

Delegation: Deployment team delegated Edit settings permission

Computer Configuration

Policies\Windows Settings\Scripts (Startup/Shutdown)

See Appendix A (Windows Vista offline rename delete script)

Table 22 - Offline Directory Rename Delete script – Windows Vista

Data Protection Manager 2007 Configuration

The team leveraged an existing Data Protection Manager (DPM) 2007 SP1 deployment to enable backup for the failover cluster and stand-alone file servers. The DPM infrastructure is physically located on the same campus as the failover cluster file server, enabling LAN-based connectivity, whereas the stand-alone file server is reached over a WAN link that typically has a round-trip time (RTT) of approximately 250 ms.

The DPM server is a stand-alone configuration as per the specification below. It uses iSCSI storage with eleven 1-terabyte SATA drives in a RAID 6 configuration. The storage is configured as four 2-terabyte LUNs to support backup content for the Folder Redirection deployment and other infrastructure managed by the deployment team.

Data Protection Manager Server Specifications

Processor: 2 x quad-core processors (2.33 GHz)
Memory: 8 GB
Network adapters: 2 x 1-Gb (1 for corporate network, 1 for iSCSI storage connectivity)
Internal controller: Embedded RAID controller (read cache only)
Internal drives: 2 x 146-GB SAS drives
OS LUN/volume: 146-GB mirror for C: drive (operating system)
External controller: 1-Gb network adapter to external iSCSI storage array
External drives: 11 x 1-terabyte SATA drives, 9 data + 2 parity (RAID 6)
Data LUN/volume: 4 x 2-terabyte (DPM backup)

Table 23 - Data Protection Manager server specifications

The team has configured two separate DPM protection groups, one for each file server. This allows for selective optimizations as required to cater for LAN- and WAN-based backup methods.

The protection group details for the failover cluster file server (File-Server-1) and for the stand-alone file server (File-Server-2) are highlighted in the following table. The team maintained many of the default settings but did modify settings to exclude protection for PST files and to adjust the synchronization frequency to every two hours for File-Server-2, located in the far branch office.

Protection Group Details

Setting | Failover cluster (File-Server-1) | Stand-alone (File-Server-2)
Available members | File-Server-1 (agents deployed to Node1 and Node2) | File-Server-2
Selected members | J:\FRCSC Deployment\, J:\Logon_Scripts | E:\FSCSC Deployment\, E:\Logon_Scripts
Excluded folders | 0 | 0
Excluded file types | 1 (.pst) | 1 (.pst)
Short-term protection | Disk | Disk
Retention range | 14 days | 14 days
Synchronization frequency | Every 15 minutes | Every 2 hours
Recovery points to file | 8:00 A.M., 12:00 P.M., and 9:00 P.M. every day | 8:00 A.M., 12:00 P.M., and 9:00 P.M. every day
Network optimization | N/A | Enable on-the-wire compression
Consistency check | Scheduled daily at 11:00 P.M.; maximum duration: 8 hours | Scheduled daily at 1:00 A.M.; maximum duration: 8 hours

Table 24 - Protection group details

Another area of interest is the requirement to enable on-the-wire compression, network throttling, and consistency checks to address the different optimization needs arising from the protected servers' geographical locations.

Network throttling

The team chose to minimize the impact that data protection replication has on the network by enabling DPM’s network throttling capability.

Given the capacity and utilization of the Microsoft WAN link between the head office and the far branch office, the team chose the following settings in the Throttle Properties dialog box:


Figure 19 - Far branch office throttle settings

On-the-wire compression

This setting allows more data throughput without negatively affecting network performance. However, this option does add to the CPU load on both the DPM server and the protected file servers. Compression decreases the size of data being transferred during replica creation and synchronization.

The team enabled on-the-wire compression for the server located in the far branch office but maintained the default for the server located in the head office.

Far Branch Office Compression Settings Head Office Compression Settings

Figure 20 - DPM compression settings

Consistency checks

The team schedules a daily consistency check during off-peak hours to ensure that the replica is consistent with the protected data. A scheduled consistency check will only run if inconsistencies are detected during synchronization. Keep in mind that the performance of the file server will be affected while the consistency check is running.

Far Branch Office Consistency Settings Head Office Consistency Settings


Figure 21 - DPM optimization performance


Operational Procedures

Adding Users to the Service

The process of adding users to the service is done in two stages. As part of the first stage, all Group Policy settings are applied with the exception of the Offline Files Exclusion List policy. During the second stage, directory quotas are changed from soft to hard quotas, file screens are changed from passive to active, and the Offline Files Exclusion List policy setting is deployed.

Figure 22 is a flow chart of the user enrollment procedure:

Figure 22 - Overview of adding users to the deployment

Stage one

The administrator adds user and computer objects to the appropriate security groups listed in Table 14 (1), which causes all Group Policy settings to be applied with the exception of the Offline Files Exclusion List policy setting. Then, the names of the computers specified for folder redirection for the given users are added to the appropriate database (2). Following this step, users are notified about their enrollment and instructed to restart their computers and log on (3). After the initial user logon, the content of the user profile folders is moved to the centralized file server location, and soft directory quotas and passive file screens are applied automatically.


Stage two

After successful folder redirection, File Server Resource Manager storage reporting is used to check the status of the soft directory quotas and passive file screens (5). Users above the quota threshold are instructed to remove content and comply with their allocated storage space limit or to provide justification for an exception allowing them to store more data on the server. Similarly, users with excluded file types on the server are instructed to remove the files (6). After the quota and excluded file requirements are met by the user, directory quotas are changed from soft to hard (7), file screens are changed from passive to active (8), computer objects are added to the appropriate security group for the Offline Files Exclusion List policy setting, and the user is notified to restart the computer (9).
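The stage-two gate described above can be sketched as a small decision function. This is a hedged illustration of the procedure, not operational tooling; the step strings are paraphrases of steps (7) through (9).

```python
# Hedged sketch of the stage-two enrollment gate: enforcement is switched
# on only after a user's redirected data meets the quota and file screen
# requirements.

def ready_for_enforcement(bytes_used, quota_limit, screened_files):
    """A user is ready for hard quotas and active screens only when under
    the quota threshold with no screened (e.g. .pst) files on the server."""
    return bytes_used <= quota_limit and not screened_files

def stage_two(bytes_used, quota_limit, screened_files):
    """Return either a remediation instruction or the enforcement steps,
    in the order the procedure above describes them."""
    if not ready_for_enforcement(bytes_used, quota_limit, screened_files):
        return "instruct user to remove content or request an exception"
    return [
        "change directory quota from soft to hard",
        "change file screen from passive to active",
        "add computer to Exclusion List security group; notify user to restart",
    ]
```

A compliant user receives the three enforcement actions; a non-compliant user is first asked to remediate.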

Removing Users from the Service

Removing users from the service is accomplished by removing the user and computer objects from the appropriate security groups. Just as adding the objects caused the Group Policy settings to be applied, removing them from the security groups causes the Group Policy settings to revert to the defaults.

After the user and computer objects are removed from the security groups, users are instructed to restart their computers and log on. While logging on to computers previously specified for folder redirection, the data from the centralized file server is copied back to the local hard drive. In addition, the user profile folders are reverted to their default locations. Because Folder Redirection is removed during the logon process, the user logon will take longer if there is a large amount of redirected content.


Conclusion

In this document, we explained how to successfully put in place an end-user data centralization and backup solution by using Microsoft technologies. We delved into the key planning considerations critical to a successful deployment and detailed how the solution was implemented within Microsoft.

Besides providing guidance to help customers replicate such a solution, the primary objective of this effort was for the Microsoft File Server Quality Assurance group to validate technologies under development within a production environment. This ensured that the solution being shipped to our customers was proven to be capable of fulfilling the business needs established in this document.

While this deployment presents a cohesive solution for typical end-user data centralization and backup needs, some organizations may have additional requirements beyond the scope of this white paper. Additional requirements that were considered during the initial phases of the project included:

• User document classification: Ability to classify documents and perform automatic tasks based on the document class. For example, this can be useful for managing the data of people who have left the deployment by automatically moving their files from the file servers to an alternate location. You can use the Windows Server 2008 R2 File Classification Infrastructure (http://go.microsoft.com/fwlink/?LinkId=166667) for that purpose.

• Disaster recovery with continuous availability: Ability to continuously maintain service availability (including adding or removing users) in case of a disaster at the site where the file servers are located. You can use the Distributed File System Replication (DFSR) and Distributed File System Namespace (DFSN) features in Windows Server 2008 R2 to achieve this result.

• Further service health monitoring: Ability to closely monitor the health of the system, from the server infrastructure to the client computers. You can use Microsoft System Center Operations Manager 2007 (http://go.microsoft.com/fwlink/?LinkId=166668) for that purpose.

• Virtualization: Reduction of hardware and management costs through virtualization and consolidation of the servers composing the solution by using the Hyper-V feature (http://go.microsoft.com/fwlink/?LinkId=166669), which ships as part of Windows Server 2008 R2.

These requirements will be investigated to enhance the existing solution with a plan to implement the Microsoft technologies mentioned above.


Index of Tables

Table 1 - SMB version negotiation
Table 2 - Facility classification
Table 3 - Head office server capacity sizing
Table 4 - Far branch server capacity sizing
Table 5 - Network characteristics impact
Table 6 - Working states
Table 7 - Optimizing folder states based on users’ categories
Table 8 - Time to complete initial full replica with DPM
Table 9 - Time to complete scheduled synchronizations with DPM
Table 10 - Clustered server specifications
Table 11 - Stand-alone server specifications
Table 12 - Operating system configuration
Table 13 - Policy setting definitions
Table 14 - User location and security groups and object
Table 15 - Computer specific enablement
Table 16 – Computer-specific folder redirection – head and near branch offices
Table 17 - Slow Link and Background Synchronization Group Policy assignments
Table 18 - Slow Link and Background Synchronization for near branch office desktops
Table 19 - Slow Link and Background Synchronization for Windows Vista laptops
Table 20 - Background Synchronization for Windows 7 laptops
Table 21 - Exclusion List
Table 22 - Offline Directory Rename Delete script – Windows Vista
Table 23 - Data Protection Manager server specifications
Table 24 - Protection group details


Index of Figures

Figure 1 - Geographic distribution
Figure 2 - Network infrastructure and server locations
Figure 3 - FSCT report output
Figure 4 – FSCT server scenarios throughput
Figure 5 – Offline Files state transition
Figure 6 - Solution architecture
Figure 7 - Domain representation
Figure 8 - Failover cluster representation
Figure 9 - Failover Cluster Manager MMC representation
Figure 10 - Windows Explorer share representation
Figure 11 - File Server Resource Manager Quota template properties
Figure 12 - File Server Resource Manager MMC – Custom Quota template (Redirection)
Figure 13 - File Server Resource Manager MMC – Quotas
Figure 14 - File Server Resource Manager MMC – Custom File Group Properties for *.pst files
Figure 15 - File Server Resource Manager MMC – File Screen Template Properties
Figure 16 - File Server Resource Manager MMC – File Screens
Figure 17 - File Server Resource Manager MMC – File Screen Properties (Passive screening)
Figure 18 - Shadow Copy for Shared Folders and schedule
Figure 19 - Far branch office throttle settings
Figure 20 - DPM compression settings
Figure 21 - DPM optimization performance
Figure 22 - Overview of adding users to the deployment


Appendix A – Scripts

Computer Specific Folder Redirection

Computer Specific Folder Redirection logon script

The script runs at logon and determines whether Folder Redirection should apply for the specified user on the specified computer. The script calls an executable named "MachineBasedFR.exe", which retrieves the name of the logged-on user and the name of the computer. It then queries a database to determine whether the user needs Folder Redirection enabled for that computer. The executable creates the registry entry HKCU\MachineSpecificFR\ApplyFR and initializes it to 1 if Folder Redirection needs to be applied to the specific computer for the user, or to 0 otherwise.

Computer Specific Folder Redirection

::     // Description:
::     // The script runs at logon & determines if Folder Redirection
::     // should apply for the specified user on the specified computer.
::     // It creates the following registry entry: HKCU\MachineSpecificFR\ApplyFR

@echo off

::     //
::     // SET GLOBAL VARIABLES
::     //

SET LOCAL_PATH=%TEMP%
SET LOCAL_LOG_FILE=%USERNAME%_%COMPUTERNAME%.log
SET OUTPUT_PATH=\\File-Server-1\data$\Machine_Specifc_FR\DomainX
SET COMMAND_PATH=\\File-Server-1\Logon_scripts$\Machine_Specifc_FR\DomainX

::     //
::     //  Get the OS VERSION
::     //

VER > %TEMP%\Machinespecificfolderredirection.OSVersion.txt
FINDSTR /I /C:"Version 5." %TEMP%\Machinespecificfolderredirection.OSVersion.txt
IF %ERRORLEVEL% EQU 0 (
       echo This Machine is running pre-Vista, Exiting ... >> %LOCAL_LOG_FILE%
       goto :eof
)

::     //
::     // Create a temp directory & copy the user logon script
::     //

md %LOCAL_PATH%\Machinespecificfolderredirection.UserLogon
xcopy /y %COMMAND_PATH%\MachineBasedFR* %LOCAL_PATH%\Machinespecificfolderredirection.UserLogon

::     //
::     // Execute the script
::     //

Pushd %TEMP%\Machinespecificfolderredirection.UserLogon
MachineBasedFR.exe >> %LOCAL_LOG_FILE%

::     //
::     // Copy Local Log File to Server
::     //

COPY /Y %LOCAL_LOG_FILE% %OUTPUT_PATH%

goto :eof
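The per-user, per-computer decision performed by MachineBasedFR.exe reduces to a simple lookup. The following is a minimal Python sketch of that logic; the entitlement table and the user/computer names are hypothetical (the real executable queries a database), and only the 1/0 value it writes to HKCU\MachineSpecificFR\ApplyFR is modeled here:

```python
# Sketch of the MachineBasedFR.exe decision (illustration only).
# The real executable queries a database and writes the result to the
# HKCU\MachineSpecificFR\ApplyFR registry value (1 = redirect, 0 = do not).

# Hypothetical entitlement table: (user, computer) pairs for which
# Folder Redirection should be enabled on that specific computer.
FR_ENTITLEMENTS = {
    ("alice", "DESKTOP-01"),
    ("bob", "LAPTOP-07"),
}

def apply_fr(user: str, computer: str) -> int:
    """Return the value to write to ApplyFR for this user/computer pair."""
    # Windows account and computer names are case-insensitive, so normalize.
    return 1 if (user.lower(), computer.upper()) in FR_ENTITLEMENTS else 0

print(apply_fr("alice", "desktop-01"))  # 1
print(apply_fr("alice", "LAPTOP-07"))   # 0
```

The script above then only has to surface this value through the registry, where the WMI namespace installed by the startup script can read it.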


Computer Specific WMI Namespace Installation startup script

This startup script creates a custom WMI namespace on the client. The custom WMI namespace properties are defined in the MachineBasedFR.mof support file.

Computer-Specific WMI Namespace Installation

:: // Description:
:: // This is the machine startup script for the Computer Specific Folder Redirection
:: // Deployment. This script installs a custom WMI Namespace on the computer.

@echo off

:: //
:: // SET GLOBAL VARIABLES
:: //

SET LOG_FILE=\\File-Server-1\data$\Computer_Specific_FR\DomainX\%COMPUTERNAME%.log
SET MOF_SOURCE=\\File-Server-1\logon_scripts$\Computer_Specific_FR\DomainX\MachineBasedFR.mof

:: Install the MOF
mofcomp -class:forceupdate %MOF_SOURCE%

:: Update the log file
echo %DATE% %TIME% : Computer Startup Executed >> %LOG_FILE%

goto :eof

Computer Specific (MOF) support file

This is a support file used by the Computer Specific Folder Redirection startup script. The support file is named MachineBasedFR.mof and defines the properties of a custom WMI namespace.

MachineBasedFR.mof

#pragma namespace ("\\\\.\\Root\\cimv2")

[DYNPROPS]
class MachineBasedFR
{
     [key] string Keyname="";
     uint32 ApplyFR;
};

[DYNPROPS]
instance of MachineBasedFR
{
     KeyName="MachineBasedFR";
     [PropertyContext("local|HKEY_CURRENT_USER\\MachineBasedFR|ApplyFR"), Dynamic, Provider("RegPropProv")]
     ApplyFR;
};
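Once this class is compiled into root\cimv2, the registry-backed ApplyFR property can be consumed by a Group Policy WMI filter, so that the Folder Redirection settings apply only where the logon script set the value to 1. The filter below is an illustrative assumption of how such a filter could look, not a setting taken verbatim from this deployment:

```
Namespace: root\cimv2
Query:     SELECT * FROM MachineBasedFR WHERE ApplyFR = 1
```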

Windows Vista Background Synchronization

The Windows Vista Background Sync logon script creates a scheduled task to periodically synchronize offline content on Windows Vista clients. Clients in the Offline (Slow Connection) and Offline (Working Offline) modes get the benefit of keeping their files up to date through this task. The script uses the Task Scheduler to create a Background Sync task named "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%" on the client. This task can be


viewed from the Task Scheduler. To create the scheduled task, the logon script uses an XML file that provides information such as the user name and how often the task should run. Here, the timer is set to synchronize every 6 hours. In addition, the script checks for updates to fullsync.exe and provides the option to delete the task from the user’s computer.
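The repetition interval in FullSync.xml is expressed as an ISO 8601 duration (PT360M). As a quick sanity check that this matches the 6-hour timer described above, here is a small sketch that converts the simple PT#H/PT#M forms used in Task Scheduler XML into minutes (the parser is illustrative and not part of the deployment):

```python
import re

def iso8601_minutes(duration: str) -> int:
    """Convert the simple PT#H / PT#M durations used in Task Scheduler XML to minutes."""
    m = re.fullmatch(r"PT(?:(\d+)H)?(?:(\d+)M)?", duration)
    if not m:
        raise ValueError(f"unsupported duration: {duration}")
    hours, minutes = (int(g) if g else 0 for g in m.groups())
    return hours * 60 + minutes

print(iso8601_minutes("PT360M") // 60)  # 6 -> hours between syncs
print(iso8601_minutes("PT1H"))          # 60 -> the WaitTimeout in the XML, in minutes
```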

Required support files:

Support file    Description                                  Location
------------    ------------------------------------------   ------------------------------------------------------------
Filever.exe     Tool to determine version of FullSync.exe    Download from http://go.microsoft.com/fwlink/?LinkId=166670
FullSync.exe    Binary executed by the scheduled task        See below
FullSync.xml    Settings used to create the scheduled task   See below

Vista Background Sync

:: // Description:
:: // This script copies the tools necessary for Background Sync from a remote
:: // location to a local directory. It then creates a scheduler task to Sync
:: // the offline files at predetermined time intervals.

@echo off

setlocal ENABLEDELAYEDEXPANSION

:: //
:: // Initialize Global Variables
:: // (SANDBOX_DIR is set before VER_COMMAND_BIN so that the latter expands correctly)
:: //

SET COMMAND_PATH=\\File_Server-1\Logon_Scripts$\Vista_Background_Sync_Task\DomainX
SET OUTPUT_PATH="\\File_Server-1\data$\Vista_Background_Sync_Task\DomainX\"
SET VISTA_BG_SYNC_TASK_TEMP_FILE=%TEMP%\VISTA_BG_SYNC_TASK_%USERNAME%_%COMPUTERNAME%.TXT
SET SANDBOX_DIR=%TEMP%\FS_SANDBOX
SET VER_COMMAND_BIN=%SANDBOX_DIR%\FILEVER.EXE
SET CurrentDir=%TEMP%
SET OPTION=%1

:: //
:: // Validate Parameters
:: //

IF /I {"%OPTION%"} == {"/?"} goto :USAGE

:: //
:: // COLLECT BUILD INFORMATION.
:: //

call :ECHOANDLOG Checking Version of OS ...
ECHO VERSION_INFORMATION: > %VISTA_BG_SYNC_TASK_TEMP_FILE%
VER >> %VISTA_BG_SYNC_TASK_TEMP_FILE%

:: //
:: // Check Build Version is not Win7 or higher
:: //

FINDSTR /I /C:"Version 6.1" %VISTA_BG_SYNC_TASK_TEMP_FILE%
IF %ERRORLEVEL% EQU 0 (
    call :ECHOANDLOG This Machine is running Win7 or higher, Exiting ...
    goto :COPYLOGS
)

:: //
:: // Force delete the existing task and create a new one
:: //

IF /I {"%OPTION%"} == {"/D"} goto :DELETE

:: //
:: // Validate the presence of FULLSYNC tool on the Network Location
:: //

md %SANDBOX_DIR%

:: //
:: // Copy the binaries to the local Sandbox directory
:: //

call :ECHOANDLOG Copying necessary binaries to Sandbox ....
COPY /Y %COMMAND_PATH%\%PROCESSOR_ARCHITECTURE%\* %SANDBOX_DIR%

:: if not exist %COMMAND_PATH%\%PROCESSOR_ARCHITECTURE%\fullsync.exe (
::     call :ECHOANDLOG FULLSYNC tool is not present on the specified network location %COMMAND_PATH%\%PROCESSOR_ARCHITECTURE%; Exiting...
::     goto :COPYLOGS
:: )

cd /d %TEMP%

:: //
:: // Check if FULLSYNC tool is already installed for the current user
:: //

if not exist %TEMP%\fullsync.exe goto :CREATE
%VER_COMMAND_BIN% %TEMP%\fullsync.exe >> %VISTA_BG_SYNC_TASK_TEMP_FILE%

:: //
:: // Compare and copy the new version of the tool
:: //

call :ECHOANDLOG Comparing Local and Network File Version for any updates in the FULLSYNC tool ...

for /f "tokens=5" %%I in ('%SANDBOX_DIR%\FILEVER.EXE %TEMP%\fullsync.exe') do (
    SET LOCAL_FILE_VER=%%I
    call :ECHOANDLOG Local File Version=%%I
)

for /f "tokens=5" %%I in ('%SANDBOX_DIR%\FILEVER.EXE %SANDBOX_DIR%\fullsync.exe') do (
    SET NETWORK_FILE_VER=%%I
    call :ECHOANDLOG Network File Version=%%I
)

if /I %LOCAL_FILE_VER% EQU %NETWORK_FILE_VER% (
    call :ECHOANDLOG FULLSYNC tool is Up to date, Exiting ...
    goto :COPYLOGS
) else (
    call :ECHOANDLOG New Version of FULLSYNC tool is available, Updating the Scheduled Task "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
    goto :DELETEANDCREATE
)

:CREATE

:: //
:: // Copy the binaries to the local directory
:: //
call :ECHOANDLOG Copying necessary binaries ....
COPY /Y %COMMAND_PATH%\%PROCESSOR_ARCHITECTURE%\* %TEMP%

:: //
:: // Create Schedule Task for FULLSYNC
:: //
call :ECHOANDLOG Creating a Schedule task for FULLSYNC tool


schtasks /create /XML %TEMP%\FULLSYNC.xml /tn "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"

if %ERRORLEVEL% NEQ 0 (
    call :ECHOANDLOG Creating Scheduled Task for FULLSYNC Task Failed with Error:%ERRORLEVEL% ;Exiting...
    goto :DELETE
)

:: //
:: // Run Schedule Task for FULLSYNC
:: //
:: call :ECHOANDLOG Running Schedule task: "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
:: schtasks /Run /tn "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
:: if %ERRORLEVEL% NEQ 0 (
::     call :ECHOANDLOG Running Scheduled Task for FULLSYNC failed with Error:%ERRORLEVEL% ;Exiting...
::     goto :DELETE
:: )

goto :COPYLOGS

:: //
:: // Display Usage
:: //

:USAGE
ECHO [USAGE]:
ECHO Vista_Background_Sync_Logon_Script.cmd [/D]
ECHO Script to Create and Schedule a Background Sync task
ECHO The Scheduler task that it creates: "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
ECHO /D: To Delete the existing Scheduled Task

goto :EOF

:ECHOANDLOG

:: //
:: // Log messages in a temp file
:: //
ECHO. >> %VISTA_BG_SYNC_TASK_TEMP_FILE%
ECHO %*
ECHO %* >> %VISTA_BG_SYNC_TASK_TEMP_FILE%
goto :EOF

:DELETEANDCREATE

:: //
:: // Delete and Create the existing Scheduled Task
:: //
call :ECHOANDLOG End the existing scheduled Task "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
schtasks /End /tn "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
call :ECHOANDLOG Delete the existing scheduled Task "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
schtasks /Delete /tn "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%" /F
goto :CREATE

:DELETE

:: //
:: // Simply Delete an existing Scheduled Task
:: //

call :ECHOANDLOG End the existing scheduled Task "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
schtasks /End /tn "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"
call :ECHOANDLOG Delete the existing scheduled Task "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%"


schtasks /Delete /tn "Vista Background Sync for User=%USERNAME%, Domain=%USERDOMAIN%" /F

for /f %%I in ('dir /b /s %SANDBOX_DIR%') do (
    call :ECHOANDLOG Deleting %TEMP%\%%~nxI ...
    del /F %TEMP%\%%~nxI
)

:COPYLOGS

:: //
:: // Copy Local Log File to Server
:: //

copy /y %VISTA_BG_SYNC_TASK_TEMP_FILE% %OUTPUT_PATH%
rmdir /s /q %SANDBOX_DIR%

:EOF

FullSync.exe

// THIS CODE AND INFORMATION IS PROVIDED "AS IS" WITHOUT WARRANTY OF
// ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO
// THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A
// PARTICULAR PURPOSE.
//
// Copyright (c) Microsoft Corporation. All rights reserved
//
//
// This program uses the COM APIs to trigger a data synchronization through
// Sync Center. Once built, this executable can be used in conjunction with
// a scheduled task to trigger periodic synchronization of data when that
// synchronization is managed through Sync Center. Offline Files is an example
// of data that is managed in this way.
//
// The progress and final status of this sync activity can be seen through
// the Sync Center UI.
//

#include "stdafx.h"
#include <windows.h>
#include <stdio.h>
#include <tchar.h>
#include <SyncMgr.h>    // Needed for SyncMgr interfaces
#include <comutil.h>

//
// This routine issues a synchronization request to Sync Center using its
// ISyncMgrControl COM API. The "silent" parameter determines whether the
// program prints out the status of its steps or not.
//
HRESULT
SyncCenter_SyncAll( BOOL silent )
{
    HRESULT hr;
    ISyncMgrControl *m_pSyncControl;    // Reference to SyncMgr that is used to control sync activity

    if( !silent )
    {
        printf( "Binding to ISyncMgrControl...\n" );
    }


    //
    // Get instance of Sync Manager
    //
    hr = CoCreateInstance( __uuidof(SyncMgrControl),
                           NULL,
                           CLSCTX_ALL,
                           IID_PPV_ARGS(&m_pSyncControl) );
    if( SUCCEEDED(hr) )
    {
        if( !silent )
        {
            printf( "Calling SyncAll...\n" );
        }

        //
        // Issue the sync request to Sync Center. Sync Center processes requests
        // sequentially. If any other sync is already running, this sync task
        // will run once that task finishes. The StartSyncAll() will return
        // once the sync task is queued, not once it has completed.
        //
        m_pSyncControl->StartSyncAll( NULL );
        m_pSyncControl->Release();
    }
    else
    {
        if( !silent )
        {
            printf( "Bind failed : %x\n", hr );
        }
    }

    return hr;
}

void __cdecl wmain( int argc, WCHAR** argv )
{
    CoInitializeEx( NULL, COINIT_MULTITHREADED );

    SyncCenter_SyncAll( (argc > 1) ? TRUE : FALSE );

    CoUninitialize();
}

FullSync.xml

<?xml version="1.0" encoding="UTF-16"?>
<Task version="1.2" xmlns="http://schemas.microsoft.com/windows/2004/02/mit/task">
  <RegistrationInfo>
    <Source>Microsoft Corporation</Source>
    <Author>Microsoft Corporation</Author>
    <Description>This task controls periodic synchronization of Offline Files when the user is logged on</Description>
    <URI>\Microsoft\Windows\SyncCenter\FullSync</URI>
  </RegistrationInfo>
  <Triggers>
    <TimeTrigger>
      <Repetition>
        <Interval>PT360M</Interval>
        <StopAtDurationEnd>false</StopAtDurationEnd>
      </Repetition>
      <StartBoundary>2008-01-01T00:00:00</StartBoundary>
      <Enabled>true</Enabled>
      <ExecutionTimeLimit>PT30M</ExecutionTimeLimit>
    </TimeTrigger>
  </Triggers>
  <Settings>
    <IdleSettings>


      <Duration>PT10M</Duration>
      <WaitTimeout>PT1H</WaitTimeout>
      <StopOnIdleEnd>true</StopOnIdleEnd>
      <RestartOnIdle>false</RestartOnIdle>
    </IdleSettings>
    <MultipleInstancesPolicy>IgnoreNew</MultipleInstancesPolicy>
    <DisallowStartIfOnBatteries>false</DisallowStartIfOnBatteries>
    <StopIfGoingOnBatteries>false</StopIfGoingOnBatteries>
    <AllowHardTerminate>true</AllowHardTerminate>
    <StartWhenAvailable>true</StartWhenAvailable>
    <RunOnlyIfNetworkAvailable>true</RunOnlyIfNetworkAvailable>
    <AllowStartOnDemand>true</AllowStartOnDemand>
    <Enabled>true</Enabled>
    <Hidden>false</Hidden>
    <RunOnlyIfIdle>false</RunOnlyIfIdle>
    <WakeToRun>false</WakeToRun>
    <ExecutionTimeLimit>P1D</ExecutionTimeLimit>
    <Priority>7</Priority>
  </Settings>
  <Actions Context="Author">
    <Exec>
      <Command>%TEMP%\fullsync.exe</Command>
      <Arguments>none</Arguments>
    </Exec>
  </Actions>
</Task>

Windows Vista Offline Rename Delete

This script enables offline directory rename and delete for Windows Vista Service Pack 1 (SP1) and Windows Vista Service Pack 2 (SP2) clients.

Vista offline rename delete

:: // Description:
:: // This script enables Offline Rename Directory for Vista SP1 and Vista SP2 clients
:: // Note that the machine should have KB942845 installed.
:: // Reference http://support.microsoft.com/kb/942845

:: // The script creates a local log file and copies the local log file to the path defined in OUTPUT_PATH
:: // The script uses the redirected folders location defined in REDIRECTED_FOLDERS_LOCATION

@echo off

:: //
:: // Initialize Global Variables
:: //

SET REDIRECTED_FOLDERS_LOCATION=\\File-Server-1\RedirectedFoldersShareName
SET OUTPUT_PATH="\\File-Server-1\data$\Offline_Rename_Delete\DomainX\"
SET OFFLINEDIRRENAME_TEMP_FILE=%TEMP%\OFFLINEDIRRENAME_%USERNAME%_%COMPUTERNAME%.TXT
SET CurrentDir=%TEMP%

:: //
:: // Delete existing Local Log File if Present
:: //

IF EXIST %OFFLINEDIRRENAME_TEMP_FILE% DEL %OFFLINEDIRRENAME_TEMP_FILE%

:: //
:: // Create Header Information for Log File
:: //

ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%


ECHO EXECUTION DATE AND TIME: %DATE% %TIME% >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO COMPUTERNAME: %COMPUTERNAME% >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

cd /d %TEMP%

:: //
:: // Check if OfflineDirRenameDelete Key Exists
:: // If Registry Key Exists, Skip Creation of Keys
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG Check if OfflineDirRenameDelete Key Exists...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

REG QUERY "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache" /V OfflineDirRenameDelete
IF %ERRORLEVEL% EQU 0 GOTO :OfflineDirRenameDelete_Exists

:: //
:: // Registry Key Does Not Exist and Key Needs to be Created
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG OfflineDirRenameDelete Key Does Not Exist. Continuing Script...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

GOTO :ENABLE

:OfflineDirRenameDelete_Exists

:: //
:: // Registry Key Exists and Exiting Script
:: //

ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG OfflineDirRenameDelete Key Exists. Exiting Script...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

GOTO :LOG_SETTINGS

:ENABLE

:: //
:: // Set the necessary Reg Key for Offline File Directory Rename and Delete...
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG Setting the necessary Registry for Offline File Directory Rename and Delete
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

REG ADD "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache" /v OfflineDirRenameDelete /t REG_DWORD /d 1 /f
REG ADD "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache\OfflineDirRenameDeleteList" /v %REDIRECTED_FOLDERS_LOCATION% /t REG_DWORD /d 1 /f

:: //
:: // Verify OfflineDirRenameDelete Registry Key was created successfully.
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG Check that OfflineDirRenameDelete was created...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

REG QUERY "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache" /V OfflineDirRenameDelete


IF %ERRORLEVEL% EQU 0 GOTO :OfflineDirRenameDelete_Creation_Successful
IF %ERRORLEVEL% NEQ 0 GOTO :OfflineDirRenameDelete_Creation_Failed
GOTO :EOF

:OfflineDirRenameDelete_Creation_Successful

:: //
:: // OfflineDirRenameDelete was created successfully.
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG OfflineDirRenameDelete created successfully...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
GOTO :CheckOfflineDirRenameDeleteList

:OfflineDirRenameDelete_Creation_Failed

:: //
:: // OfflineDirRenameDelete was NOT created successfully.
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG Error....OfflineDirRenameDelete Registry Key Not created...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
GOTO :LOG_SETTINGS

:CheckOfflineDirRenameDeleteList

:: //
:: // Verify OfflineDirRenameDeleteList Registry Key was created successfully.
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG Check that OfflineDirRenameDeleteList was created...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%

REG QUERY "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache\OfflineDirRenameDeleteList" /v %REDIRECTED_FOLDERS_LOCATION%
IF %ERRORLEVEL% EQU 0 GOTO :OfflineDirRenameDeleteList_Creation_Successful
IF %ERRORLEVEL% NEQ 0 GOTO :OfflineDirRenameDeleteList_Creation_Failed
GOTO :EOF

:OfflineDirRenameDeleteList_Creation_Successful

:: //
:: // OfflineDirRenameDeleteList was created successfully.
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG OfflineDirRenameDeleteList created successfully...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
GOTO :LOG_SETTINGS

:OfflineDirRenameDeleteList_Creation_Failed

:: //
:: // OfflineDirRenameDeleteList was NOT created successfully.
:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
call :ECHOANDLOG Error....OfflineDirRenameDeleteList Not created...
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
GOTO :LOG_SETTINGS

:LOG_SETTINGS

:: //
:: // Log settings to Local Log File


:: //

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO NetCache Registry Key Settings: >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
REG QUERY "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache" >> %OFFLINEDIRRENAME_TEMP_FILE%

ECHO. >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO NetCache\OfflineDirRenameDeleteList Registry Key Settings: >> %OFFLINEDIRRENAME_TEMP_FILE%
ECHO ---------------------------- >> %OFFLINEDIRRENAME_TEMP_FILE%
REG QUERY "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\NetCache\OfflineDirRenameDeleteList" >> %OFFLINEDIRRENAME_TEMP_FILE%

:: //
:: // Copy Local Log File to Server
:: //

COPY /Y %OFFLINEDIRRENAME_TEMP_FILE% %OUTPUT_PATH%

goto :EOF

:ECHOANDLOG

:: //
:: // Log messages to Local Log File
:: //
REM echo. >> %OFFLINEDIRRENAME_TEMP_FILE%
echo %*
echo %* >> %OFFLINEDIRRENAME_TEMP_FILE%

goto :EOF

:EOF
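Stripped of its logging, the script above follows a check-create-verify pattern against the NetCache registry key. The sketch below models that flow with a plain dictionary standing in for the registry (an illustration only; the script itself uses REG QUERY and REG ADD against HKLM):

```python
# Model of the script's idempotent enable flow; the dict stands in for the
# registry, which the real script manipulates with REG QUERY / REG ADD.
NETCACHE = r"HKLM\Software\Microsoft\Windows\CurrentVersion\NetCache"
SHARE = r"\\File-Server-1\RedirectedFoldersShareName"

def enable_offline_dir_rename_delete(registry: dict) -> bool:
    """Return True if the values were created, False if already present."""
    if registry.get(f"{NETCACHE}\\OfflineDirRenameDelete") == 1:
        return False  # key already exists: skip creation, as the script does
    # Enable the feature and register the redirected-folders share.
    registry[f"{NETCACHE}\\OfflineDirRenameDelete"] = 1
    registry[f"{NETCACHE}\\OfflineDirRenameDeleteList\\{SHARE}"] = 1
    return True

reg = {}
print(enable_offline_dir_rename_delete(reg))  # True  (values created)
print(enable_offline_dir_rename_delete(reg))  # False (already enabled)
```

Running the logon script repeatedly is therefore harmless: once the OfflineDirRenameDelete value exists, subsequent runs only log and exit.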
