Virtual Design Master Challenge 1 - Joe


DESCRIPTION

The first technology-driven reality competition showcasing the incredible virtualization community members and their talents. Virtually Everywhere · virtualdesignmaster.com

TRANSCRIPT


After the Outbreak: Rebuilding the World

Approved:________________________________________ Date:_____________

Original Date: 08/04/13

Revision Date: 08/08/13

Written By: Joe Graziano, Sr. Infrastructure Engineer, W.R.O – World Rebuild Organization


Contents

1. INTRODUCTION
2. PROBLEM/OPPORTUNITY
3. PROJECT OBJECTIVES
4. SCOPE
5. PREREQUISITES / ASSUMPTIONS
6. SYSTEM ARCHITECTURE AND ARCHITECTURE DESIGN
7. SYSTEM DETAILED DESIGN
8. DIAGRAMS
9. ACRONYMS
10. PEOPLE/RESOURCES


1. Introduction

1.1. This document details the task of building up our world’s datacenters to reintroduce a working infrastructure to a land decimated by virus, undead, and chaos. Three datacenters have been located and are waiting for hardware, users, and life to again echo through their halls. We will discuss the creation of a Windows 2008 Active Directory domain and an Exchange 2013 email system. A SharePoint farm will be built to enable online document and data storage, collaboration among users and teams, and the creation of an intranet and web presence for the organization. There will also be a VDI environment for local users as well as for remote users connecting with the help of a Cisco VPN client.

2. Problem/Opportunity

2.1. The world is in disarray after a virus outbreak that turned many into zombies. A wealthy philanthropist has recruited us to build a new infrastructure that will enable us to begin putting the world back together. The virus was stopped, but not before much of the world was overrun by zombies. The valiant survivors have taken care of the undead, and for now, we’ve won. A famous billionaire, Mr. Phil N. Thropist, has stepped in to help rebuild. His team of engineers has the Internet back up and running, but many datacenters were destroyed. Our opportunity comes from a massive warehouse that was found containing almost limitless hardware. It dates from April 2008, but it will be just what is needed to build out 3 datacenters. With hardware and software in hand we will tackle each datacenter. Server, network, and telecommunication hardware will be racked. VMware clusters will be created. SANs with terabytes of disk will be provisioned, and client VDI environments will be built to interface with critical software solutions like Exchange and SharePoint for quick and timely communication with other survivors around the world. This is our time to shine and bring life out of the chaos that has been our existence since the deadly virus nearly wiped out civilization as we know it.

3. Project Objectives

3.1. Use hardware found in the secured warehouse to build out 3 datacenters

3.2. Establish email connectivity so that teams can communicate/collaborate quickly and effectively


3.3. Create a document/data storage solution for teams to share and store all critical documents and data gathered

3.4. Focus on standardizing the datacenter design to be repeatable as new buildings/locations are found


4. Scope

4.1. Design and implement 3 datacenters in the buildings salvaged from the disaster caused by the virus outbreak

4.2. Create a WAN infrastructure between the 3 locations, along with remote access, so that teams can collaborate and rebuild our civilization

4.3. Establish an email system as well as data storage and collaboration tools for teams to use as rebuild efforts begin

4.4. Utilize found hardware/software from a massive warehouse to build the datacenters

4.5. Document and normalize the design so that new datacenters can be turned up as they are found

5. Prerequisites / Assumptions

5.1. With the help of Mr. Thropist’s engineering team the internet is once again functional and we have access to all of the data and resources that were online at the time of the disaster.

5.2. While fixing ‘The Internet’ the engineering team also established WAN connectivity between datacenters at or near 100 Mbps. This will be critical in making sure that all 3 datacenters can communicate with each other and with the remote users who will be scouring the area, the country, and the world in the hopes of finding more survivors and taking on the daunting task of rebuilding the world.
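As a rough sanity check on what those 100 Mbps links can carry, the sketch below estimates bulk transfer times; the payload sizes and the 70% usable-throughput figure are illustrative assumptions, not measured values.

    # Rough transfer-time estimate over the ~100 Mbps inter-datacenter links.
    # Payload sizes and the efficiency factor are illustrative assumptions.
    LINK_MBPS = 100          # nominal WAN link speed
    EFFICIENCY = 0.7         # assume ~70% usable throughput after overhead

    def transfer_hours(gigabytes: float) -> float:
        """Approximate hours to move `gigabytes` across one WAN link."""
        usable_mbps = LINK_MBPS * EFFICIENCY
        seconds = (gigabytes * 8 * 1000) / usable_mbps   # GB -> megabits, then / Mbps
        return seconds / 3600

    for label, size_gb in [("Exchange database seed", 200),
                           ("Nightly SAN replication delta", 50)]:
        print(f"{label}: ~{transfer_hours(size_gb):.1f} h at {LINK_MBPS} Mbps")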

5.3. The datacenters that have been found were cleaned up and tested for power and HVAC, both inside and outside the datacenter room where the server and networking equipment will be housed. Network cabling has been run in the datacenter, and server and networking racks are available as well.


6. System Architecture and Architecture Design

6.1. The datacenters that we are building are to become the backbone of our new world’s infrastructure. Time is paramount, so for now function will be a much bigger concern than form. But it is vitally important that all of the hardware and software specifications laid out in this document be scalable to the needs of newly found buildings. These designs are also going to be set up to be quickly repeatable. This way we can scale the design up or down to accommodate much larger and much smaller spaces.

6.2. The server and storage infrastructure will consist of Dell PowerEdge R905 servers with maximum RAM and multi-socket, multi-core processors to handle a large number of virtual servers and VDI clients. The data will be saved on a Pillar Axiom 500 SAN that features multiple levels of redundancy.

6.3. The software solutions that will be implemented include an Exchange 2013 environment with redundant CAS/HT and Mailbox servers across the datacenters, with a DAG configuration allowing the mailbox databases to be replicated in all datacenters for fault tolerance. A SharePoint environment will be created to allow for file storage and collaboration for functional teams and will be managed by a SQL 2012 back end.

6.4. A networking infrastructure will be created to handle traffic internally and across the WAN between the 3 datacenters. This will be accomplished with Cisco PIX firewalls, ASR 1000 WAN routers, Nexus 7000 core switches, Catalyst IDF switches, and a Cisco wireless system in each datacenter using Cisco 2100 series wireless controllers and Cisco Aironet 1250 access points to provide the Wi-Fi coverage needed.

6.5. A telecommunications infrastructure will also be created using Avaya S8500 media servers in each datacenter and Avaya 4610 VoIP phones for office and remote communication.


7. System Detailed Design

7.1. Hardware Detailed Design

7.1.1. Three Cisco ASR 1000 WAN routers will be used, one in each datacenter, to provide the connectivity for the WAN links between datacenters and for the internet traffic that remote users will travel over to reach the client VPN tunnel. Every environment that needs WAN connectivity needs a good edge router, and for its time the ASR 1000 series was one of the fastest and most capable edge routers you could find.

7.1.2. The Nexus 7000 switch was Cisco’s flagship switch in the early part of 2008. Three of these switches will be used, one sitting at the core of the network in each datacenter, with dual supervisors for system redundancy and up to 8 more 48-port 1 Gb Ethernet blades. The switch will be the default gateway for each datacenter and will have VLANs for the Server, Client, vMotion, and Wireless networks.
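To keep the layer 2/3 layout repeatable across sites, the VLAN and subnet plan can be expressed as data and stamped out per datacenter. The sketch below is illustrative only; the VLAN IDs and 10.x.y.0/24 ranges are assumptions, and the authoritative addressing lives in the IP spreadsheet referenced in section 8.

    # Hypothetical, repeatable VLAN/subnet plan per datacenter.
    # VLAN IDs and the 10.<site>.<vlan>.0/24 ranges are illustrative assumptions.
    VLANS = {"Server": 10, "Client": 20, "vMotion": 30, "Wireless": 40}

    def site_plan(site_id: int) -> dict:
        """Return a VLAN-name -> subnet mapping for one datacenter (site_id 1..3)."""
        return {name: f"10.{site_id}.{vlan_id}.0/24"
                for name, vlan_id in VLANS.items()}

    for dc in (1, 2, 3):
        print(f"DC{dc}:", site_plan(dc))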

7.1.3. With the office buildings not being well organized, we are expecting seating to be at a premium, and access to network drops cannot be guaranteed for everyone. For this reason we will add a Cisco 2100 wireless network controller in each datacenter and will place 6 Cisco Aironet 1250 access points on each floor throughout the buildings to accommodate all the users we’re expecting. Wireless will also allow a much larger group of users to be on site, as well as provide more mobility for users with laptops and other Wi-Fi devices.

7.1.4. Using the Cisco PIX 515E firewall, the datacenters will be able to ensure that access is secure, even in a post-apocalyptic world. The firewall will also enable the engineers to create remote access VPN tunnels for clients who need to access the network from remote locations such as internet-ready sites like cyber cafés, or from their laptops using the Cisco AnyConnect VPN client.
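The firewall rule set itself is out of scope for this phase, but as a sketch, the externally published services each datacenter would need to permit look roughly like the list below; the ports are common defaults and are assumptions rather than part of this design.

    # Illustrative list of externally published services per datacenter firewall.
    # Ports are common defaults; the actual rule set is not specified here.
    PUBLISHED_SERVICES = [
        {"name": "Remote-access VPN",             "protocol": "TCP/UDP", "port": 443},
        {"name": "Outlook Web App (OWA)",         "protocol": "TCP",     "port": 443},
        {"name": "Inbound SMTP (edge transport)", "protocol": "TCP",     "port": 25},
    ]

    for svc in PUBLISHED_SERVICES:
        print(f"permit {svc['protocol']:<8} {svc['port']:<4} -> {svc['name']}")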

7.1.5. Telephony will be accomplished with the Avaya S8500 media gateways, which connect to Avaya SCC control carriers holding the (XX cards) that are in turn connected to the PSTN. In the offices and remotely we can use Avaya 4610SW IP phones, as the Avaya system can make use of SIP trunking, and when a user establishes a VPN tunnel from their home DSL the phone will also be able to communicate. In addition to the S8500, the telecom rack will need an Avaya SCC Control Carrier cabinet that will house a TN464 DS1 card, a TN2602 Med Pro card, a TN2312 IPSI card, and a TN799 CLAN card. These are components of the solution and are needed to make the connection to the PSTN as well as support the internal phone infrastructure.


7.1.6. For the SAN in our datacenters we found the Axiom 500 from Pillar Data Systems. Pillar was once an independent organization and today is part of Oracle. The Axiom 500 will provide connectivity using either Fibre Channel or iSCSI. Up to 64 bricks can be interconnected, easily allowing us to reach the 300 TB maximum we have for this phase of the design. We will also set up SAN replication between the datacenters using the NSS (Network Storage Server) from FalconStor. This virtual appliance runs in each datacenter, allows the Pillar to replicate to the other Axioms, and will be a key component in the BC/DR phase of our project.

7.1.7. The ESXi 5.1 hosts will run on Dell PowerEdge R905 servers. These servers boast 256 GB of RAM, 4 AMD Opteron 8000-series processors with 6 cores per socket, and expansion slots for ample network adapters and Fibre Channel cards for SAN connectivity. The datacenter diagram will list the number of servers in each datacenter to accommodate the server VMs as well as the VDI environment.
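As a rough per-host sizing check under assumed per-VM allocations (the 2 vCPU / 8 GB server VM, the 1 vCPU / 2 GB desktop, the overhead reserve, and the consolidation ratios below are assumptions, not the build standard):

    # Back-of-the-envelope VM density for one PowerEdge R905 ESXi host.
    # Per-VM allocations, overhead, and vCPU:pCPU ratios are assumptions.
    HOST_RAM_GB = 256
    HOST_CORES = 4 * 6            # 4 sockets x 6 cores per socket
    RAM_OVERHEAD_GB = 16          # reserved for ESXi and virtualization overhead

    SERVER_VM = {"vcpu": 2, "ram_gb": 8}
    VDI_VM    = {"vcpu": 1, "ram_gb": 2}

    def max_vms(vm: dict, vcpu_ratio: float) -> int:
        """VMs per host, limited by RAM and by an assumed vCPU:pCPU ratio."""
        by_ram = (HOST_RAM_GB - RAM_OVERHEAD_GB) // vm["ram_gb"]
        by_cpu = (HOST_CORES * vcpu_ratio) // vm["vcpu"]
        return int(min(by_ram, by_cpu))

    print("Server VMs per host:", max_vms(SERVER_VM, vcpu_ratio=4))   # ~30
    print("VDI desktops per host:", max_vms(VDI_VM, vcpu_ratio=8))    # ~120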

7.1.8. We will place 4 Catalyst 2960S 10/100/1000 switches on each floor of the buildings, which will use 1 Gb Ethernet uplinks to the Nexus 7000 in the datacenter. The network diagrams for each datacenter show the number of switches required for each floor of the building to support the maximum number of onsite users.
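A quick port-count check per floor follows; the 48-port switch model and the two ports reserved per switch for uplinks are assumptions.

    # Usable access ports per floor with 4 Catalyst 2960S switches.
    # The 48-port model and 2 reserved uplink ports per switch are assumptions.
    SWITCHES_PER_FLOOR = 4
    PORTS_PER_SWITCH = 48
    UPLINK_PORTS_PER_SWITCH = 2   # reserved for uplinks to the Nexus 7000 core

    usable_ports = SWITCHES_PER_FLOOR * (PORTS_PER_SWITCH - UPLINK_PORTS_PER_SWITCH)
    print(f"Usable access ports per floor: {usable_ports}")   # 184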

7.1.9. Dell M1530 laptops will be deployed for users who need to leave the datacenters and bravely venture outside. In another life these were graphic design and gaming powerhouse laptops; they will be more than capable of connecting back to the datacenter using the Cisco AnyConnect VPN client over the DSL networks available in users’ homes and remote locations.

7.1.10. The desktops found were Dell OptiPlex 755 systems that will be configured with a local copy of Windows 7 Professional as well as the Horizon View client, which will give users access to a virtual desktop for central access and storage of data. The workstations also come equipped with a Broadcom wireless network adapter in the event that crowding occurs in the datacenter they are working in.

7.2. Software Detailed Design

7.2.1. Exchange 2013 will be used to create a fault-tolerant email solution by building 5 VM servers at each datacenter. 2 servers will hold the CAS and HT roles, one server will reside in the DMZ and have the edge transport role, and the other 2 VM servers will have the Mailbox server role for the Exchange databases. The mailbox servers will also be part of a DAG and configured so that the 2 backup copies of each database will be held offsite in the other two datacenters. This will prevent a single point of failure for any Exchange database. OWA will also be configured to allow remote teams and individuals to access their mail even if they find themselves at a location with internet access but without their laptops.
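The copy-placement rule above can be expressed and sanity-checked as data. The sketch below is illustrative; database and site names are placeholders, and the actual DAG would be configured through the Exchange tools.

    # Hypothetical DAG copy layout: each database is active in its home
    # datacenter and keeps its two passive copies in the other two sites.
    SITES = ["DC1", "DC2", "DC3"]

    def copy_layout(home: str) -> dict:
        """Active copy in the home site, passive copies in the other two."""
        others = [s for s in SITES if s != home]
        return {"active": home, "passive": others}

    for db_home in SITES:
        layout = copy_layout(db_home)
        # Every site holds a copy, so losing one datacenter never loses a database.
        assert {layout["active"], *layout["passive"]} == set(SITES)
        print(f"DB homed in {db_home}: active in {layout['active']}, "
              f"passive copies in {layout['passive']}")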


7.2.2. Windows 2008 R2 will be installed as the server operating system for all the VM servers in the datacenters. There will be at least 2 domain controllers in each datacenter running DNS, DHCP, and NTP, and all will host copies of the global catalog.

7.2.3.MS Office 2010 will be installed on the remote access laptops, desktops and VDI images so that users can access Outlook, create documents, spreadsheets and presentations when/if needed.

7.2.4.A SharePoint 2013 farm will be created to accomplish the design goal of having a central repository for documents and data created and needed by the support teams. An intranet page will also be built with specific access controls for teams to keep their data separate in an attempt to minimize overlap and confusion. The SharePoint farm will be supported on the back end by a SQL 2012 database cluster.

7.2.5. We will create a SQL 2012 database cluster for the SharePoint environment. This cluster will house the database(s) needed by SharePoint. Using a multisite failover cluster design, we will enable failover of the SQL 2012 database(s) between the datacenters.
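One common way to keep a multisite cluster from losing quorum when a single datacenter goes offline is a node-and-file-share-majority vote layout. The sketch below is purely illustrative, since this design does not specify node counts or a witness location.

    # Illustrative quorum math: one SQL node in DC1, one in DC2, and a file
    # share witness in DC3 (all assumptions, not part of this design).
    VOTES = {"DC1-SQL01": 1, "DC2-SQL01": 1, "DC3-FileShareWitness": 1}

    total_votes = sum(VOTES.values())
    votes_needed = total_votes // 2 + 1
    print(f"Total votes: {total_votes}, needed for quorum: {votes_needed}")
    # Losing any single site still leaves 2 of 3 votes, so the cluster stays up.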

7.2.6. VMware ESXi 5.1 will be installed on all of the Dell PowerEdge R905 servers. DC1 will have 100 servers, DC2 will have 20 servers, and DC3 will have 10 servers. Each datacenter will have a Server cluster and a VDI cluster. This will help keep resources separate so the server and client environments don’t interfere with each other’s performance. Local subnets have been identified for all of the datacenters, and routing will be configured and enabled between them. HA/DRS will be enabled, but for now only within each local datacenter, as the WAN link isn’t set up to handle a long-distance vMotion scenario.
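For planning purposes, the aggregate resources per datacenter can be tallied from the host figures in 7.1.7. The 70/30 split between the Server and VDI clusters in the sketch below is an assumption made for illustration; the design itself does not fix that split.

    # Aggregate host resources per datacenter, using the R905 figures from 7.1.7.
    # The 70/30 Server/VDI cluster split is an assumption for illustration.
    HOSTS = {"DC1": 100, "DC2": 20, "DC3": 10}
    RAM_PER_HOST_GB = 256
    CORES_PER_HOST = 4 * 6
    SERVER_CLUSTER_SHARE = 0.7

    for dc, count in HOSTS.items():
        server_hosts = round(count * SERVER_CLUSTER_SHARE)
        vdi_hosts = count - server_hosts
        print(f"{dc}: {count} hosts ({server_hosts} server / {vdi_hosts} VDI), "
              f"{count * RAM_PER_HOST_GB / 1024:.1f} TB RAM, "
              f"{count * CORES_PER_HOST} cores")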

7.2.7. VMware Horizon View will be installed and configured so that we can build and maintain Windows 7 workstation images and rapidly deploy up to 3000 virtual desktops to users as needed across the 3 datacenters.
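A rough check that the VDI clusters can absorb the 3000-desktop target follows; the per-desktop RAM allocation and the VDI host counts reuse the assumptions from the sketches above.

    # Rough RAM-bound check of the 3000-desktop target across the VDI clusters.
    # Per-desktop RAM and VDI host counts are assumptions carried from above.
    DESKTOPS_TARGET = 3000
    DESKTOP_RAM_GB = 2                              # assumed per-desktop allocation
    HOST_USABLE_RAM_GB = 256 - 16                   # per-host RAM less ESXi overhead
    VDI_HOSTS = {"DC1": 30, "DC2": 6, "DC3": 3}     # from the 70/30 split sketch

    capacity = sum(hosts * HOST_USABLE_RAM_GB // DESKTOP_RAM_GB
                   for hosts in VDI_HOSTS.values())
    print(f"RAM-bound desktop capacity: {capacity} (target {DESKTOPS_TARGET})")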

7.2.8. A VMware vCenter Server 5.1 will be installed in each datacenter, and the 3 servers will be configured in Linked Mode so that each datacenter will be aware of its partners. While this is not a must for this phase of the project, it will make phase 2 go a great deal smoother when BC/DR is needed.

7.2.9. The Cisco AnyConnect VPN client will be installed on all laptops and any desktops that will leave the datacenters so that a remote client VPN can be established back to the datacenter network. This will give teams outside the office the access they need to continue to work and collaborate.


8. Diagrams

8.1. Network Diagrams: RebuildingTheWorld.vsdx
8.2. IP Spreadsheet: RebuildingTheWorld-IPAddressScheme.xls
8.3. Hardware Manifest: RebuildingTheWorld-HardwareManifest.xlsx

9. Acronyms

9.1. CAS – Client Access Server (Exchange 2013)
9.2. HT – Hub Transport (Exchange 2013)
9.3. DAG – Database Availability Group (Exchange 2013)
9.4. BC – Business Continuity
9.5. DR – Disaster Recovery
9.6. DNS – Domain Name System
9.7. HA – High Availability
9.8. DRS – Distributed Resource Scheduler
9.9. PSTN – Public Switched Telephone Network
9.10. NSS – Network Storage Server (FalconStor)

10. People/Resources

10.1. Information Systems Project Lead – Joe Graziano
10.2. Technology Team – Datacenter 1
10.3. Technology Team – Datacenter 2
10.4. Technology Team – Datacenter 3
10.5. Technology Team – LAN/WAN
10.6. Financial/Owner – Mr. Phil N. Thropist
