Celerra ICON Student Guide


Course Introduction - 1

Celerra ICON: Celerra Training for Engineering

    Course Introduction

Course Introduction - 2

    Revision History

Revision Number     Course Date        Revisions
1.0                 February 2006      Complete

    Copyright 2006 EMC Corporation. All Rights Reserved.

    EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

    THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

    Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

    AutoIS , DG, E-Infostructure, EMC, EMC2, CLARalert, CLARiiON, HighRoad, Navisphere, PowerPath, ResourcePak, SRDF, Symmetrix, The EMC Effect, VisualSAN, and WideSky are registered trademarks, and Access Logix, ATAtude, Automated Resource Manager, AVALONidm, C-Clip, CacheStorm, Celerra, Celerra Replicator, Centera, CentraStar, CLARevent, Connectrix, CopyCross, CopyPoint, CrosStor, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, EMC Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC Enterprise Storage, EMC EnterpriseStorage Network, EMCLink, EMC OnCourse, EMC Proven, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover, MirrorView, OnAlert, OpenScale, PowerVolume, RepliCare, SafeLine, SAN Manager, SDMS, SnapSure, SnapView, SnapView/IP, SRDF, StorageScope, SymmAPI, SymmEnabler, TimeFinder, Universal Data Tone, where information lives are trademarks of EMC Corporation.

    All other trademarks used herein are the property of their respective owners.

Course Introduction - 3

Prerequisites

• Successful completion of the following EMC courses:
  - EMC Technology Foundations (ETF), or the NAS Foundations self-study module from that course
  - Celerra Features and Functionality (Knowledgelink)
  - Choice of the following NAS hardware platform self-studies, based on relevance, is recommended (Knowledgelink):
    - CNS Architectural Overview
    - NS Series Architectural Overview
    - NSX Architectural Overview
• Basic knowledge of:
  - UNIX administration
  - Microsoft Windows 2000/2003
  - TCP/IP networking
  - Storage systems concepts

Course Introduction - 4

Course Objectives

• Describe the functional components and operations of the major building blocks that make up a Celerra NAS solution
• Install the operating system and NAS software on a Control Station and the DART operating environment on a Data Mover
• Configure network interfaces
• Configure a Celerra Data Mover for high availability
  - Back-end Data Mover failover
  - Network high availability
• Describe the storage configuration requirements for both a CLARiiON and a Symmetrix back-end
• Configure and manage Celerra volumes and file systems
• Export Celerra file systems for NFS and CIFS access
• Manage CIFS in both Windows-only and mixed environments
• Implement and manage SnapSure and Celerra Replicator
• Implement Celerra iSCSI target

Course Introduction - 5

Agenda Day 1

• Class Introduction
• Celerra Overview
• Hardware Overview
• Software Installation Concepts
• Planning, Installing, and Configuring a Gateway System
• Installation Lab

Course Introduction - 6

Agenda Day 2

• Celerra Management & Support
  - Command Line Interface
  - Celerra Manager
• Configuring Network Interfaces
• Data Mover Failover
• Network High Availability
• Lab:
  - Upgrading NAS software
  - Configuring Network Interfaces
  - Configuring Data Mover Failover
  - Test and Verify

Course Introduction - 7

Agenda Day 3

• Back-end Storage Configuration
  - Review CLARiiON storage concepts
  - Symmetrix IMPL.bin file requirements
• Configuring Celerra Volumes and File Systems
• Exporting File Systems for NFS Access
• Introduction to CIFS and Standalone CIFS Server
• Lab:
  - Configuring Volumes and File Systems
  - Exporting File Systems for NFS access
  - Test and verify Data Mover failover with NFS clients
  - Standalone CIFS Server Configuration

Course Introduction - 8

Agenda Day 4

• User Mapping in a CIFS Environment
• Configuring CIFS Servers on the Data Mover
• File System Permissions
• Virtual Data Mover
• Lab:
  - Usermapper
  - CIFS Configuration
  - Windows Integration
  - VDMs

Course Introduction - 9

    Agenda Day 5

• SnapSure Concepts and Configuration
• Celerra Replicator Overview
• iSCSI Concepts and Implementation
• Lab:
  - SnapSure Implementation
  - Local Celerra Replication
  - iSCSI Implementation with Windows Host

Course Introduction - 10

    Closing Slide

EMC Celerra Overview - 1

Celerra ICON: Celerra Training for Engineering

    Celerra Overview

EMC Celerra Overview - 2

    Revision History

Revision Number     Course Date        Revisions
1.0                 February 2006      Complete
1.2                 May 2006           Updates


EMC Celerra Overview - 3

Module Objectives

• Describe the current Celerra NAS product offering
• Locate resources used in setting up and maintaining a Celerra:
  - Documentation CD
  - Support Matrix
  - NAS Engineering websites
• Describe the environment that is used for the hands-on lab exercises

EMC Celerra Overview - 4

EMC NAS Vision: Delivering on ILM

• Infinite Scalability: massive consolidation workloads, scalable file services for grids, data service continuity
• Optimized Data Placement: object-level ILM, filesystem virtualization, system virtualization
• Global Accessibility: unified name space, wide area filesystems
• Centralized Management: information security, unified management

EMC Celerra Overview - 5

EMC NAS Platforms

SIMPLE WEB-BASED MANAGEMENT

Integrated models (DART operating system; integrated CLARiiON storage):

• NS500/350: One or two Data Movers; 8 or 16 TB usable Fibre Channel/ATA capacity; four or eight Gigabit Ethernet network ports (copper); two Fibre Channel HBAs per Data Mover; high availability
• NS700: One or two Data Movers; 16 or 32 TB usable Fibre Channel/ATA capacity; 8 or 16 Gigabit Ethernet network ports (copper/optical); two Fibre Channel HBAs per Data Mover; high availability
• NS704: Four Data Movers; 48 TB usable Fibre Channel/ATA capacity; 32 Gigabit Ethernet network ports (24 copper, 8 optical); advanced clustering

Gateway models (DART operating system; NAS gateway to SAN; CLARiiON or Symmetrix storage):

• NS500/350G: One or two Data Movers; 8 or 16 TB usable Fibre Channel/ATA capacity; four or eight Gigabit Ethernet network ports (copper); two or four Fibre Channel HBAs; high availability
• NS700G: One or two Data Movers; 16 or 32 TB usable Fibre Channel/ATA capacity; 8 or 16 Gigabit Ethernet network ports (copper/optical); two or four Fibre Channel HBAs; high availability
• NS704G: Four Data Movers; 48 TB usable Fibre Channel/ATA capacity; 32 Gigabit Ethernet network ports (24 copper, 8 optical); eight Fibre Channel HBAs; advanced clustering
• Celerra NSX: Four to eight X-Blades; 112 TB usable Fibre Channel/ATA capacity; 64 Gigabit Ethernet ports (48 copper, 16 optical); 16 Fibre Channel HBAs; advanced clustering

EMC offers the broadest range of NAS platforms. In addition to the platforms above, a legacy 14-Data-Mover CNS/CFS configuration was available in the past. While that hardware was considerably different, it ran the same DART operating system as the current offerings. For a short time, we also offered the NetWin 110/200, a WSS 2003-based low-end configuration. Note: the NS600 is no longer available.

EMC Celerra Overview - 6

    Documentation

    http://powerlink.emc.com/km/appmanager/km/secureDesktop?_nfpb=true&_pageLabel=servicesDocLibPg&internalId=0b01406680024e3f&_irrt=true

EMC Celerra Overview - 7

    EMC NAS Interoperability Matrix

    http://www.emc.com/interoperability/matrices/nas_interoperability_matrix.pdf

EMC Celerra Overview - 8

    NAS Engineering Home

    http://naseng/default.html

EMC Celerra Overview - 9

    NAS Support

EMC Celerra Overview - 10

Lab Scenario for Hurricane Marine, LTD

• Real-world simulation
• Preconfigured:
  - NIS
  - W2K and UNIX user accounts
  - DNS
• Multiple operating systems: Sun, Win2k
• Managed Ethernet switches: VLANs and segregated network, high availability
• Not optimized for performance

    As you proceed through this course, you will find it useful to understand how the Celerra lab is configured. In the lab, you will work for a fictitious company, Hurricane Marine, LTD, a manufacturer of yachts.

EMC Celerra Overview - 11

UNIX Environment

NIS Domain: hmarine.com
NIS Server: nis-master (10.127.*.163)

UNIX Clients:
  sun1    10.127.*.11
  sun2    10.127.*.12
  sun3    10.127.*.13
  sun4    10.127.*.14
  sun5    10.127.*.15
  sun6    10.127.*.16

UNIX environment for Hurricane Marine, LTD

Hurricane Marine's UNIX network is supported by one NIS master server. That server's host name is nis-master. Your instructor will play the role of the administrator and will keep the password to nis-master confidential. You, on the other hand, will be logging in to your UNIX workstations as various NIS users, as well as integrating the Celerra with NIS.

For a list of NIS users and groups, see Appendix D, Hurricane Marine's UNIX Users and Group Memberships.

EMC Celerra Overview - 12

Windows 2000 Network

Root Domain: hmarine.com
  Domain Controller: hm-1.hmarine.com

Sub Domain: corp.hmarine.com
  Domain Controller: hm-dc2.hmarine.com
  Computer Accounts: w2k1, w2k2, w2k3, w2k4, Data Movers
  All user accounts

Windows 2000 network for Hurricane Marine, LTD

    Hurricane Marine will soon be implementing a Microsoft Windows 2000 network in Native Mode. They will need to test Celerra functionality to support Active Directory.

The Windows 2000 network consists of two domains. The hmarine.com domain is the root of the forest, while corp.hmarine.com is a subdomain of the root. While the root domain is present solely for administrative purposes at this time, corp.hmarine.com will hold containers for all users, groups, and computer accounts.

EMC Celerra Overview - 13

Network Configuration

• 5 separate, routed TCP/IP subnets
• Multiple VLANs
• UNIX and Windows 2000 clients
• DNS and NIS

Diagram: a router joins Subnet A (10.127.*.0, UNIX clients), Subnet C (10.127.*.64, Windows 2000 clients), Subnet D (10.127.*.96) and Subnet E (10.127.*.128) serving the EMC Celerra/Symmetrix systems, and Subnet F (10.127.*.160, NIS and Windows 2000 servers).

Network configuration for Hurricane Marine, LTD

Some important features of Hurricane Marine's network are as follows:
• The work performed by different employees presents differing needs. For example, while the sales staff all use Microsoft Windows applications, the engineering group requires UNIX workstations.
• The security for these two environments is managed separately. The UNIX network uses NIS to manage security, while the Microsoft network uses a Windows 2000 (Native Mode) network for security.
• Hurricane Marine's network is currently divided into five subnets connected by a router, for security reasons.
• DNS has been implemented at this site for host name resolution.

    Terminology

NIS: Network Information Service

    DNS: Domain Name System

    VLAN: Virtual Local Area Network

EMC Celerra Overview - 14

    Closing Slide

Hardware Review - 1

Celerra ICON: Celerra Training for Engineering

    Hardware Review

Hardware Review - 2

    Revision History

Rev Number          Course Date        Revisions
1.0                 February 2006      Complete

Hardware Review - 3

    Celerra Hardware Review

    Upon completion of this module, you will be able to:

• Identify the location of key components and interconnections for the NS500/350, NS600/700, and NSX models:
  - Data Mover
  - Control Station
  - Storage Processor (on CLARiiON)
  - Private LAN Ethernet switch
  - Call Home modem

• Explain the difference between NS Integrated and Gateway systems, and the difference between a direct-connected and a fabric-connected gateway

    The objectives for this module are shown here.

Hardware Review - 4

Purpose of the Celerra Network Server

• NAS provides client access to storage via:
  - File system layer
  - Network services
• Celerra functions as a file server
• Supported protocols:
  - NFS
  - CIFS
  - FTP/TFTP
  - iSCSI

Diagram: Windows clients (mapped drive or share access), UNIX clients (NFS mount), FTP clients, and iSCSI initiators reach the Celerra Data Mover over the TCP/IP network; the Data Mover publishes CIFS shares, NFS exports, FTP, and iSCSI targets from its file system layer, which resides on a CLARiiON or Symmetrix storage subsystem (disks).

    Concept of Network Attached Storage

Network Attached Storage (NAS) provides clients with access to disk storage over an IP network. This is done by creating and managing file systems, and by providing at least one network service to publish those file systems to the network.

    Purpose of the Celerra Network Server

The Celerra Network Server functions as a highly available NAS file server in a TCP/IP network. Celerra provides the services of a file server via one or more of the following protocols:

    Network File System (NFS)

    Common Internet File System (CIFS)

    File Transfer Protocol (FTP) and Trivial FTP (TFTP)

    Internet Small Computer System Interface (iSCSI)

    Client access

The network client can access the Celerra's file systems via several methods. Windows clients typically access CIFS shares via a mapped network drive or over Network Neighborhood. UNIX clients usually gain access via an NFS mount. Windows and UNIX clients can also get access over FTP, TFTP, and/or iSCSI services.
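As a rough illustration of these client-side access methods (the Data Mover host name, export path, and share name below are hypothetical examples, not part of any particular configuration):

    # UNIX client: mount an NFS export published by a Data Mover
    mount -t nfs dm2:/export/fs01 /mnt/fs01

    # Windows client (run from cmd.exe): map a drive letter to a CIFS share
    net use Z: \\dm2\fs01

    # Any client: connect to the Data Mover's FTP service
    ftp dm2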

Hardware Review - 5

Three Main Components

• Data Mover(s)
  - DART operating system
  - File server
  - Highly reliable hardware and configurable for high availability
• Control Station
  - Linux operating system
  - Dedicated management host
  - Configuration and monitoring
• Storage Subsystem
  - Completely separate CLARiiON or Symmetrix; may be dedicated or shared
  - Contains all production data and the complete Celerra configuration database

Diagram: the Data Mover provides NFS, CIFS, FTP, and iSCSI services to the production network and connects to the storage subsystem over Fibre Channel; the Control Station manages the Data Mover.

    Data Movers

A Celerra system can contain one or more individual file servers running EMC's proprietary DART operating system. Each of these file servers is called a Data Mover. One or more Data Movers in a Celerra can act as a hot spare, or standby, for other production Data Movers, providing high availability.

Control Station

The Celerra also provides one management host, the Control Station, which runs the Linux operating system and Network Attached Storage (NAS) management services (e.g., Data Mover configuration and monitoring software). A second Control Station may also be present for redundancy.

Separate Storage Subsystem

All production data and the complete configuration database of the Celerra are stored on a separate storage subsystem. Data Movers contain no hard drives.
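A quick, hedged example of viewing these components from the Control Station's CLI (the exact output columns vary by NAS code release):

    # list the Data Movers known to this Control Station; the type column
    # distinguishes primary (nas) Data Movers from standby Data Movers
    nas_server -list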

Hardware Review - 6

Two Types of Celerra Configurations

Two main types of configuration:

• Integrated
  - Storage subsystem is dedicated to the Celerra NS
  - Celerra is directly connected to the storage array
  - Storage array must be CLARiiON (without Access Logix)
• Gateway
  - Storage subsystem can also provide storage to other hosts
  - Supports Symmetrix and/or CLARiiON (with Access Logix)

    Two Types of Celerra NS Configurations

    A Celerra configuration can be classified as one of two types: Integrated or Gateway.

    Celerra Integrated

    In an integrated configuration, the entire disk array is dedicated to the Celerra Network Server. No other hosts can utilize any of the storage. The Celerra Data Movers are directly connected to the storage subsystem.

    The Celerra Integrated configuration supports only a CLARiiON storage subsystem.

    Celerra Gateway

    In the gateway configuration, the storage subsystem can be used to provide storage to other hosts in addition to the Celerra Network Server.

    A Celerra Gateway can use Symmetrix and/or CLARiiON for the storage subsystem.

Hardware Review - 7

Celerra NS Integrated Installation Methods

• Factory-installed
  - Setup mostly completed at factory
  - Init Wizard provides basic network configuration for Linux on the Control Station
• Field-installed
  - Must connect cables
  - May require overwriting of factory image
  - Procedure furnished by Celerra Technical Support

Diagram: the Control Station and Data Mover connect over direct Fibre Channel to a CLARiiON-only storage subsystem dedicated to the Celerra; the system ships pre-loaded.

Factory-installed

When the Celerra NS Integrated arrives from the factory, the Celerra software is pre-loaded. When the system is powered on, a simple initialization wizard runs, providing the opportunity to enter site-specific network configuration information for the Linux Control Station.

    Field-installed

    The Celerra NS Integrated can sometimes require that you manually perform the installation. When the manual method of installation is required (e.g. the factory setup is flawed, or the system is ordered without a cabinet), the original factory image, if present, must be overwritten. This involves CLARiiON clean-up procedures that will be furnished by Celerra Technical Support when needed.

Hardware Review - 8

Celerra NS Gateway, Direct-Connected

• Data Mover connects directly to CLARiiON
  - Two ports on the CLARiiON Storage Processors are dedicated to Celerra
  - Symmetrix not supported
• Additional hosts can attach to unused ports on the CLARiiON

Diagram: the Control Station and Data Mover connect over direct Fibre Channel to a CLARiiON-only storage subsystem; other hosts (Sun, Linux, MS) attach through an FC fabric.

    Direct-connected Celerra NS Gateway configurations use a direct Fibre Channel connection to the CLARiiON storage subsystem.

    The CLARiiON may also be used to provide storage to other hosts.

Hardware Review - 9

Celerra NS Gateway, Fabric-Connected

• Celerra connects to storage using Fibre Channel switch(es)
  - CLARiiON and/or Symmetrix
  - Only configuration for Symmetrix
• Other hosts can share the storage system

Diagram: the Control Station and Data Mover connect through an FC fabric to a CLARiiON or Symmetrix storage subsystem shared with other hosts (Sun, Linux, MS).

Fabric-connected Celerra NS Gateway configurations use a SAN Fibre Channel connection to the CLARiiON and/or Symmetrix storage subsystem.

    The fabric-connected gateway is the only Celerra NS configuration that supports using a Symmetrix storage array.

Using a fabric-connected configuration allows wider utilization of the CLARiiON Storage Processors' FE (front-end) ports.

    The storage array may also be used to provide storage to other hosts.

Hardware Review - 10

General Model Type Marketing Designations

• Models ending with a 0
  - Integrated only
  - 2 Data Movers
• Models ending with an I
  - The I stands for Integrated
• Models ending with an S
  - Single Data Mover
  - Upgradeable
  - Integrated or Gateway
• Models ending with a G
  - The G stands for Gateway
  - At least 2 Data Movers
• Models ending with GS
  - Single-DM, Gateway connected
• The prefix NSX
  - NSX bladed series
  - Based on latest hardware architecture
  - Gateway configuration only

    S Models

    The S stands for single Data Mover Model.

    These systems can typically be upgraded with an additional Data Mover. If you upgraded a single Data Mover NS Series device you would no longer refer to it as an S.

    Having only one Data Mover does not restrict the installation type or deployment method. An S series device can be deployed either as an Integrated system or as a Gateway system.

0 Models

    The 0 denotes an integrated system.

    These systems contain two Data Movers.

    These systems are deployed as Integrated systems.

    G Models

    The G stands for Gateway.

    The Gateway can be either Direct or Fabric attached.

GS Models

This represents a combination of the G and S.

    I Models

    The I stands for Integrated.

NSX Prefix

This represents the NSX bladed series.

Hardware Review - 11

Terminology Clarification: Back-end and Front-end

Component                                   Front-end                                       Back-end
Celerra Data Movers and Control Station     NAS clients                                     Storage subsystem
Symmetrix or CLARiiON storage system        Connected hosts (direct or fabric connected)    Physical disks in DAE

Diagram: clients on the IP network connect to the Celerra, which connects through the SAN to Symmetrix and/or CLARiiON storage.

    It is important to understand that the terms back-end and front-end are in reference to the component being discussed.

    Storage System

For the CLARiiON SPs, the back-end refers to the disk array enclosures (DAEs) to which they are connected via Fibre Channel Arbitrated Loop, while the front-end refers to the Fibre Channel connection to the hosts (possibly via a Fibre Channel switch). With a Symmetrix, the front-end is the FA director and port that connect to host systems, and the back-end is the DA (Disk Adapter) director that connects to the physical drive modules.

    Celerra Data Movers and Control Station

    For the components of the Celerra Network Server the back-end refers to the storage subsystem (i.e. the CLARiiON and/or Symmetrix), while the front-end refers to the NAS clients in the production TCP/IP network.

Hardware Review - 12

Data Mover High Availability

• Provided by the Standby Data Mover option
  - Requires two or more DMs
• When a DM failure occurs:
  - Control Station initiates failover
  - Triggered by communications failure between CS and DM
  - Standby takes over and provides services of failed DM
  - Little or no interruption
• Failover policies: Automatic, Retry, Manual
• Installation scripts automatically configure
  - If 2 or more DMs are present at install, one is configured as Standby
  - Auto policy

Diagram: Primary Data Mover and Standby Data Mover.

    Data Mover high availability

    In Celerra systems with two or more Data Movers, Data Mover failover can be configured to provide high availability in the event of a Data Mover failure. In these configurations one or more Data Movers serves as a Standby Data Mover. The production Data Mover is referred to as a Primary Data Mover, or a type NAS Data Mover.

    Failover policies

When Data Mover failover is configured, a predetermined failover policy is specified. This policy determines what action is required for the failover to take place in the event that the Primary Data Mover goes down. The policies are Automatic, Retry, and Manual.
• Automatic policy: enacts Data Mover failover to the Standby immediately when a failure of the Primary Data Mover is detected.
• Retry policy: when failure of the Primary is detected, first tries to reboot the Data Mover; if this does not resolve the problem, Data Mover failover to the Standby is enacted.
• Manual policy: Data Mover failover will only occur via administrative action.
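A minimal sketch of the decision logic behind the three policies (hypothetical shell pseudocode, not EMC's implementation; failover_to_standby and reboot_primary are placeholder names). On a real Control Station the standby relationship and policy are set up with the server_standby command, whose exact syntax depends on the NAS code release:

    # runs on the Control Station when communication with a Primary DM is lost
    case "$FAILOVER_POLICY" in
        auto)   failover_to_standby ;;                      # fail over immediately
        retry)  reboot_primary || failover_to_standby ;;    # reboot first; fail over only if that fails
        manual) echo "Failover requires administrator action" ;;
    esac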

    Data Mover failover and Celerra installation

    In Celerra systems with more than one Data Mover, the Celerra installation script will automatically configure one Data Mover as Standby and the remainder as Primaries.

Hardware Review - 13

How Data Mover Failover Works

• Normal operations
  - Control Station constantly monitors all DMs
• When a Primary DM fails:
  1. Control Station instructs Standby to take over
  2. Standby assumes identity of failed DM and provides all production services to clients
  3. Original Primary goes into failed state
• After the problem is resolved
  - Administrator manually initiates restoration of the original Primary

Diagram: the Control Station monitors the Primary and Standby Data Movers; on failure it tells the Standby to take over, and after repair it restores the original Primary.

    How Data Mover failover works:

    During normal operation the Celerra Control Station continually monitors the status of all Data Movers. If a Primary Data Mover should experience a failure, the Control Station will instruct the Standby Data Mover to take over as Primary while forcing the original Primary, if it is still running, into a failed state.

    Once failover is enacted, the Standby Data Mover becomes Primary and resumes the entire identity of the failed Data Mover. In most cases, this process should have little or no noticeable effect on user access to data.
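After the failed Data Mover has been repaired, the administrator fails services back from the Control Station. A hedged example using the server_standby command (treat the exact arguments as an assumption; they vary by NAS code release and are documented in the command's man page):

    # restore services from the standby to the repaired primary Data Mover (here server_2)
    server_standby server_2 -restore mover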

Hardware Review - 14

Control Station Failover

• Optional redundant Control Stations
• Primary and standby Control Stations monitor each other using a heartbeat protocol
  - Over a dual internal Ethernet network
• Standby Control Station monitors the primary CS
• If a failure is detected, the standby takes control
• Standby will initiate Call Home

Diagram: CS0 (default primary Control Station) and CS1 (default standby Control Station) manage the primary and standby Data Movers.

    Since data flow is separated from control flow, you can lose the Control Station and still access data through the Data Movers. But you cannot manage the system until Control Station function is re-established. EMC provides Control Station failover as an option.

Celerra supports up to two Control Stations per Celerra cabinet. When running a configuration with redundant Control Stations, the standby Control Station monitors the primary Control Station's heartbeat over the redundant internal network. If a failure is detected, the standby Control Station takes control of the Celerra and mounts the /nas file system.

If a Control Station fails, individual Data Movers continue to respond to user requests and users' access to data is uninterrupted. Under normal circumstances, after the primary Control Station has failed over, you continue to use the secondary Control Station as the primary. When the Control Stations are next rebooted, either directly or as a result of a power-down and restart cycle, the first Control Station to start is restored as the primary.

    A Control Station failover will initiate a call home.
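A simplified sketch of what the heartbeat monitoring amounts to (hypothetical shell pseudocode, not the actual EMC implementation; the internal host name and take_over_as_primary are placeholders):

    # runs on the standby Control Station, polling the primary over the internal network
    while true; do
        if ! ping -c 1 -W 2 primary-cs-internal >/dev/null 2>&1; then
            take_over_as_primary    # mount /nas, assume the management role, initiate Call Home
        fi
        sleep 5
    done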

Hardware Review - 15

Data Mover Storage Connections (Back-end)

• Every Data Mover has redundant FC connections to the back-end
  - Gateway models require the installer to connect cables
• Redundant paths to storage systems
  - Direct connect to SPs or FAs
  - Fibre Channel switches/fabrics in gateway configurations
  - Design for no single points of failure in the back-end I/O path

Diagram: each Data Mover has two Fibre Channel connections to storage.

    Data Mover Backend Connections

    Every Celerra Data Mover has two physical Fibre Channel connections to the backend storage. This provides a redundant path, primarily for high availability.

    When connecting to a CLARiiON array, these connections should lead to separate Storage Processors (SP). When the Celerra is a fabric-connected Gateway, ideally this connection would be going through separate FC switches and fabrics.

    When connecting to a Symmetrix, these connections should lead to separate FAs via separate switches and fabrics.

    Installer actions

    You manually cable the connections. You may also be required to mount the Celerra components into an EMC or third party rack system.

    NS Integrated models should come from the factory with connections in place, requiring you to verify the connections.

    *Note: In some instances the Celerra NS Integrated model may also be shipped for mounting in an existing rack. In these cases you would be required to make the necessary connections.

Hardware Review - 16

    Data Mover Connection to Production Data Network

• Each Data Mover has a number of Ethernet connections to the production network
• Quantity and type are model specific
• Types:
  - Copper 10/100/1000 Mbps (cge)
  - Optical GbE (fge)
• Connections are made to the production Ethernet switch

Diagram: a Data Mover's cge0, cge1, cge2, and fge0 ports connect to the production network.

    Each Data Mover provides several physical connections to the production Ethernet data network.

    Ethernet port types

    The exact number of these connections depends on the Data Mover model. Typically, there are two types of Ethernet ports that may be found on a Data Mover, copper 10/100/1000 Mbps and optical Gigabit Ethernet. The copper ports have hardware names beginning with cge, followed by the ordinal number of the port. (e.g. cge0, cge1, cge2, etc.) The optical, or fiber, Ethernet ports have hardware names beginning with fge, followed by the ordinal number of the port. (e.g., fge0, fge1, etc.)
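For reference, interfaces on these ports are listed and configured from the Control Station with the server_ifconfig command; a hedged example (the device name, interface name, and addresses are illustrative, and the exact option spelling should be checked against the man page for your NAS code release):

    # list all configured interfaces on the first Data Mover
    server_ifconfig server_2 -all

    # create an IP interface on copper port cge0 (hypothetical address, mask, broadcast)
    server_ifconfig server_2 -create -Device cge0 -name cge0_1 \
        -protocol IP 10.127.57.100 255.255.255.0 10.127.57.255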

    Making the Connections

    These production Ethernet ports require manual connection to the production Ethernet switch. The Ethernet cables for these connections are NOT included with the Celerra Network Server.

Hardware Review - 17

    Control Station Internal Connection to Management Path

• Control Station monitors and manages Data Movers via a private Ethernet switch included with the Celerra NS
  - NEVER connect any other network device or host to the private Ethernet switch
• NS uses a serial connection to provide redundancy
  - 1-to-4 Y-cable
  - Supports up to 4 DMs
• NSX uses a second Ethernet network and System Management switch

Diagram: the Control Station connects to the Data Mover over Ethernet through the private switch, and over the 1-to-4 Y serial cable.

    Ethernet Management Path

    Primarily, the Celerra Control Station communicates with the Data Movers via a private LAN (physically separate from the production data network) that serves as a management path. The Celerra NS includes a small Ethernet switch to facilitate this communication. NS Integrated models should come pre-cabled from the factory, requiring you to verify the connections.* NS Gateway models will require you to make these connections. The Ethernet cables are included with all Celerra NS models.

    Serial Management Path via 1-to-4 Y-Cable

    In addition to this management Ethernet path, the NS also uses a serial connection between the Control Station and the Data Movers. This provides minimal management functionality in the event that the Ethernet path fails.

    There is only one serial connection on the Control Station for this communication. The serial cable used is a 1-to-4 Y-cable. This allows up to 4 Data Movers to communicate via this connection. The ends of the Y-cable are labeled S1 through S4. S1 should connect to the first Data Mover (server_2). If the system has 2 Data Movers, S2 should be used to make the next connection, and so forth.

    The Celerra NS system includes a small Ethernet switch to facilitate communication between the Control Station and the Data Movers. In Celerra NS Integrated systems communication with the CLARiiON Storage Processors (SPs) is also facilitated via this switch.

    This switch must NEVER be connected to any other device or host.

Hardware Review - 18

    Control Station Administrative Connection to Production Network

• The Control Station has one Ethernet connection to the management network
  - For administrative purposes ONLY
  - Control Station provides no external services
• Ethernet type: 10/100 Mbps

Diagram: the administrator's path to the Control Station over the production network.

    Control Station Connection for Administration

The Control Station has one physical connection to the production Ethernet network. This connection provides the means for the Celerra administrator to connect to the Control Station's CLI or Celerra Manager GUI for management of the Celerra Network Server.

    Making the Connections

    The Control Station connection to the production network requires that you manually connect to the production Ethernet switch. An Ethernet cable for this connection is included with the Celerra Network Server.
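A hedged illustration of the two administrative paths described above (the address is a hypothetical example; nasadmin is shown as a typical administrative login):

    # command line interface: SSH to the Control Station over the production network
    ssh nasadmin@10.127.57.20

    # Celerra Manager GUI: point a web browser at the Control Station
    #   https://10.127.57.20/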

Hardware Review - 19

Control Station Storage Communication (Back-end)

• Control Stations do not have an FC connection to the back-end
• All Control Station communication to the back-end passes through a Data Mover
  - Network Block Services (NBS)
  - NAS management functions therefore require an operational Data Mover
• Control Station also has IP connectivity to storage for configuration and monitoring

Diagram: the Control Station reaches the storage subsystem through a Data Mover, and reaches it over IP through the Ethernet switch for configuration and monitoring.

    Control Station Backend Communication

    Celerra NS Control Stations have no Fibre Channel connection to the backend storage in any of the NS models.

    All NS Control Station communication is performed by passing the communication through a Data Mover. Therefore the presence of an operational Data Mover is required in order for a Control Station to perform virtually all of its NAS management functions.

    Note: The Control Station on legacy CNS/CFS systems had direct connection to Control LUNs.

Hardware Review - 20

Storage Processor Back-end Connections

• Connects the Storage Processor to the Disk Array Enclosure
• Consult your CLARiiON documentation for the number of back-end connections for your model

Diagram: Storage Processor A and Storage Processor B.

    All Celerra NS Integrated systems, and some Gateway systems use CLARiiON disk arrays for their storage subsystem. A CLARiiON Storage Processor requires two connections to the backend disk array.

    For more information on EMC CLARiiON, please refer to CLARiiON documentation and training.

Hardware Review - 21

Storage Processor Front-end Port Connection

• Connects the Storage Processor to the Data Movers
• Minimum of two connections from each SP to each DM
  - Direct cabling
  - Fabric connections with switch zoning
• Note: Integrated systems do not have FC front-end ports
  - They use AUX0/BE2 and AUX1/BE3 copper-based connections

Diagram: SPA and SPB front-end ports (FE0, FE1) connect to back-end ports (BE0, BE1) on Data Mover 2 and Data Mover 3.

    Integrated and Direct-Connected Gateway

    Typically, the Storage Processor frontend ports (FE0 and FE1) from each SP (SPA and SPB) are distributed across different Data Movers and different backend ports (BE0 and BE1) on each Data Mover in NS Integrated and Direct-connected Gateway models.

    Fabric-Connected Gateway

    In a Fabric-connected Gateway, the same principle is accomplished via connections to the FC fabrics and zoning.

    NOTE:

    In the example above, the FE port designations are for example only. The connection requirements illustrated above are that BE0 on each DM must connect to SPA, but not necessarily FE0.

Hardware Review - 22

SP Connection to Management LAN

• NS Integrated
  - Connect the Ethernet port on SPA and SPB to the private LAN switch
• NS Gateways
  - Connect the Ethernet port on SPA and SPB to a production/administrative LAN switch

Diagram: Storage Processor Ethernet port to Ethernet switch.

    Each CLARiiON Storage Processor has an Ethernet port to facilitate management of the array.

    Celerra NS Integrated Systems

When the CLARiiON is connected to the Celerra NS Integrated system, both SPA and SPB should come from the factory with these Ethernet ports connected to the Celerra's private LAN Ethernet switch.*

    Celerra NS Gateway Systems

    When the CLARiiON is being used by a Celerra NS Gateway system, the SPs must be connected to the production/administrative Ethernet switch so that the administrator can connect.

    *Note: In some instances, the Celerra NS Integrated model may also be shipped for mounting in an existing rack. In these cases, you would be required to make the necessary connections.

Hardware Review - 23

Celerra NS Modem Connections

• Serial connection to the Control Station
  - Cable included
  - Do not use the serial cable that ships with the modem
• Analog phone line
• CLARiiON Management Station may also have a Call Home modem

Diagram: the modem connects to the Control Station with a serial cable and to an analog phone line.

    Modem Serial Connection

    The modem in the Celerra NS has a serial port for connection to the Control Station. When making this connection, use the serial cable that comes with the NS. Do not use the serial cable that came with the modem.

    Phone Line

    An analog phone line must also be connected to the modem. This cable is not included with the Celerra NS.

    NOTE:

    Each storage subsystem will also have a modem. Please see the setup documentation for the storage system for instructions on setting up its modem.

Hardware Review - 24

    NS500 Standard Equipment

The illustration above is an NS500. The NS500(S) is very specific in its combinations of Data Movers, Control Stations, and Storage Processors depending on what was ordered.

Remember: With an Integrated system, the storage (and SPs) are included. You cannot connect an Integrated system to an existing SAN environment.

While it is possible to place these individual components in a different order, it is recommended that you follow the format listed above. If you do change the location of components, please be aware of cable-length issues.

Hardware Review - 25

    NS500G Standard Equipment

The NS500G(S) is very specific in its combinations of Data Movers and Control Stations depending on what was ordered. However, the possible combinations of storage that a Gateway can connect to are not illustrated in this slide. The illustration above pertains directly to an NS500G only.

While it is possible to place these individual components in a different order, it is recommended that you follow the format listed above. If you do change the location of components, please be aware of cable-length issues.

The customer may have ordered a new CLARiiON array with the Celerra Gateway system, or the customer may already have the array. If your customer ordered the optional cabinet, the components are installed in the cabinet at the EMC factory.

Because the NS500 shares the same physical enclosure as a CX500, when you look at the front, it looks like there should be drive modules in the slots. That is not the case. The storage is provided by a separate CLARiiON enclosure.

Hardware Review - 26

Single Data Mover Configurations

Single Data Mover: NS500S/NS500GS
Dual Data Movers: NS500/NS500G

    This illustration highlights the general physical differences between a single Data Mover model and a dual Data Mover model shown on the next slide.

Hardware Review - 27

    NS500 Data Mover

MIA

Media Interface Adaptor. This is used to convert an HSSDC cable to an LC connection.

Serial to CS

This is an RJ-45 to DB-9m cable that connects to the appropriate Control Station serial connection (discussed later).

Public LAN: CGE(X)

The public LAN refers to the customer's network that will be used to access files stored on the Celerra/storage. The CGE ports are RJ-45 ports that support the following speeds: 10/100/1000 Mbps. The speeds are defined by the customer's environment.

Private LAN:

This is an RJ-45 cable that connects to the Control Station's Ethernet switch.

Hardware Review - 28

    NS500 AUX Storage Processor

    Console connection (used for support)

    NS500-AUX SPs look very similar to CLARiiON CX500 SPs. The NS500-AUX has two small form-factor pluggable (SFP) sockets in place of the CX500 optical ports.

Hardware Review - 29

    NS500 Data Mover Status LEDs

• Note: The LEDs on the CLARiiON Storage Processor are interpreted similarly

    Fault LED Indicators

    Off indicates no fault.

    Amber indicates fault.

    Flashing Amber Indicators

Six fast, one long indicates rewriting BIOS/POST. Do not remove the Data Mover while this is occurring.

Slow (every four seconds) indicates BIOS (basic input/output system) activity.

Fast (every second) indicates POST (power-on self-test) activity.

Fastest (four times per second) indicates booting.

Hardware Review - 30

Control Station/Switch Assembly (Private Ethernet Switch)

    The Celerra may include one of two different Control Stations: NS-600-CS or the NS-CS. The two Control Stations function in the same manner, but the buttons, lights, and ports are in different locations. The setup procedure is essentially the same for either Control Station.

Hardware Review - 31

    NS-CS Front View

This front view of an NS-CS is only viewable after you have removed the front bezel. The front view of this model Control Station presents a floppy drive, CD-ROM drive, and a serial port connection.

The floppy and CD-ROM are used for installations and upgrades of EMC NAS code.

The serial port is used to connect directly to a computer that has been configured with the proper settings as described in the setup guide. Commonly available programs allow the user to interact with the Control Station. These serial ports will allow you to access the system in the event of a loss of LAN connectivity.

Hardware Review - 32

    NS-CS Rear View

The rear view of this Control Station is obstructed. Access to these ports can be difficult because the Ethernet switch blocks the middle portion of the device, as illustrated above.

The Public LAN connection is typically connected to the customer network. This allows the Celerra to be accessed and managed via the GUI and/or CLI.

The Private LAN connection is attached to the Ethernet switch directly behind the Control Station.

While this device comes with 4 serial connections, only 1 is required per Data Mover.

    It is not common to hook up a mouse and/or keyboard. Management of this device is done via the serial connection as explained earlier.

Hardware Review - 33

Control Station/Switch Assembly (Private Ethernet Switch)

    The Control Station contains two individual pieces of hardware that are attached.

This NS600-series model of Celerra has an NS-600-CS model Control Station.

Hardware Review - 34

NS700 Standard Equipment

• Also available with 4 Data Movers (NS704 Integrated)

    The illustration above pertains directly to a NS700. This model is also available with 4 Data Movers (the NS704 Integrated).

Typically these devices come pre-cabled and pre-wired. While it is possible to place these individual components in a different order, it is recommended that you follow the format listed above. If you do change the location of components, please be aware of cable-length issues.

    Remember: With an Integrated system the storage (and SPs) are included. You will not connect an Integrated system to an existing SAN environment.

    Like the NS600, the NS700 is also available with a single Data Mover (NS700(G)S). If that is the case, the Data Mover Enclosure will only include DM2, the bottom mover.

Hardware Review - 35

    NS700G Standard Equipment

    The NS700G can be connected to various storage options including a Symmetrix depending on your configuration option.

Hardware Review - 36

    6 Port Data Mover

Regardless of model type designation (NS600 or NS600G), there are no hardware differences between Data Movers. However, if a G model is deployed, you will be required to install MIAs in order to connect to the array.

MIA

Media Interface Adaptor. This is used to convert an HSSDC cable to an SFP connection.

Serial to CS

This is a DB-9m connector that connects to the appropriate Control Station serial connection (discussed later).

Public LAN:

The public LAN refers to the customer's network that will be used to access files stored on the Celerra/storage. The ports are RJ-45 ports that support the following speeds: 10/100/1000 Mbps.

CGE:

Copper Gigabit Ethernet

Private LAN:

This is an RJ-45 cable that connects to the Control Station's Ethernet switch.

SFP

Small Form-factor Pluggable

Copper FC:

This is an HSSDC cable that can be converted via MIA (as required) to connect to the array.

Hardware Review - 37

    8 Port Data Mover (w/ connections for CS1)

Regardless of model type designation (NS700 or NS700G), there are no hardware differences between Data Movers. However, if a G model is deployed, you will be required to install MIAs in order to connect to the array.

MIA

Media Interface Adaptor. This is used to convert an HSSDC cable to an SFP connection.

Serial to CS

This is a DB-9m connector that connects to the appropriate Control Station serial connection (discussed later).

Public LAN:

The public LAN refers to the customer's network that will be used to access files stored on the Celerra/storage. The ports are RJ-45 ports that support the following speeds: 10/100/1000 Mbps.

CGE:

Copper Gigabit Ethernet

Private LAN:

This is an RJ-45 cable that connects to the Control Station's Ethernet switch.

SFP

Small Form-factor Pluggable

Copper FC:

This is an HSSDC cable that can be converted via MIA (as required) to connect to the array.

In the illustration above, you will notice that the 8-port NS700 Data Mover also includes serial and Ethernet connections for a second Control Station.

Hardware Review - 38

    NS704G Standard Equipment

    Important: The NS704G is a fabric-attached Gateway system only. There is no direct connect option for this device.

With the exception of the NSX series (discussed later), this is the only NS-series device that can have two Control Stations.

While it is possible to place these individual components in a different order, it is recommended that you follow the format listed above. If you do change the location of components, be aware of cable-length issues.

The NS704G can connect to various storage options, including a Symmetrix, depending on your configuration.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 39

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 39

    Data Mover or Storage Processor status LEDs

    DM3/SPB

    DM2/SPA

    Fault LED Indicators

    Off indicates no fault.

    Amber indicates fault.

    Flashing Amber Indicators

Six fast, one long indicates rewriting of the BIOS/POST. Do not remove the Data Mover while this is occurring.

    Slow (every four seconds) indicates BIOS (basic input/output system) activity.

    Fast (every second) indicates POST (power on self-test) activity.

Fastest (four times per second) indicates booting.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 40

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 40

CX700 AUX Storage Processor
y Note there are no SAN ports on these SPs

The CX700 AUX Storage Processor is sold only with an integrated NS700 or NS700S Celerra. The lack of a SAN personality card prevents any SAN connection to SPA and SPB.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 41

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 41

    NSX Control Station and Blade Layout

Control Station (CS1): Default Standby

Control Station (CS0): Default Primary

The EMC Celerra NSX network server is a network-attached storage (NAS) gateway system that connects to EMC Symmetrix arrays, CLARiiON arrays, or both. The NSX system has between four and eight X-Blade 60 blades and two Control Stations. The EMC NAS software automatically configures at least one blade as a standby for high availability.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 42

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 42

    NSX Blade

    Please note the location and names of the equipment listed above. You will learn more about each piece of equipment later in this module.

Important: The terms blade and Data Mover refer to the same device.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 43

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 43

    NSX Blade Back-end Ports

    The Celerra NSX is always configured as a fabric-connected gateway system. A Fabric-Connected Celerra Gateway system is cabled to a Fibre Channel switch using fibre-optic cables and small form-factor pluggable (SFP) optical modules. It then connects through the Fibre Channel fabric to one or more arrays.

    Other servers may also connect to the arrays through the fabric. You can use a single switch, or for added redundancy you can use two switches. The Celerra system and the array or arrays must connect to the same switches.

    If you are connecting the Celerra system to more than one array, one array must be configured for booting the blades. This array should be the highest-performance system and must be set up first. The other arrays cannot be used to boot the blades and must be configured after the other setup steps are complete.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 44

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 44

    Blade Public Network Ports

The external network cables connect clients of the Celerra system to the blades. Another external network cable connects the CS to the customer's network for remote management of the system. The external network cables are provided by the customer. The category and connector type of the cable must be appropriate for use in the customer's network. The six copper Ethernet network ports on the blades are labeled cge0 through cge5. These ports support 10, 100, or 1000 megabit connections and have standard RJ-45 connectors. The two optical Gigabit Ethernet network ports are labeled fge0 and fge1. They have LC optical connectors and support 50 or 62.5 micron multimode optical cables. Ports fge0 and fge1 use optical SFP modules installed at the factory.
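For reference, once the system is configured, a blade's public ports can be inspected and configured from the Control Station with the server_ifconfig command. The sketch below is a hedged example only; the interface name int1 and the IP values are hypothetical, and exact syntax can vary by DART release.

    # List all interfaces currently configured on Data Mover / blade server_2
    server_ifconfig server_2 -all

    # Create an IP interface named int1 on copper port cge0
    # (interface name, address, mask, and broadcast below are example values)
    server_ifconfig server_2 -create -Device cge0 -name int1 \
      -protocol IP 10.1.1.50 255.255.255.0 10.1.1.255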

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 45

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 45

    NSX Control Station Front

    The front view of this model Control Station presents a floppy drive, CD-ROM drive and a serial port connection.

The floppy and CD-ROM are used for installations and upgrades of EMC NAS code.

The serial port is used to connect directly to a computer that has been configured with the proper settings as described in the setup guide. Commonly available terminal programs allow the user to interact with the Control Station. These serial ports allow you to access the system in the event of a loss of LAN connectivity.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 46

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 46

    NSX Control Station Rear

The NSX Control Station is designed for use with NSX systems only. While it still serves all the roles and responsibilities of a traditional Control Station, be aware that there is a different back-end port selection on this model.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 47

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 47

    NSX System Managed Switch

The private (internal) LAN cables connect the CS to the blades through the blade enclosures' system management switches. These cables and switches make up a private network that does not connect to any external network. Each blade enclosure has two system management switches, one on each side of the enclosure.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 48

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 48

    NSX System Managed Switch Cable Layout

With the standalone Ethernet switch removed, the diagram above illustrates how the Control Stations and Data Movers communicate with each other over private, redundant connections.

Path  From                          To
1     CS0 (Left)                    Blade Enclosure 0 Port3 (R)
2     CS0 (Right)                   Blade Enclosure 0 Port3 (L)
3     CS1 (Left)                    Blade Enclosure 0 Port4 (R)
4     CS1 (Right)                   Blade Enclosure 0 Port4 (L)
5     Blade Enclosure 0 Port0 (L)   Blade Enclosure 1 Port3 (L)
6     Blade Enclosure 0 Port0 (R)   Blade Enclosure 1 Port3 (R)
7     Blade Enclosure 1 Port0 (L)   Blade Enclosure 2 Port3 (L)
8     Blade Enclosure 1 Port0 (R)   Blade Enclosure 2 Port3 (R)
9     Blade Enclosure 2 Port0 (L)   Blade Enclosure 3 Port3 (L)
10    Blade Enclosure 2 Port0 (R)   Blade Enclosure 3 Port3 (R)

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 49

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 49

    NSX Power Subsystem

    The Celerra NSX always ships in its own EMC cabinet. The cabinet may include two uninterruptible power supplies (UPSs) to sustain system operation for a short AC power loss. All components in the cabinet, except for the CallHome modems, are connected to the UPS to maintain high availability despite a power outage. The two Control Stations have automatic transfer switches (ATSs) for short AC power loss in addition to the two UPSs.

    The NSX Cabinet only includes the Control Station(s) and Data Movers. Storage is always in a separate cabinet.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 50

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 50

Module Summary
In this lesson you learned about:

y Be careful with the terms back-end and front-end, as they differ depending on whether you view them from the Celerra or the storage system perspective

y While physically different, all models share similar components and interconnections: NS500/350, NS600/NS700, NSX

y Integrated systems use a captive CLARiiON array
y Gateway configurations connect to the back-end via a Fabric and may share the back-end with other hosts

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Hardware Review - 51

    2005 EMC Corporation. All rights reserved. Celerra Hardware Review - 51

    Closing Slide

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 1

    2006 EMC Corporation. All rights reserved.

Celerra ICON: Celerra Training for Engineering

    Installation and Configuration Overview

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 2

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 2

    Revision History

Rev Number   Course Date     Revisions
1.0          February 2006   Complete

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 3

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 3

Product, Installation, and Configuration Overview
Upon completion of this module, you will be able to:

y Describe the locations where Celerra software is installed
y List the major installation tasks
y Explain how the Celerra Data Mover boots during the installation phases

    The objectives for this module are shown here. Please take a moment to read them.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 4

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 4

Celerra Software Locations

y 6 System LUNs on storage array
 Data Movers' DART OS
 NASDB, logs, config files, etc. (a.k.a. Control LUNs)
 DMs have no internal storage

y Control Station's internal disk drive
 Linux OS
 EMC NAS management software
 Auxiliary boot image for Data Movers

y Additional LUNs for user data are configured in the storage system and presented to the Data Movers

[Slide diagram: Control Station (internal drive: Linux & NAS management services), Data Mover, Storage Subsystem (6 System LUNs: DART, etc.)]

A Celerra system uses two storage locations for installation of its software: the Control Station's internal disk drive and 6 System LUNs (also known as Control LUNs) on the storage array.

    Control Station internal disk drive

The Celerra Control Station contains an internal disk drive upon which the Linux operating system is installed, as well as the NAS management services that are used to configure and manage Data Movers and the file systems on the storage subsystem. The Control Station also holds an auxiliary boot image which can be used by Data Movers whenever their OS cannot be located on the storage array.

    6 Control LUNs on storage array

Celerra Data Movers have no local disk drives. Data Movers require 6 Control (or System) LUNs on the storage subsystem. These System LUNs contain the DART operating system, configuration files, log files, the Celerra configuration database ("NASDB"), NASDB backups, dump files, etc. (The exact contents of each LUN are discussed later in this course.)

NOTE: This module discusses the installation storage requirements. Storage LUNs for user data are not present at this time.
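For reference, once installation is complete, the Control LUNs (and any later data LUNs) visible to the Data Movers can be listed from the Control Station command line. This is a hedged example; the exact output columns vary by NAS release.

    # Run on the Control Station: list the disks (LUNs) known to the Celerra,
    # including the six control/system LUNs
    nas_disk -list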

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 5

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 5

Celerra NS Installation Tasks
Installation of a Celerra includes configuration or installation of the following:

y 6 System LUNs
y Fibre Channel connectivity between Data Mover and Storage System
 Connect cables
 Fabric zoning
y Load the software
 Linux on Control Station
 DART, etc. on System LUNs

[Slide diagram: Control Station (internal drive: Linux & NAS management services), Data Mover, Storage Subsystem (6 System LUNs: DART, etc.), Fibre Channel]

The key tasks of the Celerra NS installation include:
y Creating and configuring the 6 System LUNs on the storage array
y Providing redundant Fibre Channel access to the System LUNs for each Data Mover
y Installing and configuring Linux on the Control Station
y Installing DART onto the System LUNs on the storage system for the Data Movers

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 6

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 6

Software Status Before Installation

y 6 System LUNs are either empty or not configured yet
 Data Movers are not able to boot
y The Control Station drive is empty
 Or may contain code that will be overwritten

[Slide diagram: Control Station (internal drive empty), Data Mover, Storage Subsystem (6 System LUNs, empty), Fibre Channel, Private LAN]

The Data Mover operating system (DART), NAS code, and config files will be stored on the internal IDE drive in the Control Station and on the System LUNs on the storage subsystem (CLARiiON or Symmetrix). At the beginning of a new install there are no files in any of those locations. (Actually, there may be a factory image of Linux on the Control Station; this will be overwritten during installation.)

    It is assumed the Floppy and CD are loaded.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 7

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 7

When the Installation is Initiated
The following are written to the CS local drive:

y Linux OS for Control Station
y NAS management software
y Auxiliary DART image for Data Movers, to the Pre-Execution Environment (PXE)
 For Data Mover network boot

[Slide diagram: Control Station (Linux & NAS including PXE image), Data Mover, Storage Subsystem (6 System LUNs, empty), Fibre Channel, Private LAN]

    Starting the software installation

Boot the CS (Control Station) from the floppy and run the installation command when prompted. Linux is installed on the CS internal IDE drive, and the NAS code (including DART) is also copied to the local IDE drive.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 8

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 8

After the Files are Written to the CS

y Remove the CD & floppy
y The Control Station reboots
y Prompts for Linux configuration options
 IP address, netmask, gateway
 Hostname
 Nameserver
y Data Movers are rebooted by the installation script

[Slide diagram: Control Station (Linux & NAS including PXE image), Data Mover, Storage Subsystem (6 System LUNs, empty), Fibre Channel, Private LAN]

    After the files are written to the CS drive, the CS reboots, asks all the configuration questions and restarts the network. A PXE image, with a bootable configuration for the DMs, is created on the CS internal drive.

    The DMs are now automatically rebooted from that PXE image by the installation script.
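The installation script configures the private-LAN PXE service automatically; purely as a generic, hedged illustration of how a PXE network boot is normally offered, a standard ISC dhcpd fragment is sketched below. This is not the actual Celerra internal configuration, and the subnet, addresses, and filename are hypothetical.

    # Generic ISC dhcpd.conf fragment for PXE boot (illustrative only)
    subnet 192.168.100.0 netmask 255.255.255.0 {
        range 192.168.100.10 192.168.100.20;
        next-server 192.168.100.1;     # TFTP server holding the boot image
        filename "pxelinux.0";         # boot loader the PXE client downloads
    }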

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 9

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 9

When the Data Movers are Rebooted

y They look for a boot image of DART (nas.exe)
 1. Attempts boot over FC - FAIL
   Fabric is not zoned
   CLARiiON registration not configured
 2. Attempts PXE boot over private LAN - OK
y DMs PXE boot from CS drive
 Load temporary DART (nas.exe)

[Slide diagram: the Data Mover asks "Where is DART?": (1) boot attempt over Fibre Channel to the 6 System LUNs (empty) fails (X); (2) PXE boot over the Private LAN to the Control Station (Linux & NAS including PXE image) succeeds]

    The CS reboots the DMs.

DMs cannot boot from the System LUNs as there is no O/S (DART) on them yet*, so they default to a network boot to the PXE image on the CS.

    * Also, if this system is connecting to the storage via a fibre channel fabric, there is no zoning in place at this time.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 10

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 10

Providing Access to the System LUNs
When the Data Movers PXE boot from the DART image (nas.exe) on the CS drive, each DM queries its HBAs and the WWNs are passed to the CS, where they are displayed on the HyperTerminal screen.

Next:

y Perform FC zoning
 If using a FC fabric connection
y Configure the CLARiiON
 Create RAID Group w/ System LUNs
 Register DMs
 Create Storage Group with System LUNs and Data Movers
y Data Movers are rebooted again

[Slide diagram: Control Station (Linux & NAS including PXE image), Data Mover, Storage Subsystem (6 System LUNs, empty), Fibre Channel, Private LAN]

For manual installations, once the DMs boot up, the back-end Fibre Channel ports become active and the WWNs of the DMs are displayed on the HyperTerminal screen.

The manual install requires that you do all the back-end configuration (LUNs, registration, storage groups, etc.) before continuing beyond this step.
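As a hedged sketch of the kind of back-end zoning involved, a single-initiator zone on a Brocade switch might be created as follows. The zone name, configuration name, and WWNs are placeholders, and actual zoning must follow the Gateway Setup Guide and the site's standards.

    # Brocade Fabric OS example (WWNs below are placeholders, not real devices)
    zonecreate "dm2_hba0_spa0", "50:06:01:60:00:00:00:01; 50:06:01:60:00:00:00:02"
    cfgcreate "celerra_cfg", "dm2_hba0_spa0"
    cfgsave
    cfgenable "celerra_cfg"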

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 11

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 11

Providing CS Access to System LUNs (NBS)
When the Data Movers are rebooted again:

y They look for a boot image of DART (nas.exe)
 They cannot boot from the System LUNs but they CAN access them
y DMs PXE boot from CS drive
y DMs start Network Block Services (NBS)
 Allows the CS to write to the back-end

[Slide diagram: Control Station (Linux & NAS including PXE image), Data Mover, Storage Subsystem (6 System LUNs, empty), Fibre Channel, Private LAN; callouts 1 and 2: "Where is DART?"]

    Once the Data Movers are given access to the storage array they still cannot boot from the System LUNs because DART (nas.exe) has not been loaded there at this time. However, the Data Mover can see the System LUNs.

    The Data Movers still access DART from the Control Station via PXE. Now they can provide access to the Control LUNs for the Control Station via the Network Block Service (NBS).

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 12

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 12

DART Installed Over NBS
When the Data Movers are rebooted again:

y CS can now see the System LUNs through DMs using NBS
 Partitions and formats System LUNs
 Loads DART, etc. to array
 Completes software configuration of Data Movers

[Slide diagram: Control Station (NBS client), Data Mover (NBS server), Storage Subsystem (6 System LUNs: DART, etc. loaded), Fibre Channel]

    Using NBS (Network Block Service, see below) over the internal network the CS can access the System LUNs via the Data Mover(s).

    Network Block Devices

    NBS uses iSCSI with CLARiiON proprietary changes. Below is a generic description of Network Block Devices.

    Linux can use a remote server as one of its block devices. Every time the client computer wants to read /dev/nd0, it will send a request to the NBS server via TCP, which will reply with the data requested. This can be used for stations with low disk space (or even diskless - if you boot from floppy) to borrow disk space from other computers. Unlike NFS, it is possible to put any file system on it.

    Using NBS over the internal TCP/IP network, the CS partitions, formats and installs all the required NAS (DART) code on the System LUNs.
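NBS itself is EMC-proprietary, but the generic Linux network block device idea described above can be sketched as follows. This is a hedged illustration only: command syntax differs considerably between nbd versions, and the host name, port, image path, and device names are hypothetical.

    # On the serving host: export a file as a block device over TCP (older nbd syntax)
    nbd-server 2000 /srv/exports/disk.img

    # On the client host: attach the remote export as a local block device
    modprobe nbd
    nbd-client server-host 2000 /dev/nbd0

    # The remote storage can now be formatted and mounted like a local disk
    mkfs.ext3 /dev/nbd0
    mount /dev/nbd0 /mnt/remote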

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 13

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 13

Celerra Data Mover Installation and Boot Process
When the Data Movers are rebooted again:

y DMs successfully boot DART from array
 1. Attempts boot over FC - OK
y Installation of DMs completes
y Further configuration as required:
 Network interfaces
 File systems
 Exports and shares
 Etc.

[Slide diagram: the Data Mover asks "Where is DART?" (1) and boots over Fibre Channel from the 6 System LUNs (DART, etc.); Control Station (Linux & NAS including PXE image), Private LAN]

    Once DART, etc. has been loaded onto the System LUNs, the Data Mover can now successfully boot over Fibre Channel from the System LUNs, and the remainder of the automated installation tasks can complete.
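At this point the blade state can be confirmed from the Control Station. A hedged example is shown below; output formats vary by NAS release.

    # Show the boot state of the Control Station and each Data Mover/blade
    /nas/sbin/getreason

    # List the Data Movers configured in the NAS database
    nas_server -list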

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 14

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 14

Module Summary
In this module you learned about:

    y Celerra NS software will be installed to the Control Station local drive and the System LUNs on the storage array

y The major installation tasks include
 Load Linux and the DART image to the Control Station
 PXE boot the Data Mover to provide Control Station access to the array via NBS
 Load DART, etc., to the array
y The Data Mover first attempts to boot from the storage array; if DART is unavailable, it performs a PXE boot from the Control Station

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Installation and Configuration Overview - 15

    2006 EMC Corporation. All rights reserved. Installation, and Configuration Overview - 15

    Closing Slide

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 1

    2006 EMC Corporation. All rights reserved.

Celerra ICON: Celerra Training for Engineering

    Preparing, Installing, and Configuring a Fabric-Connected Gateway System

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 2

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 2

    Revision History

Rev Number   Course Date     Revisions
1.0          February 2006   Complete
1.2          May 2006        Update and reorganization

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 3

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 3

    Preparing, Installing, and Configuring a Fabric-connected Gateway

    Upon completion of this module, you will be able to:

y Plan and prepare for installation of the Control Station operating system, NAS software, and DART operating environment
y Perform pre-installation tasks
y Install and connect components
y Configure the boot array
y Install the EMC NAS software

Regardless of the specific configuration, all Celerra installations are performed using the same general process and phases. In this module we will discuss the installation and configuration of a Fabric-connected Gateway system. Much of the back-end configuration and fabric zoning can be performed automatically using auto-configure scripts; however, we will discuss the manual configuration, as it represents the worst-case complexity.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 4

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 4

Celerra NS Installation Documentation
Key document for this discussion:

y Celerra NS500G/NS600G/NS700G Gateway Configuration Setup Guide

y Referred to from here on as the Gateway Setup Guide
 Locate your copy before continuing

    The following portions of this course are designed to focus on the technical publication, Celerra NS500G/NS600G/NS700G Gateway Configuration Setup Guide.

    Please locate your copy of this document and follow the discussions closely with the document.

    If you cannot locate your copy, please notify your instructor immediately.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 5

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 5

Three Phases of Installation
y The three phases of installation are:
 Phase 1: Planning and data collection
 Phase 2: Physical installation and initial configuration
 Phase 3: Final configuration
y In-depth discussion of each phase is covered in the Gateway Setup Guide
y Today the focus of this course is on Phase 2
 Do not minimize the value of the required assessment and planning that must be performed in the field during Phase 1 and earlier (Qualifier document)
 We will continue tomorrow with Phase 3

Installation and configuration of a Celerra gateway system is typically done in three phases.
y Phase 1: The installation is planned and configuration information is collected from the customer.
y Phase 2: The hardware is physically installed and cabled, the software is installed, and the Control Station is configured. At this point the system is functional, but cannot yet be used by clients to store and retrieve files.
y Phase 3: The system is configured with client network connections, file systems, shares, exports, and so on. When this phase is complete, the system is fully usable by clients.

    In the field, two or more individuals from different EMC or Authorized Service Provider organizations typically work together to complete the different phases of the installation. Close coordination is required to ensure the requirements are communicated.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 6

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 6

Phase 1: Planning and Data Collection
y Software verification
 Verify the correct software level
 Change Control Authority (CCA)
 Interoperability Matrix
y Site preparations
 Physical space considerations, power, network connectivity, etc.
y Verify Symmetrix and/or CLARiiON back-end requirements
 Software level, Access Logix, write cache configuration, etc.
 Control LUNs
 User data volumes
y Gather required information and complete Setup Worksheets in Appendix G
 SAN and storage cabling and zoning requirements
 Internal and external network IP addresses, netmask, gateways, DNS, etc.

    The first phase starts when the customer agrees to the installation and ends when all of the required information has been collected. Missing information, such as IP addresses, can cause significant delays later in the installation process.

1. Use the EMC Change Control Authority (CCA) process to get the initial setup information and to verify you have all needed software before going to the customer's site.

    2. Verify that the customer has completed all site preparation steps, including providing appropriate power and network connections.

    3. If the Celerra system is being connected to a new array, verify that the array has been installed and configured before starting to install the Celerra system. Verify that the required revision of the array software is installed and committed.

    4. Fill out the configuration worksheets with the customer.

    5. Give the phase 2 configuration information to the installer who will complete the next phase.

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 7

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 7

EMC NAS Installation Software
y Acquire the CCA-approved ISO image from EMC's FTP site
y Burn the CD from the ISO
y The installation boot floppy that shipped should be usable
 If necessary, create an installation boot floppy from the CD
 The image is located on the NAS software CD
 Use rawrite.exe to copy \images\boot.img to floppy

The version of EMC NAS that ships with the Celerra Network Server is not likely to be appropriate for the installation. When the Change Control Authorization is consulted, the correct version will be identified and placed on the EMC FTP site as an ISO image for download.

After downloading the approved version, create a CD from the ISO image.

    Installation Boot Floppy

Typically, you should be able to use the boot floppy that shipped with the Celerra Network Server. If you need to create a new boot floppy, you can do so from either a Linux or Windows host. The procedure from Windows is included below.
y Extract the rawrite.exe file from the Global Services Service Pack CD. You can also obtain rawrite.exe for free from many internet sites.
y Copy rawrite.exe to C:\temp
y Put the EMC NAS code CD into the CD-ROM drive of the Windows computer being used to create the boot floppy
y Place a blank, formatted floppy into the floppy drive of the same computer
y Change directory to C:\temp
y Type rawrite.exe and press [Enter], and provide the following information when prompted:
 Disk image name: D:\images\boot.img
 Target diskette drive: A:
y When the command prompt returns, the boot floppy creation is complete.
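For completeness, the equivalent procedure from a Linux host is sketched below. It assumes the CD is available as /dev/cdrom and the floppy drive as /dev/fd0; adjust the device names and mount point for the host being used.

    # Mount the EMC NAS code CD and write the boot image to a floppy
    mount /dev/cdrom /mnt/cdrom
    dd if=/mnt/cdrom/images/boot.img of=/dev/fd0 bs=1440k
    sync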

  • Copyright 2006 EMC Corporation. All Rights Reserved.

    Preparing, Installing, and Configuring a Fabric-Connected Gateway - 8

    2006 EMC Corporation. All rights reserved. Preparing, Installing, and Configuring a Fabric-Connected Gateway - 8

Data Collection Required for Installation
Discuss: Gateway Setup Guide Appendix G: Setup Worksheets

y Site Preparation Worksheet
y Fibre Channel Cabling Worksheet
 Note the instructional text for Tables G-1 and G-2
y CLARiiON Boot Array Worksheet
y Control Station 0 Networking Worksheet
 Note the default values for the internal network
y Private LAN Worksheet (If non-default configuration was CCA-approved)

    Please ta