    Lecture Notes

    Storage Area Network (Unit V)

    Ms. Madhu G. Mulge (M-Tech, CSE)

    M.B.E.'S College of Engineering, Ambejogai.



    Application Studies

    Although storage networks share common components in the form of servers, storage, and interconnection devices, the configuration of a SAN is determined by the upper-layer application it supports.

    A SAN originally designed for a high-bandwidth application, for example, can also facilitate a more efficient tape backup solution. A post-production video editing application may have different requirements than a high-availability OLTP (on-line transaction processing) application. Server-free tape backup applications may employ unique hardware and software products that would not appear in a SAN designed to support server clustering.

    Because SANs offer the flexibility of networking, however, you can satisfy the needs of multiple applications within a single shared storage configuration.


    Full-motion video

    One of the first applications to employ high-performance SAN technology, full-motion video editing leverages the bandwidth, distance, and shared resources that SAN technology enables. Digitized video has several unique requirements that exceed the capabilities of legacy data transports, including the sustained transmission of multiple gigabit streams and intolerance for disruption or delays.

    Most SAN-based video applications use the SCSI-3 protocol to move data from disk to workstations, although custom configurations have been engineered using IP for multicast and broadcast distribution. Video applications have common high-performance transport requirements but may vary considerably in content.

    A video editing application can center on a workgroup configuration, allowing peer workstations to access and modify video streams from one or more disk arrays.


    In addition to the physical SAN topology, any application that allows data sharing must have software support for file access and locking by multiple users. A video broadcast application that serves content from a central data source to multiple feeds must have the means to support multicast across the SAN. Video used for training applications may support both editing workstations and user stations, with random access to shared video clips or to instructional modules digitized on disk.

    In this example, the bandwidth required per workstation depends on the type of video streams retrieved from and stored to disk. Standard digitized streams may require ~30MBps throughput, whereas high-definition video requires ~130MBps. In the latter case, 2Gbps Fibre Channel would support ~400MBps full duplex, or sufficient bandwidth for a high-definition video stream to be read from disk, processed, and written back concurrently.
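    The arithmetic behind that last claim can be checked with a short sketch (the ~30MBps, ~130MBps, and ~400MBps figures come from the notes above; the assumption that a 2Gbps link carries roughly 200MBps in each direction is the usual nominal value):

        # Rough bandwidth budget for a video editing workstation on 2Gbps Fibre Channel.
        # Figures from the notes: ~30 MB/s standard video, ~130 MB/s high-definition video,
        # ~200 MB/s per direction (~400 MB/s full duplex) on a 2Gbps FC link.

        FC_2G_PER_DIRECTION_MBPS = 200   # nominal usable throughput in one direction

        def supports_concurrent_edit(stream_mbps, link_mbps=FC_2G_PER_DIRECTION_MBPS):
            """True if one stream can be read and written concurrently on a full-duplex link."""
            return stream_mbps <= link_mbps   # read and write each get a full direction

        for name, rate in [("standard", 30), ("high-definition", 130)]:
            verdict = "fits" if supports_concurrent_edit(rate) else "exceeds"
            print(f"{name}: {rate} MB/s read + {rate} MB/s write -> {verdict} one 2Gbps FC link")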


    Fig: A peer video editing SAN via a switched fabric


    The configuration shows dual paths between the video editing workstations and redundant SAN switches. Device drivers for the host adapter cards must therefore support failover in case a link or switch is lost, and preferably load balancing to fully utilize both paths during normal operation.
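    A minimal sketch of the path-selection behavior such a driver provides is shown below (the class and path names are hypothetical; real multipath support comes from the HBA vendor or operating system):

        # Illustrative multipath policy: round-robin load balancing across two paths,
        # with automatic failover to the surviving path when a link or switch is lost.

        class DualPathDriver:
            def __init__(self, paths=("path-via-switch-A", "path-via-switch-B")):
                self.paths = list(paths)                  # one path through each SAN switch
                self.healthy = {p: True for p in self.paths}
                self._next = 0

            def fail(self, path):                         # link or switch failure detected
                self.healthy[path] = False

            def restore(self, path):                      # path repaired and back in service
                self.healthy[path] = True

            def select_path(self):
                """Round-robin over healthy paths; raise if no path to storage remains."""
                candidates = [p for p in self.paths if self.healthy[p]]
                if not candidates:
                    raise IOError("all paths to storage have failed")
                path = candidates[self._next % len(candidates)]
                self._next += 1
                return path

        driver = DualPathDriver()
        print([driver.select_path() for _ in range(4)])   # alternates A, B, A, B
        driver.fail("path-via-switch-A")
        print([driver.select_path() for _ in range(2)])   # all I/O moves to switch B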

    In addition, the ratio of server-to-storage links must be adequate to support concurrent operation by all workstations. Larger storage arrays support multiple links so that a non-blocking configuration can be built.

    Video editing applications are intolerant of latency or delays. For optimal jitter-free performance, video data can be written to the outer tracks of individual disks within the storage array. Although this technique reduces the total usable capacity, it requires less disk head movement for reading and writing data and thus minimizes latency in disk performance.
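    The capacity trade-off can be illustrated with a simplified model (an assumption for illustration only: capacity is taken as proportional to the recorded platter area, which ignores the details of real zoned bit recording):

        # Simplified "short-stroking" model: restrict data to the outer tracks of a disk.
        # Capacity is approximated as proportional to the recorded annulus area.

        def capacity_kept(r_inner, r_outer, zone_start):
            """Fraction of capacity retained when only radii >= zone_start are used."""
            return (r_outer**2 - zone_start**2) / (r_outer**2 - r_inner**2)

        # Example: platter recorded from 40% to 100% of its radius; restricting data to
        # the outer half of that band halves the seek span, yet keeps most of the
        # capacity, because outer tracks hold more data per revolution.
        r_in, r_out = 0.4, 1.0
        midpoint = (r_in + r_out) / 2
        print(f"capacity kept: {capacity_kept(r_in, r_out, midpoint):.0%}")   # ~61%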


    LAN-Free and Server-Free Tape Backup

    Because traditional parallel SCSI disk arrays are bound to individual servers, tape backup options are limited to server-attached tape subsystems or to transport of backup data as files across the messaging network. Provisioning each server with its own tape backup system is expensive and requires additional overhead for administration of scheduling and tape rotation on multiple tape units.

    Performing backups across the production LAN allows administration to be centralized on one or more large tape subsystems, but it burdens the messaging network with much higher traffic volumes during backup operations. When the volume of data exceeds the allowable backup window and stresses the bandwidth capacity of the messaging network, either the bandwidth of the messaging network must be increased or the backup data must be partitioned from the messaging network.

    Thus, the potential conflict between user traffic and storage backup requirements can be resolved only by isolating each on a separate network, for example by installing a separate SAN interconnection dedicated to backup traffic.
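    A back-of-the-envelope calculation shows how quickly the backup window becomes the constraint (the data volume, window length, and usable link rates below are assumed example figures, not values from the notes):

        # How long does a full backup take over the shared messaging LAN versus a
        # dedicated SAN link? All figures are illustrative assumptions.

        def backup_hours(data_gb, usable_mb_per_s):
            """Hours needed to move data_gb gigabytes at the given usable rate."""
            return data_gb * 1024 / usable_mb_per_s / 3600

        data_gb  = 4000    # 4 TB of server data to protect
        window_h = 8       # allowable nightly backup window, in hours

        for name, rate in [("Gigabit Ethernet LAN, shared (~75 MB/s usable)", 75),
                           ("2Gbps Fibre Channel SAN link (~200 MB/s)", 200)]:
            h = backup_hours(data_gb, rate)
            print(f"{name}: {h:.1f} h -> {'fits' if h <= window_h else 'exceeds'} "
                  f"the {window_h} h window")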


    Fig: Tape backup across a departmental network with direct-attached storage


    Fig: Transitional LAN-free backup implementation for direct-attached storage


    Fig: LAN-free and server-free tape backup with SAN-attached storage


    Server Clustering

    As enterprise applications have shifted from mainframe and midrange systems to application and file servers, server hardware has evolved into more sophisticated designs that offer dual power supplies, dual LAN interfaces, multiple processors, and other features to enhance performance and availability. The potential failure of an individual component within a server is thus accommodated by using redundancy.

    Redundancy typically implies hardware features but may also include redundant software components, including applications. Redundancy can also be provided simply by duplicating the servers themselves, with multiple servers running identical applications. In the case of failure of a hardware or software module within a server, you shift users from the failed server to one or more servers in a server cluster.


    The software used to reassign users from one server to another with minimal disruption to applications is necessarily complex. Clustering software written for high-availability implementations can be triggered by the failure of a hardware, protocol, or application component. The recovery process must preserve user network addressing, login information, current status, open applications, open files, and so on.

    Clustering software may also include the ability to balance the load among active servers. In this way, in addition to providing failover support, the servers in a cluster can be fully utilized to increase overall performance.
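    A highly simplified sketch of the failover half of this logic appears below (hypothetical structure for illustration; a real clustering package also preserves addressing, sessions, and open files, which is omitted here):

        # Toy cluster monitor: nodes send heartbeats; when a node's heartbeats stop,
        # its applications are reassigned to the least-loaded surviving node.

        import time

        class ClusterMonitor:
            def __init__(self, nodes, heartbeat_timeout=5.0):
                self.last_beat = {n: time.time() for n in nodes}
                self.apps = {n: [] for n in nodes}        # applications hosted per node
                self.timeout = heartbeat_timeout

            def heartbeat(self, node):
                self.last_beat[node] = time.time()

            def check(self):
                """Called periodically; triggers failover for any silent node."""
                now = time.time()
                for node, last in self.last_beat.items():
                    if now - last > self.timeout and self.apps[node]:
                        self._fail_over(node, now)

            def _fail_over(self, failed, now):
                survivors = [n for n in self.last_beat
                             if n != failed and now - self.last_beat[n] <= self.timeout]
                if not survivors:
                    return                                 # nothing left to fail over to
                for app in self.apps[failed]:
                    target = min(survivors, key=lambda n: len(self.apps[n]))  # least loaded
                    self.apps[target].append(app)
                    print(f"failover: {app} moved from {failed} to {target}")
                self.apps[failed] = []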

    SANs allow server clusters to scale to very large shared data configurations, with more than a hundred servers in a single cluster. The clustering software determines which components or applications on each server should be covered by failover, and subsets of recovery policies can be defined within the server cluster.


    Internet Service Providers

    Internet service providers that provide Web hosting services have traditionally implemented servers with internal or SCSI-attached storage. For smaller ISPs, internal or direct-attached disks are sufficient as long as storage requirements do not exceed the capacity of those devices. For larger ISPs hosting multiple sites, storage requirements may exceed the SCSI-attached capacity of individual servers. Network-Attached Storage (NAS) or SANs are viable options for supplying additional data storage for these configurations.

    In addition to meeting storage needs, maintaining availability of Web services is critical for ISP operations. Because access to a Web site (URL) is based on Domain Name System (DNS) addressing rather than physical addressing, you can deploy redundant Web servers as a failover strategy.


    If a primary server fails, another server can assume access responsibility via round-robin DNS address resolution.
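    The following sketch illustrates the idea of round-robin resolution with failed servers skipped (the hostname and addresses are made-up placeholders; in practice the rotation is performed by the DNS server returning A records in varying order):

        # Round-robin selection over the addresses published for one site name,
        # skipping servers known to have failed. Addresses are illustrative only.

        from itertools import cycle

        A_RECORDS = {"www.example-isp.net": ["192.0.2.10", "192.0.2.11", "192.0.2.12"]}
        failed = set()
        _rotations = {}

        def resolve(name):
            """Return the next healthy address for name, rotating round-robin."""
            rotation = _rotations.setdefault(name, cycle(A_RECORDS[name]))
            for _ in range(len(A_RECORDS[name])):
                addr = next(rotation)
                if addr not in failed:
                    return addr
            raise LookupError(f"no healthy servers for {name}")

        print(resolve("www.example-isp.net"))   # 192.0.2.10
        failed.add("192.0.2.11")                # a server failure is detected
        print(resolve("www.example-isp.net"))   # skips .11 and returns .12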

    For sites that rely on internal or SCSI-attached storage, this technique implies that each server and its attached storage must maintain a duplicate copy of data. This solution is workable so long as the data itself is not dynamic, that is, so long as it consists primarily of read-only information. This option is less attractive, however, for e-commerce applications, which must constantly update user data, on-line orders, and inventory tracking information.

    The shift from read-mostly to more dynamic read/write requirements encourages the separation of storage from individual servers. With NAS or SAN-attached disk arrays, data is more easily mirrored for redundancy and is made available to multiple servers for failover operation.


    Fig: A small ISP implementation using network-attached storage


    SAN architecture brings additional benefits to ISP configurations by providing high-speed data access between servers and storage using block I/O instead of NFS or CIFS file protocols, and by enabling scalability to much higher populations of servers. You can extend the SAN with additional switch ports to accommodate expansion of storage capacity and an increased population of Web servers. This small configuration can scale to hundreds of servers and terabytes of data with no degradation of service.

    Fig [a] depicts a scalable ISP configuration using iSCSI for block I/O access to storage and tape. In this case, the Ethernet switch is a common interconnection both for Web traffic via the IP router and for block access to storage data. Although servers can be provisioned with dual Ethernet links to segregate file and block traffic using VLANs, some iSCSI adapters support file and block I/O on the same interface. Depending on bandwidth requirements, this solution may minimize components and wiring and simplify the configuration.


    Fig [a]: ISP configuration built with iSCSI


    Fig [b]: ISP configuration built with Fibre Channel

    Figure [b] shows the functional equivalent to (a) but uses Fibre Channel instead of iSCSI. This configuration introduces additional devices and connections but fulfills the requirement for high performance and scalable access to shared storage resources.


    Campus Storage Networks

    The need to share storage resources over campus or metropolitan distances has been one by-product of the proliferation of SANs on a departmental basis. Separate departments within a company, for example, may make their own server and storage acquisitions from their vendor of choice. Each departmental SAN island is designed to support specific upper-layer applications, and so may be composed of various server platforms, SAN interconnections, and storage devices. It may be desirable, however, to begin linking SANs to streamline tape backup operations, share storage capacity, or share storage data itself.

    Creating a campus network thus requires transport of block storage traffic over distance as well as accommodation of potentially heterogeneous SAN interconnections. Fibre Channel supports distances of as much as 10 kilometers over single-mode fiber-optic cabling with long-wave transceivers.


    This is sufficient for many campus requirements, but driving longer distances requires additional equipment. The main issue with native Fibre Channel SAN extension is not the distance itself but the requirement for dedicated fiber from one site to another. Many campus and metropolitan networks may already have Gigabit Ethernet links in place, but carrying Fibre Channel and Gigabit Ethernet over the same cable simultaneously requires the additional cost of dense wavelength division multiplexing (DWDM) equipment. In addition, connecting Fibre Channel switches builds a single layer 2 fabric, and therefore multiple sites in a campus or metro storage network must act in concert to satisfy fabric requirements.

    Consider, for example, a campus storage network with a heterogeneous mix of Fibre Channel and iSCSI-based SANs, in which existing Gigabit Ethernet links connect the various buildings. Depending on bandwidth requirements, these links can be shared with messaging traffic or can be dedicated to storage.


    For Fibre Channel SAN connectivity, FCIP could be used, but this example shows iFCP gateways to ensure autonomy of each departmental SAN and isolation from potential fabric disruption. The administrative building is shown with aggregated Gigabit Ethernet links to the data center to provide higher bandwidth, although 10Gbps Ethernet could also be used if desired. The development center is shown with an iSCSI SAN, which requires only a local Gigabit Ethernet switch to provide connections to servers, storage, and the campus.

    This campus configuration could support multiple concurrent storage applications, such as consolidated tape backup to the data center or sharing of storage capacity between sites.


    Fig: Remote tape vaulting from branch offices to a regional data center


    Disaster Recovery

    Like tape backup operations, disaster recovery (DR) has been viewed as a necessary but unattractive requirement for IT storage strategies. The cost of implementing a DR solution is balanced against both the likelihood of a major disruption and the impact on business if access to corporate data is lost. Disaster recovery tends to move toward the top of IT priorities only after major natural or human-caused disasters.

    The scope of a DR solution is more manageable if administrators first identify the types of applications and data that are most critical to business continuance. Customer information and current transactions, for example, must be readily accessible to continue business operations. Project planning data or code for application updates is not as mission-critical, even though such code may represent a substantial investment and should be recovered at some point.


    Reducing the volume of data that must be accessible in the event of disaster is key to sizing a DR solution to capture what is both essential and affordable. Another fundamental challenge for DR strategies is to determine what distance is sufficient to safeguard corporate data. Performance problems beyond a metro circumference make native Fibre Channel extension unsuitable for robust DR scenarios. FCIP and iFCP can provide long-distance support for Fibre Channel-originated storage traffic, whereas iSCSI offers a native IP storage solution to address the distance issue. The maximum distance allowed depends on the type of DR strategy to be implemented.
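    A rough sense of why distance constrains the strategy comes from propagation delay alone (a minimal sketch assuming synchronous disk-to-disk replication, where each write waits for acknowledgment from the DR site; the ~5 microseconds per kilometer figure is the approximate speed of light in optical fiber):

        # Round-trip delay added to every acknowledged write when data is replicated
        # synchronously to a remote site over optical fiber (~5 us per km, one way).

        US_PER_KM_ONE_WAY = 5.0

        def sync_write_penalty_ms(distance_km):
            """Extra latency per write: one round trip to the DR site and back."""
            return 2 * distance_km * US_PER_KM_ONE_WAY / 1000.0

        for km in (10, 80, 500, 2000):
            print(f"{km:>5} km: ~{sync_write_penalty_ms(km):.2f} ms added per write")

        # Beyond metro distances the per-write penalty grows large enough that
        # asynchronous replication or scheduled tape backup becomes the practical option.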

    The DR configuration shown below supports both data replication and tape backup options, using IP network services to connect the primary site to the DR site.


    Fig: Disaster recovery configuration using IP network services and disk-based data replication


    Thank You!