
Harris Stratex Networks, Inc.
Headquarters: Research Triangle Park
637 Davis Drive
Morrisville, North Carolina 27560
United States
Tel: 919-767-3230
Fax: 919-767-3233
www.harrisstratex.com

    White Paper

    Eclipse Transportation of Gigabit Ethernet ETSI

Introduction

The DAC GE is a GigE plug-in card for the Eclipse INU/INUe. This paper introduces the DAC GE, its features, function, and operation, and illustrates its applications within an Eclipse network.

Contents

DAC GE Features and Function
- DAC GE Description
- Transport Channel Capacity and RF Bandwidth
- Transport Channel / Link Options
- Modes of Operation
- Basic Port Parameters
- Advanced Port and Switch Parameters

Operational Guidelines
- Band Plan and System Gain Implications
- Platform Layouts
- QoS
- VLAN Tagging / LAN Aggregation
- RWPR
- Link Aggregation

Configuration and Diagnostics
- Configuration Screens
- Portal Diagnostics Screens
- ProVision Diagnostic Screens
- Throughput Testing

Example Networks
- Inter-site Network
- Adding In-house Capacity to a Telco Network
- DSL Network
- Municipal Broadband Network
- WiMAX Backhaul Network
- Metro Edge Switch

Summary

Glossary


DAC GE Features and Function

The DAC GE is one of over twelve transport option cards for the Eclipse INU/INUe. This section describes its features, capacity and Ethernet bandwidth options, modes of operation, and operational parameters.

DAC GE Description

The DAC GE transports Gigabit Ethernet data. It incorporates a full-featured layer 2 switch with support for link aggregation, enhanced RSTP, VLAN tagging, and extensive QoS options.

Features include:

- Multiple user ports: three RJ-45 10/100/1000Base-T ports, and one SFP optical 1000Base-LX port.
- Two channel ports (C1, C2) for connection to radio or fiber links.
- Programmable port/channel switching fabric: transparent, VLAN, or mixed.
- Capacity increments of Nx2 Mbit/s, or Nx150 Mbit/s, to a maximum of 300 Mbit/s per DAC GE.
- Native Ethernet traffic configurable to ride side by side with native PDH E1 traffic.
- Extremely low latency: less than 360 microseconds for 2000-byte packets.
- Comprehensive QoS policing and prioritization options (802.1p and DiffServ).
- VLAN tagging (802.1Q and Q-in-Q).
- RWPR™ enhanced RSTP (802.1D-2004).
- Layer 2 link aggregation (802.3ad).
- Layer 1 link aggregation.
- Flow control (802.3x).
- Jumbo frames to 9600 bytes.
- Comprehensive RMON and performance indicators (RFC 1757).
- User-friendly configuration tool with a rich graphical interface.
- Compatibility with DAC ES, IDU ES, and IDU GE 20x.

    Figure 1: DAC GE


    Figure 2 illustrates the basic operational blocks. Ports 1 to 4 connect via the physical interface to an Ethernet switch, which supports user configuration of switching fabric (operational mode), speed, QoS, VLANs, RSTP, layer 2 link aggregation, flow control, frame size, and interconnection to transport channels C1 and C2. The gate array (FPGA) manages signal framing to/from the INU backplane bus, which provides channel interconnection to a RAC or RACs for over-air transmission, or to a DAC 155oM for fiber transport.

    The fully integrated switch analyzes the incoming Ethernet frames for source and destination MAC addresses and determines the output port/channel over which the frames will be delivered.

    Payload throughputs are determined by the configured port and channel speeds (bandwidth), QoS settings, and internal and external VLAN information.

Table 1 lists typical specifications for the single-mode optical port.

Table 1: SFP Optical Port Specifications

Wavelength: 1310 nm
Maximum launch power: -3 dBm
Minimum launch power: -9.5 dBm
Link distance: up to 10 km / 6 miles with 9/125 µm optical fiber; 550 m / 600 yards with 50/125 µm or 62.5/125 µm fiber

    Figure 2: DAC GE Architecture

    NOTE: Nominal Nx2, 150 or 300 Mbit/s throughputs are used in this paper. For 150 and 300 Mbit/s selections, measured maximum throughputs for a 1518 byte frame are 152 and 308 Mbit/s respectively.


Transport Channel Capacity and Link Bandwidth

DAC GE transport channel capacity is selected in multiples of 2 Mbit/s to a maximum of 200 Mbit/s, or in multiples of 150 Mbit/s to a maximum of 300 Mbit/s.

    The channels are mapped via the FPGA to the backplane bus for cross-connection to a RAC or RACs for a radio link, or to a DAC 155oM for a fiber link (Nx2 Mbit/s only).

    Radio link capacity is configured to provide the required traffic (payload) capacity. The resultant RF bandwidth is a function of radio link capacity and selected modulation rate.

    The exception to this rule is adaptive modulation, where for a given RF channel bandwidth, the modulation rate, and hence capacity, is increased when path conditions permit.

    Radio link capacity can be dedicated to Ethernet, or shared with companion PDH or SDH traffic.

DAC GE Transport Channel Capacity

Ethernet channel throughput options depend on the Eclipse backplane bus setting: Nx2 Mbit/s or Nx150 Mbit/s.

    An Nx2 Mbit/s backplane supports a maximum of 200 Mbit/s on one channel (C1 or C2), or a total of 200 Mbit/s using both channels (C1 and C2).

    An Nx150 Mbit/s selection supports 300 Mbit/s on one channel (C1 or C2), or 150 Mbit/s on one or both channels.

Each channel can be mapped to a different link, or to the same link.

With an Nx2 Mbit/s setting, a DAC GE is air-compatible with a DAC ES, IDU GE 20x, or IDU ES.
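These channel rules are compact enough to express directly. The following is a minimal sketch, with illustrative names, of a check against the backplane rules above (capacities in Mbit/s, 0 meaning an unused channel); it is not the actual Portal validation logic.

```python
def valid_channel_config(backplane: str, c1: int, c2: int) -> bool:
    """Check a DAC GE channel pair (C1, C2) against the backplane rules
    described above. A sketch, not the shipped validation logic."""
    if backplane == "Nx2":
        # Nx2 Mbit/s: each channel in 2 Mbit/s steps, 200 Mbit/s total.
        return c1 % 2 == 0 and c2 % 2 == 0 and c1 + c2 <= 200
    if backplane == "Nx150":
        if 300 in (c1, c2):
            # 300 Mbit/s is only supported on a single channel.
            return {c1, c2} <= {0, 300} and c1 + c2 == 300
        # Otherwise 150 Mbit/s on one or both channels.
        return c1 in (0, 150) and c2 in (0, 150)
    raise ValueError("backplane must be 'Nx2' or 'Nx150'")

print(valid_channel_config("Nx2", 100, 100))   # True: totals 200 Mbit/s
print(valid_channel_config("Nx150", 300, 150)) # False: 300 must stand alone
```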

Link Capacity

Link capacity is configured to support the required Ethernet capacity, plus any companion E1 or SDH traffic.

    The maximum capacity that can be configured on one physical radio link is 200 Mbit/s for an Nx2 Mbit/s selection, or 300 Mbit/s for an Nx150 Mbit/s selection.

The maximum capacity that can be supported on one INU/INUe (the backplane maximum) is likewise 200 Mbit/s for an Nx2 Mbit/s selection, or 300 Mbit/s for an Nx150 Mbit/s selection. These figures are both the backplane maximum and the maximum that can be transported over one radio link.

    The maximum capacity that can be configured on one fiber (DAC 155oM) link is 128 Mbit/s.

    Two or more physical links can be link aggregated to support a logical link with a capacity that is the sum of the individual link capacities. In this way co-located INUs with DAC GEs can be installed to provide up to a 1 Gbit/s connection.

Liquid Bandwidth

Liquid bandwidth refers to the Eclipse platform's ability to seamlessly assign link capacity to Ethernet traffic, and to companion TDM E1 or STM1 traffic.

    This scalability is enabled by the unique universal modem design where Ethernet and/or TDM data is transported natively.

    The modulation process does not distinguish between the type of data to be transported, Ethernet or TDM; data is simply mapped into byte-wide frames to provide a particularly efficient and flexible wireless transport mechanism.


    The result is that when configured for Ethernet, and/or TDM data, the full configured link capacity is available for user throughput.

For an Nx2 Mbit/s backplane selection, assignment is fully scalable in 2 Mbit/s / E1 steps to optimize throughput granularity for network planning purposes. This is illustrated in Figure 3, which indicates possible assignments to Ethernet and to companion NxE1 capacity for a selected link capacity; a short sketch of this arithmetic follows.

Figure 3: Payload Assignment Graph for Ethernet and Companion E1 Traffic
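The assignment arithmetic behind Figure 3 is simple: every companion E1 takes one 2 Mbit/s slot away from Ethernet. A minimal sketch, with illustrative names:

```python
def ethernet_capacity_mbps(link_capacity_mbps: int, n_e1: int) -> int:
    """Remaining Ethernet capacity on an Nx2 Mbit/s backplane after
    assigning n_e1 companion E1 circuits (2 Mbit/s each)."""
    if link_capacity_mbps % 2:
        raise ValueError("link capacity must be a multiple of 2 Mbit/s")
    ethernet = link_capacity_mbps - 2 * n_e1
    if ethernet < 0:
        raise ValueError("E1 assignment exceeds the link capacity")
    return ethernet

# Example: a 64x2 Mbit/s (128 Mbit/s) link carrying 16xE1 leaves
# 96 Mbit/s for Ethernet.
print(ethernet_capacity_mbps(128, 16))  # 96
```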

    With an Nx150 Mbit/s backplane selection the link capacity options are 150 Mbit/s or 300 Mbit/s. This applies to a radio link only.

For a 150 Mbit/s link, capacity is dedicated to Ethernet or to STM1. For a 300 Mbit/s link, the 300 Mbit/s can be dedicated to Ethernet, or to 150 Mbit/s Ethernet plus 1xSTM1, or to 2xSTM1.

Radio Link Capacity and RF Bandwidth: Fixed Modulation

Radio link capacity is configured within the RACs, where, depending on the capacity selected, one or more modulation options are available to support different RF channel bandwidths.

    Two RAC types are available:

RAC 30 or RAC 3X for standard link operation, where RAC 30 supports RF bandwidths up to 28 MHz, and RAC 3X supports bandwidths from 28 to 56 MHz.

    RAC 40 or RAC 4X for Co-channel Dual Polarized (CCDP) link operation, where two links are operated on the same frequency channel; one using the vertical polarization, the other the horizontal. RAC 40 supports an RF bandwidth of 28 MHz, RAC 4X supports bandwidths of 28, 40, or 56 MHz.

Three ODU types are available: ODU 300sp, ODU 300hp, and ODU 300ep.

    ODU 300sp supports Ethernet capacities to 80 Mbit/s on frequency bands 7 to 38 GHz, with QPSK or 16 QAM modulation options. Standard Tx power.


    ODU 300hp supports Ethernet capacities to 300 Mbit/s on frequency bands 6 to 38 GHz, with modulation options from QPSK to 256 QAM. High Tx power.

ODU 300ep supports Ethernet capacities to 300 Mbit/s on the 5, 13, or 15 GHz bands. Modulation options range from QPSK to 256 QAM. Extended Tx power.

    Table 2 lists the RF bandwidths supported by Eclipse RAC/ODU combinations for Ethernet capacities from 40 to 300 Mbit/s.

Table 2: Ethernet Capacity, RF Bandwidth, Modulation and RAC/ODU

Ethernet Capacity (Mbit/s)¹ | RF Channel Bandwidth (MHz) | Modulation | RAC | ODU 300
40   | 14 | 16 QAM  | RAC 30                 | sp, hp, ep
40   | 28 | QPSK    | RAC 30                 | sp, hp, ep
65   | 14 | 64 QAM  | RAC 30                 | hp, ep
80   | 28 | 16 QAM  | RAC 30                 | sp, hp, ep
100  | 28 | 32 QAM  | RAC 30, RAC 40         | hp, ep
130  | 28 | 64 QAM  | RAC 30, RAC 40         | hp, ep
130  | 56 | 16 QAM  | RAC 3X, RAC 4X         | hp, ep
150  | 28 | 128 QAM | RAC 30, RAC 3X, RAC 40 | hp, ep
150² | 40 | 64 QAM  | RAC 3X, RAC 4X         | hp, ep
150  | 56 | 16 QAM  | RAC 3X, RAC 4X         | hp, ep
190  | 28 | 256 QAM | RAC 3X, RAC 4X         | hp, ep
200² | 40 | 128 QAM | RAC 3X, RAC 4X         | hp, ep
200  | 56 | 64 QAM  | RAC 3X, RAC 4X         | hp, ep
200  | 56 | 32 QAM  | RAC 4X                 | hp, ep
300  | 56 | 128 QAM | RAC 3X, RAC 4X         | hp, ep

1. 10 and 20 Mbit/s options are also available.
2. 5, 6, 10 or 11 GHz only for 40 MHz operation.
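Table 2 is also easy to consult programmatically when planning a link. A sketch, with the table data transcribed into a dictionary; the function name is illustrative:

```python
# Table 2 transcribed as (capacity Mbit/s, RF bandwidth MHz) -> options.
TABLE_2 = {
    (40, 14):  ("16 QAM",  ["RAC 30"]),
    (40, 28):  ("QPSK",    ["RAC 30"]),
    (65, 14):  ("64 QAM",  ["RAC 30"]),
    (80, 28):  ("16 QAM",  ["RAC 30"]),
    (100, 28): ("32 QAM",  ["RAC 30", "RAC 40"]),
    (130, 28): ("64 QAM",  ["RAC 30", "RAC 40"]),
    (130, 56): ("16 QAM",  ["RAC 3X", "RAC 4X"]),
    (150, 28): ("128 QAM", ["RAC 30", "RAC 3X", "RAC 40"]),
    (150, 40): ("64 QAM",  ["RAC 3X", "RAC 4X"]),
    (150, 56): ("16 QAM",  ["RAC 3X", "RAC 4X"]),
    (190, 28): ("256 QAM", ["RAC 3X", "RAC 4X"]),
    (200, 40): ("128 QAM", ["RAC 3X", "RAC 4X"]),
    (200, 56): ("64 QAM",  ["RAC 3X", "RAC 4X"]),  # also 32 QAM on RAC 4X
    (300, 56): ("128 QAM", ["RAC 3X", "RAC 4X"]),
}

def narrowest_channel(capacity_mbps: int):
    """Smallest RF channel (with its modulation and RAC options) that
    carries the requested Ethernet capacity, per Table 2."""
    fits = [(bw, *TABLE_2[(c, bw)]) for c, bw in TABLE_2 if c == capacity_mbps]
    return min(fits) if fits else None

print(narrowest_channel(150))  # (28, '128 QAM', ['RAC 30', 'RAC 3X', 'RAC 40'])
```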

Radio Link Capacity and RF Bandwidth: Adaptive Modulation

Instead of using a fixed modulation rate to provide a guaranteed capacity and service availability under all path conditions, the modulation rate, and hence capacity, is increased when path conditions permit. On a typical link this means a higher capacity will be available for better than 99.5 percent of the time.

    It provides a particularly cost and traffic efficient solution when used in conjunction with data prioritization, where with appropriate QoS settings all high priority traffic, such as voice and video, continues to get through when path conditions are poor. Outside these conditions best effort lower priority traffic, such as email and file transfers, enjoys data bandwidths that can be two or three times the guaranteed bandwidth.


    Adaptive modulation for Eclipse is provided by the RAC 30A plug-in. It uses one of three automatically and dynamically switched modulations - QPSK, 16 QAM or 64 QAM - selected by an adaptive modulation engine that can handle up to 100 dB/s fading fluctuations.

    Modulation switching is hitless. During a change to a lower modulation, remaining higher priority traffic is not affected. Similarly, existing traffic is unaffected during a change to a higher modulation.

    Table 3 highlights RAC 30A function, whereby for a given RF channel bandwidth of 7, 14 or 28 MHz, a twofold improvement in data throughput is provided for a change from QPSK to 16 QAM, and a threefold improvement to 64 QAM.

RAC 30A may be used as an upgrade solution for existing links, delivering higher capacities with minimum disruption and cost, and in new links where:

- A narrower channel bandwidth, such as 7 MHz, can be used instead of 14 MHz or 28 MHz, or
- An antenna up to two sizes smaller than normally required can be used.

Traffic can be Ethernet, TDM, or a mix of both, with a 2 Mbit/s or 1.5 Mbit/s granularity.

RAC 30A is compatible with Eclipse ODU 300hp/ep/sp, and with the RAC 30 (V2 and V3), meaning that during an upgrade to RAC 30A operation there is no need to replace both ends at the same time. This simplifies the upgrade program while ensuring minimum downtime.

Table 3: RAC 30A Adaptive Modulation Figures
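As a rough illustration of the switching decision described above, the sketch below picks the highest-rate modulation the current receive level supports. The threshold values are hypothetical; this paper does not publish the RAC 30A switch points.

```python
# Hypothetical receive signal level (RSL) thresholds, in dBm, at which
# each RAC 30A modulation can run cleanly; the real switch points are
# not published in this paper.
THRESHOLDS = [          # highest-rate modulation first
    ("64 QAM", -70.0),
    ("16 QAM", -76.0),
    ("QPSK", -84.0),
]

def select_modulation(rsl_dbm: float) -> str:
    """Return the highest-rate modulation the current RSL supports,
    falling back to QPSK, which carries the guaranteed traffic."""
    for modulation, minimum_rsl in THRESHOLDS:
        if rsl_dbm >= minimum_rsl:
            return modulation
    return "QPSK"

print(select_modulation(-65.0))  # 64 QAM: full capacity available
print(select_modulation(-80.0))  # QPSK: guaranteed capacity only
```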

Higher Capacity Links

Where higher Ethernet capacities are required, two or more INUs are co-located to provide parallel-path links. These can be operated on different frequency channels, or more commonly the two links are operated on the same frequency channel using XPIC RAC 40s or RAC 4Xs in a CCDP configuration. In this way, Ethernet capacities to 600 Mbit/s (2 or 4 links), 900 Mbit/s (3 links), or 1000+ Mbit/s (4 links) are efficiently enabled.

    CCDP operation enables two equal-capacity links to operate within the same frequency channel using the vertical and horizontal polarizations. When link-aggregated, the capacity of the individual links is combined on a single user interface.


    Note that while four 300 Mbit/s links can be installed to support a combined over-air data capacity of 1200 Mbit/s, where link aggregation is enabled, the maximum that can be supported on one DAC GE interface is 1000 Mbit/s.

    For information on DAC GE link aggregation see Link Aggregation.

    Figure 4 summarizes the radio channel (link) options on an Nx150 Mbit/s selection for Ethernet capacities from 150 to 600 Mbit/s. Up to 300 Mbit/s a single INU is used. For higher capacities two co-located INUs are used.

    Normally a 300 Mbit/s Ethernet link is installed using one radio link. This requires an RF channel bandwidth of 56 MHz. But on RF bands where 56 MHz channeling is not supported, or where a free 56 MHz channel is not available, two 150 Mbit/s links can be installed on one 28 MHz RF path using RAC 40s or RAC 4Xs for CCDP operation.

    Even on bands where 300 Mbit/s / 56 MHz channels are available, it may still be preferable to operate two 150 Mbit/s / 28 MHz channels to secure the additional system gain of such links, and to take advantage of the inherent redundancy provided by link aggregation should one of the links fail. See Band Plan and System Gain Implications.

    Figure 4: Ethernet Radio Path Options for 150 to 600 Mbit/s

Capacity License

Capacity is licensed according to the required RAC capacity. The license is software enabled within a compact flash card, which plugs into the NCC.

    The base license is 20 Mbit/s for up to 6 RACs. Beyond this base, capacity is licensed on a per-RAC basis. Licensed capacity is field upgradeable.

DAC GE Transport Channel / Link Options

The following diagrams illustrate DAC GE configurations for one- and two-channel operation for simple links, ring links, and aggregated links.

    Figure 5 illustrates single channel operation.

    A RAC 30 is used for operation on RF channel bandwidths to 28 MHz. Supports Ethernet capacities to 150 Mbit/s.


    A RAC 3X is used for operation on RF channel bandwidths from 28 to 56 MHz. Supports Ethernet capacities to 300 Mbit/s.

    Figure 5: Simple Single Channel Radio Link

    Figure 6 illustrates co-channel CCDP operation from one INU. This has particular application where 300 Mbit/s Ethernet data is required, but a 56 MHz radio channel is not available. Instead, the two RF links operate on the same 28 MHz RF channel, and are link aggregated to provide a single 300 Mbit/s user interface.

    RAC 40s are used. Each is configured for 150 Mbit/s. RF channel bandwidth is fixed at 28 MHz.

    The two 150 Mbit/s Ethernet channels can, if required, be operated as two independent Ethernet connections (no link aggregation).

    The radio links can also be configured for adjacent channel operation, using RAC 30s or RAC 3Xs.

    Figure 6: Co- or Adjacent Channel Links

    Figure 7 illustrates a ring node configuration where a single INU/DAC GE supports east and west traffic using RAC 30s for channel bandwidths to 28 MHz, or RAC 3Xs for bandwidths from 28 to 56 MHz.

RWPR™ (Resilient Wireless Packet Ring) is enabled on the DAC GE to provide enhanced RSTP; an external RSTP switch is not required.

    With an Nx2 Mbit/s selection, ring link capacity can be split between Ethernet and E1 traffic. E1 circuits can be ring-protected using Eclipse Super-PDH ring protection, or simply configured for point-to-point operation.

    With an Nx2 Mbit/s selection the maximum capacity supported for Ethernet is 100 Mbit/s - less if E1 circuits are also configured. (Backplane maximum is 200 Mbit/s: 100 Mbit/s east and 100 Mbit/s west).

    With an Nx150 Mbit/s selection only Ethernet traffic (150 Mbit/s) is supported on the ring. (Backplane maximum is 300 Mbit/s: 150 Mbit/s east and 150 Mbit/s west).


    Figure 7: Ring Configuration with One INU

    Figure 8 illustrates a 300 Mbit/s ring node using 300 Mbit/s links.

Two INUs, each with a DAC GE, are required, with an Ethernet cable connection between the DAC GEs. RWPR is enabled in both DAC GEs such that each is a separately managed switch on the ring.

    RAC 3Xs are required. The RF channel bandwidth is 56 MHz. A 300 Mbit/s ring can also be configured using two link-aggregated 150 Mbit/s links. The parallel-path links are first link aggregated, and then RWPR ring protected.

A 600 Mbit/s ring requires four INUs at each ring node, using two link-aggregated 300 Mbit/s links east and west.

Figure 8: Ring Configuration with Two Co-located INUs

    Figure 9 illustrates 600 Mbit/s L2 link-aggregation using two 300 Mbit/s links and RAC 4Xs for CCDP link operation. Both links operate on the same 56 MHz radio channel.

    A 600 Mbit/s aggregated link can also be configured using four co-path 150 Mbit/s links, or by using one 300 Mbit/s link with two 150 Mbit/s links.

    Similarly:

    A 450 Mbit/s link is established using one 300 Mbit/s link with one adjacent-channel 150 Mbit/s link, or by using three 150 Mbit/s links where two can be configured for CCDP operation, and the third configured on an adjacent channel.

    A 900 Mbit/s link is established using three 300 Mbit/s links. Two can be configured for CCDP operation; the third must be on an adjacent channel.


    A 1 Gbit/s link requires four 300 Mbit/s links. With CCDP operation the links are paired to operate on two adjacent 56 MHz radio channels.

    Figure 9: 600 Mbit/s Link

    For more information on platform options refer to Platform Layouts.

Modes of Operation

DAC GE supports three operational modes - transparent, mixed, or VLAN - which determine the layer 2 (L2) port-to-port and port-to-channel switching fabric.

Transparent Mode

This is the default, broadcast mode, which includes options for L2 link aggregation.

Transparent Mode with Aggregation Disabled

All ports and channels are interconnected. This supports four customer LAN connections (ports 1 to 4) with bridging to two separate transport channels (C1 or C2).

Figure 10: Transparent Mode with Aggregation Disabled

    To avoid a traffic loop, only C1 or C2 is used over the same radio path. C1 and C2 may be used where the DAC GE supports two back-to-back ring links where one channel is assigned to the east, the other to the west.

Transparent with Aggregation

Two or more links are configured to support a single logical link with a capacity that is the sum of the individual link capacities.

    Options are provided within Portal to select channel and/or port aggregation:


    A channel selection of C1 and C2 applies where the link or links to be aggregated are installed on the same INU as the DAC GE. A typical application is the CCDP configuration of two 150 Mbit/s links to provide a 300 Mbit/s aggregate capacity on one 28 MHz radio channel.

    A channel plus port selection applies where the link or links to be aggregated are installed on separate, co-located INUs. A typical application is the CCDP configuration of two 300 Mbit/s links to provide a 600 Mbit/s aggregate capacity on one 56 MHz radio channel.

    A customizable aggregation weighting or load balancing option is provided for use where the links to be aggregated are not of equal capacity.

Balanced aggregation weights are applied by default. However, where one of the aggregated links is of a different capacity, such as a 300 Mbit/s link aggregated with a 150 Mbit/s link, the weighting on the 300 Mbit/s link should be set to 11, and on the 150 Mbit/s link to 5. The aggregation weights must be assigned such that they always total 16; a short sketch of this rule appears after Figure 11.

Figure 11 illustrates C1 and C2 aggregation; traffic on channels C1 and C2 is aggregated and bridged to ports P1 to P4 to support a common network connection on all ports. The default weighting applied is 8/8.

Figure 11: Transparent Mode with C1 and C2 Aggregation
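A minimal sketch of the weighting rule: distribute the 16-point budget in proportion to link capacity, then round so the weights still total 16. The shipped Portal tool may round differently; the function name is illustrative.

```python
def aggregation_weights(capacities_mbps):
    """Split the fixed weight budget of 16 across aggregated links in
    proportion to their capacities, as described above."""
    total = sum(capacities_mbps)
    weights = [round(16 * c / total) for c in capacities_mbps]
    # Force the weights to total exactly 16, adjusting the largest link.
    weights[weights.index(max(weights))] += 16 - sum(weights)
    return weights

print(aggregation_weights([300, 150]))  # [11, 5], as quoted above
print(aggregation_weights([150, 150]))  # [8, 8], the balanced default
```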

    For more information, including a layer 1 (L1) aggregation option, refer to Link Aggregation.

Mixed Mode

Mixed Mode supports two separate network connections, where P1-C1 provides dedicated transport for port 1 traffic, and a second transparent/broadcast mode connection is provided with P2, P3, P4 and C2 interconnected.


    Figure 12: Mixed Mode Port and Channel Assignment

    The two channels can be assigned on the same path, or used to support east and west paths in a ring network using an external RSTP switch, where C1 is assigned to one direction and C2 to the other.

VLAN Mode

VLAN Mode supports four separate LAN connections. P1-C1 is the same as for Mixed Mode, where dedicated transport is provided for port 1 traffic. For ports 2, 3 and 4, three separate (virtual) LANs (VLANs 2, 3 and 4) are multiplexed to C2, with internal Q-in-Q tagging of the packets ensuring correct end-to-end matching of LANs over the link.

Figure 13: VLAN Mode Port and Channel Assignment

    The two channels can be assigned on the same path, or used to support east and west paths in a ring network using an external RSTP switch, where C1 is assigned to one direction and C2 to the other.

Basic Port Parameters

User selection/confirmation is provided for the following port-based parameters.

    Enabled/Disabled. A port must be enabled to allow traffic flow.

    Name. A port name or other relevant port data can be entered.

    Connection Type and Speed. Provides selection per-port of auto or manual settings for half or full duplex operation. In auto, the DAC GE self-sets these options based on the traffic type detected.

    Interface Type. Provides selection per port of auto or manual settings for the interface type; Mdi or MdiX (straight or cross-over respectively).

    Priority. Provides a four-level, low, medium-low, medium-high or high priority setting for each port. This port prioritization only has relevance to ports using a shared transport channel. Traffic is fair-queued so that traffic on a low priority port is allocated some bandwidth when availability is contested.

    Port Up. Indicates that a valid connection with valid Ethernet framing has been detected.

    Resolved. Indicates the DAC GE has resolved an auto selection for speed-duplex.


    Advanced Port and Switch Parameters

Priority Mapping

Provides selection of queue-controller operation for the following options. Only one option can be selected, and a selection applies to all ports.

    Port default. Enables the setting of a four-level port priority on each of the four ingress ports. This is the basic port priority option described in Basic Port Parameters above.

802.1p provides prioritization based on the three-bit priority field of the 802.1p VLAN tag. Each of the eight possible tag priority values (0 to 7, with 7 the highest) is mapped into a four-level (2-bit) priority. Mapping is user configurable.

    DiffServ provides prioritization based on the DSCP (Differentiated Service Code Point) field in the IP header. It is designed to tag a packet so that it receives a particular forwarding treatment or per-hop-behavior (PHB) at each network node. The six bits available enable 64 discrete DSCP values or priorities (0 to 63), with 63 the highest. Mapping is user configurable.

    No priority. Incoming packets are passed transparently.

    For more information, see Traffic Priority.

Flow Control

Flow control is implemented through the use of IEEE 802.3x pause frames, which tell the far-end node to stop or restart transmission to ensure that the amount of data in the receive buffer does not exceed a high-water mark.

    For more information, see Flow Control.

Disable Address Learning

Address learning is implemented by default to support efficient management of Ethernet traffic in multi-host situations. The option to disable address learning is primarily for use in a ring network where protection for the Ethernet traffic is provided by an external RSTP switch. To avoid conflict between the self-learning functions within the DAC GE and the external RSTP switches during path-failure situations, the DAC GE capability must be switched off. Otherwise, in the event of an Ethernet path failure and subsequent re-direction of Ethernet traffic by the external switch to the alternate path, the DAC GE will prevent re-direction of current/recent traffic until its address register matures and deletes unused/unresponsive destination addresses, which may take several minutes.

Maximum Frame Size

Maximum frame size sets the largest frame that can be transmitted without being broken down into smaller units (fragmented). The DAC GE supports jumbo frames to 9600 bytes; the configurable range is 64 to 9600 bytes. A selection applies to all ports.

The frame size should not be set above 7500 bytes for bi-directional traffic. 9600 bytes can be used for uni-directional requirements: frame sizes to 9600 bytes in one direction, and normal frame sizes in the other.

    For more information refer to MTU Size.

RWPR™

DAC GE incorporates RSTP in the form of RWPR-enhanced RSTP.

RSTP is a link management protocol for layer 2 ring or mesh networks. It prevents the formation of network loops and provides path redundancy. When a link in the network fails, RSTP redirects traffic around the failure by unblocking a standby link. Service recovery (reconvergence) times are typically 2 to 7 seconds.

RWPR (Resilient Wireless Packet Ring) employs patent-pending mechanisms developed by Harris Stratex to enhance RSTP reconvergence times. RWPR essentially eliminates failure detection time (less than 1 ms) using a unique rapid-failure-detection (RFD) algorithm, then uses dynamic message timing to accelerate the RSTP convergence process. The result is carrier-class reconvergence times - as low as 50 ms.

    For more information refer to Operational Guidelines: RWPR.

Link Status Propagation

Link status propagation enables externally-connected equipment to rapidly detect the status of a DAC GE channel. It operates by instantly forcing a port shutdown at both ends of the link in the event of a channel failure, such as a path fade, or at the far end of a link in the event of an Ethernet cable disconnection or external device failure on a DAC GE port.

    A port shutdown is immediately detected by the connected equipment so that it can act on applicable alarm/switching options.

    For more information, refer to Link Status Propagation.

VLAN Tagging

DAC GE supports 802.1Q and Q-in-Q tagging.

- 802.1Q: untagged frames are tagged.
- Q-in-Q: all frames are tagged, including those with existing tags.

    Selections are made on a per-port basis.

    A VLAN ID can be entered (range 0 to 4095) or left as default. A VLAN membership filter can also be selected.

With this capability the DAC GE can tag, 802.1p prioritize, and aggregate Ethernet traffic from two, three or four ports onto a common trunk/channel; the sketch below shows what the resulting tag stacks look like.
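For illustration only, the frames below are built with the Scapy library to show the 802.1Q and Q-in-Q headers involved; the DAC GE itself applies these tags in hardware. Note that both stacked tags here use EtherType 0x8100, whereas provider networks often use 0x88a8 for the outer tag.

```python
from scapy.all import Ether, Dot1Q, IP

# 802.1Q: a single tag with VLAN ID 100 and 802.1p priority 5.
single_tagged = Ether() / Dot1Q(prio=5, vlan=100) / IP(dst="192.0.2.1")

# Q-in-Q: an outer (service) tag pushed onto an already-tagged frame.
double_tagged = (
    Ether() / Dot1Q(vlan=500) / Dot1Q(prio=5, vlan=100) / IP(dst="192.0.2.1")
)

double_tagged.show()  # inspect the stacked VLAN headers
```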

    For more information on VLAN tagging, refer to Operational Guidelines, Customized VLAN Tagging.


Operational Guidelines

This section provides an introduction to DAC GE deployment. Topics addressed are:

- Band Plan and System Gain Implications
- Platform Layouts
- QoS
- VLAN Tagging / LAN Aggregation
- RWPR
- Link Aggregation

Band Plan and System Gain Implications

For the capacities of interest, ETSI band plans support RF channel bandwidths of 3.5, 7, 14, 28, 40 or 56 MHz with, depending on the capacity/bandwidth option, modulation rates from QPSK to 256 QAM.

Band Plan Implications

For 300 Mbit/s data, which is the maximum supported on one radio link, the required RF channel size is 56 MHz. However, in some countries or regions 56 MHz channeling may be unavailable, or restricted to the higher bands (18 GHz and above). Where 300 Mbit/s data is required in such situations, two 150 Mbit/s links can be installed on 28 MHz channeling. The two links can be operated on the same 28 MHz channel using XPIC RAC 40s for CCDP operation, and where a single 300 Mbit/s user interface is required, DAC GE link aggregation is enabled.

System Gain Implications

The maximum data that can be transported on a channel is a function of the modulation rate: the higher the rate, the higher the capacity, but the lower the system gain.

At the low end, QPSK or 16 QAM is used; at the top, 128 or 256 QAM. There is a marked difference in system gain. For example, a 150 Mbit/s ODU 300hp link on a 28 MHz channel (128 QAM) delivers a system gain at 18 GHz of 85.5 dB using a RAC 30v3, or 84.5 dB using a RAC 3X or RAC 40. The same 150 Mbit/s on a 56 MHz channel uses 16 QAM and delivers a system gain of 93.5 dB using a RAC 3X or 4X. The 8 to 9 dB improvement represents a difference of about one antenna size at both ends, but comes at the cost of a 56 MHz channel rather than a 28 MHz channel.

Where 300 Mbit/s is required the most obvious choice is one 300 Mbit/s link on a 56 MHz channel (128 QAM). However, 300 Mbit/s can also be provisioned using two 150 Mbit/s links on one 28 MHz channel (128 QAM) using CCDP operation, where aside from the benefits of 28 MHz channeling over 56 MHz, it may also be preferable for system gain reasons. Refer to Table 3, which compares 10⁻⁶ BER system gains at 18 GHz for the ODU 300hp.


Table 3: Capacity Versus Bandwidth, Modulation and System Gain (ETSI 18 GHz)

Link Capacity | RF Ch Bandwidth | Modulation | System Gain | RAC Type
300 Mbit/s | 56 MHz  | 128 QAM | 81 dB   | RAC 3X, 4X
210 Mbit/s | 56 MHz  | 64 QAM  | 88 dB   | RAC 3X
200 Mbit/s | 56 MHz  | 32 QAM  | 89 dB   | RAC 4X
190 Mbit/s | 28 MHz  | 256 QAM | 78 dB   | RAC 3X, 4X
150 Mbit/s | 28 MHz  | 128 QAM | 84.5 dB | RAC 30, 3X, 40
150 Mbit/s | 28 MHz¹ | 128 QAM | 85.5 dB | RAC 30v3
150 Mbit/s | 56 MHz  | 16 QAM  | 93.5 dB | RAC 3X, 4X

1: Enhanced system gain option for RAC 30v3

    From this data it can be seen that:

    Maximum practical hop distances are higher for solutions using dual 150 Mbit/s links, compared to a single 300 Mbit/s link:

    - Dual 150 Mbit/s links on one 28 MHz CCDP channel deliver a 3.5 dB to 4.5 dB gain advantage over a single 300 Mbit/s link on a 56 MHz channel, which equates to about one antenna size at one end. Not only do you get better system gain using two 150 Mbit/s links, you use half the channel bandwidth.

    - Dual 150 Mbit/s links on one 56 MHz CCDP channel deliver a 12.5 dB gain advantage over a single 300 Mbit/s link on a 56 MHz channel, which equates to more than one antenna size at both ends.

    Where a capacity of 150 Mbit/s is not sufficient, 200 Mbit/s may provide a solution, particularly where system gain is an issue for the higher 300 Mbit/s option. While using the same 56 MHz channeling as a 300 Mbit/s link, the 200 Mbit/s option delivers a system gain advantage of 7 or 8 dB, which equates to about one antenna size at both ends.

Fixed Versus Adaptive Modulation

Fixed modulation refers to a fixed modulation rate. For a required link capacity there can be several modulation and RF channel size options, allowing a trade-off between system gain and RF channel size based on the modulation rate used. Where a high system gain is needed - one that cannot be met by an increased antenna size - a more robust, lower modulation rate is required, which in turn dictates the need for a larger RF channel size.

    When a link path is planned, it is normally configured to provide optimum reliability under all path conditions. This is usually expressed as an availability figure, where 99.999% (five nines) availability over time is regarded as the industry standard objective.

    But the resilience needed to achieve this is typically required for just a small fraction of time, typically less than 0.1% to 0.5%. This means that for 99.9% to 99.5% of the time, a higher modulation rate could be used to achieve a higher capacity on the channel, or a lower RF channel size could be used to achieve the same capacity. This is when adaptive modulation provides real benefit.

    Adaptive modulation refers to the dynamic adjustment of modulation rate to ensure maximum data bandwidth is provided most of the time (on a given RF channel size), with a guaranteed bandwidth provided all of the time.

    For example, a link using robust QPSK modulation can have a system gain providing as much as 30 dB of fade margin, but is only needed to protect the link against worst-case fades that may occur for just a few minutes in a year. For the rest of the year the margin is not used.

By using less robust but more efficient higher modulation rates, the available fade margin can be transformed into delivering more data throughput. Adaptive modulation dynamically changes the modulation so that the highest available capacity is provided at any given time.

RAC 30A is the Eclipse adaptive modulation RAC card. When it is used in conjunction with the QoS traffic prioritization options provided on the DAC GE, the pair can be configured to ensure all high-priority traffic continues to get through when path conditions deteriorate; only low-priority best-effort data, such as email and file-transfer traffic, is discarded.

Adaptive modulation is especially applicable to longer hops, where system gain is an issue in providing the fade margin needed for five-nines availability. Using fixed modulation, large and expensive antennas may be required to deliver the required path budget, which in turn may involve the added time and cost of installing high-strength support structures and associated planning approvals. But with adaptive modulation, a higher system gain (lower modulation rate) is switched into service to avoid what would otherwise be a path-fade situation, and DAC GE QoS settings are used to ensure all essential traffic is unaffected by the reduction in link capacity.

    The adjustment in system gain enabled by the RAC 30A between 64 QAM and QPSK operation is 16 to 18 dB, which broadly equates to two antenna sizes at both ends. For example, instead of 1.8m antennas, 0.6m antennas could be used!

    Refer to Transport Channel Capacity and Link Bandwidth for data on RAC 30A configuration options.

    Figure 14 illustrates the RAC 30A modulation/capacity steps and typical percent availability for each over time. QPSK, as the most robust modulation, is used to support critical traffic with a 99.999% availability. Less critical traffic is assigned to the higher modulations. Most importantly, the highest modulation is typically available for better than 99.5% of the time.

Figure 14: Adaptive Modulation Illustration

Redundancy Implications

Using a single INU, dual 150 Mbit/s links may be L1 or L2 link-aggregated to provide redundancy in the event one link fails. With appropriate traffic priority settings, all high-priority data will continue to get through in spite of the halved Ethernet bandwidth. To provide a similar level of redundancy on a single 300 Mbit/s link, hot-standby or diversity protection is required.

    Such redundancy is also provided where two INUs are used to support dual 190, 200, or 300 Mbit/s links, or where up to four INUs are used to support a total data capacity of 1 Gbit/s.


Platform Layouts

This section provides guidance on platform layouts for termination and intermediate nodes:

    A network termination node is a node used on single-hop links or a node installed at the end of a network.

    An intermediate node is a node used within ring and star networks that have two or more links configured to different nodes.

Network Termination Nodes

One INU supports Ethernet connections to 200 Mbit/s with an Nx2 Mbit/s backplane, or 300 Mbit/s with an Nx150 Mbit/s backplane. The INU is installed with:

- One RAC/ODU and one DAC GE for 1+0 non-protected link operation.
- Two RAC/ODUs and one DAC GE for 1+1 protected/diversity operation.
- Two RAC/ODUs and one DAC GE for 1+1 co-channel (CCDP) operation.
- One NPC card where 1+1 protection of the NCC card is required.

    Where both Ethernet and PDH or SDH data are to be transported over the link, the appropriate DAC cards are also installed:

- One or more DAC 16x or DAC 4x for NxE1.
- One DAC 1x155o, DAC 2x155o or DAC 2x155e for STM1.

Figure 15 illustrates single-channel Ethernet-only operation with no NPC option.

Figure 15: Single Channel Link Node

    Figure 16 illustrates CCDP link operation using RAC 40s or RAC 4Xs. Each link is configured to transport 150 Mbit/s, with the network connections to each held separate, or L2 or L1 link-aggregated to provide a single 300 Mbit/s logical link. A single dual-polarized antenna is used.

- RAC 40 CCDP for 2x150 Mbit/s, 128 QAM links on one 28 MHz RF channel.
- RAC 4X CCDP for 2x150 Mbit/s, 16 QAM links on one 56 MHz RF channel.

    The two links may also be operated on adjacent channels using RAC 30s or RAC 3Xs.

- RAC 30 for 2x150 Mbit/s, 128 QAM links using two adjacent 28 MHz RF channels.
- RAC 3X for 2x150 Mbit/s, 16 QAM links using two adjacent 56 MHz RF channels.


    Figure 16: 300 Mbit/s (2x150 Mbit/s) CCDP Terminal

    Where Ethernet throughputs of 450 Mbit/s, 600 Mbit/s, 900 Mbit/s, or higher are required, two or more Eclipse Nodes are co-located.

Figure 17 illustrates a 600 Mbit/s configuration, with each Node supporting 300 Mbit/s on one 56 MHz RF channel using RAC 4X CCDP operation. The two 300 Mbit/s streams can be held as separate Ethernet links or, as shown, L2 link-aggregated on one INU to provide a single 600 Mbit/s interface.

Figure 17: 600 Mbit/s (2x300 Mbit/s) CCDP Terminal

    Figure 18 illustrates a 600 Mbit/s configuration using four 150 Mbit/s co-path links, which are configured as two separate 2x150 Mbit/s CCDP links. The four 150 Mbit/s data streams can be held as separate network connections or link-aggregated, as shown, to provide one 600 Mbit/s logical link.

    A single dual-polarized antenna is used. Two 28 MHz RF channels are required - each RAC 40 CCDP pairing occupies one 28 MHz RF channel.


    Figure 18: 600 Mbit/s (4x150 Mbit/s) CCDP Terminal

More Information

For more information on CCDP link operation, including protected CCDP ring links, refer to the Eclipse User Manual, Volume II, Chapter 3, Co-channel Operation.

Ring and Star Ethernet Network Nodes

This section describes platform layouts and capacities for ring and star network nodes.

The maximum capacity supported on a single Eclipse INU is 200 Mbit/s using an Nx2 Mbit/s backplane, or 300 Mbit/s using an Nx150 Mbit/s backplane.

    For a star node the backplane capacity calculation is a simple summing of the through capacity, and any dropped capacity. For a simple 2-link Nx2 Mbit/s node, this means that 200 Mbit/s can be passed through the node, RAC to RAC, or through plus drop, where some of the capacity is RAC to RAC, and the balance is RAC to DAC GE. Similarly, for a 2-link Nx150 Mbit/s node, 300 Mbit/s can be passed through RAC to RAC, or through plus drop, where 150 Mbit/s is passed through RAC to RAC, and 150 Mbit/s is dropped, RAC to DAC GE.

    For an Ethernet ring node, capacities on the east and west links are terminated on a DAC GE. The backplane capacity used is a simple summing of these link capacities. On a ring they are normally of equal capacity, meaning a maximum 100 Mbit/s ring is supported on an INU using an Nx2 Mbit/s backplane, or 150 Mbit/s using an Nx150 Mbit/s backplane.


    150 Mbit/s Ring Node

Figure 19 illustrates a simple RWPR ring node, where the customer's LAN is supported directly from the DAC GE operating in transparent mode.

    If the RSTP function is provided on external switches (RWPR not enabled on the DAC GE), mixed mode is used with P1/C1 supporting east or west, and P2 to P4 the opposite direction on C2. Link Status Propagation should also be enabled.

    For information on RWPR operation, see RWPR in Operational Guidelines.

    Figure 19: Single INU Node

    300 Mbit/s Ring Node

    Two topology options are illustrated, one for 300 Mbit/s radio links, the other for two CCDP 150 Mbit/s links with link aggregation. The backplane bus is set for Nx150 Mbit/s operation.

Figure 20 illustrates the east/west 300 Mbit/s link option. 56 MHz channels are required.

The two co-located INUs are interconnected via their DAC GE ports. RWPR is configured on each DAC GE; each operates as a separate RSTP bridge on the ring.

    Figure 20: 300 Mbit/s Node

    Figure 21 illustrates the co-path CCDP 150 Mbit/s link option. Compared to the previous solution, which required 56 MHz channeling, this solution uses one 28 MHz channel east and west.


    Separate DAC GEs are required for the aggregation and RWPR ring functions:

    DAC GE A in INU 1 aggregates the east 150 Mbit/s links to provide a single 300 Mbit/s connection on ports 1 to 4, using transparent mode with C1 and C2 aggregation.

    DAC GE B in INU 2 aggregates the west 150 Mbit/s links to provide a single 300 Mbit/s connection on ports 1 to 4, using transparent mode with C1 and C2 aggregation.

    DAC GE C in INU 2 provides the RWPR ring switch function and hosts the local LAN. Note that the east, west and local LAN interfaces are all port-connected; the DAC GE C transport channels are not configured (no backplane bus access is required).

    The east and west aggregated links are treated as one logical link by RWPR. If one link in the aggregated pair fails, ring switching does not occur - both links must fail to initiate switching.

    Figure 21: 300 Mbit/s Node: 2x150 Mbit/s CCDP Links East and West

    600 Mbit/s Ring Node

    Figure 22 illustrates a 600 Mbit/s solution. Paired RAC 4X CCDP 300 Mbit/s links are configured on single 56 MHz channels east and west.

    Separate DAC GEs are required for the aggregation and RWPR ring functions:

    DAC GE A in INU 1 west is configured for transparent mode, single-channel operation.

    DAC GE B in INU 2 west is configured for transparent mode with P1/C1 link aggregation to provide a 600 Mbit/s aggregate of west 1 and west 2 on P2.

    DAC GE C in INU 1 east is configured for transparent mode, single-channel operation.

DAC GE D in INU 2 east is configured for transparent mode with P1/C1 link aggregation to provide a 600 Mbit/s aggregate of east 1 and east 2 on P2.

    DAC GE E in INU 2 east provides the RWPR ring switch function for the 600 Mbit/s east and west aggregated links and hosts the local LAN. The east, west and local LAN interfaces are all port-connected; the DAC GE E transport channels are not configured (no backplane bus access is required).


    The east and west aggregated links are treated as one logical link by RWPR. If one link in the aggregated pair fails, ring switching does not occur - both links must fail to initiate switching.

    Link status propagation should be configured on the A and C DAC GEs. See Link Status Propagation.

    Figure 22: 600 Mbit/s Node: 2x300 Mbit/s CCDP Links East and West

    Ethernet and E1 Ring Networks

    Figure 23 illustrates a ring network configured for Ethernet and E1 traffic, which requires an Nx2 Mbit/s backplane setting. RWPR is enabled for the Ethernet data, and Eclipse ring-wrapping is configured for the E1 circuits.

    In the example shown:

- The network is transporting 76 Mbit/s Ethernet plus 16xE1 ring-protected circuits.
- The 76 Mbit/s Ethernet is carried on 38x2 Mbit/s circuits, which are configured as point-to-point circuits on the ring links.
- The 16xE1 Eclipse ring-protected circuits are all sourced/sunk at the core network site.

    At the core network site the resultant backplane bus usage is 200 Mbit/s (100x2 Mbit/s), which is the maximum for an INU/INUe:

The Ethernet capacity uses 38x2 Mbit/s east plus 38x2 Mbit/s west, for a total backplane usage of 76x2 Mbit/s = 152 Mbit/s.

The E1 circuits use 24xE1 of backplane capacity at the core site (each ring-protected drop-insert circuit uses 1.5 backplane bus circuit connections).

    For more information on backplane bus rules, refer to the Node Capacity Rules appendix in the Eclipse User Manual, or to the Harris Stratex Networks paper: Eclipse Super-PDH Ring Capacity Guide.


The ring links must support 54x2 Mbit/s (38 + 16), which means the best fit for link (RAC) capacity is 64x2 Mbit/s. 52x2 Mbit/s is the next-lowest configurable link capacity, which could be used if the Ethernet capacity in the example were reduced to 72 Mbit/s, or the E1 ring circuits reduced to 14xE1. The sketch after Figure 23 reproduces this arithmetic.

    Figure 23: 100 Mbit/s Ring: Ethernet and E1 Circuits
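A back-of-envelope check of the Figure 23 example, counted in 2 Mbit/s backplane slots. The 1.5-connection rule is as stated above; the list of configurable RAC capacity steps below is a hypothetical illustration, apart from the 52 and 64 steps named in the text.

```python
# An Nx2 Mbit/s INU backplane offers 100 slots = 200 Mbit/s.
ethernet_circuits = 38     # 76 Mbit/s Ethernet as 38x2 Mbit/s circuits
ring_e1_circuits = 16      # ring-protected E1 circuits

# Core-site usage: Ethernet is carried east and west, and each
# ring-protected drop-insert E1 consumes 1.5 backplane connections.
core_slots = ethernet_circuits * 2 + int(ring_e1_circuits * 1.5)
assert core_slots <= 100
print(core_slots)          # 100 -- exactly the INU/INUe maximum

# Each ring link must carry the Ethernet plus the E1 circuits; pick the
# smallest configurable RAC capacity (in Nx2 Mbit/s) that fits.
configurable_n = [40, 52, 64, 75, 100]   # illustrative step ladder
needed_n = ethernet_circuits + ring_e1_circuits         # 54
print(min(n for n in configurable_n if n >= needed_n))  # 64
```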

QoS

QoS refers to parameters that affect traffic throughput: bandwidth, delay (latency), jitter and loss. These are discussed under:

- Traffic Priority
- Latency
- MTU Size
- Flow Control

Traffic Priority

QoS is mostly referred to in the context of a priority service for selected traffic, where considerations go hand-in-hand with bandwidth: the more restricted the Ethernet bandwidth, the greater the likelihood of delays and dropped frames, and the greater the need to provide priority for delay-sensitive multimedia traffic such as voice and video.

    Packetized voice (VoIP), while not demanding of bandwidth, is intolerant of delay, jitter, and packet loss.

This also applies to video, with the added complication that video is often very bursty, with high bandwidth demands during scenes containing considerable movement.

Priority servicing also applies where service differentiation is required, such as the prioritization of one customer's traffic over another's.

Generally, where throughput bottlenecks occur, traffic is buffered, and it is how traffic in a buffer is queued and prioritized for transmission that concerns QoS priority management. The most common tools for this purpose are port prioritization and frame/packet tagging.

    Note that DAC GE prioritization is fair-weighted to ensure that even low priority traffic is afforded some bandwidth when the available bandwidth is contested.


Port Prioritization

Port prioritization prioritizes traffic on one port over another. It is only applicable where two or more ports share a common channel. With DAC GE a four-level port prioritization applies: low, medium-low, medium-high, and high. Operation requires a Port Priority selection in the Priority Mapping screen.

Tag Prioritization

Unlike port prioritization, frame/packet tag prioritization allows traffic on one port to be prioritized over other traffic on the same port.

Traffic is assigned a priority tag within the layer 3 DSCP (DiffServ) header, or the layer 2 Class of Service (CoS / 802.1p) header, which depending on the application may be set from within the application itself, or applied by a network device such as a switch with a port-based tagging capability.

DAC GE can be configured to prioritize traffic using either tagging scheme. Incoming tagged frames are read, and each frame is queued according to its tag priority level and to the prioritization mapping applied within the DAC GE for tagged frames.

    802.1p priority: incoming 802.1p tagged frames are queued and sent in order of priority. DiffServ tagged and untagged packets are not prioritized (unless subject to port prioritization). Note that 802.1p prioritization is set within the 802.1Q VLAN tagging options.

    DiffServ priority: incoming DiffServ tagged frames are queued and sent in order of priority. 802.1p tagged and untagged packets are not prioritized (unless subject to port prioritization).

As the DAC GE has a four-level priority stack, user-configurable mapping is applied to accommodate the 8 levels of 802.1p priority tagging and the 64 levels of DiffServ. Table 4 shows the default mapping.

Table 4: DAC GE Default Priority Mapping Table

DAC GE Priority Level | 802.1p Priority | DiffServ Priority
High        | 6, 7 | 48 - 63
Medium High | 4, 5 | 32 - 47
Medium Low  | 2, 3 | 16 - 31
Low         | 0, 1 | 0 - 15

802.1p or DiffServ queuing priorities are not contested with port priority settings: the priority mapping options provide selection of 802.1p, DiffServ, port priority, or no priority, and a selection applies to all ports. The sketch below expresses the Table 4 defaults as code.
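A minimal sketch of the Table 4 default mapping; the function name is illustrative, and a deployed DAC GE may carry user-configured mappings instead.

```python
def dac_ge_queue(scheme: str, value: int) -> str:
    """Map an 802.1p (0-7) or DiffServ DSCP (0-63) value onto the DAC GE
    four-level priority stack, using the Table 4 defaults."""
    levels = ["Low", "Medium Low", "Medium High", "High"]
    if scheme == "802.1p":
        return levels[value // 2]    # pairs: 0-1, 2-3, 4-5, 6-7
    if scheme == "DiffServ":
        return levels[value // 16]   # bands: 0-15, 16-31, 32-47, 48-63
    raise ValueError("scheme must be '802.1p' or 'DiffServ'")

print(dac_ge_queue("802.1p", 6))     # High
print(dac_ge_queue("DiffServ", 20))  # Medium Low
```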

    DAC GE includes a layer 2 tagging capability. See VLAN Tagging / LAN Aggregation.

Latency

Network latency refers to the time taken for a data packet to get from source to destination. For an IP network it is particularly relevant to voice (VoIP) or videoconferencing: the lower the latency, the better the quality.

Latency is typically measured in microseconds or milliseconds for one-way and two-way (round-trip) transits. For phone conversations, a one-way end-to-end latency of 200 ms is considered acceptable. Other applications are more tolerant: Internet access should respond within 5 seconds, whereas for non-real-time applications such as email and file transfers, latency issues do not normally apply.

    For Eclipse 150 Mbit/s or 300 Mbit/s links, the per-hop latency (delay time) is captured in Table 5. The delays are primarily a function of:


    Link (RAC) FEC/interleaver operation, where the common buffer size means the buffer is filled and emptied at a faster rate for 300 Mbit/s links compared to 150 Mbit/s or lower.

Normal packet processing delays within switch port/channel buffers; the larger the frame size, the higher the latency. (At 64 bytes the latency is primarily due to link FEC/interleaver operation.)

Table 5: Typical One-way Latency for 150 and 300 Mbit/s Single-link Throughputs

Frame Size (bytes) | 150 Mbit/s | 300 Mbit/s
64   | 150 µs | 79 µs
128  | 165 µs | 81 µs
512  | 220 µs | 99 µs
1024 | 295 µs | 125 µs
1518 | 367 µs | 149 µs

From this it can be seen that the latency of a DAC GE to DAC GE link, or of multiple links in tandem, is well within the VoIP maximum of 200 ms.

Other contributors to overall latency are the devices connected to the Eclipse network. For a VoIP circuit these include the external gateway processes of voice encoding and decoding, IP framing, packetization, and jitter buffers. Devices such as routers and firewalls also add external network latency. The sketch below tallies radio-only latency for a multi-hop path.
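A quick check of the 200 ms claim, using the 300 Mbit/s column of Table 5 and deliberately ignoring the external devices discussed above:

```python
# Per-hop one-way latency (µs) from Table 5, 300 Mbit/s column.
LATENCY_US_300M = {64: 79, 128: 81, 512: 99, 1024: 125, 1518: 149}

def path_latency_ms(hops: int, frame_bytes: int) -> float:
    """One-way radio latency over a chain of identical 300 Mbit/s hops.
    External routers, gateways, and jitter buffers are excluded."""
    return hops * LATENCY_US_300M[frame_bytes] / 1000.0

# Even a 20-hop chain of 1518-byte frames stays far below the 200 ms
# one-way budget quoted for VoIP.
print(path_latency_ms(20, 1518))  # 2.98 ms
```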

MTU Size

Within a GigE network, jumbo-sized MTUs (Maximum Transmission Units) are supported.

    MTU refers to the byte size of a layer 3 packet before the addition of a header and footer in the layer 2 encapsulation (framing) of a packet.

    For 10 and 100 Mbit/s Ethernet the MTU maximum is typically 1500 bytes. For GigE, MTU sizes to 9000+ bytes are supported.

    Jumbo frames are used where large amounts of data must be delivered with best efficiency. In practice care must be exercised, as not all network devices from source to destination may be able to handle jumbo frames, or at least large jumbo frames.

DAC GE supports jumbo frames to 9600 bytes for uni-directional traffic¹, and to 7500 bytes for bi-directional traffic. In practice, frame sizes above 4000 bytes are seldom used by network operators.

Layer 2 Framing

Layer 2 switch framing adds a 14-byte (minimum) MAC/LLC (Media Access Control / Logical Link Control) header and a 4-byte FCS (Frame Check Sequence) footer, resulting in a 1518-byte frame for a standard 1500-byte packet. The header and footer byte counts remain constant for smaller packets, meaning they represent a higher percentage of the frame size as packet size reduces; conversely, they represent a smaller percentage as packet size increases. See Figure 24 for a typical 1500-byte packet, and Table 6 for a content description; the sketch after Table 6 quantifies the effect.

¹ Frame sizes to 9600 bytes in one direction, and normal frame sizes in the other direction.


    Figure 24: Ethernet Frame Structure

    Table 6: 10/100 Mbit/s Ethernet Frame Content Description

    Field     Description                                                Bytes
    IFG       Inter-Frame Gap                                            12 min.
    PRE       Preamble (clocking), plus SFD                              8
    MAC/LLC   Media Access Control / Logical Link Control:
              - Standard Ethernet frame: Destination Address (6),
                Source Address (6), Length/Type (2)                      14
              - 802.1Q frame: as standard, plus VLAN Q tag (4)           18
              - 802.1Q-in-Q frame: as standard, plus VLAN Q tag (4)
                and VLAN Q-in-Q tag (4)                                  22
    IP        IP Header                                                  20*
    TCP       TCP Header                                                 20
    Data      Application Data                                           1460**
    FCS       Frame Check Sequence                                       4

    * Typically 20 bytes but can be up to 60 bytes.
    ** 1460 bytes is the MSS (Maximum Segment Size) assuming a 20-byte IP header.
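    The fixed header and footer sizes in Table 6 are what make jumbo frames attractive. A minimal worked example (illustrative helper, using only the Table 6 figures) computes the share of wire time actually carrying the layer 3 packet:

        # Fixed per-frame wire overhead from Table 6:
        # IFG (12) + preamble/SFD (8) + MAC/LLC (14) + FCS (4) = 38 bytes.
        OVERHEAD = 12 + 8 + 14 + 4

        def efficiency(packet_bytes, tag_bytes=0):
            """Fraction of wire time carrying the layer 3 packet.

            tag_bytes: 0 untagged, 4 for 802.1Q, 8 for Q-in-Q (per Table 6).
            """
            return packet_bytes / (packet_bytes + OVERHEAD + tag_bytes)

        for size in (46, 1500, 9000):
            print(f"{size:>5}-byte packet: {efficiency(size):.1%}")
        # 46: ~54.8%   1500: ~97.5%   9000: ~99.6%  -- why jumbo frames pay off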

    Path MTU Where the end-to-end MTU (path MTU) is equal to or larger than the source MTU there is no problem. If one or more devices in the path have a lower link MTU, packets will be fragmented, dropped, or returned to source, depending on whether or not the Don't Fragment bit has been set in the IP header, and on whether or not correct MTU negotiation occurs.

    Ideally, TCP/IP at the source will attempt to discover the downstream MTU and adjust (negotiate) packet sizes to the smallest link MTU in the path. The process can be summarized as follows:


    Where a router in the path discovers that it cannot send a packet on its outgoing port because the downstream path MTU is too low and fragmentation is not allowed, under MTU negotiation it responds to the source with an ICMP (Internet Control Message Protocol) message carrying the destination address that failed and the MTU allowed. The source then resets its packet size to accommodate the path MTU.

    Compared to always sending small packets, or allowing packets to be fragmented en route and reassembled at the destination, it can be shown that this method is the most efficient: it uses the least network resources.
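    Operating systems expose this mechanism directly. The sketch below is a minimal Linux-only probe; the IP_MTU_DISCOVER, IP_PMTUDISC_DO and IP_MTU constants are exposed by Python's socket module only where the OS defines them, and the address shown is a documentation placeholder:

        import socket

        def probe_path_mtu(host, port=9):          # port 9 = discard service
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.connect((host, port))                # fix the destination for IP_MTU
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                         socket.IP_PMTUDISC_DO)    # set DF; never fragment locally
            try:
                # An oversized datagram fails at once if it exceeds the local MTU,
                # or later, once an ICMP "fragmentation needed" has been received.
                s.send(b"\x00" * 8972)             # 9000-byte IP packet less headers
            except OSError:
                pass                               # EMSGSIZE: kernel now knows better
            return s.getsockopt(socket.IPPROTO_IP, socket.IP_MTU)

        print(probe_path_mtu("192.0.2.1"))         # e.g. 1500 on a standard path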

    Where operators have complete control of the path, such as on a company LAN, ensuring all link MTUs support jumbo frames is not difficult. On a WAN this is not the case, so discovering the path MTU is important when determining expected performance.

    Note that some router/switch vendors use proprietary trunking protocols between their products, which require jumbo frame support. For example ISL (Cisco) uses a frame size of 1554 bytes.

    In legacy networks, which have a mix of Fast and Gigabit Ethernet devices, and therefore the potential for MTU incompatibility, one solution is to segregate the GigE jumbo-frame traffic onto a VLAN that has a known path MTU; all packets for jumbo-framed transmission are tagged and partitioned in a VLAN in which all equipment supports the required MTU.

    Applications Frame size selection within the DAC GE has particular application where it is installed as an edge device. It can be set to provide a policing function that ensures over-sized frames are not sent; the local (source) TCP layer should discover this downstream MTU restriction and resize packets accordingly. Restricting the MTU at the edge avoids unnecessary loading on downstream network devices up to the point of the restriction.

    Note that jumbo frames may cause instability for applications that are sensitive to delay and jitter, such as voice, where small 64 to 128 byte frames are typically used. Where applications using large frames could negatively impact applications using small frames, restricting the network MTU is an option.

    When operating with a DAC ES / IDU ES at the far end, bear in mind that its maximum frame size setting is 1536 bytes.

    Flow Control Flow control is a tool to assist traffic management in a congested network.

    Where traffic increases beyond the carrying capacity of a network, congestion occurs and packets are discarded. For most protocols this does not pose a significant issue, as lost packets are retransmitted. However, for voice, video, and some data applications, lost data cannot be recovered and will be observed by the customer as poor or unacceptable service. The Flow Control option on the DAC GE can mitigate this problem.

    While the DAC GE memory buffer absorbs short traffic bursts to smooth out delivery, if throughput increases beyond the carrying capacity of the radio the buffer will overflow, whereupon traffic is discarded unless flow control is enabled.

    With flow control, a high-water mark is established in the buffer. When it is reached, an 802.3x pause frame is sent back toward the source Ethernet address to force the sending device to reduce the rate at which it forwards traffic. This supports graceful reduction of traffic and results in more efficient use of radio link bandwidth. For flow control to be fully effective, all devices in the end-to-end path must support it.
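    On the wire, a pause request is an ordinary MAC control frame. The sketch below is illustrative only, with field values per the published 802.3x format; the source MAC is a placeholder:

        import struct

        PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved multicast address
        ETHERTYPE_MAC_CONTROL = 0x8808
        PAUSE_OPCODE = 0x0001

        def build_pause_frame(src_mac, quanta):
            """quanta: pause time in units of 512 bit times (0 resumes traffic)."""
            payload = struct.pack("!HH", PAUSE_OPCODE, quanta)
            payload += b"\x00" * 42            # pad to the 46-byte minimum payload
            return (PAUSE_DST + src_mac +
                    struct.pack("!H", ETHERTYPE_MAC_CONTROL) + payload)

        frame = build_pause_frame(bytes.fromhex("001122334455"), quanta=0xFFFF)
        assert len(frame) == 60                # 64 bytes on the wire with the FCS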


    Figure 25: Flow Control Mechanism

    In a congested network, attention must also be given to prioritizing traffic, so that more important traffic is queued ahead of less important traffic. See Traffic Priority.

    VLAN Tagging / LAN Aggregation VLANs (virtual LANs) enable the aggregation of two or more LANs (or VLANs) for transport as separate (segregated) network entities on a common trunk. For the DAC GE it means that up to four separate networks (sub-networks) can be transported over one radio channel.

    If a network is not segmented (single LAN), every message sent is broadcast throughout the LAN.

    Segmentation onto VLANs means that each is operated as a separate network; traffic on one is not seen on another, resulting in more efficient and secure network groupings.

    Groupings might be user or customer based, each on their own VLAN. DAC GE provides options to automatically set VLAN tags on ingressing traffic, or to customize the process.

    Automatic VLAN Tagging An internal, automatic option is enabled in the VLAN Mode of Operation, where all traffic ingressing ports 2, 3 and 4 is transported over radio channel 2 to its matching (same-numbered) port at the far end of the link using 802.1Q-in-Q tagging. This VLAN mode also supports a dedicated port 1-to-port 1 connection over radio channel 1. See VLAN Mode.

    The VLAN Mode is only for use in DAC GE-to-DAC GE links. The VLAN tagging does not exist beyond the far-end DAC GE. Figure 22 illustrates a typical application, where a DAC GE link is used to transport four separate LANs/VLANs between customer sites.

    The VLAN Mode does not assign a priority to the VLANs. However, each port can be port-prioritized so that traffic ingressing one port can be prioritized against traffic from another port. See Traffic Priority.

    Customized VLAN Tagging The DAC GE VLAN Tagging screen supports 802.1Q and 802.1Q-in-Q tagging. The process includes 802.1p prioritization tagging.

    802.1Q: Only untagged frames are tagged.

    802.1Q-in-Q: All frames are tagged; those with an existing tag are double-tagged.

    These options are only available for Transparent mode and on ports P2 to P4 for Mixed mode.

    A VLAN ID can be entered (range 0 to 4095) or left at its default. The IDs at each end of the VLAN must match (have the same ID number). The sketch below illustrates the tagging mechanics.
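    For illustration only, this is how a Q tag, and a second (outer) tag for Q-in-Q, splices into the MAC header. The outer TPID value is an assumption: 802.1ad specifies 0x88A8, but much Q-in-Q equipment reuses 0x8100, and the DAC GE's choice is not stated here.

        import struct

        TPID_8021Q = 0x8100     # inner/customer tag
        TPID_QINQ = 0x88A8      # outer/service tag -- assumption, see above

        def add_tag(frame, vid, pcp, tpid=TPID_8021Q):
            """Insert a 4-byte VLAN tag after the 12 bytes of DST+SRC MAC.

            pcp: 802.1p priority (0-7); vid: VLAN ID (0-4095).
            """
            tci = (pcp << 13) | vid                 # priority, CFI=0, VLAN ID
            tag = struct.pack("!HH", tpid, tci)
            return frame[:12] + tag + frame[12:]

        def add_qinq(frame, outer_vid, pcp):
            # Q-in-Q: tag the (possibly already tagged) frame again, outermost.
            return add_tag(frame, outer_vid, pcp, tpid=TPID_QINQ)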


    A VLAN membership filter can also be selected. Only VLAN IDs within the membership range are allowed to transit the relevant port/channel.

    With this capability DAC GE can tag, prioritize and aggregate traffic from two, three or four ports onto a common radio trunk. At the far end of the DAC GE trunk, which may be over multiple hops, options are provided to remove the VLAN tags applied by DAC GE, or allow them to be retained intact for VLAN traffic management at downstream devices.

    VLAN tagging is typically used at the edge of a network to tag and assign a traffic priority. In this way up to four separate LANs (ports 1 to 4) can be carried as virtual LANs on a single Eclipse radio trunk; Eclipse acts both as an edge switch and as the radio trunk link to the core network.

    Each VLAN is held separate on the trunk and accorded the priority set within the 802.1p priority stack at all intermediary 802.1p devices. This allows a network provider to differentiate the service priority accorded to each VLAN over its network. See Traffic Priority.

    Traffic Prioritization and VLAN Tagging Process Figure 26 shows a simplified view of DAC GE processing of priority and VLAN tagging.

    A frame ingressing a port is checked for frame priority at the Categorizer, which, based on the DAC GE priority mapping settings, determines its status at its port ingress queue.

    - The DAC GE switch has an ingress and egress queue on each port. Between them they share a 16 kbyte memory pool for storing frames. When an egress queue exceeds a high threshold, the ingress port(s) are stopped from forwarding more data, meaning congestion is fed back to the ingress queue(s). (A simplified queuing sketch follows this list.)

    - Port priority (low to high) is normally used to prioritize traffic on one port over that from another port where ports have a common channel (egress port).

    - The same four-level low to high prioritization mechanism is used to prioritize ingressing VLAN tagged traffic against untagged traffic.

    Dequeued frames are forwarded on an internal bus to the egress queue of the output port(s) or channel(s).

    - For a port-to-channel connection, frames are forwarded from a port ingress queue to the channel egress queue.

    - With VLAN aggregation, frames from multiple ports are forwarded from each port ingress queue to the egress queue of the assigned channel.

    Dequeued egress frames are forwarded for transmission via the transmit modifier where, if configured, a VLAN tag is added.

    Adding a VLAN tag does not impact the prioritization/queuing of an ingressing frame on the DAC GE that has applied the tag. Its impact applies at downstream switches/routers (up to the point where the tag is stripped).

    - At the downstream DAC GE, the channel ingress port has no prioritization capability, so 802.1p VLAN prioritization applied at the upstream DAC GE has no effect on VLANs ingressing the downstream DAC GE. But because the DAC GE-to-DAC GE trunk has a fixed end-to-end capacity (over one or multiple hops), what is transmitted at one end is received at the other; there is no intermediate switch function or resizing of trunk capacity that would otherwise benefit from queuing and prioritization.

    - However, if the associated downstream DAC GE port is connected to a lower capacity interface, traffic from the resulting egress queue will be forwarded in priority order according to the priority settings for the port.
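    The following minimal sketch captures the four-level dequeue behavior described above. The level names and the strict-priority discipline are illustrative assumptions, not the DAC GE's documented algorithm:

        from collections import deque

        # Four priority levels, matching the low-to-high scheme described above.
        LEVELS = ("low", "medium-low", "medium-high", "high")

        class StrictPriorityEgress:
            def __init__(self):
                self.queues = {level: deque() for level in LEVELS}

            def enqueue(self, frame, level):
                # The categorizer has already mapped the frame to a level.
                self.queues[level].append(frame)

            def dequeue(self):
                # Always drain the highest non-empty level first.
                for level in reversed(LEVELS):
                    if self.queues[level]:
                        return self.queues[level].popleft()
                return None                    # nothing queued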


    If incoming frames have a pre-existing VLAN tag, the priority assigned by an additional DAC GE VLAN tag (the outer tag) takes precedence at downstream 802.1p-prioritized devices.

    Figure 26: DAC GE Priority Processing and VLAN Tagging

    The VLAN ID and priority of a Q or Q-in-Q tagged frame are captured in the MAC/LLC header. See Layer 2 framing in MTU Size.

    It is important to configure a DAC GE according to its function:

    Where the requirement is to simply transport up to four LANs/VLANs over a DAC GE-to-DAC GE trunk, use the VLAN Mode of Operation.

    Use the VLAN Tagging capability where the requirement calls for VLAN tags to be retained beyond a DAC GE-to-DAC GE trunk or where traffic from all four ports must be aggregated onto a common channel.

    - The tagging options presented within the DAC GE VLAN Tagging screen must be carefully analyzed to ensure appropriate selection. See VLAN Tagging for the options.

    - The VLAN Tagging option most used for trunk aggregation is Q-in-Q.

    Figure 27 illustrates the concept of LAN/VLAN aggregation using the options of VLAN operational mode and separately, the customizable VLAN tagging options.


    Figure 27: LAN Aggregation

    Figure 28 illustrates how a VLAN trunk is carried into a WAN. VLAN tagging is applied at Site X using Q-in-Q. At Site Y, Do Nothing is selected in the VLAN Tagging screen, so the tags applied at Site X are retained intact for the external switch.

    Figure 28: LAN/VLAN Aggregation with Transparent Trunk Extension

    Figure 29 depicts a practical application with a DAC GE configured as an edge switch in a Metro network. Four VLANs, each from different companies, are aggregated on an Eclipse radio trunk; Eclipse acts as the aggregating edge switch and radio access point to the wider network.


    Figure 29: Eclipse with DAC GE as a Metro Edge Switch

    RWPR™ Within Ethernet ring networks, data can be protected using the redundancy available when two or more paths are provided between common end-points. Such networks may be provided entirely within an Eclipse ring network, or within a network combining Eclipse, third-party devices and/or other Harris Stratex products.

    The contention that would otherwise occur with the arrival of looped Ethernet frames is managed by the Rapid Spanning Tree Protocol (RSTP), which creates a tree that spans all switches in the ring, forcing redundant paths into a standby, or blocked state. If subsequently one network segment becomes unreachable because of a device or link failure, the RSTP algorithm reconfigures the tree to activate the required standby path.

    RSTP is defined within IEEE 802.1d-2004 and is an evolution of the Spanning Tree Protocol (STP).

    Normal RSTP service recovery (reconvergence) action involves a progressive exchange of messages between all nodes beginning with those immediately adjacent to the failure point. Reconvergence times normally range between 2 and 7 seconds, depending on the failure detection process.

    The RWPR implementation within the DAC GE accelerates RSTP reconvergence through application of a unique rapid-failure-detection (RFD) mechanism and dynamic Hello timing, to deliver reconvergence times as low as 50 ms.

    RWPR failure detection provides an end-to-end solution across each DAC GE to DAC GE link, meaning it acts independently of any intermediate hops (Eclipse repeaters or external switches).

    RWPR requires Eclipse software release 3.4 or later; DAC GE users on earlier software gain access to RWPR capabilities through a software upgrade.

    RWPR benefits include:

    Carrier-class network reconvergence times to better support time-sensitive service level agreements.

    Reliable and consistent RSTP operation, even in the presence of link fading.

    Support for radio and fiber links; both may be included in Eclipse ring networks.

    Aggregated links may be used within RWPR ring topologies to support 600+ Mbit/s rings.


    Lower-cost network solutions: edge devices do not need to support RSTP on Eclipse connections.

    Where an Eclipse ring network is to include external switches, or Eclipse radios are installed within an existing network to establish a ring, there are two primary options, given that RSTP on external switches cannot interoperate with Eclipse RWPR:

    The RSTP function is provided by the DAC GEs using RWPR, and the external switches are configured for transparent operation (not RSTP enabled). The network section comprising the external switch or switches is viewed simply as a path between the DAC GEs at each end of the path.

    The RSTP function is provided by external switches and RWPR is not enabled on the DAC GEs.

    Introduction to RWPR Operation RWPR configuration uses industry-standard RSTP procedures to set and act on switch priority, port cost and port priority.

    The STP algorithm within RSTP calculates the best path throughout a switched Layer 2 network. It defines a tree with a root switch, and a loop-free path from the root to all other switches in the network. All paths that are not needed to reach the root switch from within the network are placed in a blocked mode.

    If a path within the tree fails and a redundant (blocked) path exists, the spanning-tree algorithm recalculates the tree topology and activates the redundant path.

    Only the traffic affected by a topology change is interrupted.

    When a failed path is restored to service and it provides a lower-cost route to the root switch, RSTP initiates a topology change to reinstate the restored path.

    The switches determine the tree structure automatically through the regular exchange of bridge protocol data unit (BPDU) messages.

    BPDUs contain information about the sending switch and its ports, including the switch MAC address, switch priority, port priority, and path cost. Spanning-tree uses this information to elect the root switch and the path to and through other switches on the network, where for each switch a root-facing port and a designated port or ports² are set.

    - For each switch in the network RSTP calculates the lowest path cost to root on each port. It then sets the port with the lowest cost to root as its root port.

    - When two ports on a switch form part of a loop, the spanning-tree path cost and port ID values determine which port is put in the forwarding state and which is put in the discarding/blocking state.

    Each switch starts as a root switch with a zero root-path cost. After exchanging BPDUs with its neighbors, RSTP elects the root switch, and the topology of the network from the root switch.

    - The root switch is the logical center of the network.

    - The switch with the highest switch priority (lowest switch ID) is elected as the root switch.

    - The switch ID comprises a user-settable priority value and the switch MAC address. If switches are configured for the same priority value, or left as default, the switch with the lowest MAC address becomes the root switch.

    ² Within an RWPR/RSTP context, port refers to a DAC GE port or channel; a channel is a radio-facing port.


    Port cost and priority settings are used by spanning-tree to elect the network topology beneath the root switch. The spanning-tree algorithm uses data from both to determine an optimum network tree (optimum paths to root switch), with contesting ports set for forwarding or blocking.

    - Path cost is set to represent the data bandwidth (speed) available on the path, and is assigned a value such that the higher the speed, the lower the cost; the highest priority therefore goes to the highest speed. Costs are additive through the network: if the path from a switch has a cost of 100, and the path from it to the next switch towards the root is also 100, the combined cost up to the second switch is 200. A lower-cost route is always elected over a higher-cost route.

    - A port priority can be set to represent how well a port is located within a network to pass traffic back to the root. Port priority is contained within a Port ID, which comprises a port priority setting, and the port number.

    - Where costs to root are such that they cannot assist spanning-tree to set a priority, the port with the lowest port ID is used to decide port states, such as forwarding or blocking. Where ports are set for the same port priority, spanning tree selects the port with the lowest port number as the forwarding port.

    - If path costs are not assigned, spanning-tree uses port ID in its selection of port status.

    Table 7 lists the RSTP port roles and states.

    Table 7: RSTP Port Roles and States

    Port Role        Port State   Function
    Root Port        Forwarding   The one port on each bridge that provides the
                                  lowest cost path to the root bridge (switch).
    Designated Port  Forwarding   The one port attached to each LAN (or
                                  point-to-point link) that provides the lowest
                                  cost path from that LAN to the root bridge.
    Backup Port      Discarding   Any operational bridge port that is not a Root
                                  or Designated Port, where the bridge is the
                                  designated bridge for the attached LAN. Acts as
                                  a backup for the path provided by a Designated
                                  Port in the direction of the leaves of the
                                  spanning tree.
    Alternate Port   Discarding   Any operational bridge port that is not a Root
                                  or Designated Port, where the bridge is not the
                                  designated bridge for the attached LAN. Offers
                                  an alternate path in the direction of the root
                                  bridge.
    Unknown Port     Discarding   Broken port, or link down.
    Edge Port        Forwarding   Port connected only to user LANs or equipment
                                  without bridges.
    Disabled Port    Discarding   Administratively disabled.

    It is not essential for every switch within a ring to be spanning-tree enabled, but it is recommended.

    Switched ring segments that are not RWPR/RSTP enabled are not represented in the STP tree, and depending on the location of a path failure, may become isolated.


    Switches that are not running spanning tree still forward BPDUs, so that spanning-tree switches on each side can exchange BPDUs.

    Figure 30 illustrates a 150 Mbit/s ring layout where RSTP is enabled on the DAC GEs using the RWPR option; the customer's LAN is supported directly from the DAC GE at each site. In this example the Eclipse backplane bus is set for Nx150 Mbit/s operation.

    Figure 31 illustrates example DAC GE RWPR settings for this network, which ensure correct election of the root switch at the network core, and establish the preferred topology on the remaining switches.

    Figure 30: 150 Mbit/s Ring: Ethernet

    Figure 31: Eclipse RWPR Switch Network Example

    In this example network:

    The root switch is configured with the lowest bridge priority value. (Lowest value = highest priority). If the root switch fails, the lower-left switch would become the root switch.

    Data bandwidths are equal (150 Mbit/s) on all ring links. DAC GE RWPR costs (path costs) have been set to 300 on channels C1 and C2 on all DAC GEs. This means that from the root switch, RWPR costs are equal (1200) to the top-left switch. So RWPR costs alone do not help to elect a preferred route, clockwise or anti-clockwise, to the top-left switch.


    RSTP next looks at the port priority (RWPR priority) settings on the top-left equal-path-cost switch to determine channel port status: forwarding (root) or blocked. As the preferred route is clockwise, C2 has been configured with a lower RWPR priority (a higher value number). So C2 becomes the blocked port and all traffic to this switch travels clockwise to/from the root switch via C1.

    If C1 and C2 on the top-left switch were configured with the same RWPR priority, RSTP would next examine the port numbers involved, to assist the election of a preferred route. In this example, C1 has the lowest port number so C1 would be confirmed as the forwarding (root) port, and C2 as the blocked port.
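    The tie-breaking sequence just described reduces to an ordered comparison. A minimal sketch using this example's values (the priority and port numbers shown are illustrative):

        # RSTP tie-breaking as described above: equal path costs fall through to
        # port priority (RWPR priority), then to port number -- lowest tuple wins.
        def forwarding_port(candidates):
            """candidates: list of (name, path_cost_to_root, priority, port_number)."""
            return min(candidates, key=lambda c: (c[1], c[2], c[3]))[0]

        # Top-left switch: both routes cost 1200; C2 is configured with the lower
        # RWPR priority (higher number), so C1 becomes the forwarding (root) port.
        print(forwarding_port([("C1", 1200, 16, 1), ("C2", 1200, 32, 2)]))  # -> C1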

    Where higher capacity RWPR networks are required, options include use of co-located INUs and use of the L2 link aggregation options:

    For a 300 Mbit/s ring two topology options are supported: one uses 300 Mbit/s radio links on 56 MHz channels; the other uses co-path 150 Mbit/s links on 28 MHz CCDP channels with L2 (layer 2) or L1 (layer 1) link aggregation. Two INUs are required at each network node, one facing east, the other west, with their DAC GEs interconnected port-to-port.

    A 600 Mbit/s ring may be constructed using two L2 aggregated 300 Mbit/s links east and west, or four L2 aggregated 150 Mbit/s links.

    When L2 or L1 aggregated links are used on a ring, the links are first link-aggregated on each hop, and then RWPR ring protected.

    Separate DAC GEs are required for L2 link aggregation and RWPR ring functions.

    East and west L2 aggregated links are treated as one logical link by RWPR. If one link in the aggregated pair fails, ring switching does not occur - both links must fail to initiate switching.

    East and west L1 aggregated links are also treated as one logical link by RWPR. However, because all traffic on the logical link is interrupted for about 10 ms if one link in the aggregated pair fails, ring switching does occur. Similarly, because a 10 ms traffic interrupt occurs when the failed link is restored, RSTP switch action will again be initiated, whereupon ring traffic will be restored to the link providing it is cost effective. See Layer 1 Backplane Bus Link Aggregation.

    To view the INU/INUe layout for 150 Mbit/s to 600 Mbit/s ring nodes, see Platform Layouts: Ring and Star Nodes.

    Reconvergence Times When a link within an RSTP ring fails, failure detection time and network reconfiguration time must be included in the total reconvergence time. The following section contrasts the RWPR reconvergence process in an Eclipse ring network against the RSTP process in a wireless network using external RSTP switches.

    Failure Recovery Times Using External RSTP Switches When a point-to-point radio link fails, the failure can be due to a path or equipment failure, or both. But for most failure situations the Ethernet port on the radio will remain up, meaning no immediate indication is provided to a connected switch that the link has failed or is degraded (high BER). Under these situations an RSTP switch can only determine the status of a link using Hello BPDU (Bridge Protocol Data Unit) messaging.

    Under RSTP, Hello BPDUs are sent out of all switch ports on the network so that every switch is aware of its neighbors.

    Hello BPDUs have a default 2 second interval. When three BPDUs are missed in a row (6 seconds in total), the neighbor is presumed down and the switch initiates RSTP reconvergence.


    Some Ethernet links can bypass this lengthy process using an Ethernet PHY port shutdown capability.

    The Ethernet port on the link is electrically shut down (transmit muted) for a path failure or degrade.

    This is detected as a link failure (port down) on the companion RSTP switch port. Total detection time is generally within 200 to 500 ms.

    Add to these times the typical RSTP network convergence times of between 200 ms and 1 second, and total failure recovery latencies are in the order of 7 seconds for a Hello BPDU timeout process, or 1 to 2 seconds for a PHY port-shutdown event. The arithmetic is sketched below.
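    A worked tally of those budgets, approximately reproducing the figures above (the ranges are the document's own):

        # Worked recovery budgets, in seconds, from the figures above.
        HELLO_INTERVAL = 2.0               # default Hello BPDU period
        MISSED_BPDUS = 3                   # misses before a neighbor is down
        CONVERGENCE_MAX = 1.0              # upper end of typical RSTP range
        PHY_DETECT_MAX = 0.5               # upper end of PHY shutdown detection

        hello_total = HELLO_INTERVAL * MISSED_BPDUS + CONVERGENCE_MAX   # 7.0 s
        phy_total = PHY_DETECT_MAX + CONVERGENCE_MAX                    # 1.5 s
        print(f"Hello-timeout recovery: ~{hello_total:.0f} s")
        print(f"PHY-shutdown recovery:  ~{phy_total:.1f} s")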

    Eclipse Carrier-Class RWPR Failure Recovery Latency When an Eclipse link in an RWPR network fails, (software, equipment, path, or diagnostic failure event) its RFD (rapid failure detection) mechanism immediately forces initiation of RSTP convergence (within 1 ms).

    Additionally, a dynamic Hello time is used on the Ethernet ports to accelerate convergence under RSTP, knowing that during this period, port states can change frequently through message exchanges between neighbor switches.

    The polling timer is advanced to 10 ms from a default 500 ms. This occurs when a switch receives a topology change message, or when it detects a topology change event.

    The result is that failure recovery latencies are considerably lowered compared to normal RSTP operation. For a 5-node RWPR ring, typical maximum traffic outages are:

    - 60 ms for a link down (link failure).
    - 50 ms for a link up (failed link restored to service).
    - 800 ms for an Ethernet PHY port down.
    - 40 ms for an Ethernet PHY port up.

    These times satisfy MEF guidelines for carrier class reliability (redundancy).

    Only an integrated switch solution can provide this level of performance; it cannot be matched by wireless networks using external switches.

    Link Aggregation Link aggregation groups a set of ports so that two network nodes can be interconnected using multiple links to increase link capacity and availability between them.

    When aggregated, two or more physical links operate as a single logical link with a traffic capacity that is the sum of the individual link capacities.

    This doubling, tripling or quadrupling of capacity is relevant where more capacity is required than can be provided on one physical link.

    Link aggregation also provides redundancy between the aggregated links. If a link fails, its traffic is redirected onto the remaining link, or links.

    If the remaining link or links do not have the capacity needed to avoid a traffic bottleneck, appropriate QoS settings are used to prioritize traffic so that all high priority traffic continues to get through.

    To provide a similar level of redundancy without aggregation, hot-standby or diversity protection is required, but with such protection the standby equipment is not used to pass traffic.


    Link aggregation can be implemented at different levels in the protocol hierarchy, and depending on the level will use different information to determine which packets, frames or bytes go over the different links.

    A layer 3 (L3) implementation uses source and/or destination IP addresses in the IP header. Higher-layer implementations use logical port information and other layer-relevant information.

    Layer 2 (L2) link aggregation uses source and/or destination MAC address data in the Ethernet frame MAC/LLC header.

    A layer 1 (L1, physical layer) aggregation acts on the bit or byte data stream.

    For Eclipse, two modes of link aggregation can be configured (a distribution sketch follows the list):

    L2 link aggregation using the DAC GE switch.

    L1 link aggregation using circuit cross-connects on the INU/INUe backplane bus.
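    How frames are shared across member links matters for frame ordering. The DAC GE's actual distribution function is not specified in this paper; the sketch below shows one common approach, hashing the MAC address pair so that each flow stays on a single member link:

        # Illustrative L2 link selection: XOR-fold the MAC pair onto one member
        # link so frames of a given flow stay in order. XOR-folding is just a
        # common choice, not the DAC GE's documented algorithm.
        def select_member_link(src_mac, dst_mac, n_links):
            folded = 0
            for byte in src_mac + dst_mac:
                folded ^= byte
            return folded % n_links

        link = select_member_link(bytes.fromhex("001122334455"),
                                  bytes.fromhex("66778899aabb"), n_links=2)
        print(f"forward on member link {link}")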

    Layer 2 DAC GE Link Aggregation DAC GE link aggregation was introduced in Modes of Operation and in Network Termination Nodes.

    The same rapid failure detection capability used for RWPR is also used to support fast-switched link aggregation. Traffic transfer from a failed link occurs within microseconds, well within the 50 ms carrier-class benchmark.

    Traffic streams transiting the logical link are spl