Transmission Network Design and Architecture Guidelines, Version 1.3


Transmission network planning

Transmission network design & architecture guidelines, 13/06/2013

Reference: Transmission network design & architecture guidelines

Version: 1.3 Draft

Date: 10 June 2013

Author(s): David Powders

Filed As:

Status: Draft Version (1.3)

Approved By:

Signature / Date: .......................................................... / ......................

NSI Ireland

Transmission Network Design & Architecture Guidelines

Version 1.3 Draft


Document History

Version     Date        Comment
1.0 Draft   27.01.2013  First draft
1.1 Draft   25.02.2013  Incorporating changes requested from parent operators: resilience; routing; performance monitoring
1.2 Draft   14.05.2013  Updated BT TT routing, section 2.3
1.3 Draft   10.06.2013  Section 2.3: E-Lines added to TT design; section 2.6: updated dimensioning rules; section 2.6.4: updated policing allocation per class; section 3.x added (site design)

Reference documents

1. (2012.12.27) OPTIMA BLUEPRINT V1.0 DRAFT FINAL.doc
2. Total Transmission IP design - DLD V2 2 (2)[1].pdf


Contents

Document History
Reference Documents
1.0 Introduction
    1.1 Background
    1.2 Scope of Document
    1.3 Document Structure
2.0 Proposed Network Architecture
    2.1 Transmission Network
    2.1 Data Centre Solution
        2.1.1 Physical interconnection
    2.2 Self Build Backhaul Network
        2.3.1 Self build fibre diversity
    2.3 Managed Backhaul
        2.3.1 TT Network contract
        2.3.3 Backhaul network selection
    2.4 Backhaul Routing
        2.4.1 Legacy mobile services
        2.4.2 Enterprise services
        2.4.3 IP services
            2.4.3.1 L3VPN structure
            2.4.3.2 IP service resilience
    2.5 Access Microwave Network
        2.5.1 Baseband switching
        2.5.2 Microwave DCN
        2.5.3 Backhaul interconnections
    2.6 Network Topology & Traffic Engineering
        2.6.1 Access microwave topology & dimensioning
        2.6.2 Access MW resilience rules
        2.6.3 Backhaul & core transmission network dimensioning rules
        2.6.4 Traffic engineering
    2.7 Network Synchronisation
        2.7.1 Self built transmission network
        2.7.2 Ethernet managed services
        2.7.3 DWDM network
        2.7.4 Mobile network clock recovery
            2.7.4.1 Legacy RAN nodes
            2.7.4.2 Ericsson SRAN 2G
            2.7.4.3 Ericsson SRAN 3G & LTE
            2.7.4.4 NSN 3G
    2.8 Data Communications Network (DCN)
        2.8.1 NSN 3G RAN control plane routing
        2.8.2 NSN 3G RAN O&M routing
    2.9 Transmission Network Performance Monitoring
3.0 Site Configuration
    3.1 Core Sites
    3.2 Backhaul Sites
        3.2.1 BT TT locations
    3.3 Access Locations
        3.3.1 Access sites (Portacabin installation)
        3.3.2 Access sites (Outdoor cabinet installation)


Figures

Figure 1: Proposed NSI transmission solution
Figure 2: Example data centre northbound physical interconnect
Figure 3: Dublin Dark fibre IP/MPLS network
Figure 4: National North East area IP/MPLS microwave network
Figure 5a: BT Total Transmission network
Figure 5b: NSI logical and physical transmission across the BT network
Figure 7: Access Microwave topology
Figure 8: Example VSI grouping configuration
Figure 10: IP/MPLS traffic engineering
Figure 11: Enterprise traffic engineering
Figure 12: Downlink traffic control mechanism
Figure 13: Normal link operation
Figure 14: Self built synchronisation distribution
Figure 15: 1588v2 distribution over Ethernet Managed service

Tables

Table 1: Self build fibre diversity
Table 2: TT Access fibre diversity
Table 3: List of L3VPNs required
Table 4: Radio configuration v air interface bandwidth
Table 5: Feeder link reference
Table 6: CIR per technology reference
Table 5: Sample Quality of Service mapping
Table 6: City Area (max link capacity = 400Mb/s)
Table 7: Non City Area (max link capacity = 200Mb/s)
Table 7: Synchronisation source and distribution summary
Table 8: DCN network configuration per vendor
Table 9: NSI transmission network KPIs and reporting structure
Table 10: Core site build guidelines
Table 11: Backhaul site build guidelines
Table 12: Access site categories
Table 11: Access site consolidation (no 3PP services in place)
Table 12: Outdoor cabinet consolidation (existing 3PP CPE on site)


    1.0 Introduction

    1.1 Background

    The aim of this document is to detail the design and architecture principles to

    be applied across the Netshare Ireland (NSI) transmission network. NSI, as

    detailed in the transition document, is tasked with collapsing the existing

    transmission networks inherited from both Vodafone Ireland and H3G Ireland

onto a single network carrying each operator's enterprise and mobile services. As detailed in the transition document, it is NSI's responsibility to ensure that the network is future-proof, scalable and cost-effective, with the capability to meet the short-term requirements of network consolidation and the long-term requirements of service expansion.

    1.2 Scope of document

    This document will detail the proposed solutions for the access and backhaul

    transmission networks and the steps required to migrate from the current

separate network configuration to one consolidated network. While the required migration procedures are detailed within this document, the timescales required to complete these works are out of scope.

    1.3 Document structure

The document is structured as follows:

Section 2 describes the desired end-to-end solution for the consolidated network and the criteria used to arrive at each design decision

Section 3 covers the site design and build rules


    2.0 Proposed Network architecture

As described in section 1.1, NSI is required to deploy and manage a transmission network which is future-proof, scalable and cost-effective. As services, particularly mobile, move to an all-IP flat structure, it is important to ensure that the transmission network evolves to meet this demand.

Traditionally, transmission networks and the services that ran across them were linked, in the sense that the service connections followed the physical media interconnections between the network nodes. For all-IP networks, where any-to-any logical connections are required, it is essential that the transmission network decouples the physical layer from the service layer.

    For NSI Ireland the breakdown between the physical and service layer can be

    described as:

    Physical media layer

    1. Tellabs 8600 & 8800 multiservice routers

    2. Ethernet Microwave (Ceragon / Siae)

    3. Dark Fibre (ESB / Eircom / e|net)

    4. Vodafone DWDM (Huawei)

    5. SDH (Alcatel Lucent/Ericsson)

    6. POS / Ethernet (Tellabs)

    7. Managed Ethernet services (e|net, UPC,ESBT, Eircom)

    8. BT Total Transmission network (TT)

    Service layer

    o IP/MPLS (Tellabs / BT TT)

    o L2 VPN (Tellabs / BT TT)

    o E-Line (Ceragon / Siae)

    o TDM access (Ceragon / Siae / Ericsson MiniLink)

Decoupling the physical media layer from the service layer gives NSI the flexibility to modify one layer without impacting the other. Routing changes throughout the network are therefore independent of the physical layer once established; likewise, changes in the physical layer, such as new nodes or bandwidth changes, are independent of the service routing. This in


turn ensures that transmission network changes requiring 3rd-party involvement are restricted primarily to the physical layer and, once that layer is established, such involvement should be minimal.

    While seamless MPLS from access point through to the core network is

    possible, for demarcation purposes the NSI transmission network will

    terminate at the core switch (BSC / RNC / SGw / MME / Enterprise gateway).
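The decoupling described above can be sketched as a minimal data model (illustrative names and values only, not NSI's actual inventory): service-layer LSPs reference router identities, so a physical-layer change such as a trunk capacity upgrade leaves the service layer untouched.

```python
# Minimal sketch of the physical/service layer split: the service layer
# references router IDs only, never the underlying trunk media.
from dataclasses import dataclass, field

@dataclass
class Trunk:                      # physical media layer
    a_end: str
    b_end: str
    bandwidth_mbps: int

@dataclass
class Lsp:                        # service layer: an ordered list of router hops
    name: str
    hops: list = field(default_factory=list)

trunks = [Trunk("DN680200", "DN706200", 10_000)]
lsp = Lsp("UP-VPN-primary", hops=["DNBLB200", "DN680200", "DN706200"])

# A physical-layer upgrade (e.g. a trunk capacity change) leaves the LSP untouched.
trunks[0].bandwidth_mbps = 20_000
assert lsp.hops == ["DNBLB200", "DN680200", "DN706200"]
```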

    2.1 Transmission network

    Figure 1 details the proposed solution for the NSI transmission network.

[Figure 1 (diagram): Tellabs 8860 routers at the data centres (e.g. DN680 VF Clonshaugh) interconnected by n x 10G LACP trunks; BT TT Ethernet trunks carry L2 point-to-point circuits to each of the CDC locations, with dual nodes at the data centres available to load-balance traffic from the distributed BPOP locations; Netshare IP/MPLS LSPs with L3VPNs configured for each of the service types from each of the operators; Ceragon/Siae access microwave clusters. Key annotations: per-service VLANs UP VID 3170 (172.17.x.x), CP VID 3180 (172.18.x.x), O&M VID 3190 (172.19.x.x), TOP VID 3200 (172.20.x.x), CGN O&M VID 3210 (172.21.x.x); RNC subnets dn1rnc01-dn1rnc08; each VLAN is a broadcast domain with MAC learning enabled throughout the access cluster to enable layer 2 switching, with no E-Lines in use; VRRP on the IRB interfaces at the VPLS interface; a /29 network allocated to the backend BTS interface, with static routes required to the OMU, DCN and RNC networks.]

Figure 1: Proposed NSI transmission solution
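The per-service VLAN-to-subnet plan annotated in Figure 1 can be sanity-checked programmatically. A minimal sketch using Python's `ipaddress` module; the VLAN IDs and 172.1x ranges come from the figure annotations, while the /16 supernet masks are an assumption for illustration (the figure itself shows x.x/32 host addresses):

```python
# Sanity check (sketch) of the per-service VLAN/subnet plan in Figure 1.
# VLAN IDs from the figure; the /16 supernets are assumed for illustration.
import ipaddress

service_plan = {
    "UP":      (3170, "172.17.0.0/16"),
    "CP":      (3180, "172.18.0.0/16"),
    "O&M":     (3190, "172.19.0.0/16"),
    "TOP":     (3200, "172.20.0.0/16"),
    "CGN O&M": (3210, "172.21.0.0/16"),
}

vids = [vid for vid, _ in service_plan.values()]
nets = [ipaddress.ip_network(cidr) for _, cidr in service_plan.values()]

# Each service must have a unique VLAN ID and a non-overlapping supernet.
assert len(set(vids)) == len(vids)
for i, a in enumerate(nets):
    for b in nets[i + 1:]:
        assert not a.overlaps(b), f"{a} overlaps {b}"
```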

To explain the proposed transmission solution in detail, the network will be broken into the following areas:

    Data centre Northbound interfaces

    Self build backhaul

    Managed backhaul

    Backhaul routing

    Access Microwave network


    Network QoS & link dimensioning

    DCN

    Network synchronisation

    2.1 Data Centre solution

    2.1.1 Physical interconnection

    VFIE and H3G operate their respective networks based on a consolidated

    core.

All core switching (BSCs, RNCs, EPCs, master synchronisation, DCN, security) for both H3G and VFIE is located in Dublin across four data centres. They are:

    1. CDC1 DN680 Vodafone, Clonshaugh (VFIE)

    2. CDC2 DN706 BT, Citywest (VFIE)

    3. CDC3 DN422 Data Electronics, Clondalkin (VFIE)

    4. CDC4 DNxxx Citadel, Citywest (H3G)

Figure 2 below details the possible northbound connections at each data centre.

Figure 2: Example data centre northbound physical interconnect

NSI will deploy 2 x Tellabs 8800 multiservice routers (MSRs) at each of the data centres. Two routers are required to ensure routing resilience for the


customer traffic. The 8800 hardware will interface directly at 10Gb/s, 1Gb/s & STM-1 with the core switches, DCN and synchronisation networks for both operators.

Each of the data centres will be interconnected using n x 10Gb/s rings. The current 8800 release does not support RSVP LSPs over interfaces in a Link Aggregation Group (LAG), so multiple separate 10Gb/s rings are used to transport traffic from both operators. In the first deployment a single 10Gb/s ring will be deployed, which can be upgraded as required. Consideration was given to a meshed MPLS core; however, the n x 10Gb/s ring was deemed to be technically sufficient and more cost-effective. This design may be revisited in the future based on capacity, resilience and expansion requirements.
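The ring-versus-mesh cost argument can be made concrete with a quick count over the four data centres (a sketch counting trunks only, ignoring capacity and fibre routing):

```python
# Ring vs full mesh across the four CDC locations: a ring needs one trunk
# per site, a full mesh one trunk per site pair, so each additional
# n x 10Gb/s ring scales more cheaply.
from itertools import combinations

sites = ["CDC1", "CDC2", "CDC3", "CDC4"]

ring_links = len(sites)                         # each site links to 2 neighbours
mesh_links = len(list(combinations(sites, 2)))  # every site pair

assert ring_links == 4
assert mesh_links == 6
```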

Interfacing to the out-of-band DCN (mobile and transmission networks) and synchronisation networks will be realised through 1Gb/s interfaces.

All interfacing to legacy TDM and ATM systems is achieved through the deployment of STM-1c and STM-1 ATM interfaces.

Physical and layer 3 monitoring is active on all trunk interfaces, so in the event of a link failure all traffic is routed to the diverse path and the required status messaging and resilience advertisements are propagated throughout the network. These mechanisms are explained in detail in each of the sections dealing with service provisioning.

    2.2 Self build backhaul network

    Self build refers to network hardware and transmission links that are within the

full control of NSI in terms of physical provisioning. The self-build backhaul network interconnects the aggregation sites and the core data centre locations via a mix of Ethernet, Packet over SDH (POS) and SDH point-to-point trunks. The service layer will be IP/MPLS, based on the Tellabs 8600 and 8800 MSR hardware. Figures 3 and 4 are examples of the proposed network structure.


[Figure 3 (diagram): "Dublin Dark Fibre v1.0". Core Dublin ring at STM-16 (future 10G/nx10G/40G), ISIS L2-only or L1-2 if between routers in the same location; ISIS areas 49.0031, 49.0032 and 49.0033. PoC2 connections at GE (future 10G), ISIS L1-2 intra-area links or L2-only inter-area links; PoC3 connections at GE (future subrate 10G / line-rate 10G), ISIS L1-only. Routers are labelled with site codes and loopbacks (e.g. DN680200 172.25.0.1, DN706200 172.25.0.3, DN422200 172.25.0.5, DNBW1200 172.25.128.1), trunk ports (ge/so x/y/z) and /30 point-to-point link subnets drawn from 10.82.0.0/24 and 10.82.10.0/24; sync priorities 1-3 are marked per trunk.]

Figure 3: Dublin Dark fibre IP/MPLS network
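The trunk addressing in Figure 3 carves /30 point-to-point subnets out of per-ring blocks (10.82.0.0/30, 10.82.0.4/30, and so on for the core ring). A sketch of how such a link plan can be generated and checked; the /24 supernet per ring is an assumption inferred from the diagram:

```python
# Enumerate the /30 point-to-point link subnets of a ring's address block,
# in the style of the Figure 3 trunk addressing (supernet size assumed).
import ipaddress

def p2p_links(supernet: str):
    """Yield (network, a_end, b_end) for each /30 carved from the supernet."""
    for net in ipaddress.ip_network(supernet).subnets(new_prefix=30):
        a, b = net.hosts()                 # the two usable addresses of a /30
        yield net, a, b

core_links = list(p2p_links("10.82.0.0/24"))
first_net, a, b = core_links[0]
assert str(first_net) == "10.82.0.0/30"
assert (str(a), str(b)) == ("10.82.0.1", "10.82.0.2")   # matches "1 10.82.0.0/30 2"
assert len(core_links) == 64                            # 256 addresses / 4 per /30
```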

Dark fibre from Eircom, ESB and e|net will be used as the physical media interconnect. Depending on aggregation requirements, interconnections will run at 1Gb/s, 2.5Gb/s (legacy) or 10Gb/s. A hierarchical ISIS structure of rings will be used to simplify the MPLS design. The Level 2 areas will be connected directly to the core data centre sites, with Level 1 access rings used to interconnect traffic from the access areas. The L2 aggregation areas will have physically diverse fibre connections to two of the data centres. Physically diverse LSPs are routed from the L2 aggregation routers to each of the data centres, facilitating diverse connectivity to each of the core switching elements. This provides protection against a single trunk or node failure.

The access rings will have diverse fibre connectivity to an L1/2 router which will perform the ABR function.

Within each access ring, diverse LSPs will be configured to the ABR or ABRs, providing access route resilience against a single fibre break and/or node failure.

RSVP LSPs with no bandwidth reservations will be used to route traffic across the backhaul network. All LSPs will use a common Tunnel Affinity Template. This provides the flexibility to re-parent traffic to alternative trunks, should that be required, without a traffic-affecting intervention. It is proposed to use a


combination of strict and loose hop routing across the network. The working path should always use strict hops, with the protection path assigned either strict or loose hops. For LSPs routed over the Microwave POS trunks, strict hops will be used to ensure efficient bandwidth management; for those routed across dark fibre or managed Ethernet, loose hops will be used. In a mesh network where multiple physical failures and multiple paths are possible, this approach offers a greater level of resilience.
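The strict/loose hop policy above lends itself to a table-driven check. The sketch below is illustrative only; the function name and trunk media labels are assumptions, not a vendor CLI or API:

```python
# Hedged sketch of the hop-routing policy described above.
# Working paths are always strict; protection hops depend on the
# underlying trunk media (labels here are illustrative only).

def protection_hop_type(trunk_media: str) -> str:
    """Return the RSVP ERO style to use for a protection LSP path."""
    if trunk_media == "microwave_pos":
        # Strict hops keep bandwidth use on SDH microwave predictable.
        return "strict"
    if trunk_media in ("dark_fibre", "managed_ethernet"):
        # Loose hops let the IGP re-route around multiple failures.
        return "loose"
    raise ValueError(f"unknown trunk media: {trunk_media}")

assert protection_hop_type("microwave_pos") == "strict"
assert protection_hop_type("dark_fibre") == "loose"
```

Encoding the rule once avoids per-LSP ad-hoc decisions when new trunks are added.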

Figure 4 - National North East area IP/MPLS microwave network

    Figure 4 details a sample of the self built backhaul network routed over N+0

    SDH microwave links and rings. In this situation LSPs are routed over


    multiple hops to the data centres and all routers will be added to the Level 2

area. In order to ensure that traffic is correctly balanced across the SDH trunks, RSVP LSPs will be routed statically, giving NSI a greater level of control over the bandwidth utilisation. LSPs from each collector will be

    associated with a particular STM-1 and routed to the destination accordingly.

    Traffic aggregating at each collector is then associated with a particular LSP.

    NOTE: The transition document states that the National SDH Microwave

    network should be replaced by NSI with the BT TT network (See section

    2.1.2) or a National DF network. However, as this will take time and

    consolidated sites are required nationally in the short term, the network

    described in Figure 4 will be utilised over the short to medium term.

2.2.1 Self build fibre diversity

The table below details the physical diversity requirements for fibre based on traffic aggregation in the transmission network. Note that in some cases, where the capital cost to provide media diversity over fibre is prohibitive, Microwave Ethernet will be considered as a medium term alternative. While the microwave link will for the most part be of a lower capacity than the primary fibre route, the degradation of service during a fibre outage may be acceptable for short periods to maximise the fibre penetration.

    Aggregation level Diversity Comments

    < 5 Single fibre pair No diversity

    5 x 9 Flat ring Two fibre pairs sharing

    the same duct

    > 9 Fibre duct diversity 5m fibre separation to

    the aggregation router

    Table 1: Self build fibre diversity

Note that the above table details the desired physical separation. In some cases this separation may not be possible, and a decision on the aggregation level will be made based on other factors such as location, security, landlord, antenna support structure and cost.
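The diversity rules in Table 1 can be expressed as a small classification helper; the function name and return strings below are illustrative assumptions, not part of any tooling:

```python
# Sketch of the Table 1 self-build fibre diversity rules.
# Boundary values follow the reconstructed table:
# fewer than 5 sites -> none, 5-9 -> flat ring, more than 9 -> duct diversity.

def fibre_diversity(sites_aggregated: int) -> str:
    """Return the required fibre diversity class for an aggregation point."""
    if sites_aggregated < 5:
        return "single fibre pair (no diversity)"
    if sites_aggregated <= 9:
        return "flat ring (two pairs, same duct)"
    return "duct diversity (5 m separation to aggregation router)"

assert fibre_diversity(3).startswith("single")
assert fibre_diversity(7).startswith("flat")
assert fibre_diversity(12).startswith("duct")
```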


    2.3 Managed backhaul

    Managed backhaul refers to point to point or network connections facilitated

    by an OLO over which NSI will transport traffic. In this case the OLO will

    provision the physical transmission interconnections.

Presently VFIE use Eircom and e|net as managed service vendors. In this case VFIE have individual leased lines from each of the vendors, providing point to point fixed bandwidth connections.

    H3G to date have used BT as their backhaul transmission vendor where all

    traffic from the access network is routed across the National BT Total

    Transmission (TT) network.

2.3.1 TT Network contract

The BT TT contract allows H3G to utilise up to 70Gb/s of bandwidth across a

    possible 200 collector or aggregation locations. Presently BT has configured

    multiple L3VPNs across the TT to route traffic between the collector locations

    and the data centre site at Citadel (Citywest).

BT deployed 2 x SR12 (ALU) routers at Citadel to terminate all of the traffic from the possible 200 locations.

    H3G can interconnect from their access network at a BT GPOP onto a

    collocated SR12 or APOP. At an APOP BT deploy an Alcatel-Lucent (ALU)

    7210 node and extend the TT to this point. The physical resilience from the

    GPOP to the APOP depends on the traffic to be aggregated at the site. See

    Table 2.

Collector Type   Sites aggregated   Physical resilience   Comments
Small            <= 5               None
Medium           5 < x <= 9         Flat ring
Large            >= 10              Diverse fibre duct

Table 2: TT Access fibre diversity

    Figure 5a details the configuration of the BT TT solution.


Figure 5a - BT Total Transmission network

Because BT route traffic to and from the collector points over L3VPNs, they must be involved in the provisioning process for all RBS across the network.

    As described in section 2.0 it is proposed to separate the physical

    interconnection of sites from the service provisioning for NSI. To achieve this

    across the TT NSI must use the TT to replicate physical point to point

    connections across the backhaul network.

It is proposed to change the BT managed service from a Layer 3 network to a Layer 2 network and replicate the approach taken in the self built network. The end result is that the provision of services across the NSI backhaul network is consistent regardless of the underlying physical infrastructure (self built or managed).

    In order to replicate the self built architecture and utilise the BT TT contract it

    will be necessary to extend the TT network to a second data centre. It is

proposed to extend the BT TT to the VFIE data centre in Clonshaugh.

At Citadel and Clonshaugh the TT will interconnect with the 8800 network on Nx10Gb/s connections. While it is not necessary to deploy 2 x SR-12 TT


    routers at the data centres due to the path resilience employed, it will be

    useful in terms of load balancing and future bandwidth requirements. As with

    the self build design, resilience will be achieved through the physical path

    diversity to diverse data centre locations from each of the BT GPoPs.

    Figure 5b illustrates the physical and logical connectivity across the BT TT.

Figure 5b - NSI logical and physical transmission across the BT network

    VLAN trunks over E-Lines are configured from the collector to the GPOP over

    which LSPs are configured to the ABRs using LDP. LDP will facilitate

    automatic label exchange within the MPLS network and remove the

    requirement for manual configuration in the access area.

    In the BT TT network, VLAN trunks over E-Lines are configured for each ABR

    to one of the parent data centres. RSVP-TE LSPs can be configured across

    these trunks to any of the data centre facilities in a resilient manner.

    Dual ABRs are used to ensure hardware resilience for the access areas

    where up to 20 collector nodes could be connected to a BT GPOP in this

    manner.


    2.3.3 Backhaul network selection

    In some cases NSI will have the option to use either self build dark fibre or

    managed services to backhaul services from a particular aggregation point. In

    this case a number of factors must be considered when selecting the network

    type. They are;

Factor                             Self Build    Managed      Comment
Long term bandwidth requirements   High          Low/Medium   For large bandwidth sites dark fibre may offer the more attractive cost per bit
Operational cost impact            High/Medium   Low          To reduce the impact on the operational expenditure, dark fibre CapEx deals may be more attractive
Surrounding network                Dark fibre    Managed      The transmission network selection should take account of the surrounding backhaul type, to ensure that the interconnecting clusters are optimally routed through the hierarchical structure

    2.4 Backhaul routing

    Backhaul routing can be split into legacy (TDM/ATM) services, enterprise

    services and IP services.

    2.4.1 Legacy mobile services

    Legacy mobile services relate to 2G BTS and 3G RBS nodes with TDM and

    ATM interfaces. For these services NSI will configure pseudowires (PWEs)


    across the MPLS network. ATM services will be carried in ATM PWEs with

    N:1 encapsulation used for the signalling VCs to reduce the number required.

Userplane VCs can be mapped into a single PWE. TDM services will be

    transported using SAToP PWEs. At the core locations MSP1+1 protected

    STM-1 interfaces will be deployed between the 8800 MSRs and the core

    switches (BSC / RNC).

Note: The multichassis MSP feature is not available on the Tellabs 8800 MSRs.

    Therefore MSP1+1 protecting ports will be on separate cards.

    At the access locations MSP protection for ingress TDM traffic will be

    configured in the same way on the 8600 nodes.

    PWEs for legacy services will be routed between the core and collector

    locations over physically diverse LSPs.

    2.4.2 Enterprise services

Similar to legacy services, enterprise services will be routed between the core and collector locations over diverse or non-diverse LSPs based on the customer's SLA. For the most part, enterprise services are provided as Ethernet services. In this case, Ethernet PWEs will be configured to carry the services. A Class of Service (CoS) will be applied to the Ethernet PWE based on the customer's SLA.

    At the core locations the service will be handed to the customer network over

    an Ethernet connection with VLAN separation for the individual customers. In

    the event that multiple customers are sharing the same physical interfaces

    SVLAN separation per customer can be implemented. This will be finalised

    based on a statement of requirements from the parent operator.

TDM services for enterprise customers will be treated the same as the legacy TDM services described in 2.4.1, with STM-1 interfaces used to connect to the core switches.

    2.4.3 IP services

    2.4.3.1 L3VPN structure

For IP services, L3VPNs will be configured across the MPLS network. All

    routing information will be propagated throughout each L3VPN using BGP.


    The IP/MPLS network will be configured in a hierarchical fashion with route

    reflectors used to advertise routing within each area. Route Reflectors (RRs)

    will be implemented in the core area with all level 2 routers peering to those

    RRs. The ABRs between the level 1 and 2 areas will act as the route

    reflectors for the connected level 1 areas. This will reduce the size and

    complexity of the routing tables across the network.

    For each service a L3VPN will be configured. Because H3G and VFIE use

    different vendors and have different requirements in the core the number of

    L3VPNs required differ slightly. Table 3 details the L3VPNs to be configured

    across the NSI network.

Parent   L3VPN             Description                   Comment
VFIE     2G UP             User Plane                    Separate L3VPNs are configured for each BSC
VFIE     SIU O&M           Baseband aggregation switch
VFIE     RNC UP            3G User Plane                 Separate L3VPNs are configured for each RNC
VFIE     SRAN O&M          SRAN O&M
VFIE     Synchronisation   1588v2 network
VFIE     Siae O&M          Ethernet microwave O&M
VFIE     MiniLink O&M      O&M for the MiniLink PDH network (SAU-IP)
H3G      3G UP             User plane                    A single L3VPN for all RNCs
H3G      3G CP             Control plane                 A single L3VPN for all RNCs
H3G      3G O&M (RNC)      Operation and maintenance     A single L3VPN for all RNCs
H3G      3G O&M (RBS)      Operation and maintenance     A single L3VPN for all RBS
H3G      TOP               1588v2 network                Synchronisation
H3G      Ceragon O&M       Ethernet microwave O&M
VFIE     LTE               Tbc                           Tbc
H3G      LTE               Tbc                           Tbc

Table 3: List of L3VPNs required


    As services are added to the network they will be added as endpoints to the

    respective L3VPN for that service and parent core node. This is achieved by

    adding the endpoint interface and subnet to the VPN. Any adjacent network

    routing required to connect to a network will be redistributed into the VPN

    also.

    VFIE use /30 subnets to address the mobile services across the network. This

    results in a large number of endpoints within each L3VPN. For that reason the

    networks will be split based on the parent core switch. This results in a L3VPN

    for each of the services routed to each of the RNCs/BSCs. For the H3G

    network, /26 networks are typically used at each of the endpoints. This

summarisation significantly reduces the number of endpoints required within each VPN and, consequently, the number of VPNs.
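The effect of the two addressing styles can be illustrated with Python's standard ipaddress module; the 10.82.128.0/24 block below is an example only, echoing the link addressing style used in the network:

```python
import ipaddress

# Illustration of why /26 endpoint subnets (H3G style) keep L3VPNs
# far smaller than /30 point-to-point addressing (VFIE style): the
# number of endpoint routes per aggregation block differs by 16x.

block = ipaddress.ip_network("10.82.128.0/24")  # example block only
p2p_endpoints = len(list(block.subnets(new_prefix=30)))
summarised_endpoints = len(list(block.subnets(new_prefix=26)))

assert p2p_endpoints == 64        # sixty-four /30 routes per /24
assert summarised_endpoints == 4  # four /26 routes per /24
```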

Sections 3 and 4 detail the impact the proposed design has on each of the operators' existing solutions and the steps, if any, required to migrate to the proposed solution.

    2.4.3.2 IP service Resilience

    Transport resilience

    Within the backhaul network IP services will be carried resiliently between the

    core and collector locations over diversely routed LSPs. It is proposed to use

    a combination of strict and loose hop routing across the network. The working

    path should always be associated with the strict hop with the protection

assigned to the loose hop. Configuring the protection on a loose hop allows the IGP to route the LSP between the source and destination. In the

    event of a failure all traffic will be switched to the protecting LSP which has

    been routed between the source and destination via the IGP. In a mesh

    network where there are multiple physical failures and multiple paths possible

    this approach offers a greater level of resilience.

    Note, as described in section 2.2, in the case where both the main and

    protecting paths are routed over Microwave STM-1 trunks, strict hop routing

    will be employed for both paths to ensure optimum utilisation of the available

    capacity.


    Router Resilience

    Within the level 2 area of the network dual routers are deployed to ensure

resilience at locations aggregating large volumes of traffic. In this case resilient LSPs are routed from the collector nodes to both routers. In the event of a router failure, traffic will route over the operating router until such time as the failed router is operational again, after which the routing will return to the initial configuration.

Core switch resilience - VRRP

For all connections to the mobile core, Virtual Router Redundancy Protocol

    (VRRP) should be used. While the VRRP implementation will differ slightly

    based on the mobile core vendor and function, the objective is to ensure that

    the transmission network to the core has full interface and router redundancy.

10Gb/s cross links (with LAG if required) at each data centre location between the 8800 nodes will be implemented to support the router redundancy.

    For the 8800 nodes during restart it is possible that the router will advertise

    the interface addresses to the core switch (BSC/RNC/SGw/MME) before the

router forwarding function is re-established. This may result in temporary black-holing of traffic. To avoid this scenario, a separate connection is required between the routers, with a default route added to each for all traffic. It is proposed that a 10Gb/s link should be used for this also.

    2.5 Access Microwave network

The target access microwave network will be based on an Ethernet

    microwave solution utilising ACM to maximise the available bandwidth. In the

    existing networks H3G use Ceragon IPx Microwave products while VFIE use

    the Siae Alc+2 and Alc+2e products. While it is envisaged that NSI will tender

    for one supplier it is not planned to replace one of the existing networks. The

access network solution must be designed so as to ensure that both vendors'


products and the services transported across them interoperate without issue.

    Figure 7 details a possible configuration of the access network topology

utilising both vendors' products.

Figure 7 - Access Microwave topology

    2.5.1 Baseband switching

    For the access network all traffic will be routed at layer 2 utilising VLAN

    switching at each point. VLANs will be statically configured at each site on

    each of the indoor units. For VFIE, unique VLANs are used to switch traffic

    from each of the RBS nodes. For H3G, common VLANs are used for each of

    the service types switched across the network. They are;

    UP VID = 3170

    CP VID = 3180

    O&M VID = 3190

    TOP VID = 3200

    Ceragon O&M = 3210
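For reference, the H3G service-to-VID plan above can be held as a lookup table; the dictionary key names are illustrative labels, not configuration syntax:

```python
# The H3G per-service VLAN plan listed above, as a lookup table.
H3G_SERVICE_VIDS = {
    "user_plane": 3170,
    "control_plane": 3180,
    "oam": 3190,
    "top_sync": 3200,     # 1588v2 Timing over Packet
    "ceragon_oam": 3210,
}

# All VIDs must be unique and within the usable 12-bit VLAN range.
assert len(set(H3G_SERVICE_VIDS.values())) == len(H3G_SERVICE_VIDS)
assert all(1 <= vid <= 4094 for vid in H3G_SERVICE_VIDS.values())
```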

    Note: Future developments may result in the deployment of all outdoor MW

    Radio products in the traditional MW Bands and in the E-Band. In this case at

    feeder locations a cell site router may be deployed to perform the baseband

    switching function using IP/MPLS routing functions. Should this solution be


    employed in the future, an additional design scenario will be described and

    added to this document.

    2.5.2 Microwave DCN

    All Microwave DCN will be carried in band (this is already the case for the

    Ceragon network elements). As sites are consolidated and migrated to the

    consolidated network, it will be necessary to migrate the Siae DCN solution to

    an in band solution. It is proposed to assign VLAN ID 3000 to the Siae

    network for DCN.

    2.5.3 Backhaul Interconnections

    The access network will interface with the backhaul network over multiple GE

    interfaces. The interfaces can be protected or not depending on the capacity

    requirements. While LAG is possible on the GE interfaces the preference will

    be to use ELP on the access router with interconnected IDU interfaces in an

    active / active mode.

In a situation where more than 1Gb/s is required over the radio link, LAG can be used. The limitation on the access interfaces is that the interfaces in a LAG on the Tellabs 8600 must be on the same interface module. This is a planned feature for release FP4.1 and is planned to be deployed in the NSI network for Q2 2014.

    VSI interfaces will be used to associate common network VLANs arriving on

    separate physical interfaces to a common virtual interface. This ensures that

    the approach used to assign a single subnet per traffic type per cluster can be

    continued where required. A separate VSI interface will be configured for each

    service type and added as the endpoint to the required IPVPN. Any static

    routes required to connect to and from the DCN network will use the VSI

    interface address. Figure 8 details the operation of the VSI interface.


Figure 8 - Example VSI grouping configuration

    2.6 Network topology & traffic engineering

    The NSI transition document details the targets for network topology, traffic

    engineering and bandwidth allocation on a per site basis for each of the

    mobile networks. In summary they are;

    No more than 1 Microwave hop to fibre (Facilitated by providing fibre

    solutions to 190 towns)

    No contention for shared transmission resources (NSI are required to

    monitor utilisation and ensure upgrade prior to congestion on the

    transmission network)

    Traffic engineering (CoS, DSCP, PHB) will be assigned equally to each

    service type from each operator. At a minimum the following will be

    applied;

    o Voice (GBR)

    o Video/interactive (VBR-RT)

    o Enterprise (VBR-NRT)

    o Data (BE)

    Bandwidth allocation per site

o Dublin & other cities (400Mb/s/site)

o Towns (5 - 10K) (300Mb/s/site)


o Rural (200Mb/s/site)

    This chapter will explain in detail the required Access, Backhaul and Core

    transmission network dimensioning guidelines and traffic engineering rules to

achieve the targets set out in the transition document.

    2.6.1 Access Microwave topology & dimensioning

    The national access microwave network will be broken into clusters of

    microwave links connected, over one or multiple hops, to a fibre access point.

    The fibre access point can be part of the self built or managed backhaul

    networks but must have the following characteristics;

    Long term lease or wholly owned by NSI or one of the parent operators

    24 x 7 access for field maintenance

Excellent Line of Sight properties

    Facility to house a significant number of microwave antennas

    Space to house all the required transmission nodes and DC rectifier

    systems

    No Health and safety restrictions

    Before creating a cluster plan, each site in the MW network must be classified

    under the following criteria;

    Equipment support capabilities

Line of sight capabilities

Proximity to existing fibre solutions

    Existing frequency designations

    Site development opportunities

    Landlord agreements (Number and type of equipment/services

    permitted under the existing agreements)

    Term of agreement

    Creating a database as above will allow the MW network planning team to

    create cluster solutions where a number of sites are associated with a

designated head of cluster. As per the transition document, the target topology is one hop to a fibre access point. However, this will not always be possible due to one or a combination of the following factors;


    o Line of sight
    o Channel restrictions
    o Proximity of fibre solutions

Once the topology of the cluster is defined it is necessary to define the capacity of each link within the cluster. For tail links this is straightforward; the link must meet the capacity requirements of the transition document:

    o Dublin & other cities (400 Mb/s/site)
    o Towns (5–10K) (300 Mb/s/site)
    o Rural (200 Mb/s/site)

For feeder links, statistical gain must be factored in while still meeting the capacity requirements of each individual site. Table 4 gives examples of existing MW radio configurations and the average air interface speeds available.

    Channel bandwidth   Configuration     Max air interface speed @ 256QAM
    14 MHz              Single channel    85 Mb/s
    28 MHz              Single channel    170 Mb/s
    28 MHz              2-channel LAG     340 Mb/s
    28 MHz              3-channel LAG     500 Mb/s
    28 MHz              4-channel LAG     680 Mb/s
    56 MHz              Single channel    340 Mb/s
    56 MHz              2-channel LAG     680 Mb/s
    56 MHz              3-channel LAG     1.02 Gb/s
    56 MHz              4-channel LAG     1.34 Gb/s
    E-band              1 GHz             1 Gb/s

    Table 4: Radio configuration vs air interface bandwidth

Table 5 provides a guide for feeder link configurations based on the number of physical sites aggregated across that link.

    Physical sites aggregated   City                       Urban                      Rural          Comments
    2                           P1: E-band / P2: 2x56MHz   P1: 1x56MHz                P1: 1x56MHz    3:1 stat gain
    3                           P1: E-band / P2: 2x56MHz   P1: 1x56MHz                P1: 1x56MHz    3:1 stat gain
    4                           P1: E-band / P2: 2x56MHz   P1: 2x56MHz                P1: 1x56MHz    3:1 stat gain
    5                           P1: E-band / P2: 2x56MHz   P1: 2x56MHz                P1: 1x56MHz    3:1 stat gain
    6                           P1: E-band / P2: 3x56MHz   P1: 2x56MHz                P1: 2x56MHz    3:1 stat gain
    7                           P1: E-band / P2: 3x56MHz   P1: E-band / P2: 3x56MHz   P1: 2x56MHz    3:1 stat gain
    8                           P1: E-band / P2: 4x56MHz   P1: E-band / P2: 3x56MHz   P1: 2x56MHz    3:1 stat gain

    Table 5: Feeder link reference

    Note that no more than 8 physical sites should be aggregated on any one

    feeder link.

For MW links utilising adaptive code modulation (ACM) it is important that the reference modulation (i.e. the modulation scheme for which ComReg has allocated the max EIRP) is dimensioned so as to meet the sum of the CIRs from each operator across that link. The total CIR per link is based on the product of the RAN technologies deployed and the CIR per RAN technology.

    Service   RAN technology   CIR (Mb/s)
    Voice     2G               1
    Voice     3G               1
    Voice     LTE              1
    Data      GPRS             1.5
    Data      R99              2
    Data      HSxPA            15
    Data      LTE              20

    Table 6: CIR per technology reference
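As a sketch, the total-CIR calculation reduces to a lookup and a sum; the table below mirrors Table 6, and the (service, technology) keys and function name are illustrative, not part of any NSI tooling:

```python
# CIR per (service, RAN technology) in Mb/s, mirroring Table 6.
CIR_MBPS = {
    ("Voice", "2G"): 1, ("Voice", "3G"): 1, ("Voice", "LTE"): 1,
    ("Data", "GPRS"): 1.5, ("Data", "R99"): 2,
    ("Data", "HSxPA"): 15, ("Data", "LTE"): 20,
}

def total_cir(deployed):
    """Sum the CIRs of the (service, technology) pairs deployed on a site."""
    return sum(CIR_MBPS[pair] for pair in deployed)

# Example: a site carrying 2G and 3G voice plus HSxPA and LTE data.
site = [("Voice", "2G"), ("Voice", "3G"), ("Data", "HSxPA"), ("Data", "LTE")]
print(total_cir(site))  # 37 Mb/s
```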


Should restrictions apply in terms of hardware, licensing or topology, with the effect that links cannot be dimensioned as per Table 4, then the following formula should be used to determine the minimum link bandwidth:

    Min feeder link capacity = MAX(VFIE CIR + H3G CIR, Max tail link capacity)

    o CIR = total CIR, from each operator, across all links aggregated
    o Max tail link capacity = maximum tail link capacity of all sites aggregated across the feeder link

    The formula is designed to facilitate the required capacity for each site based

    on location while at the same time ensuring, where multiple sites are

    aggregated, that the minimum CIR is available to each site.
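The formula above can be sketched as follows; the function and argument names are illustrative, and the example CIR figures are assumed values rather than figures from this document:

```python
def min_feeder_capacity(vfie_cir, h3g_cir, tail_capacities):
    """Min feeder link capacity = MAX(VFIE CIR + H3G CIR, max tail link capacity).

    vfie_cir / h3g_cir: total CIR (Mb/s) for each operator across all
    aggregated links; tail_capacities: tail link capacities (Mb/s) of the
    sites aggregated across the feeder link.
    """
    return max(vfie_cir + h3g_cir, max(tail_capacities))

# Three aggregated rural sites (200 Mb/s tails) with an assumed 114 Mb/s of
# CIR per operator: the summed CIRs (228 Mb/s) dominate the 200 Mb/s tail.
print(min_feeder_capacity(vfie_cir=114, h3g_cir=114, tail_capacities=[200, 200, 200]))
```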

    2.6.2 Access MW Resilience rules

    The resilience rules for the access MW network are based on the number of

    cell sites and enterprise services aggregated across the link. 1+1 HSB will be

    used to protect the physical path.

    Collector site    Sites aggregated   Physical resilience   Comments
    Small             ≤ 5                None
    Medium / Large    > 5                1+1 HSB

Note that while LAG can be considered a protection mechanism, allowing the link to operate at a lower bandwidth in the event of a radio failure, NSI will protect the radios in a LAG group using 1+1 HSB to ensure the highest hardware availability for a physical link. NSI will consider LAG for capacity only and 1+1 HSB for protection.

The target microwave topology, as described in the transition document, is one microwave hop to fibre, which will result in minimal use of 1+1 HSB configurations. However, in the event that this topology is not possible, NSI will implement protection as described above.


    2.6.3 Backhaul & Core transmission network dimensioning rules

Forecasting data utilisation across mobile networks is unpredictable because the services are relatively new and the technologies are still evolving. The dimensioning rules for the core and backhaul networks will therefore be based, in the first instance, on projected statistical gain.

To ensure that the backhaul and core networks are dimensioned correctly for the initial network consolidation the following criteria will be used:

    Network            Statistical gain                 Action
    Backhaul network   Less than 6                      OK
                       Greater than 6 and less than 8   Under review
                       8 or greater                     Upgrade
    Core dark fibre    Less than 8                      OK
                       Greater than 8 and less than 10  Under review
                       10 or greater                    Upgrade

The statistical gain will be based on the average throughputs per technology aggregated and is calculated as follows:

    Stat gain = (Total existing service capacity + Forecast service capacity) / Backhaul capacity
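The calculation and the backhaul action thresholds above can be sketched together; the function names and the example capacity figures are illustrative assumptions:

```python
def stat_gain(existing_mbps, forecast_mbps, backhaul_mbps):
    """Stat gain = (existing + forecast service capacity) / backhaul capacity."""
    return (existing_mbps + forecast_mbps) / backhaul_mbps

def backhaul_action(gain):
    """Map a backhaul statistical gain onto the dimensioning actions above."""
    if gain < 6:
        return "ok"
    if gain < 8:
        return "under review"
    return "upgrade"

g = stat_gain(existing_mbps=40_000, forecast_mbps=25_000, backhaul_mbps=10_000)
print(g, backhaul_action(g))  # 6.5 under review
```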

For the backhaul and core networks, current utilisation will be monitored on a monthly basis, with the statistical gain forecast on an annual basis. This will give rise to programmed capacity upgrades across the backhaul (managed and self-build) and core networks. The time to upgrade trunks across these networks is typically between 6 and 24 months depending on the upgrade involved.

To facilitate this process the parent companies must provide 12-, 24- and 36-month rolling forecasts at least twice yearly. These forecasts must detail, at a minimum:

    o Volume deployment per service type per geographic area
    o Average throughput per service type
    o Max allowable latency per service type


NSI will constantly monitor utilisation versus forecast and feed back to the parent companies. This will ensure that the capacity forecasting processes are optimised over time.

    2.6.4 Traffic engineering

As described in section 2.6.2, while all efforts will be made to ensure congestion and contention are minimised across the transmission network, in some cases they will be unavoidable. NSI must ensure, in such circumstances, that both operators have equal access to the available bandwidth. To ensure that this is the case, the following traffic engineering functions must be employed across the transmission and RAN networks:

    o QoS mapping
    o Shaping
    o Policing
    o Queue management

Quality of service is used to assign priority to certain services above others. Critical service signalling and GBR services will be assigned the highest priorities, with VBR services assigned lower priorities based on the service and/or the technology. There are large variations in the bandwidth requirements of LTE, HSPA, R99 and GPRS. If all services were assigned equal priority then, during periods of congestion, the low-bandwidth services would be disproportionately impacted, to such an extent that they might become unusable. For that reason, the low-bandwidth data services will be assigned a higher priority than those presenting very high bandwidths.

QoS, along with the queue management function, should be designed to ensure that, during periods of congestion, equivalent services from the two operators have equal access to the available bandwidth.

Table 5 details the proposed QoS mapping for all mobile RAN services.

    Traffic type                                     DSCP             L2 p-bit   MPLS queue
    Signalling, synchronisation, routing protocols   24,40,48,49,56   7          CS7 (strict)
    Speech                                           46               6          EF (strict)
    VBR streaming, GPRS data, gaming                 32,34,36,38      4          AF4 (WRED)
    R99 data                                         24,26,28,30      3          AF3 (WRED)
    HS data                                          18,20,22         2          AF2 (WRED)
    Premium internet access                          10               1          AF1 (WRED)
    LTE data                                         0,8              0          BE

    Table 5: Quality of Service mapping
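As a sketch, the classification step reduces to a per-DSCP lookup built from the mapping table; the names are illustrative. Note that the table lists DSCP 24 under both signalling and R99 data; in this sketch the signalling row is assumed to win:

```python
# Build a DSCP -> (L2 p-bit, queue) lookup from the QoS mapping table.
QOS_ROWS = [
    ((0, 8), 0, "BE"),
    ((10,), 1, "AF1 (WRED)"),
    ((18, 20, 22), 2, "AF2 (WRED)"),
    ((24, 26, 28, 30), 3, "AF3 (WRED)"),
    ((32, 34, 36, 38), 4, "AF4 (WRED)"),
    ((46,), 6, "EF (strict)"),
    ((24, 40, 48, 49, 56), 7, "CS7 (strict)"),  # last row wins for DSCP 24
]
DSCP_TO_QUEUE = {d: (pbit, q) for dscps, pbit, q in QOS_ROWS for d in dscps}

def classify(dscp):
    """Return (p-bit, queue); unmapped DSCPs fall back to best effort."""
    return DSCP_TO_QUEUE.get(dscp, (0, "BE"))

print(classify(46))  # (6, 'EF (strict)')
print(classify(12))  # (0, 'BE') - unmapped DSCP
```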

    Traffic engineering across the IP/MPLS network

    Figure 10: IP/MPLS traffic engineering. Ingress traffic flows from the core are classified to a per-hop behaviour, queued (tail drop for the strict EF/RT queues, WRED for the weighted and BE queues), shaped to a CIR/PIR and scheduled onto the trunk interface using strict priority plus WFQ.

Figure 10 describes the flow of traffic through the IP/MPLS network. On ingress from the core and access networks, traffic is classified according to its DSCP value and mapped to the required per-hop behaviour (PHB) service class. From there it is passed to the egress interface, where it is queued and scheduled based on a strict plus weighted fair queueing (WFQ) mechanism. GBR services are passed to the strict queue and VBR services are passed to a weighted fair queue, where access to the egress interface is controlled based on the service class priority. When there is no congestion, all traffic is


passed without delay. In a congested environment, GBR services are passed directly to the egress interface and the VBR services are queued, with access to the egress interface controlled by the weighted fair algorithm. Weighted Random Early Discard (WRED) is used to ensure efficient queue management: packets from data flows are discarded at a pre-determined rate as the queue fills up. By doing this, the 3G flow control and TCP/IP flow control should slow down, resulting in reduced retransmissions and more efficient use of the available bandwidth.
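The WRED behaviour can be sketched as a linear drop-probability ramp. This is a simplified illustration: real implementations work on the average queue depth rather than the instantaneous fill, and the thresholds below are assumed values, not figures from this document:

```python
import random

def wred_drop(queue_fill, min_th=0.4, max_th=0.9, max_p=0.1):
    """Decide whether to drop an arriving packet.

    queue_fill: queue occupancy as a fraction (0.0-1.0). Below min_th nothing
    is dropped; between min_th and max_th the drop probability ramps linearly
    up to max_p; at or above max_th every packet is dropped (tail drop).
    """
    if queue_fill < min_th:
        return False
    if queue_fill >= max_th:
        return True
    p = max_p * (queue_fill - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(0.2))   # False - queue almost empty
print(wred_drop(0.95))  # True - queue effectively full
```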

For enterprise services, policing on ingress will be implemented to ensure the enterprise customer is within the SLA. In such circumstances a CIR and PIR can be allocated to the customer services, with a CBS and PBS also assigned. In this case the two-rate three-colour marking (trTCM) mechanism will be used to control the flow of enterprise traffic through the network.

    Figure 11: Enterprise traffic engineering

Traffic within contract and within the CBS will be marked green; traffic greater than the CIR but within the PIR, including the PBS, will be marked yellow; all other traffic will be marked red and discarded. In congestion scenarios the WRED queue management function will discard the yellow-marked packets first.
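A minimal colour-blind trTCM sketch, following the two-token-bucket scheme of RFC 2698 (one bucket for the PIR/PBS, one for the CIR/CBS). Rates are in bytes per second and bursts in bytes; the class name and example parameters are illustrative:

```python
class TrTCM:
    """Colour-blind two-rate three-colour marker (after RFC 2698)."""

    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs, self.pir, self.pbs = cir, cbs, pir, pbs
        self.tc, self.tp = float(cbs), float(pbs)  # token buckets start full
        self.last = 0.0

    def mark(self, size, now):
        # Refill both buckets for the elapsed time, capped at the burst sizes.
        dt = now - self.last
        self.last = now
        self.tp = min(self.pbs, self.tp + self.pir * dt)
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        if self.tp < size:
            return "red"      # above PIR + PBS: discard
        self.tp -= size
        if self.tc < size:
            return "yellow"   # above CIR + CBS: first to be WRED-dropped
        self.tc -= size
        return "green"        # within contract

m = TrTCM(cir=1_000, cbs=1_500, pir=2_000, pbs=3_000)
print(m.mark(1_500, now=0.0))  # green  (within CBS)
print(m.mark(1_500, now=0.0))  # yellow (CBS exhausted, still within PBS)
print(m.mark(1_500, now=0.0))  # red    (PBS exhausted)
```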

Figure 11 illustrates this policing: the CBS allows short bursts above the CIR to be tolerated and marked green, while the PBS allows short bursts above the PIR to be tolerated rather than discarded; yellow-marked traffic is the first to be discarded in case of network congestion.

Traffic engineering across the Layer 2 Microwave network


Across the microwave network a combination of shaping, CoS-based policing, trTCM and WRED queue management should be used to ensure congestion control and fairness in terms of bandwidth contention.

    For downlink traffic, the physical interface from the IP/MPLS network must be

    shaped to the maximum bandwidth of the radio interface. This is to ensure

    that egress buffer overflow is not experienced, in particular for large bursts of

    LTE traffic. For LTE traffic, shaping per VLAN should also be implemented to

    ensure that tail links, which may be connected to feeder links and be of lower

    capacity, do not experience buffer overflow.

Note: VLAN shaping for LTE must be taken into account when designing the Layer 2 VLAN structure and Layer 3 addressing towards the H3G LTE network.

    Figure 12: Downlink traffic control mechanism

For uplink traffic, shaping should be applied on both the H3G and VFIE RAN nodes. This ensures that both operators present the same bandwidth to the transmission network for sharing.

Data traffic should be policed on ingress to the access microwave network on a per-service level. This ensures that, during periods of congestion, out-of-policy traffic from each operator is discarded first.

    (Figure 12 shows the H3G and VFIE 2G/3G/LTE flows entering the BEP1.0 over GE, with LTE traffic shaping applied per service and per port (VLAN group) in order to avoid BEP2.0 buffer overflow.)


As detailed in previous sections, the target bandwidth for RBS sites is 400 Mb/s in the city areas, 300 Mb/s in towns and 200 Mb/s for all others. Tables 6 and 7 detail the proposed policing settings for the city and non-city areas.

    Data traffic   CIR (per operator)   PIR (per operator)   Comments
    GBR services   N/A                  N/A                  No policing; green
    GPRS data      1 Mb/s               Not set              PIR will not be greater than max link capacity; out of policy = yellow
    R99 data       2 Mb/s               Not set              PIR will not be greater than max link capacity; out of policy = yellow
    HSDPA          15 Mb/s              Not set              PIR will not be greater than max link capacity; out of policy = yellow
    LTE            20 Mb/s              400 Mb/s             Contracted SLA to operator; out of policy = red

    Table 6: City area (max link capacity = 400 Mb/s)

    Data traffic   CIR (per operator)   PIR (per operator)   Comments
    GBR services   N/A                  N/A                  No policing; green
    GPRS data      1 Mb/s               Not set              PIR will not be greater than max link capacity; out of policy = yellow
    R99 data       2 Mb/s               Not set              PIR will not be greater than max link capacity; out of policy = yellow
    HSDPA          15 Mb/s              Not set              PIR will not be greater than max link capacity; out of policy = yellow
    LTE            20 Mb/s              200 Mb/s             Contracted SLA to operator; out of policy = red

    Table 7: Non-city area (max link capacity = 200 Mb/s)


All packets within the CIR and the CBS will be marked green. For 3G and HS services the PIR is not set, as it should not exceed the available link capacity, so out-of-policy packets will be marked yellow. For LTE traffic, out-of-policy traffic will be marked red and discarded.

In some cases the sum of both operators' PIRs will be greater than the available link capacity, even at maximum modulation. In this case it will be possible for both operators to peak to the maximum available capacity, but not at the same time.
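The oversubscription case can be expressed as a trivial check (the function name is illustrative):

```python
def can_peak_simultaneously(pir_op1, pir_op2, link_capacity):
    """True only when the link can carry both operators' PIRs at once."""
    return pir_op1 + pir_op2 <= link_capacity

# City link: each operator holds a 400 Mb/s LTE PIR on a 400 Mb/s link, so
# either operator can peak to 400 Mb/s, but not both at the same time.
print(can_peak_simultaneously(400, 400, 400))  # False
```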

    Figure 13: Normal link operation

Figure 13 details the operation of both policing and queue management on a microwave link. For operator 1, when the traffic presented exceeds the PIR it is marked red and discarded by policing. Where the sum of both operators' traffic does not exceed the interface PIR but exceeds the available link capacity, the WRED mechanism in the outbound queue will start discarding yellow-marked packets at a pre-determined rate based on the queue size. In this instance the 3G flow control and TCP/IP (LTE) flow control mechanisms will slow down the traffic of both operators, thus preserving the green (CIR) packets for both operators.
