TransPAC4 Award #1450904
Year 2 Quarter 4 and Annual Report 1 Dec 2015 through 30 Nov 2016
Jennifer M. Schopf, Andrew Lee – Principal Investigators
Summary

During the second project year, the TransPAC4 project completed the transition from the TransPAC3 project and focused on expanding partnerships and adding monitoring and analysis for the link. This report outlines collaborations, software and systems work, operational activities, and usage statistics for the project. It covers the period of December 1, 2015 to November 30, 2016.
1. TransPAC4 Overview

The TransPAC project supports circuits and network services between the US West Coast and Asia. During Year 2, these circuits included:
• The TransPAC 10G Circuit: a 10Gbps link between Los Angeles, California, and Tokyo, Japan. This had been the primary, NSF-funded circuit for the TransPAC3 project and carried the bulk of the project's production network traffic. This circuit was decommissioned on May 31, 2016.
• The TransPAC-Pacific Wave 100G Circuit: a 100Gbps link between Seattle, Washington, and Tokyo, Japan. This circuit was brought up in November 2015 in experimental mode for use during SuperComputing '15, and passed production tests in February 2016. In May 2016, the last of the production traffic from the TransPAC3 10G circuit shifted to this route, and it became the primary project circuit going forward for TransPAC4.
• The JGN-X Circuit: a 10Gbps layer-2 circuit, largely used for experiments and Software Defined Networking (SDN) trials. The Japan Gigabit Network Extension (JGN-X) project is a testbed funded by the Japanese National Institute of Information and Communications Technology (NICT) (http://www.nict.go.jp/en). This link is not supported by NSF funds. A backup routed peering connection between TransPAC and APAN also runs across this link. This link is expected to remain in place at least until the end of NICT's fiscal year, which is March 2017.
These circuits are used in production to support a wide variety of science applications and demonstrations of advanced networking technologies. In addition,
the TransPAC award supports tool development, SDN experimental work, measurement deployments, and security activities.
2. Staffing

At the start of Year 2, project staff consisted of:
• Jennifer Schopf, Director
• Andrew Lee, International Networks Architect
• Hans Addleman, primary TransPAC network engineer
• Predrag Radulovic, Science Engagement Specialist

During the year, Lee was made a co-PI of the project and took on additional managerial and coordination activities. In addition, Addleman took on an expanded role as security expert for the full suite of International Networks at Indiana University (IN@IU) projects.

Jointly with NetSage, we hired five summer interns, funded by the NetSage project: Abhishek Singh (MS at IU), Abhinandan Sampathkumar (MS at IU), Ayush Kohli (BS at Southern Illinois University), Tina Yu (BS at UIUC), and Sydney Lyon (BA at IU). They focused on initial prototypes of analysis tools using flow data from the LA TransPAC3 circuit. Singh and Sampathkumar continued as hourly employees after the summer and have produced some preliminary flow analysis tools, which will be adapted for use with current TransPAC4 flow data. Radulovic is overseeing the interns.

At the end of Year 2, the funded staff on the project had not changed, although the roles had shifted slightly:
• Jennifer Schopf, Director
• Andrew Lee, Project management and oversight, Architect
• Hans Addleman, primary TransPAC network engineer and security
• Predrag Radulovic, Analysis, intern advising, and science engagement

In Year 3, we will be bringing on additional staff to assist with science and research engagement.
3. Travel and Training

TransPAC staff participated in various meetings to support their roles in collaborations in Asia. For Quarters 1, 2, and 3, these included:
• Schopf, Addleman, and Radulovic attended APAN 41 in Manila, Philippines, January 24-29, 2016.
• Addleman attended NANOG 66 (https://www.nanog.org/meetings/nanog66/home/) in San Diego, CA, February 8-10, 2016.
• Lee attended the Large Hadron Collider Open Networking Environment (LHCONE) meeting and the International Symposium on Grids and Clouds (ISGC), March 13-18, in Taipei, Taiwan.
• Radulovic attended the CENIC 2016 annual meeting in Davis, CA, March 21-23.
• On April 10, Schopf and Radulovic met with the ESnet engagement team to discuss future collaborations.
• Schopf and Radulovic attended the CrossConnects Bioinformatics workshop held April 11-12, 2016, in Berkeley, CA.
• Lee and Addleman attended a Pacific Wave meeting, April 21-22.
• Chevalier, Lee, Addleman, and Schopf participated in the 2016 spring planning meeting for the perfSONAR workshop on May 10 and 11 in Bloomington, IN.
• Schopf, Lee, and Addleman attended the Internet2 Global Summit (https://meetings.internet2.edu/2016-global-summit/) in Chicago during the week of May 15.
• Lee and Schopf attended the TNC16 conference in Prague, June 12-16.
• Lee, Addleman, and Chevalier attended an Operating Innovative Networks (OIN) training in Indiana on July 12-13.
• Lee, Addleman, and Greg Boles of the GlobalNOC attended the APAN conference in Hong Kong, July 31-August 5.
• Lee and Addleman attended a meeting with representatives from APAN, KDDI, NICT, SINET, Pacific Wave, and others in Tokyo on August 8 and 9, 2016.
• Radulovic participated in a Galaxy Conference held at IU Bloomington in July 2016.
In Project Quarter 4, these included:
• Radulovic attended the LHCONE meeting held in Helsinki, Finland, on September 20 and 21, co-located with the NORDUnet conference. The meeting's focus included the push for IPv6 deployment and a performance mesh, the approval process for projects joining LHCONE, and evaluations of cloud services for potential use by the community. TransPAC is continuing support for end-user LHC applications. Radulovic spoke with Harvey Newman, Edoardo Martelli, Hsin-Yen Chen, Damir Pobric, and Michael O'Connor about ongoing projects and support. He followed up with Rob Gardner of the University of Chicago about a performance issue between the University of Chicago and Italy.
• Schopf and Addleman attended the Internet2 Technology Exchange meeting in Miami, September 25-28. Discussions took place on coordination with other Asia-Pacific groups as well as with other IRNC PIs. A Guam exchange point was also discussed.
• Lee attended the GLIF meeting September 27-30, 2016, in Miami. After watching an NSI demo involving Pacific Wave's Los Angeles switches, he discussed with our partners at Pacific Wave the possibility of deploying NSI to benefit the Seattle-Tokyo 100G circuit. He also discussed with several GLIF members continuing the Open Exchange attributes work that our group had done in the past.
• Addleman attended the CACR Cyber Security Summit in Indianapolis on September 29.
• Radulovic attended the CANS conference October 17-19. He had conversations with Jennifer An from CERNET and Jiangning Chen from CSTNET. Topics included small-node perfSONAR deployments, DTN/Science DMZ deployments, backup over TransPAC links, and a future bioinformatics workshop in Beijing in 2017.
• Schopf, Lee, Radulovic, and Chevalier attended SuperComputing 2016 in Salt Lake City, November 13-18, 2016. They held several meetings with our partners, including APAN-JP, Pacific Wave, and others, and were on hand to support the demos that NICT was conducting. Chevalier continued to support perfSONAR analysis and use as a member of the SCinet measurement team.
• Addleman attended a SANS security course November 27-30, 2016, as part of his work toward a certification.
4. Additional Collaborations
4.1 IRNC Project Collaboration

Collaboration with the IRNC AMI awardee, NetSage, is moving forward successfully, with TransPAC serving as a guinea pig for first deployments of several measurement sources, in addition to sharing SNMP and perfSONAR data. A deployment of the Tstat tool, which collects unsampled flow data, is planned for Year 3. The IRNC NOC continues to provide Tier 1 support services, including monitoring the state of the trans-Pacific circuit and the installed equipment in Seattle. The GlobalNOC continues to supply Tier 2 and Tier 3 services.
4.2 Interns

In the summer of 2016, International Networks at Indiana University hired five interns to work on netflow analysis, primarily funded by NetSage but working with TransPAC data. This included three undergraduate students, from three different universities, participating in the Summer Research Opportunities in Computing (SROC) program at IU's School of Informatics and Computing (SoIC), and two graduate students from IU. The main project they worked on jointly involved analysis of netflow data from the IRNC's TransPAC project, including data collected in 2016. The students tried out various analytics tools; the Elasticsearch, Logstash, and Kibana (ELK) stack was ultimately selected for its scalability on large datasets ("big data analytics"). Analysis included searches for the largest and longest flows, top talkers, and flow profiles for elephant flows. One of the undergraduate students also worked on pattern analysis of BGP routing table archives from the R&E community. The two MS students continued to work with the project after the summer to produce prototypes of possible data analysis tools based on in-house TransPAC and ACE data. These tools include:
• Processing netflow records that do not contain Layer 3 information (source and destination AS numbers)
• Generating flow profiles for significant (large) flows from netflow data
• Automating the quarterly Top 10 Talker traffic reports for IRNC projects based on netflow data
This analysis work will continue in Year 3.
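The first of these tools, filling in AS numbers for netflow records that lack them, amounts to a longest-prefix match of each IP address against a routing table. The following is a minimal sketch of that idea in Python; the prefix-to-ASN table, function names, and sample addresses are illustrative assumptions, not the project's actual tooling (a real deployment would build the table from a BGP RIB dump such as the routing table archives mentioned above).

```python
import ipaddress

# Hypothetical prefix-to-origin-ASN table; real data would come from a BGP RIB dump.
PREFIX_TO_ASN = {
    "192.0.2.0/24": 64500,
    "198.51.100.0/24": 64501,
    "203.0.113.0/24": 64502,
}

# Pre-parse prefixes once, longest prefix first, so the first match wins.
_PREFIXES = sorted(
    ((ipaddress.ip_network(p), asn) for p, asn in PREFIX_TO_ASN.items()),
    key=lambda item: item[0].prefixlen,
    reverse=True,
)

def asn_for(ip: str):
    """Longest-prefix match of an IP against the table; None if no route covers it."""
    addr = ipaddress.ip_address(ip)
    for net, asn in _PREFIXES:
        if addr in net:
            return asn
    return None

def enrich(flow: dict) -> dict:
    """Fill in missing source/destination AS numbers on one netflow record."""
    if flow.get("src_asn") is None:
        flow["src_asn"] = asn_for(flow["src_ip"])
    if flow.get("dst_asn") is None:
        flow["dst_asn"] = asn_for(flow["dst_ip"])
    return flow
```

A linear scan is fine for a sketch; at production scale a radix (Patricia) tree would make each lookup logarithmic in prefix length rather than linear in table size.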
4.3 External Collaborations

MOUs for the project were delayed in order to better understand Indiana University's internal processes. It appears that MOUs issued prior to TransPAC4 with only a PI signature are not valid according to university regulations. The first MOU, with NICT, was signed at APAN in August 2016. We are now in discussions with SINET, TEIN, and APAN for additional MOUs.

We have initiated planning with ESnet for two CrossConnects workshops focusing on bioinformatics. The first workshop took place in Spring 2016 in Berkeley, CA, with details in Section 3. A planned follow-on workshop is expected to take place coincident with the August APAN meeting, but held at the IU offices in Beijing. These workshops will be supported with both TransPAC funding (as part of the planned TransPAC4 science engagement work) and other NSF funding for CrossConnects workshops. Initial speakers have been identified. More information can be found at http://www.es.net/science-engagement/programs-and-workshops/crossconnects-workshop-series/crossconnects-bioinformatics/

At Supercomputing 2016, the TransPAC team collaborated with NICT to support two demos between the US and Japan. Production traffic was shifted to our existing backup path in advance of the conference, both to keep the high-demand demos from disrupting regular usage and to give the demos maximum bandwidth. The first demo involved the display of 8K video over a long distance, from the Kanagawa Institute of Technology (KAIT) to the SC venue in Salt Lake City. Another component of the demo involved motion capture of a person in a special suit at KAIT, with a similarly outfitted person at the SC venue.
The motion capture data from both was sent to a motion-capture computer generation system at StarBED in Nomi, Japan, where the inputs were combined and an 8K video was generated and sent back to the show floor, showing the two remote participants 'dancing' with each other as cartoon avatars. The data rate of the video exceeded 26Gbps, and the stream was encrypted using IPsec.

The other demo was a continuation and improvement of the ultra-high-speed data transmission protocol known as MMCFTP, developed by Dr. Yamanaka. The protocol is designed for high-speed file transfers over high-latency paths. The demo utilized both the TransPAC 100G and the NII 100G circuit that runs from Tokyo to Los Angeles. The data rate exceeded 130Gbps over the two links; in comparison, at SC'15 the previous year the data rate achieved was only 15Gbps. Figures 1 and 2 show traffic graphs for that week, and reflect the success of the demos in achieving high throughput rates despite the latency involved.
Figure 1: TransPAC-Pacific Wave 100G Circuit (NSF-funded) traffic using smoothed 9-minute averages for the week of Nov 13, 2016.

Figure 2: TransPAC-Pacific Wave 100G Circuit (NSF-funded) traffic using smoothed 9-minute maximums for the week of Nov 13, 2016.
5. Circuit Status

The TransPAC-Pacific Wave 100G circuit is now fully operational and carrying all the production traffic that was previously on the 10G TransPAC3 LA-Tokyo circuit. The 10G link was decommissioned on May 31, 2016.

We continue to have discussions with the wider community about a possible circuit to Asia via Guam. There is an opportunity for a circuit between Guam and Hong Kong, Tokyo, or Manila, and various cost-sharing collaborations for this circuit may be feasible.
6. Software and Systems Work

Software and systems work for TransPAC was shifted from the TransPAC3 project to TransPAC4. No new tools needed to be developed. Some existing tools were modified to allow monitoring of the 100G circuit where it lands on the Pacific Wave switch in Seattle. These updates allow us to collect traffic and uptime statistics even though we do not own the equipment that the circuit lands on.
7. Measurement Activities
7.1 perfSONAR

The TransPAC project supports a perfSONAR deployment in Seattle that provides periodic testing between several US and Asian sites. TransPAC participates in the IRNC mesh, available at http://data.ctc.transpac.org/maddash-webui/index.cgi?dashboard=IRNC%20Mesh . We also participate in the APAN testing matrix, http://ps2.jp.apan.net/maddash-webui/.

7.2 Flow Data

TransPAC is currently collecting sflow data in Seattle and Tokyo. De-identified versions of the data are shared with the IRNC NetSage project. Figures 3 and 4 display the top 10 talkers for flows inbound to the US, by autonomous system source and destination. Figures 5 and 6 display the top 10 talkers for flows outbound to Asia, by autonomous system source and destination.

Looking at a full year of flow data reveals that most of our traffic is spread between many different sources and destinations. Our top talkers over an entire year account for only 15-27 percent of the overall traffic. The University of Tasmania shows up as a top source for inbound traffic to the US in Year 2. This may be due to the Foundation USA project at the University of Tasmania, which encourages research between Australia and the USA by providing grants (http://www.utas.edu.au/giving/foundationusa).
Destinations for traffic coming into the US are similar to what was reported in prior quarters. We continue to see data from Asia heading across the continent to European destinations. We are working with the Asi@connect and APAN communities to make sure that efficient routing to Europe is in place.

Sources of traffic leaving the US towards Asia did not change at the yearly resolution. The main US sources continue to be the Internet Archive (which hosts numerous scientific databases and backups), NASA, and weather data. We also see some traffic from Europe. We have seen India become a top destination for traffic from the US this year, and that is reflected in the yearly top 10. A second notable destination is A-Star in Singapore, despite their new 100G link between Singapore and Internet2 in Los Angeles.
Figure 3: Top 10 talkers by autonomous system source, inbound to the US for Year 2.
Figure 4: Top 10 talkers by autonomous system destination, inbound to the US for Year 2.
Figure 5: Top 10 Talkers by autonomous system source, outbound from the US for Year 2.
Figure 6: Top 10 talkers by autonomous system destination, outbound from the US for Year 2.
8. Traffic and Downtime

There were approximately 73.5 hours of downtime on the circuit for the annual period. However, there were no unscheduled outages during the fourth quarter. Unusual extended outages in prior quarters were addressed in their respective reports.
8.1 Traffic Graphs

Figures 7 and 8 show the traffic on the TransPAC 10G circuit between Los Angeles and Tokyo during the period of Dec 1, 2015 - May 31, 2016. Figures 9 and 10 show the TransPAC-Pacific Wave 100G circuit between Seattle and Tokyo during the period of May 25, 2016 - Nov 30, 2016. The TransPAC traffic was fully shifted to the TransPAC-Pacific Wave circuit in May 2016.
Figure 7: TransPAC Los Angeles to Tokyo 10G Circuit (NSF-funded) traffic using smoothed daily averages.

Figure 8: TransPAC Los Angeles to Tokyo 10G Circuit (NSF-funded) traffic using smoothed daily maximums.
Figure 9: TransPAC-Pacific Wave 100G Circuit (NSF-funded) traffic using smoothed daily averages.

Figure 10: TransPAC-Pacific Wave 100G Circuit (NSF-funded) traffic using smoothed daily maximums.
8.2 Trouble Tickets

Table 1: Scheduled maintenance for TransPAC equipment and circuits, Dec 1, 2015 - Nov 30, 2016.

Ticket # | Customer Impact | Network Impact | Title | Maintenance Type | Source of Impact | Start Time (UTC) | End Time (UTC)
1682 | 3-Elevated | 1-Critical | Maintenance Completed - TransPAC Core Node SEAT (Code Upgrade) | Hardware | Internal | 05/06/2016 7:08 PM | 05/06/2016 7:09 PM
1702 | 3-Elevated | 1-Critical | Emergency Maintenance Completed - TransPAC Core Node SEAT | Hardware | Internal | 05/13/2016 5:14 PM | 05/13/2016 6:03 PM
1727 | 3-Elevated | 2-High | Maintenance Completed - TransPAC Backbone SEAT-TP-TOKY | Hardware | Vendor | 06/17/2016 1:01 AM | 06/17/2016 2:10 AM
1739 | 3-Elevated | 2-High | Emergency Maintenance Completed - TransPAC Backbone SEAT-TOKY | Circuit | Vendor | 07/21/2016 10:02 PM | 07/21/2016 10:04 PM
1743 | 3-Elevated | 2-High | Emergency Maintenance Completed - TransPAC Backbone SEAT-TOKY | Hardware | Vendor | 08/24/2016 1:03 PM | 08/24/2016 1:41 PM
1759 | 3-Elevated | 2-High | Emergency Maintenance Completed - TransPAC Backbone SEAT-TOKY | Software | Vendor | 09/23/2016 12:59 AM | 09/23/2016 1:01 AM
1762 | 3-Elevated | 2-High | Emergency Maintenance Completed - TransPAC Backbone SEAT-TOKY | Software | Vendor | 09/29/2016 12:20 PM | 09/29/2016 12:24 PM
1769 | 4-Normal | 2-High | Emergency Maintenance Completed - TransPAC Backbone SEAT-TOKY | Circuit | Vendor | 10/13/2016 7:22 AM | 10/13/2016 10:12 AM
1770 | 3-Elevated | 2-High | Maintenance Completed - TransPAC Backbone SEAT-TOKY | Circuit | Vendor | 10/21/2016 1:59 PM | 10/21/2016 2:11 PM
1770 (second window) | | | | | | 10/21/2016 3:42 PM | 10/21/2016 3:43 AM
1774 | 3-Elevated | 2-High | Maintenance Completed - TransPAC Backbone SEAT-TOKY | Hardware | Vendor | 10/25/2016 8:46 AM | 10/25/2016 8:47 AM
1775 | 3-Elevated | 2-High | Emergency Maintenance Completed - TransPAC Backbone SEAT-TOKY | Circuit | Vendor | 10/26/2016 4:08 PM | 10/26/2016 4:35 PM
1780 | 3-Elevated | 2-High | Maintenance Completed - TransPAC Backbone SEAT-TOKY | Circuit | Vendor | 11/20/2016 5:18 AM | 11/20/2016 6:04 AM
Table 2: Unscheduled outages for TransPAC equipment and circuits, Dec 1, 2015 - Nov 30, 2016.

Ticket # | Customer Impact | Network Impact | Title | Outage Type | Source of Impact | Start Time (UTC) | End Time (UTC)
1636 | 4-Normal | 2-High | Brief Outage Resolved - TransPAC Backbone SEAT-TOKY | Unannounced Maintenance | Vendor | 02/04/2016 6:48 PM | 02/04/2016 7:10 PM
1735 | 4-Normal | 2-High | Outage Resolved - TransPAC Backbone SEAT-TOKY | Hardware | Vendor | 07/19/2016 3:04 PM | 07/20/2016 11:50 AM
1735 (second window) | | | | | | 07/20/2016 12:18 AM | 07/21/2016 7:02 AM
1737 | 4-Normal | 2-High | Availability - TransPAC Backbone SEAT-TOKY | Undetermined | Vendor | 07/21/2016 12:47 PM | 07/21/2016 1:14 PM
1750 | 4-Normal | 2-High | Outage Resolved - TransPAC Backbone SEAT-TOKY | Unannounced Maintenance | Vendor | 09/08/2016 9:20 PM | 09/08/2016 11:30 PM
1751 | 4-Normal | 2-High | Outage Resolved - TransPAC Backbone SEAT-TOKY | Unannounced Maintenance | Vendor | 09/09/2016 4:25 AM | 09/09/2016 5:05 AM
1753 | 4-Normal | 2-High | Brief Outage Resolved - TransPAC Backbone SEAT-TOKY | Unannounced Maintenance | Vendor | 09/15/2016 12:04 AM | 09/15/2016 12:11 AM
1757 | 4-Normal | 2-High | Brief Outage Resolved - TransPAC Backbone SEAT-TOKY | Hardware | Vendor | 09/19/2016 2:21 PM | 09/19/2016 2:22 PM
8.3 Downtime and Availability

During Year 2, there were seven unscheduled outages and twelve scheduled maintenances. These are detailed in the corresponding quarterly reports. Table 3 shows the reported downtime for core nodes on the project. Table 4 lists the downtime for the project's circuits.
Table 3: Downtime and availability for TransPAC core nodes.

TransPAC Core Node | Down Time | Reporting Period Availability | 52 Week Availability
TransPAC MX480 - LA | 0 hr 0 min | 100.00000% | 100.00000%
Brocade MLXe4 | 0 hr 50 min | 99.99051% | 99.99054%
OOB Router | 0 hr 0 min | 100.00000% | 100.00000%
Table 4: Downtime and availability for TransPAC circuits.

TransPAC Backbone Circuit | Down Time | Reporting Period Availability | 52 Week Availability
TP-LOSA-TOKY-10GE-1 | 0 hr 0 min | 100.00000% | 100.00000%
TP2-LOSA-LOSA-100GE-01521 | 0 hr 0 min | 100.00000% | 100.00000%
TP2-LOSA-LOSA-10GE-01520 | 0 hr 0 min | 100.00000% | 100.00000%
TP2-SEAT-TP-TOKY-100GE-01522 | 73 hr 27 min | 99.16382% | 99.16610%
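The availability percentages in Tables 3 and 4 follow from availability = 1 - downtime / period length. The reporting period, Dec 1, 2015 through Nov 30, 2016, spans 366 days (2016 was a leap year); a quick check, sketched below, reproduces the tabulated reporting-period values for the two non-zero downtime rows.

```python
# Reporting period Dec 1, 2015 - Nov 30, 2016: 366 days (2016 is a leap year).
PERIOD_MINUTES = 366 * 24 * 60  # 527,040 minutes

def availability(down_hours, down_minutes, period_minutes=PERIOD_MINUTES):
    """Percent availability given total downtime over the period."""
    downtime = down_hours * 60 + down_minutes
    return 100 * (1 - downtime / period_minutes)

# SEAT-TOKY 100G circuit, 73 hr 27 min down (Table 4)
print(f"{availability(73, 27):.5f}%")  # prints 99.16382%
# Brocade MLXe4, 0 hr 50 min down (Table 3)
print(f"{availability(0, 50):.5f}%")   # prints 99.99051%
```

The 52-week columns use a slightly different (364-day) window, which is why they differ in the fifth decimal place.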
9. Security Events and Activities

Basic security measures are being maintained, and there were no security incidents to report for this quarter. After a meeting with CACR staff members Craig Jackson and Von Welch, we started drafting the necessary security documents. These will include a Master Information Security Policy and Procedures document, a Network AUP, a Netflow and Data Privacy statement, an Information Classification and Inventory, and an Incident Response document. We will be reviewing these documents with CACR over the next six months. In addition, because we realized we needed more in-house security knowledge, Addleman began work toward a Security Leadership certification (http://www.giac.org/certification/security-leadership-gslc).
10. Reporting against Objectives for Year 2, Planning for Year 3

The following is from the Work Breakdown Structure (WBS) for Years 1 and 2. Bulleted items are status updates for the indicated WBS items. Additional WBS items are also indicated.

1.1 Planning / Coordination Year 1

1.1.1 Renew current 10G circuit - Negotiate to renew the current TransPAC3 Los Angeles to Tokyo 10G circuit.
• COMPLETED: The circuit was extended through May 2016 and decommissioned after that.
1.1.2 Research best new paths and end points - Work with partners in both the Asia-Pacific and United States regions to determine appropriate end points for a circuit landing in Seattle.
• COMPLETED: The TransPAC-Pacific Wave 100G circuit runs from Tokyo to Seattle. Year 2 brought production use of the circuit. Additional circuits will be sought.
1.1.3 Start partner MOU process - Contact partners and start the process of signing a Memorandum of Understanding with each.
• ONGOING: A delay was experienced due to the IU process, but the NICT MOU was signed. Year 3 will see the signing of NII, APAN, and TEIN MOUs in the first two quarters.
1.1.4 Form TransPAC External Advisory Council populated by partner and support organizations.
• CANCELED: It was decided it would be more productive to work within existing groups; we are also getting feedback from Lassner's Guam meeting team, which meets this requirement.
1.2 Planning / Coordination Year 2

1.2.1 Evaluate circuit capacity and community needs. Negotiate with vendors and partners for new circuits as capacity demands grow. Phase 2 planning.
• ONGOING: Discussions of a possible circuit via Guam took place with vendors and R&E partners; additional discussion and an RFP to follow in Year 3.
1.2.2 Finish partner MOUs - Finish the process of signing a Memorandum of Understanding with each partner.
• ONGOING: After a delay due to the IU process, this will be a focus of Year 3.

1.3 Planning / Coordination Ongoing

1.3.1 Evolve network architecture - New network designs over the evolution of the five-year award. This will include 100G circuit speeds, software defined networking / exchanges, possible new peering points, and greater than 10G flows.
• ONGOING: We expect to issue an RFP for a second circuit, possibly based in Guam, in Project Year 3. Discussion is ongoing for possible Open Exchange Points in Hong Kong and Guam.
1.3.2 Coordinate with IRNC:NOC winner - Coordinate with the IRNC:NOC awardee to ensure they have a sufficient and appropriate level of access to all of the TransPAC4 equipment supporting international activities. This includes appropriate logs, SNMP access, portal or login access to obtain data not available via SNMP, etc.
• SETUP COMPLETED: The IRNC NOC took over responsibility for TransPAC in January 2016; Tier 2 and 3 support shifted to TP4 in May 2016.
1.3.3 Coordinate with IRNC:AMI winner - Coordinate with the IRNC:AMI awardee for the appropriate distribution of flow data, per our own security and data policies, and SNMP and other access as appropriate.
• ONGOING: TransPAC is the first backbone to share measurement data, specifically SNMP and perfSONAR data, with NetSage. TransPAC is also acting as a testbed for the Tstat tool setup NetSage is investigating.
1.3.4 Overall Management of the project
• ONGOING Meetings continue almost quarterly with project partners at conferences such as APAN, TNC, and Internet2’s Global Summit and TechX.
1.3.5 Project Reporting -‐ Report generation for the life of the project
• ONGOING: The Project Execution Plan was updated as part of the Y1 annual report; reporting infrastructure is in place for more up-to-date quarterly reporting; the WBS update is part of this report.
1.3.6 Documentation and dissemination
• ONGOING: Website refresh in progress.

1.3.7 Security plan for project
• ONGOING: Working with the CACR team to develop standard forms; draft documents will be ready in Year 3, and follow-on meetings will take place.
2.2 Outreach Year 2

2.2.1 Analyze usage data developed during TransPAC3 to identify geoscience/bioinformatics researchers. Leverage our TransPAC4 partners to provide support and, if possible, connectivity for these researchers.
• ONGOING: Altered from the original genomics focus to include bioinformatics, in part in support of the CrossConnects workshops with ESnet.
2.3 Outreach Year 3

2.3.1 Coordinate with network partners to extend SDN/SDX to 100G circuits
• Planned Year 3
2.3.2 Analyze current Geoscience network traffic and reach out to possible new network users
• Initial work started; will be extended in Year 3.

2.3.3 Evangelize Path Hinting
• Delayed due to postponement of path hinting research.

2.6 Outreach Ongoing

2.6.1 Attend domestic and international conferences for application identification and relationship maintenance
• ONGOING; completed this year:
o Pacific Telecommunications Conference (PTC), Hawaii, January 2016
o APAN 41, Manila, January 2016
o CrossConnects Bioinformatics workshop, Berkeley, CA, April 2016
o Internet2 Global Summit, Chicago, May 2016
o TNC16, Prague, June 2016
o APAN 42, Hong Kong, August 2016
o Internet2 Technology Exchange, Miami, September 2016
o GLIF, Miami, September 2016
o SuperComputing '16, Salt Lake City, November 2016
2.6.2 Coordinate connectivity with existing and new TransPAC partners
• ONGOING: Meetings at APAN, TNC, and Internet2 conferences.

2.6.3 Ensure connectivity in support of the Large Hadron Collider
• ONGOING: Attendance at LHC meetings.

2.6.4 Ensure connectivity in support of Belle II
• ONGOING: Conversations continue at meetings about mutual backup possibilities.

2.6.5 Coordinate with network partners and researchers to support large flows
• ONGOING: Will be aided by the flow data analysis tools being developed.

2.6.6 Explore additional application communities
• ONGOING: A focus of Year 3.

2.6.7 Identify and contact US branch campuses in the Asia-Pacific region
• Planned for Year 3.

3.1 Operations

3.1.1 Analyze TransPAC flow data in support of research and operations. Develop policy and plan for anonymizing and storing data. Provide data to researchers as requested.
• COMPLETED -‐ infrastructure in place; analysis being performed jointly with NetSage
3.2 Operations Year 2

3.2.1 Integrate TransPAC3 SDN Controller - Work with systems engineers to transition the TransPAC3 SDN controller into the TransPAC4 network.
• PLANNED: Year 3.

3.2.2 Deploy SDN DDoS Solution - Deploy the SDN-based DDoS mitigation solution developed in TransPAC3.
• PLANNED: Year 3.

3.2.3 Evaluate and update existing POPs and equipment - Evaluate and install new points of presence and equipment as community demands expand and change.
• ONGOING: See the discussion in Section 5 of additional circuits and OXPs.

3.2.4 Deploy Path Hinting service into the TransPAC4 routers and work with partners, connectors, and peers to adopt the service.
• DELAYED: Due to lack of need, this will shift to an evaluation in Year 3.

3.3 Operations Year 3

3.3.1 Evaluate and deploy new circuits
• ONGOING: In the original proposal, a 100G circuit would be deployed in Year 3, but this is already present.

3.5 Operations Ongoing

3.5.1 Refine network measurement and monitoring data - Refine network telemetry and make it useful to researchers and the IRNC:NOC. This will include creating public web pages and repositories that provide easy access to data.
• ONGOING: Coordinating with the IRNC NOC.

3.5.2 Tune and support large flows - Monitor large flows across the network and work with researchers to fine-tune the end points and the entire path. Work with researchers to ensure performance is as expected.
• ONGOING

3.5.3 Deploy support and telemetry for large flows - Work with partners to configure and allow for large flows across the TransPAC4 network. Work with systems to deploy monitoring solutions for large flows.
• ONGOING
3.5.4 Operate Infrastructure; Pay for circuit, port, maintenance, and hardware costs.
• ONGOING
4.1 Research / Experimentation Year 1

4.1.1 SDN for DDoS mitigation - Research the feasibility of using SDN technologies for detection and mitigation of DDoS attacks on the TransPAC network.
• ONGOING: Continues in Year 3.

4.2 Research / Experimentation Year 2

4.2.1 Test larger than 10G flows - Test network equipment, configuration, and support for greater than 10G flows.
• DELAYED: Until Year 3; not needed by applications until that time frame.

4.2.2 Path Hinting deployment for testing, experimentation, and running community demonstrations.
• DELAYED: Until Year 4.

4.2.3 SDN for Network Measurement and Monitoring - Use SciPass and OpenFlow as a load balancer to an intrusion detection system cluster or netflow cluster.
• CANCELED: This approach was evaluated in Year 2 and judged sub-optimal. An approach using a network splitter has proven to be more efficient.

4.2.4 WAN Acceleration - Work with the Phoebus project to do WAN acceleration experimentation.
• DELAYED/CANCELED: Due to lack of contact with Swany throughout Year 1 despite numerous attempts, it is likely this portion of the project will need to be rescoped.

4.2.5 Undergrad Research Project - Work with 1-2 undergraduate students to form a research project of their choosing.
• COMPLETED: Three undergraduates spent summer 2016 working with us on analysis of TP3 data.

4.3 Research / Experimentation Year 3

4.3.1 SDN at 100G
• PLANNED for Year 3.

4.3.2 Evaluate SDN in an Internet Exchange environment
• ONGOING

4.3.3 WAN acceleration
• CANCELED due to lack of response from the Phoebus group.

4.3.4 Undergrad Research projects
• PLANNED for Year 3.