
Using Process-Level Packet Routing to Support Real-Time Data Visualization: A Case Study in Antarctica

Laura Connor and Kay A. Robbins
Division of Computer Science
University of Texas at San Antonio
San Antonio, TX 78249-0667
{lconnor, krobbins}@cs.utsa.edu

Presenting Author: Laura Connor
Corresponding Author: Kay A. Robbins, (210)458-5543, FAX (210)458-4437

Abstract— Minimizing latency and jitter is an important goal for communication in many distributed multimedia and visualization applications. This case study examines the impact of user-level packet routing on performance in a distributed system for controlling and monitoring the acquisition of geophysical data during aerial surveying in Antarctica. The packet routing mechanism employs concurrent processes and multiple display proxies communicating asynchronously via message passing and message queues. Asynchronous communication between processes reduces process blocking so that more data are available for real-time analysis. Performance tests indicate that proficient routing results in high packet throughput with low packet latency and jitter at multiple visualization stations during data acquisition.

Our application routes packets of disparate sizes (73 - 12,386 bytes) at fixed rates (4 - 116 Hz). Performance metrics for individual packets (arrival times, travel intervals, drop times, drop intervals) reveal unexpected transfer anomalies. Large-sized packets show frequent periodic packet loss anomalies as packet transfer rates approach maximum system load. Intermittent losses of data stream synchrony due to small perturbations often result in large bursts of lost packets. These anomalies suggest that packet routing schemes operating near system capacity should be self-monitoring and possibly self-dampening when instabilities are detected.

Keywords— packet routing, visualization, quality of service, data acquisition, performance, geophysics

I. INTRODUCTION

Current computing trends are moving away from single-user, single-computer environments toward distributed environments supporting multiple users and real-time data visualization during data acquisition [1]. One example of a collaborative distributed effort is Georgia Tech's Distributed Laboratory Project [2]. Their complex set of systems creates a dynamic interactive environment supporting visualization and multiple computational instruments in a distributed network setting. Such laboratories allow scientists to interactively use real-time visualization for troubleshooting, monitoring, parameter modification, and experimental steering, making real-time visualization critical to the success of scientific experiments and simulations. These activities take on increased importance when science moves from the controlled environment of the laboratory into the field. Fieldwork is often performed in harsh environments, where scientists must contend with natural elements and the usual vagaries of hardware and software. Often such work must be performed in a predetermined time window with limited options for reconfiguration or equipment replacement. The role of visualization in diagnosis increases in value in such environments.

This paper describes an application in such an environment: aerial geophysical surveying in Antarctica [3] by the Support Office for Aerogeophysical Research (SOAR) [4]. Antarctic surveying is characterized by extreme weather conditions and narrow time windows for data collection. Faults in data acquisition that are not diagnosed in the field result in lost data that cannot be recovered later. Better real-time visualization tools reduce the number of people needed to acquire data and lessen the human impact on this fragile environment.

The SOAR platform (Fig. 1) is a specially outfitted Twin Otter containing nine data-collecting instruments and six computers that are located throughout the plane. Multiple incoming data streams generated by instrumentation such as ice-penetrating radar and a magnetometer are captured and archived by a central computer that also routes the data to real-time visualization stations located throughout the plane (Fig. 2). All of the computer systems run QNX, a real-time variant of Unix that supports real-time scheduled and distributed communication in a global namespace [5]. This distributed configuration, with multiple visualization stations sharing the same real-time data, requires a cohesive communication infrastructure.


Fig. 1. The Twin Otter survey aircraft (top). Two onboard equipment racks (right two columns). The magnetometer (left).

The programming challenge posed by such distributed systems is to optimize interprocess communication and data routing during data acquisition without adversely impacting real-time visualization and data processing. In many monitor-type activities, receiving every packet is not as important as receiving packets consistently, so quality of service rather than loss-free delivery is the relevant metric.

This paper presents a case study of the aerial acquisition system and its accompanying visualization displays used by SOAR. A performance analysis shows that carefully designed user-level packet routing and monitoring can significantly enhance throughput and reduce jitter for visualization while eliminating the need for costly equipment upgrades. Different factors affect the efficiency of process-level routing, but reducing process blocking is a key step towards better real-time performance.

Fig. 2. Schematic of the onboard data acquisition system: serial and radar data inputs flow through the data acquisition system and its process-level packet routing to the real-time visualization stations.

II. THE ROUTING SYSTEM

The current project was undertaken because the original survey system could not fully display the data being acquired and exhibited irregular bursts of activity during which the distributed visualization displays stalled. Because upgrading the system was not an easy alternative, the bottlenecks in the existing software were analyzed. The original acquisition system consisted of three processes: Spool, Grouper, and Router (Fig. 3a). Spool records data to disk and initiates data packet routing. Grouper provides temporary packet buffering, and Router distributes data packets to remote visualization stations. The three processes communicate by message passing.

Grouper alternates between waiting for packets from Spool and checking for a buffer request from Router. Spool waits for a packet from the data acquisition system, records that packet to disk, and then responds to Grouper's request with the current packet, which Grouper buffers. Router waits for a complete packet buffer from Grouper and then responds to four rounds of data packet requests from the remote display computers. Both Grouper and Router are blocked from engaging in other communication during buffer transfers.

Fig. 3. Process interactions within the original (a) and new (b) packet routing systems.
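To make the blocking structure concrete, the following C sketch mimics Grouper's alternation. It is an illustration rather than the SOAR source: the queue names and packet layout are invented, and POSIX message queues stand in for the native QNX message-passing primitives, but the essential property is the same: while Grouper is receiving from Spool or copying a buffer out to Router, it cannot service the other side.

/* grouper_sketch.c -- illustrative only; queue names and packet layout are
 * assumptions.  POSIX message queues stand in for QNX message passing to
 * show the blocking alternation of the original Grouper process. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdlib.h>

#define PKT_MAX  12386              /* largest (radar) packet in the study  */
#define BUF_PKTS 4                  /* packets buffered before each handoff */

struct packet { unsigned seq; char data[PKT_MAX]; };

int main(void)
{
    /* Queues are assumed to have been created elsewhere with mq_open(O_CREAT). */
    mqd_t from_spool = mq_open("/spool_to_grouper",  O_RDONLY);
    mqd_t router_req = mq_open("/router_request",    O_RDONLY);
    mqd_t to_router  = mq_open("/grouper_to_router", O_WRONLY);
    if (from_spool == (mqd_t)-1 || router_req == (mqd_t)-1 || to_router == (mqd_t)-1)
        exit(1);

    static struct packet buf[BUF_PKTS];
    char req[128];

    for (;;) {
        /* Phase 1: block until a full buffer of packets has arrived from
         * Spool.  No other communication is serviced during this phase.   */
        for (int i = 0; i < BUF_PKTS; i++)
            mq_receive(from_spool, (char *)&buf[i], sizeof buf[i], NULL);

        /* Phase 2: block until Router requests the buffer, then block again
         * while the whole buffer is copied out.  Packets from Spool must
         * wait, or are lost if the incoming queue overflows.               */
        mq_receive(router_req, req, sizeof req, NULL);
        for (int i = 0; i < BUF_PKTS; i++)
            mq_send(to_router, (const char *)&buf[i], sizeof buf[i], 0);
    }
}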

A new packet routing system was designed and implemented to increase network throughput without compromising the acquisition and storage of data. To accomplish these objectives, bottlenecks in the original system were analyzed and process blocking was eliminated where possible. Two asynchronously communicating processes, Spool and Coordinator, regulate information flow in the new system (Fig. 3b). Disk-writer, a separate child process of Spool, writes data packets to storage. Grouper has been replaced with a message queue (mailbox).

Although weaknesses in the original packet routing scheme are readily apparent, it was not clear at the beginning of the project what impact an improved routing design would have on actual system performance. The original Router uses a blocking copy during packet transfers with the remote displays. In contrast, communication between the new Coordinator and the remote stations is handled by display proxies. Each display proxy is implemented as a thread within the Coordinator with its own mailbox to buffer data packets for its remote client. Coordinator keeps a record of the packet types needed by each display and notifies the appropriate display proxies when a packet of a particular type arrives. While the new routing scheme uses packet copying (from the initial mailbox to the mailboxes of the individual display proxies), the extra copies could be eliminated by incorporating a smart buffer scheme that passed pointers. However, the subsequent analysis shows that this improvement was not needed to achieve the desired levels of performance.

The new packet routing scheme overlaps data recording with data routing and eliminates process blocking where possible. Display proxies control packet flow through their mailboxes, allowing each remote station to display at its own rate. When a display proxy must drop packets because the remote display has fallen behind, the proxy can adapt its dropping to the type of remote display it is supporting. These proxies monitor routing performance by measuring packet throughput and latency for this traffic. Dropped packets are counted and linked with their associated mailbox, allowing real-time monitoring of routing processes and packet flow.
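A minimal C sketch of this proxy/mailbox pattern follows. It illustrates the mechanism described above rather than reproducing the SOAR implementation: the mailbox is a fixed-size ring protected by a mutex, the Coordinator's notification is a condition-variable signal, and a full mailbox results in an immediate, counted drop instead of blocking the Coordinator. The structure names and sizes are assumptions.

/* proxy_mailbox_sketch.c -- illustrative sketch of a display-proxy mailbox.
 * A bounded ring buffer per proxy lets the Coordinator hand off packets
 * without blocking; when a remote display falls behind, packets are dropped
 * and counted instead of stalling the rest of the system. */
#include <pthread.h>

#define MBOX_SLOTS 64
#define PKT_MAX    12386

struct packet { unsigned seq; unsigned type; char data[PKT_MAX]; };

struct mailbox {                      /* lock and nonempty are initialized   */
    struct packet   slot[MBOX_SLOTS]; /* with pthread_mutex_init and         */
    int             head, tail, count;/* pthread_cond_init at startup        */
    unsigned long   drops;            /* per-mailbox drop counter            */
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
};

/* Called from the Coordinator when a packet of a type this display wants
 * arrives: never blocks.  A full mailbox means the remote display has
 * fallen behind, so the packet is dropped and attributed to that mailbox. */
void mailbox_post(struct mailbox *m, const struct packet *p)
{
    pthread_mutex_lock(&m->lock);
    if (m->count == MBOX_SLOTS) {
        m->drops++;
    } else {
        m->slot[m->tail] = *p;
        m->tail = (m->tail + 1) % MBOX_SLOTS;
        m->count++;
        pthread_cond_signal(&m->nonempty);    /* notify the display proxy */
    }
    pthread_mutex_unlock(&m->lock);
}

/* Display-proxy thread body: waits for notification, then serves its
 * remote display at whatever rate that display can sustain. */
void *display_proxy(void *arg)
{
    struct mailbox *m = arg;
    struct packet p;
    for (;;) {
        pthread_mutex_lock(&m->lock);
        while (m->count == 0)
            pthread_cond_wait(&m->nonempty, &m->lock);
        p = m->slot[m->head];
        m->head = (m->head + 1) % MBOX_SLOTS;
        m->count--;
        pthread_mutex_unlock(&m->lock);
        /* ... transmit p to the remote visualization station ... */
    }
    return NULL;
}

Because mailbox_post never blocks, a stalled display can no longer hold up Spool, the Coordinator, or the other displays; it simply accumulates drops in its own counter, which corresponds to the per-mailbox accounting described above.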

III. PERFORMANCE TESTING

To compare the performance of the two routing designs, a laboratory testbed was set up and configured using hardware identical to SOAR's real-time monitoring setup for Antarctic surveying. The main acquisition system was a 50 MHz 486 computer with 16 Mbytes of RAM. Three 233 MHz Pentium computers were used as display nodes. During performance testing one display requested only radar packets, 12,386 bytes in length. The other two nodes each requested all serial packets, ranging in size from 73 to 235 bytes.

On board the aircraft, instruments generate fixed-size packets at a constant rate. To accurately simulate data collection, a low-overhead packet generator process was written. QNX timer structures simulated data transmissions to Spool at individual instrument rates. To eliminate extraneous disk and network effects, a single packet from each instrument was kept in memory. Each time a packet was transmitted, its sequence number field was incremented. Unused packet fields were used to store timing information as each copy of the packet traversed the system.
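The generator can be approximated with a simple paced loop per simulated instrument. The sketch below uses nanosleep-based pacing in place of the QNX timer structures mentioned above, and the packet layout (a sequence number plus a spare timestamp field) is an assumption; it only illustrates the fixed-rate, in-memory replay scheme.

/* packet_generator_sketch.c -- fixed-rate replay of one canned packet.
 * One packet per instrument is kept in memory; on each transmission its
 * sequence number is bumped, a timestamp is written into a spare field,
 * and the copy is handed to the routing system. */
#include <stdint.h>
#include <time.h>

struct packet {
    uint32_t seq;           /* incremented on every transmission           */
    uint64_t t_generated;   /* spare field reused to carry timing data     */
    char     payload[235];  /* e.g., the largest serial packet in the test */
};

static uint64_t now_usec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}

/* Transmit `pkt` forever at `rate_hz`.  send_to_spool() stands in for
 * whatever IPC delivers packets to the acquisition process. */
void generate(struct packet *pkt, int rate_hz,
              void (*send_to_spool)(const struct packet *))
{
    struct timespec period = { 0, 1000000000L / rate_hz };
    for (;;) {
        pkt->seq++;                     /* sequence gaps reveal drops later */
        pkt->t_generated = now_usec();  /* lets receivers compute latency   */
        send_to_spool(pkt);
        nanosleep(&period, NULL);       /* fixed-rate pacing                */
    }
}

Pacing with nanosleep drifts slightly over long runs; a true periodic timer, like the QNX timers used on the testbed, avoids that, but the replay idea is the same.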

Performance testing began by transmitting data packets at the acquisition rates currently employed by the SOAR project. Rates for radar and serial packets were increased separately, and throughput, latency, and packet drops were measured at the three test visualization nodes. A prerequisite for the maximum transmission rate was that all transmitted packets must be recorded to disk by both routing systems. The original packet routing code was modified slightly to record within each packet its arrival time at Spool so that the visualization stations could calculate packet latency. Display programs recorded statistics instead of doing data visualization. These programs were identical for each routing system except for setup messages sent before packet routing actually began. The action of sending a request and receiving a data packet reply was consistent during all performance runs. Performance statistics were based on data packets actually received by the display nodes. Only the process-level packet routing mechanism differed during testing.
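On the receiving side, the per-packet metrics reduce to arithmetic on the embedded timestamp and sequence number. The sketch below is a hedged illustration, assuming the packet layout from the generator sketch and a common time base between Spool and the display nodes; clock skew between machines would have to be addressed in a real deployment.

/* display_stats_sketch.c -- per-packet statistics at a display node.
 * Latency is arrival time minus the timestamp stamped into the packet at
 * Spool; drops are inferred from gaps in the sequence numbers. */
#include <stdint.h>
#include <stdio.h>

struct packet { uint32_t seq; uint64_t t_spool; /* ... payload ... */ };

struct stats {
    uint32_t last_seq;
    uint64_t received, dropped;
    double   lat_sum, lat_sq_sum;   /* for mean and variance of latency */
};

void record_packet(struct stats *s, const struct packet *p, uint64_t t_arrival_usec)
{
    double lat_ms = (double)(t_arrival_usec - p->t_spool) / 1000.0;

    if (s->received > 0 && p->seq > s->last_seq + 1)
        s->dropped += p->seq - s->last_seq - 1;   /* single drop or burst */
    s->last_seq = p->seq;

    s->received++;
    s->lat_sum    += lat_ms;
    s->lat_sq_sum += lat_ms * lat_ms;
}

void report(const struct stats *s)
{
    if (s->received == 0)
        return;
    double mean = s->lat_sum / (double)s->received;
    double var  = s->lat_sq_sum / (double)s->received - mean * mean;
    printf("received=%llu dropped=%llu mean=%.1f ms variance=%.1f\n",
           (unsigned long long)s->received, (unsigned long long)s->dropped,
           mean, var);
}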

IV. RESULTS

We compared the new process-level packet routing mechanism with the original mechanism under varying packet transmission rates. The maximum expected rate is drawn as a black dashed line in Figs. 4 and 5. The two nodes receiving serial packets usually maintained identical rates and were plotted as a single point; when their rates differed, they were averaged.

Since radar and serial data streams are acquired independently, all test runs held the rate of one packet type fixed while varying the other. SOAR's current setup acquires large radar packets at a 4 Hz rate and small serial packets at 34 Hz. Performance tests used these rates as the initial and fixed rates. An increased radar rate could occur if the current low-resolution radar were replaced with a higher-resolution system; an increased serial packet rate could occur if additional instruments were added or if current sampling rates were increased.

Fig. 4. Radar (upper) and serial (lower) packets received per second as the radar transmission rate is increased from 4 Hz to 16 Hz, with the serial rate fixed at 34 Hz. Each panel compares the new and original routing systems against the expected rate and marks the current system configuration.

As radar packet transmissions increase from 4 Hz to 16 Hz, the redesigned routing system transfers the maximum number of serial and radar packets until radar transmissions exceed 12 Hz (Fig. 4). The original system routes approximately 30% of all serial packets, and this rate goes down as the transmission rate increases. The original system routes a greater percentage of radar packets, but it cannot maintain the maximum expected rate as greater numbers of packets are transmitted.

As serial packet transmissions increase from 34 Hz to 116 Hz, the number of received packets at the display nodes shows more variability (Fig. 5).

Both systems could successfully route almost all radar packets at the initial serial rate and at 56 Hz. The upper boundary in Fig. 5 represents the best throughput rate observed at the specified generation rate, while the lower boundary represents the worst. For the new system, this variability is due to changing the run priorities of the display proxies to favor delivery of radar packets. The high variability of received radar packets using the original system is due to inherent instability within the original routing mechanism, discussed later. The original system reaches its maximum serial throughput at 56 Hz. In contrast, the new packet routing system rapidly loses its performance advantage above the 86 Hz transmission rate. This performance loss is not observed in the original packet routing system because its serial packet loss is consistently large even at the lowest transmission rates.

Fig. 5. Radar (upper) and serial (lower) packets received per second as the serial transmission rate is increased from 34 Hz to 116 Hz, with the radar rate fixed at 4 Hz. High and low curves bound the observed throughput for each routing system.

Packet latency is another indicator of performance in real-time visualization applications. In addition to a significant number of lost packets, the original scheme shows a large variance in packet latencies during performance runs (Fig. 6).

The uneven arrival rate makes smooth visualization nearly impossible to achieve. For example, even a simple strip-chart visualization of serial data appears to jump and jerk and may prevent an operator from seeing timing relations among the data streams.

Fig. 6. Distribution of packet latencies (percentage of packets received versus latency in milliseconds, including a no-show bin) at the three remote nodes (CAM, RAD, and RTQC) using the new (a) and original (b) packet routing systems, at a 4 Hz radar rate and a 34 Hz serial rate.

The new packet routing system consistently generates a normal distribution of packet latencies, with the central tendency values (mean, mode, and median) relatively close to one another. Packet latencies for the original system are not normally distributed, and the central tendency measurements for those runs confirm the non-normal distribution. For the new packet routing system, variance in packet latency increases exponentially with increasing transmission rates (except for radar packet latencies when a higher thread priority is used for transferring radar packets). The variances calculated when using the original packet routing system are always high.

In addition to collecting overall latency and throughput statistics, individual packets were time stamped and their latencies recorded at the test display stations. These statistics produced a more detailed view of the instabilities observed at the higher transmission rates. For example, radar throughput using the original routing system oscillated between 50% and 100% when radar packets were transmitted at 4 Hz and serial packets were transmitted at 86 Hz (Fig. 5, top graph).

A more detailed view of this unstable behavior shows packet latency as a function of arrival time during two performance runs using the original routing system and identical transmission rates (Fig. 7a,b). As expected, a pattern of slowly increasing latency repeats as packets build up in Grouper and Router falls behind. When Router's buffer is replaced with Grouper's buffer, the latency drops as Router is effectively restarted. What was completely unexpected in these results is the variability in the period of buildup from run to run under exactly the same experimental conditions. The buildup and restart occur about 24 times a minute in one run (Fig. 7a) and twice a minute in the other run (Fig. 7b). Furthermore, during the second run, no radar packets at all were received during two intervals of the 5-minute run. The same data stream run under the new packet routing scheme shows much more stochastic packet latencies, resulting in a smoother visual appearance at the remote displays (Fig. 7c).

Fig. 7. Three performance runs using identical transmission rates (86 Hz serial, 4 Hz radar) show very different packet transfer patterns and latencies: original routing system (a)(b), redesigned routing system (c).

Fig. 8. Packet transfer patterns across an Ethernet (16 Hz radar, 34 Hz serial) show periodic bursts of lost packets as packet latency suddenly increases. The lower graph zooms in on a small anomaly in the upper graph.

Another issue explored was the effect of thread priority on throughput and latency.

When the display proxy transferring radar packets was allowed to execute at a higher priority, throughput measured at the corresponding display station was greater and radar packet latency was reduced. For example, giving radar packets higher priority decreased the average radar packet travel time from 180 ms to 50 ms. Concurrent with the increased radar throughput was a decrease in throughput at the stations collecting serial packets, and packet latency for serial packets also increased slightly. Since radar packets are at least 50 times larger than any serial packet, the cost of holding radar travel time below 100 ms is extracted from serial packet performance. When a machine is running at its limit, user-adjustable priorities can optimize performance for particular segments of the data stream.
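As a concrete illustration of this trade-off, the radar proxy can simply be started at a higher scheduling priority than the serial proxies. The sketch below uses the POSIX pthread scheduling interface as a stand-in for the QNX priority mechanism actually used on the survey system; the priority values and the display_proxy function (from the earlier mailbox sketch) are illustrative.

/* priority_sketch.c -- start a proxy thread at an explicit run priority.
 * POSIX real-time scheduling is used here as a stand-in; the survey system
 * itself sets priorities through QNX's own interface. */
#include <pthread.h>
#include <sched.h>

int start_proxy(pthread_t *tid, void *(*body)(void *), void *mailbox, int prio)
{
    pthread_attr_t attr;
    struct sched_param sp = { .sched_priority = prio };

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    pthread_attr_setschedparam(&attr, &sp);
    return pthread_create(tid, &attr, body, mailbox);
}

/* Illustrative usage: favour the large radar packets at the expense of the
 * serial streams when the acquisition machine is running at its limit.
 *
 *   start_proxy(&radar_tid,  display_proxy, &radar_mbox,  20);
 *   start_proxy(&serial_tid, display_proxy, &serial_mbox, 10);
 */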

Visualization of the time series highlights another routing phenomenon. At the beginning of two similarly spaced intervals, radar packet latency suddenly increases. During this increase in latency, a burst of radar packets is dropped during packet transfers across the Ethernet. A close-up plot of a small anomaly shows three packet streams coexisting until some perturbation causes the interval pattern to lose synchrony, resulting in a burst of dropped packets (Fig. 8). This anomaly is also evident for the smaller serial packets, but only at high serial packet acquisition rates.

Recording packet drops allows packet drop anomalies to be identified. Drops were not recorded while comparing the two routing systems because the extra disk writes put an additional load on the new system. Nevertheless, when a time series plot that includes dropped packets is compared with one that omits them, the same type of latency anomaly within the packet arrival pattern signals similar bursts of dropped packets. A time series plot overlaying packet drops with packet arrivals reveals two types of packet dropping. Packets are dropped uniformly from the first mailbox at the onset of packet routing, when transmission rates are too high for the routing system to handle. The display proxies drop their packets in bursts, at the second mailbox, when the packet request interval suddenly increases. These burst losses occur during Ethernet transfers.

V. DISCUSSION

The current Antarctic aerial survey system uses three of the four visualization computers for monitoring and visualizing data acquired via serial input lines. Serial data throughput at these display stations using the original packet routing system is approximately 30% of the total amount of serial data acquired.

The new packet routing system is able to route 100% of the serial packets under normal operating load to each of the experimental visualization stations. The radar data comprise approximately 90% of all collected data. The original packet routing system is optimized to route all radar packets to the radar visualization station at the expense of the serial data.

Besides increasing packet throughput, the new system decreases packet latency. This lower packet latency gives the new system the ability to handle higher data acquisition rates. The new system reduces packet jitter by maintaining more consistency in packet latency during routing. These results indicate that careful user-level packet routing can significantly improve the capacity and robustness of such systems.

Periodic packet loss observed in the measurements reported here can be caused by a number of different interactions. The 50 MHz acquisition computer was operating close to its performance limit. Slight delays introduced by periodic network tasks (on this system, a name locator process periodically probes the nodes on the network to maintain the system's global namespace for distributed computation) perturbed the synchrony of the data streams. Periodic probing may contribute to the clusters of packet loss observed during performance testing of the new packet routing system. The frequency of burst occurrence was somewhat, although not exactly, correlated with the settable probe time interval. A surprising observation was the significant variability in the number of drops per burst. Regular packet transmission rates from the packet generators, and constant requests from the display programs, inject synchrony into the running system, so eventually the executing threads develop a distinguishable pattern. Any disruption in this pattern is evident as a perturbation in the arrival pattern of packets at the display computers.

Consecutive packet losses (i.e., burst losses) and periodic packet losses were also identified during a multicasting experiment over the MBone [6]. That experiment measured packet loss during Internet transfers among 12 recipients participating in a world-wide multicast session. Significant burst loss events lasting from a few seconds up to three minutes, and periodic packet loss events lasting approximately 0.6 sec and occurring at 30 sec intervals, were attributed to periodic updates by network routers. This periodic updating corresponds to the periodic name probing in our system.

High-bandwidth multimedia experiments over the very high-speed vBNS (Internet2) measured packet loss and throughput at the application and network levels and found network performance to be influenced more by packet rate (i.e., number of packets) than by bit rate (i.e., packet size) [7].

Instead of collecting transfer statistics at internal nodes or measuring end-to-end behavior, another experiment measured network performance and dynamics by using multicast probes to infer packet loss rates between senders and receivers [8].

Continued network performance studies are required to better understand these observed packet loss anomalies and their spatial and temporal interdependence. Time series methods have been used to search for structure in Ethernet traffic [9], [6], [10] and may be useful for identifying patterns of packet loss during packet routing. The ultimate goal of using correlation methods is to predict when packet burst losses will occur. Modeling packet loss may suggest how to avoid dropping so many packets, or how to avoid long bursts of packet loss. A model could optimize a set of parameters to minimize packet loss for a given delivery rate at the visualization stations. These parameters likely include message queue size, process and message priorities, scheduling algorithms, and network process parameters. Differential caching has been shown to successfully modulate digital video traffic to different clients [11]. A similar scheme could be implemented using multiple message queues between the acquisition computer and the display stations.

The characteristics of our problem are also typical of continuous media systems with network transfers near capacity. Video and audio applications often generate fixed-rate packet traffic that injects synchrony into the transmission. Congestion control methods such as Drop Preference Management (DPM) and Class-Based Thresholds (CBT) use active queue management protocols to constrain packet throughput and minimize latency during times of network congestion [12], [13]. These protocols may offer solutions for reducing the large bursts of packet drops seen at the display nodes during high-volume Ethernet transfers.

One of the authors (L.C.) was a member of the SOAR Antarctic team for the 1998-1999 and 1999-2000 field seasons. She participated in the aerial surveys and observed the real-time visualization system in use. Monitoring allowed the detection of radar equipment failures during initial test flights. Real-time data visualization revealed ongoing problems such as icing of the exterior part of the pressure transducer (which was cleared by an operator blowing warm air through the flexible tubing inside the aircraft).

During implementation and testing of this closed distributed network we encountered numerous issues common to packet routing, measurements of network dynamics, and network performance analyses. Our work measures packet loss seen within a small, private, real-time distributed network during simulated high-speed data acquisition.

Burst loss events and periodic loss events were identified during packet transmission experiments. Attempts were made to understand and moderate the effect these events had on real-time visualization performance. We suggest that the behavior of these small, closed, real-time systems may provide clues to wide-area network performance anomalies.

Finally, different hardware configurations may route packets more efficiently. Faster processors would increase the speed of the acquisition and display computers. Different network topologies and network speeds may transfer packets with fewer collisions. Nevertheless, when the hardware limit is reached, good software approaches to routing can enhance these existing systems.

ACKNOWLEDGEMENTS

The authors would like to acknowledge Don Blankenship from the University of Texas Institute for Geophysics, for his initial support of the project; Scott Kempf from SOAR, for programming assistance with QNX and the data acquisition system; Dr. Matt Peters from SOAR, for technical assistance with the instrumentation; Charles B. Connor from Southwest Research Institute, for his assistance with the geophysics; and Steve Robbins from the University of Texas at San Antonio, for his assistance converting graph images to PostScript and converting the document to LaTeX format. This project was partially funded by NSF OPP-9319379 and NSF ACI-9721348.

REFERENCES

[1] L. Connor, "Real-time airborne data visualization under QNX: Performance evaluation of a geophysical application in Antarctica," M.S. thesis, University of Texas at San Antonio, Oct 1999, http://vip.cs.utsa.edu/mobileviz/pubs/laurasthesis/.

[2] B. Plale, G. Eisenhauer, K. Schwan, J. Heiner, V. Martin, and J. Vetter, "From interactive applications to distributed laboratories," IEEE Concurrency, vol. 6, no. 2, pp. 78–90, Apr-Jun 1998.

[3] R. Bell, D. Blankenship, C. Finn, T. Scambos, J. Brozena, and S. Hodge, "Influence of subglacial geology on the onset of a West Antarctic ice stream from aerogeophysical observations," Nature, vol. 394, pp. 58–61, Jul 1998.

[4] S. Magsino, D. Blankenship, and R. Bell, "SOAR (Support Office for Aerogeophysical Research) annual report," Sep 1998.

[5] QNX Software Systems Ltd., QNX Operating System 4.24: System Architecture, QNX Software Systems Ltd., 1998.

[6] M. Yajnik, J. Kurose, and D. Towsley, "Packet loss correlation in the MBone multicast network," in Proceedings of the IEEE Global Internet Conference (GLOBECOM'96), London, England, Nov 1996, pp. 94–99.

[7] M. Clark and K. Jeffay, "Application-level measurements of performance on the vBNS," in Proceedings of the IEEE International Conference on Multimedia Computing and Systems, Florence, Italy, Jun 1999, vol. 2, pp. 362–366.

[8] R. Caceres, N. Duffield, J. Horowitz, and D. Towsley, "Multicast-based inference of network-internal loss characteristics," Tech. Rep. 98-17, University of Massachusetts at Amherst, Mar 1998.

[9] M. Yajnik, S. Moon, J. Kurose, and D. Towsley, "Measurement and modelling of the temporal dependence of packet loss," in Proceedings of the 18th Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE INFOCOM'99), New York, NY, Mar 1999.

[10] S. Moon, J. Kurose, P. Skelly, and D. Towsley, "Correlation of packet delay and loss in the Internet," Tech. Rep. 98-11, University of Massachusetts Dept. of Computer Science, Jan 1998.

[11] S. Sen, D. Towsley, Z.-L. Zhang, and J. K. Dey, "Optimal multicast smoothing of streaming video over an internetwork," in Proceedings of the 18th Annual Joint Conference of the IEEE Computer and Communications Societies (IEEE INFOCOM'99), New York, NY, Mar 1999.

[12] M. Parris, K. Jeffay, and F. Smith, "Lightweight active router-queue management for multimedia networking," in Multimedia Computing and Networking, San Jose, CA, Jan 1999, vol. 3654 of SPIE Proceedings, pp. 162–174.

[13] M. Parris, K. Jeffay, F. D. Smith, and J. Borgersen, "A better-than-best-effort service for continuous media UDP flows," in Proceedings of the 8th International Workshop on Networking and Operating System Support for Audio and Video (NOSSDAV'98), 1998, pp. 193–197.