
Submitted to IEEE Aerospace, Big Sky, 2018


Drone Net Architecture for UAS Traffic Management Multi-modal Sensor Networking Experiments

Sam Siewert, ERAU, 3700 Willow Creek Rd, Prescott, AZ 86314, 928-777-6929, [email protected]

Mehran Andalibi, ERAU, 3700 Willow Creek Rd, Prescott, AZ 86314, 928-777-6754, [email protected]

Stephen Bruder, ERAU, 3700 Willow Creek Rd, Prescott, AZ 86314, 928-777-3848, [email protected]

Iacopo Gentilini, ERAU, 3700 Willow Creek Rd, Prescott, AZ 86314, 928-777-6626, [email protected]

Jonathan Buchholz, ERAU, 3700 Willow Creek Rd, Prescott, AZ 86314, [email protected]

Abstract— Drone Net is a conceptual architecture that integrates passive sensor nodes in a local sensor network along with traditional active sensing methods for small Unmanned Aerial System (sUAS) traffic management. The goal of the proposed research architecture is to evaluate the feasibility of using multiple passive sensor nodes integrating Electro-Optical/Infrared (EO/IR) and acoustic arrays networked around a UAS Traffic Management (UTM) operating region (Class G uncontrolled airspace for general aviation). The Drone Net approach will be further developed based on the feasibility analysis provided here, to be compared to and/or used in addition to RADAR (Radio Detection and Ranging) and Automatic Dependent Surveillance-Broadcast (ADS-B) tracking and identification in future experiments. We hypothesize that this hybrid passive plus active sensing approach can better manage non-compliant small UAS (without ADS-B transceivers) along with compliant UAS and general aviation in sensitive airspace, urban locations, and geo-fenced regions.

Numerous commercial interests are developing UTM instrumentation for compliant and non-compliant drone detection and countermeasures, but performance in terms of the ability to detect, track, classify (bird, bug, drone, general aviation), identify, and localize aerial objects has not been standardized or developed well enough to compare multi-sensor solutions. The proposed Drone Net open system reference architecture is designed for passive nodes organized in a network, which can be integrated with RADAR and ADS-B. Here we present preliminary proof-of-concept results for two primary methods of truth comparison for generating performance in terms of true and false positives and negatives for detection, classification, and identification. The first ground truth method designed and evaluated uses sUAS Micro Air Vehicle Link (MAVLink) ADS-B data along with EO/IR range detection experiments. The second ground truth method requires human review of triggered detection image capture and allows for truth performance assessment for non-compliant sUAS and other aerial objects (birds and bugs).

The networked passive sensors have been designed to meet Class G and geo-fence UTM goals as well as assist with urban UTM operations. The approach can greatly complement NASA UTM collaboration and testing goals for 2020 and the “last fifty foot challenge” for package delivery UAS operations. The EO/IR system has been tested with basic motion detection for general aviation and sUAS in prior work, which is now being extended to include more sensing modalities and more advanced machine vision and machine learning development via the networking of the nodes and ground computing. The paper will detail the hardware, firmware and software architecture, and preliminary efficacy of the two ground truth methods used to compute standard performance metrics.

TABLE OF CONTENTS

1. INTRODUCTION
2. DRONE NET CONCEPT
3. SYSTEM ARCHITECTURE
4. SOFTWARE ARCHITECTURE
5. HYPOTHESIS TO TEST
6. METHOD
7. RELATED RESEARCH
8. LOCALIZATION DRONE DETECTION ANALYSIS AND TRUTH MODEL
9. DRONE NET TRUTH MODELS
10. FUTURE PLANNED ACOUSTIC WORK
11. FUTURE PLANNED ACTIVE SENSING WORK
12. SUMMARY
ACKNOWLEDGEMENTS
REFERENCES
BIOGRAPHY

1. INTRODUCTION

The goal of the proposed work is to create a baseline to compare methods of UTM small Unmanned Aerial System (sUAS) observation and tracking, with contingency scenario triggering for non-compliant sUAS, and to establish local and regional Drone Net sensor systems. The system and software architecture is a new construction and concept for which our team first wanted to assess the feasibility of using passive instruments (EO/IR and acoustic) either in place of or in addition to active RADAR and ADS-B methods of UTM. Ultimately, we would like to make a direct comparison to ADS-B and/or RADAR, but we felt it was necessary to first architect and assess the concept in terms of feasibility for UTM detection, tracking, and identification of UAS with passive methods alone.

Drone Net is being architected, designed, and prototyped by a collaborating team led by Embry-Riddle with support from students at the University of Colorado Boulder in the Embedded Systems Engineering program.

Drone Net integrates passive sensor nodes in a local sensor network on the ground and on cooperative sUAS along with traditional active sensing methods for sUAS traffic management. The goal of Drone Net is to evaluate the use of multiple passive sensor nodes integrating Electro-Optical/Infrared (EO/IR) and acoustic arrays networked around a UAS Traffic Management (UTM) operating region (Class G uncontrolled airspace for general aviation (GA)). Drone Net is intended to support UTM operations concepts including real-time de-confliction (see and avoid sUAS-to-sUAS, and between GA and sUAS), contingency alerts, and event logging for non-cooperative sUAS and operation of sUAS in both urban and rural class G scenarios. Drone Net in particular is a fully open hardware, firmware and software architecture such that anyone can implement the sensor network or elements of it to collect data and share this data broadly to support machine vision and machine learning research for UTM real-time automation, which is described as UTM “autonomicity” in the UTM concept of operations [23].

The open architecture also provides a baseline that can allow for performance assessment of commercial UTM solutions and comparison of active and passive sensing instruments. For the work described in this paper, the Drone Net approach will be compared to and/or used in addition to RADAR (Radio Detection and Ranging) and Automatic Dependent Surveillance-Broadcast (ADS-B) tracking and identification. In this paper, we present preliminary feasibility analysis for two truth models to be used in performance comparison: 1) human review of imaging and acoustics and 2) geometric and physical computation of observability. Active sensing systems using LIDAR, RADAR, and ADS-B have limitations we discuss in this paper in detail, including range, time reference precision, sample rate, cost, and durability/reliability for long term use. While flight nodes do have specific requirements for navigation that are unique compared to ground nodes, we have found significant overlap and commonality in requirements – the main difference noted is that LIDAR is of high value on the flight nodes, but all nodes require not only GPS or ADS-B localization, but also inertial sensing. With a network of passive sensors, placed on the perimeter and interior of an aerial column, and cooperative flight nodes, the goal for this work is to supplant the need for ground RADAR and improve upon ADS-B alone, or simply show that Drone Net in addition to RADAR can provide better situational awareness for UTM. We hypothesize that the Drone Net approach can better manage non-compliant sUAS (without ADS-B transceivers) along with compliant UAS and general aviation in sensitive airspace, urban locations, and rural geolocations.

Drone Net objectives include passive instrument reference design, networking of instrument nodes, a clear comparison of performance for UTM automation machine vision and learning algorithms, and a standard for data collection, processing, and storage for broader use by the UTM community. To start, we have implemented a prototype EO/IR instrument used to collect data to establish the feasibility results presented in this paper; we also outline plans for two additional instruments and a flight configuration for EO/IR with added LIDAR. To summarize, our objectives include:

1) Design and test 3 passive instruments for ground and flight use in a wireless network of sensors including:

a. EO/IR – narrow field LWIR (10-14 micron), 32 degree horizontal field of view (HFOV) and 26 degree vertical field of view (VFOV) with one or two narrow field panchromatic area scan cameras, 13 degree HFOV, 9.8 degree VFOV all interfaced via USB-3 to System-on-chip (SoC) processing. The EO/IR has been prototyped and has been under testing for the past year at ERAU with ability to catalog common aerial objects within a 100 meter radius column [29].

b. All-sky visible camera – 180 degree hemispherical detection with 6 cameras arranged with overlapping conical wide fields of view (fish eye) all streaming MPEG Transport Streams (MPTS) to EO/IR for processing and to control tilt/pan for detecting aerial objects.

c. Acoustic array – perimeter, intensity probe and/or beam-forming array microphones [26] with analog input into EO/IR for processing or streaming over MPTS.

2) Provide software automation for a clear comparison of standard performance metrics for multi-sensor fusion and machine learning approaches to detection, classification, and identification for aerial objects.

3) Create catalogs of aerial objects by type (classification) based on detection (motion trigger baseline, salient object for machine learning) and build database of aerial objects with EO/IR, acoustic, and compare-to active sensing data.

4) Capture aerial object event logs for cataloged objects for UTM tests and experiments.

5) Fuse aerial identification and tracking information from other sources including aggregators such as flightradar24, Air Traffic Control (ATC), and military.

6) Provide a recipe for creation of campus (urban) Drone Net at ERAU and a version that can be transported to other UTM test sites.


7) Publish a shared architecture, system reference design and Software Requirements Specification and Design (SRS/SRD) so others can replicate using off-the-shelf hardware, firmware (embedded Linux and Zephyr [34]), and software (TensorFlow [22], RDBMS, and POSIX compliant operating systems and file systems).

8) Provide a public-facing interface for collaboration on machine learning and machine vision algorithm development for UTM use.

9) Define system, firmware, and software architecture for use in UTM experiments and determine range limitations of each sensor along with performance when combined with competing software algorithms for data fusion, machine vision, and machine learning.

Based on our architecture presented here, along with the EO/IR reference design and partially completed Drone Net prototype system and software, we have captured data to determine the feasibility of two truth models, for which we have provided a comparison of True/False Positives (TP, FP) and True/False Negatives (TN, FN) for the presence of a compliant test sUAS (ADS-B and flight navigation data available) including:

1) EO/IR Truth model 1 – human review to determine TP, FP, TN, FN

2) EO/IR Truth model 2 – geometric observability based on sUAS navigation logging and ADS-B transmissions with ground ADS-B logs to determine TP, FP, TN, FN.

The analysis of the data collected from two flight experiments presented here provides evidence for our assertion that these two truth models can be used effectively to compare a wide range of machine vision and machine learning software algorithms for detection, classification, localization (from multiple sensor samples), and identification of aerial objects in a class G air column. Collection of TP, FP, TN, FN over a range of machine vision and learning sensitivity parameter settings can be used to create a Receiver Operating Characteristic (ROC) as we demonstrated in prior work [29], as well as Precision/Recall (PR) and F1 measure statistics for detection performance. Further analysis to classify and identify target types (sUAS, GA, natural, other) can be summarized with confusion matrix analysis. The long term goal of this analysis is to support creation and curation of a high quality public database to catalog aerial objects and to make available event logs from UTM testing. We have basic tools to reduce the fatigue of human review, but also have plans for future work on gamification of the human review of image and acoustic classification data.
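To make these metrics concrete, the minimal sketch below computes an ROC point (FPR, TPR), precision, recall, and F1 from raw TP/FP/TN/FN counts; the counts in the example are hypothetical placeholders, not experimental results.

```python
# Minimal sketch: detection metrics from TP/FP/TN/FN counts, as used for
# ROC, Precision/Recall, and F1 analysis in this paper. Example counts are
# hypothetical placeholders, not experimental results.

def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    tpr = tp / (tp + fn) if (tp + fn) else 0.0        # true positive rate (recall)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0        # false positive rate (ROC x-axis)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    f1 = 2 * precision * tpr / (precision + tpr) if (precision + tpr) else 0.0
    return {"TPR": tpr, "FPR": fpr, "Precision": precision, "F1": f1}

# Sweeping a detector's sensitivity threshold and collecting (FPR, TPR) pairs
# traces the ROC curve; (TPR, Precision) pairs trace the PR curve.
print(detection_metrics(tp=42, fp=7, tn=880, fn=13))
```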

Finally, we also present the viability of acoustic detection and classification of sUAS to complement EO/IR with preliminary and basic characterization of sUAS acoustic spectral analysis outdoors, with and without background noise. Overall, the goal for this paper is to document the Drone Net architecture, establish feasibility of the approach, and to present our hypothesis that a low-cost network of ground and flight sensors may be more effective than ADS-B alone and/or ground RADAR, or at least provide better situational awareness for UTM than the active methods alone.

2. DRONE NET CONCEPT

The Drone Net concept includes elements shown in Figure 1. The work presented here has focused on feasibility testing of ground EO/IR and characterization of acoustic viability, with the intent to expand nodes in a wireless Drone Net sensor network covering an air column of 1 km diameter to start. We intend to expand the range as we progress to the single pixel limits of EO/IR with pointed narrow field optical gain and high resolution detectors. For feasibility, we have not yet concerned ourselves with range, and in fact, flying our sUAS Beyond Visual Line of Sight (BVLOS) is a future goal to expand the size of our test column, which is a shared UTM goal. The Passive Sensing Network will include at least the EO/IR, a future All-sky camera (in development), and a future acoustic array (also in development) at a location as indicated around the perimeter and within the ground footprint of the column. Cooperative sUAS are equipped with LIDAR (for proximity operations and collision avoidance) and an EO/IR identical to the ground unit, but with a gimbaled mount (compared to tripod tilt/pan). It is imagined that the flight nodes can become part of Drone Net with a DSRC communication protocol (to be determined), but that non-cooperative, non-compliant sUAS may also be in this column along with natural aerial objects (birds, bugs, meteors, debris, ground clutter, etc.).

The active sensors, RADAR, and LIDAR will be used on the ground as secondary validation of our results and to support our hypothesis that the network of passive sensors can compete with RADAR, or even outperform it in terms of classification and identification performance at lower cost. The flight segment includes passive EO/IR and IMU navigation, as well as active ADS-B and LIDAR, to support sUAS-to-sUAS and GA see-and-avoid along with the ability to provide as-flown geometric truth data from navigation logs and ADS-B broadcasts. Flying a higher fidelity IMU allows us to grade the value of ADS-B in such a small column given the protocol's limited sample rate (2 Hz) and precision limits with geodetic data (1/n degrees latitude and longitude). Additionally, this approach allows for experimentation with a potential improvement to the ADS-B protocol, or ADS-B++.


To enable data sharing for de-confliction with ground node information and to experiment with ADS-B++, we plan to incorporate IEEE 802.11 Dedicated Short Range Communication (DSRC) between ground nodes and cooperative sUAS, as well as between the ground nodes in a local area network. The Drone Net concept is intended to support near term goals for EO/IR and acoustic characterization and feasibility analysis, but also longer-term goals including:

1) Development, competitive comparison and use of machine vision and machine learning for detection, classification, and ultimately identification.

2) Field testing by hosting and participating in UTM experiments including urban, rural, see-and-avoid, cooperative and non-cooperative de-confliction, with DSRC.

3) Aggregation and curation of high quality sUAS catalogs with event logging of test data including: EO/IR images, acoustic spectrum analysis, flight IMU logs, LIDAR point clouds, along with active sensing RADAR and ADS-B data.

4) Simulation of sUAS and passive ground instruments to predict detectability and to guide future experiments and placement of sensor nodes.

5) Re-play ADS-B and flight navigation data in a simulation to create an observation truth model for EO/IR and acoustic sensors (using MATLAB).

6) Improvement of sensor nodes, placement, and DSRC networking based on simulation and re-play analysis.

7) Comparison of proposed commercial and research sUAS UTM, safety/security, and counter UAS systems with common performance analysis and data baseline.

Key contributions of Drone Net include reference instrument designs, open source software (embedded and server), and the public machine learning database of images, cataloged aerial objects, and event logs. The open source software approach will allow for rapid evaluation of competing machine vision and machine learning algorithms using open source libraries such as OpenCV [35] and numerous machine learning stacks such as Tensorflow [22].

3. SYSTEM ARCHITECTURE

The Drone Net system architecture is intended to be scalable on an 802.11 wireless network based upon per node processing, where data shared is not raw video, acoustic or image streams, but rather detections with classification meta-data. The DSRC messaging will include events for new aerial object detection (re-detection), conflict contingency events, tracking messages during active observing (e.g. EO/IR tilt/pan azimuth and elevation updates) and some opportunistic uplink of images to the Local Drone Net Machine Learning Server for novel aerial objects and to establish identification of both compliant and non-compliant sUAS and GA, as well as characterization of natural objects, debris and clutter.
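As an illustration of this event-oriented messaging, the sketch below shows one possible detection-event payload published to peer nodes; the field names, JSON encoding, and UDP transport are our assumptions for illustration only, since the DSRC message protocol is still to be determined.

```python
# Illustrative sketch only: a possible detection-event message for the
# DSRC-style node messaging described above. Field names and the JSON/UDP
# transport are assumptions for illustration; the actual protocol is TBD.
import json
import socket
import time

def publish_detection(sock: socket.socket, peers: list) -> None:
    event = {
        "type": "DETECTION",        # or RE-DETECTION, CONTINGENCY, TRACK_UPDATE
        "node_id": "eo-ir-03",      # hypothetical node name
        "t_gps": time.time(),        # would be GPS-disciplined time in practice
        "azimuth_deg": 214.7,        # tilt/pan pointing at detection time
        "elevation_deg": 18.2,
        "classification": "sUAS",   # bird | bug | sUAS | GA | unknown
        "confidence": 0.82,
        "thumbnail_uri": None,       # opportunistic image uplink handled separately
    }
    payload = json.dumps(event).encode("utf-8")
    for addr in peers:
        sock.sendto(payload, addr)   # best-effort publish to local-area nodes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
publish_detection(sock, peers=[("192.168.1.20", 5005)])
```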

Figure 1. Drone Net Concept Diagram


The Drone Net ground and flight elements are shown in Figure 2 as a block diagram showing DSRC data flow to and from processing elements. Each EO/IR node includes an 802.11 networked SoC for machine vision and deployed machine learning applications that can at least detect and classify aerial objects (identification will likely require Drone Net Local Server inferential lookup from a database of cataloged aerial objects). The flight configuration is envisioned to include a LIDAR pre-processor (likely Raspberry Pi 3) and a machine vision SoC for see-and-avoid and processing for detected aerial objects it can downlink for cataloging and/or event notifications to Drone Net. Compliant sUAS are also envisioned to request notification uplinks (perhaps via publish/subscribe parameters) for Drone Net supported sense-and-avoid and de-confliction or contingency compliance through UTM protocols [23].

Ground node configuration

The ground instruments for Drone Net include three passive sensor node elements that will be deployed together since the EO/IR enclosure also provides the embedded SoC machine vision processing and acoustic sampling and pre-processing. For experiments they can be unit tested alone using a Linux laptop and collaborators can implement a subset of instruments in a basic ground node. To summarize, the ground node includes:

1) EO/IR – GP-GPU System-on-chip processing, single narrow field LWIR and one or two visible camera systems with built-in IMU, compass, tilt/pan, ADS-B receiver, and DSRC (802.11) for communication with local-area nodes

2) All-sky – 6 visible camera systems with wired/wireless network for MPTS streaming to EO/IR node for detection and pointing of narrow field camera assembly

3) Acoustic array – 6 or more wired microphones cabled to EO/IR node for audio capture and analysis.

4) Integrated EO/IR instrumentation for self-localization using GPS and inertial sensors for elevation and roll.

5) Observation of other nodes in the system via infrared stimulator (light emitting diode) to determine EO/IR camera system azimuth (pan) pointing.

6) Communication via wireless DSRC protocol to share detection information and to coordinate time based on GPS.

The all-sky-camera and acoustic array will be interfaced to the EO/IR SoC computer using MPTS for video and audio streaming, enabling the EO/IR computer to detect and determine azimuth and elevation of aerial objects, thus providing the EO/IR tilt/pan tracking in real-time. Through the use of simulation and field tests, the optimal configuration of cameras and microphones will be determined, as shown in Figure 3c.
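To illustrate the handoff from all-sky detection to EO/IR pointing, the sketch below converts a detection pixel from a zenith-pointing hemispherical camera into azimuth and elevation commands; it assumes an ideal equidistant fisheye model and a north-aligned mount, both simplifications that per-camera calibration would replace.

```python
# Sketch: convert an all-sky detection pixel to azimuth/elevation tilt/pan
# commands, assuming an ideal equidistant fisheye model (r = f * theta) and a
# zenith-pointing, north-aligned camera; real lenses need per-camera calibration.
import math

def allsky_pixel_to_az_el(u: float, v: float, cx: float, cy: float,
                          f_px: float) -> tuple:
    dx, dy = u - cx, v - cy                              # offset from optical center (px)
    theta = math.hypot(dx, dy) / f_px                    # zenith angle (equidistant model)
    elevation = math.degrees(math.pi / 2.0 - theta)      # 90 deg at zenith, 0 at horizon
    azimuth = math.degrees(math.atan2(dx, -dy)) % 360.0  # 0 = north, clockwise
    return azimuth, elevation

# Hypothetical 1920x1920 fisheye frame; f_px ~ 611 px spans a 90-deg zenith angle
print(allsky_pixel_to_az_el(u=1400.0, v=700.0, cx=960.0, cy=960.0, f_px=611.0))
```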

Figure 2. Drone Net Block Diagram


Flight node configuration

The compliant flight instruments for Drone Net include 2 passive sensors, the LWIR and visible cameras, as well as the active LIDAR sensor. Only a single camera is needed, as the flight node can derive depth information using structure from motion. To summarize, the flight node includes:

Figure 3. (a) EO/IR Ground Node Block Diagram, (b) EO/IR + LIDAR Flight Node Block Diagram and (c) all-sky-camera and microphone ground array interfaced to EO/IR SoC

1) EO/IR – GP-GPU System-on-chip processing, single narrow field LWIR and visible camera system with built-in IMU, compass, tilt/pan, ADS-B receiver, and DSRC (802.11) for communication with local-area nodes.

2) Interface between the flight SoC and ground via wireless DSRC, but also a data link to the flight control autopilot for optical navigation.

3) Wireless DSRC to ground for publish/subscribe notification to de-conflict with GA and other sUAS and notification of contingencies that require immediate ground safe recovery operations.

4) LIDAR scan processing for obstacle avoidance and proximity navigation in urban environments.

A typical system configuration of Drone Net EO/IR ground nodes is shown in Figure 4, which allows for full localization and orientation of each node. Each node is able to observe the infrared light emitting diode, which will be tripod fixed, such that the azimuth of the tilt/pan EO/IR camera assembly can be determined based upon the known GPS location of the observed node. This can be done on start-up and as needed to calibrate before each node enters an active tilt/pan tracking mode.
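A minimal sketch of this calibration geometry follows: given the known GPS positions of the observing node and the observed LED node, it computes the true bearing of the boresight when the LED is centered. The flat-earth ENU approximation and the sample coordinates are illustrative assumptions, adequate only for short node separations.

```python
# Sketch of the Figure 4 calibration geometry: from the known GPS positions of
# the observing node and the observed IR-LED node, compute the true bearing of
# the EO/IR boresight when the LED is centered. Flat-earth ENU approximation,
# adequate for node separations on the order of 1 km; coordinates are examples.
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius (m)

def bearing_deg(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """True bearing (degrees from north, clockwise) from node 1 to node 2."""
    lat_mid = math.radians((lat1 + lat2) / 2.0)
    d_east = math.radians(lon2 - lon1) * math.cos(lat_mid) * R_EARTH
    d_north = math.radians(lat2 - lat1) * R_EARTH
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

# Hypothetical node coordinates near Prescott, AZ
print(f"boresight azimuth = {bearing_deg(34.6150, -112.4504, 34.6193, -112.4421):.1f} deg")
```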


Figure 4. Drone Net Node Azimuth Determination Strategy

4. SOFTWARE ARCHITECTURE

The Drone Net software architecture includes ground and flight system services operating in real-time, running on embedded Linux using POSIX real-time extensions for predictable response. Figure 5 is a Data Flow Diagram (DFD) showing the major services, with message-based (publish/subscribe) communication between flight nodes and ground nodes via DSRC flight and ground messaging services providing an interface between segments and transporting the messages over 802.11 and to the Internet Cloud for aggregation of data in a public database.

The flight node services segment includes:

1) LIDAR interface and 3D model to represent the proximal world of the sUAS for urban operations.

2) EO/IR image fusion and aerial object detection for see-and-avoid features and testing.

3) ADS-B receiver interface and conflict prediction based on compliant near-by aerial objects.

4) Flight control interface for experiments with see-and-avoid and automatic de-confliction and contingency safe recovery operations.

5) A rule oriented inference service for decision making to balance obstacle avoidance, see-and-avoid, de-confliction, contingency safe recovery, and normal flight operations.

The ground node services segment includes:

1) Event Log for DSRC relayed flight node events, ground node EO/IR detection and tracking events, and events aggregated in the local ML Server.

2) An Aerial Object Catalog for newly observed aerial objects from flight or ground nodes.

3) EO/IR image fusion, detection, and tracking interface to the EO/IR subsystem.

4) Detection and azimuth, elevation estimation interface to the all-sky camera subsystem that produces tilt/pan pointing commands for EO/IR.

5) Pointing to control tilt/pan and to determine which detected object should be tracked if multiple are active in the same column of air.

6) Spectral analysis of acoustic data from the microphone array with an interface to the Aerial Object Catalog to record acoustic signatures of detected aerial acoustic sources.

The ground system may be composed of any number of EO/IR ground nodes with all-sky and acoustic array subsystems (optional) and each node in turn interfaces to one local area ML Server, which includes:

1) Use motion based detection (difference frames with statistical filtering) for the all-sky camera to provide azimuth and elevation localization of aerial objects entering the monitored column of air (a sketch of this baseline appears after this list). The all-sky-camera has a hemispherical view with 6 megapixel cameras. Acoustic cues may also assist with azimuth and elevation localization, but the range of detection for both requires further analysis.

2) Tilt and pan based upon all-sky-camera azimuth and elevation localization of detected aerial objects, and track objects of interest by maintaining the centroid of the object in the center of the narrow field of view. Based upon the fixed location and tilt/pan of the camera, we believe this can be accomplished with simple Histogram of Oriented Gradients (HOG) and thresholds with LWIR and visible.

3) Classify detected and tracked objects as sUAS, GA, natural, and other based upon a Convolutional Neural Network (CNN) or other methods of machine learning that can be deployed to the embedded SoC.

4) Uplink each uniquely detected aerial object to the Local Drone Net ML Server for cataloging and potentially for full identification.

5) See-and-avoid detection and tracking of non-compliant sUAS and GA using machine learning and HOG tracking

6) Ground node master event log and catalog with optional raw data collection for verification.

7) Machine learning training, validation, test and deployment based on master event log and catalog data.

8) Uplink of master event log and catalog to Cloud for assessment by other Drone Net geo-locations.

9) Download of other geo-location event logs and catalogs.
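The sketch below is one plausible realization of the motion-detect baseline referenced in item 1: frame differencing with a statistical threshold and an erosion filter, in the spirit of the Figure 12 experiment. The input filename, threshold multiplier, kernel size, and trigger count are illustrative parameters, not the tuned values.

```python
# One plausible realization of the motion-detect baseline from item 1 above:
# frame differencing with a statistical threshold and an erosion filter.
# Filename, sigma multiplier, kernel size, and trigger count are illustrative.
import cv2
import numpy as np

cap = cv2.VideoCapture("allsky.mp4")          # hypothetical input stream
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
kernel = np.ones((3, 3), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    # statistical threshold: flag pixels changing more than 3 sigma above mean
    thresh = diff.mean() + 3.0 * diff.std()
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.erode(mask, kernel, iterations=1)   # suppress single-pixel noise
    if cv2.countNonZero(mask) > 25:                # trigger: enough moving pixels
        print("detection event: capture frame, estimate azimuth/elevation")
    prev_gray = gray
```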

Figure 5. Envisioned Real-Time Software Architecture

5. HYPOTHESIS TO TEST

Drone Net is being built to support general research for UTM, ML/MV, and sensor fusion and to compare active and passive sensing, but overall, our group is testing the following hypothesis:

1) A network of passive sensors can effectively monitor, track, catalog and identify aerial objects in a column of air including general aviation, sUAS and natural objects with lower cost, better performance and longer term reliability compared to ADS-B and RADAR alone

2) Networked ground and flight Drone Net nodes can communicate to de-conflict sUAS from each other and general aviation with greater success than ADS-B and RADAR alone

3) Human review truth and geometric flight navigation data can be used to compare ground sensor network proposals for UTM to optimize overall class-G shared airspace use with minimal conflict

Based upon prior work and issues with ADS-B alone, the cost of RADAR, and problems with reliable GPS availability in urban UTM environments, we believe passive sensing networks will have an overall advantage over the active methods of ADS-B and RADAR alone.

6. METHOD

The general method of cataloging, tracking and logging events related to aerial objects makes use of MV/ML detection and tracking. Many methods for detecting and tracking aerial objects from ground fixed nodes (that do tilt/pan) and moving aerial nodes have been investigated [32]. In general, while this is more challenging than detection of a moving object in a fixed field of view, many ML algorithms combined with MV show good to excellent performance [20]. The problem is selecting the best performing MV/ML combination tailored to EO/IR and acoustic sensors, so Drone Net has focused upon a common architecture and methods of performance comparison first. For any given MV/ML algorithm to be tested, the performance is graded based upon:

1) Accuracy of frames captured for detected aerial objects, compared to continuous frame capture at 10 Hz as a baseline, in terms of True Positives (TP) and False Positives (FP) for detection relative to review of all frames.

2) ML detection, classification and identification images reviewed for accuracy with range of ML threshold settings (sensitivity) and for ROC, PR, F-measure, confusion matrix analysis.

3) Automated geometric analysis or human review of full image capture used to verify detection.

First we will describe automated geometric analysis using a navigational truth model with re-simulation of events, described in more detail in section 9. Figure 6 shows an example of a coordinated view of ground nodes from a test sUAS in (a) and the ground node view of the sUAS in (b). Based upon the geometry of the test our simulation can recreate what the ground node should be able to observe using geometric localization.


Figure 6. (a) LWIR and (b) visible image example thumbnails

Geometric localization

The aforementioned navigation sensors placed on each ground Drone Net node and aerial node provide the necessary information to determine observability of a cooperative sUAS within the field of view of a given sensor (e.g., EO/IR camera). Knowing the static geodetic coordinates (latitude $L_c$, longitude $\lambda_c$, and height $h_c$) of a given ground node (i.e., camera), and thus the position vector in earth-centered earth-fixed (ECEF) coordinates $\vec{r}^{\,e}_{ec}(L_c, \lambda_c, h_c)$, and the similar instantaneous position of an aerial node $\vec{r}^{\,e}_{eb}(L_b, \lambda_b, h_b)$, the relative position vector

$$\vec{r}^{\,e}_{cb} = \vec{r}^{\,e}_{eb}(L_b, \lambda_b, h_b) - \vec{r}^{\,e}_{ec}(L_c, \lambda_c, h_c)$$

can be described in a locally-level (i.e., tangential) coordinate frame as

$$\vec{r}^{\,t}_{cb} = C^{t}_{e}(L_c, \lambda_c)\, \vec{r}^{\,e}_{cb}.$$

Finally, knowing the static orientation of the camera, a normalized pointing vector in the camera coordinate frame becomes

$$\hat{r}^{\,c}_{cb} = C^{c}_{t}\, \vec{r}^{\,t}_{cb} \,/\, \left\| \vec{r}^{\,t}_{cb} \right\|.$$

This vector can be transformed into horizontal and vertical angles (spherical coordinates) in order to determine if the target (aerial node) is within the horizontal/vertical field of view (FOV) of the camera. To simulate the drone flight and its projection in the camera, a model was made in MATLAB, where the ADS-B data from the UAS, acquired in real time (or recorded offline), was used for 3D visualization of the UAS in the world coordinate frame. Namely, latitude, longitude, and altitude were utilized for the UAS location, and pitch, roll, and yaw for the UAS orientation, as can be seen in Fig. 7(a).
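A minimal sketch of this observability test is given below, assuming WGS-84 ellipsoid constants and a zero-roll camera whose pointing is expressed as azimuth and elevation; the node and sUAS coordinates in the example are illustrative.

```python
# Minimal sketch of the geometric observability test described above: geodetic
# positions -> ECEF -> locally-level (ENU) frame -> azimuth/elevation vs. camera
# FOV. Assumes WGS-84 and a zero-roll camera; coordinates are illustrative.
import math

A = 6378137.0            # WGS-84 equatorial radius (m)
E2 = 6.69437999014e-3    # WGS-84 eccentricity squared

def geodetic_to_ecef(lat_deg: float, lon_deg: float, h: float) -> tuple:
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)   # radius of curvature, Eq (4)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1.0 - E2) + h) * math.sin(lat))

def is_observable(cam_llh, uas_llh, cam_az_deg, cam_el_deg,
                  hfov_deg, vfov_deg) -> bool:
    cx, cy, cz = geodetic_to_ecef(*cam_llh)
    ux, uy, uz = geodetic_to_ecef(*uas_llh)
    dx, dy, dz = ux - cx, uy - cy, uz - cz             # relative ECEF vector r_cb
    lat, lon = math.radians(cam_llh[0]), math.radians(cam_llh[1])
    # rotate into the camera's locally-level ENU (tangential) frame, C_e^t
    east = -math.sin(lon) * dx + math.cos(lon) * dy
    north = (-math.sin(lat) * math.cos(lon) * dx
             - math.sin(lat) * math.sin(lon) * dy + math.cos(lat) * dz)
    up = (math.cos(lat) * math.cos(lon) * dx
          + math.cos(lat) * math.sin(lon) * dy + math.sin(lat) * dz)
    az = math.degrees(math.atan2(east, north)) % 360.0
    el = math.degrees(math.atan2(up, math.hypot(east, north)))
    d_az = (az - cam_az_deg + 180.0) % 360.0 - 180.0   # wrapped azimuth error
    return abs(d_az) <= hfov_deg / 2 and abs(el - cam_el_deg) <= vfov_deg / 2

# Camera at (lat, lon, h); sUAS ~300 m north and 60 m above; 32x26-deg EO/IR FOV
print(is_observable((34.615, -112.450, 1500.0), (34.6177, -112.450, 1560.0),
                    cam_az_deg=0.0, cam_el_deg=10.0, hfov_deg=32.0, vfov_deg=26.0))
```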

Figure 7. (a) 3D visualization of the UAS in the world coordinate frame, and (b) projection of the UAS onto the camera projection plane.

The camera has been considered to be at the origin of the local world coordinate frame. To project the UAS key points onto the camera projection plane, first the UAS key points used in part (a) will be projected to the world coordinate frame:

$$\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} R_{wb} & t_{wb} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_b \\ Y_b \\ Z_b \\ 1 \end{bmatrix} \tag{1}$$

where subscripts $w$ and $b$ represent the coordinates of the key points in the world and UAS body frames, respectively.

Denoting roll, pitch, yaw, latitude, longitude, and altitude by $\phi$, $\theta$, $\psi$, Lat, Long, and Alt, respectively, the rotation matrix $R_{wb}$ and translation vector $t_{wb}$ are given by:

$$R_{wb} = R_z(\psi)\, R_y(\theta)\, R_x(\phi) \tag{2}$$

$$t_{wb} = \begin{bmatrix} (N + \mathrm{Alt}) \cos(\mathrm{Lat}) \cos(\mathrm{Long}) \\ (N + \mathrm{Alt}) \cos(\mathrm{Lat}) \sin(\mathrm{Long}) \\ \left( N (1 - e^2) + \mathrm{Alt} \right) \sin(\mathrm{Lat}) \end{bmatrix} - r^{e}_{ec} \tag{3}$$

where $e$ is the Earth eccentricity and $N$ is the Earth radius of curvature calculated by:

$$N = \frac{a}{\sqrt{1 - e^2 \sin^2(\mathrm{Lat})}} \tag{4}$$

with $a$ being the Earth equatorial radius.

In Eq. (3) for vector $t_{wb}$, we need to subtract the ECEF coordinates of the camera, $r^{e}_{ec}$, from the UAS Earth-Centered Earth-Fixed (ECEF) coordinates to acquire local world coordinate frames, since the ECEF origin is at the Earth center.
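Since the extraction of Eq. (2) preserves only its role (composing roll, pitch, and yaw into $R_{wb}$), the sketch below shows one common Z-Y-X (yaw-pitch-roll) convention; the exact axis ordering should be taken from the MATLAB model, so treat this as an assumption.

```python
# Sketch of Eq. (2) under an assumed Z-Y-X (yaw-pitch-roll) convention; the
# exact axis ordering should be taken from the paper's MATLAB model.
import numpy as np

def r_x(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def r_y(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def r_z(a: float) -> np.ndarray:
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def body_to_world(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """R_wb of Eq. (2): rotate UAS key points from body axes to world axes."""
    return r_z(yaw) @ r_y(pitch) @ r_x(roll)

key_point_body = np.array([0.2, 0.0, 0.0])   # e.g., a rotor arm tip, meters
R_wb = body_to_world(roll=0.05, pitch=-0.02, yaw=1.57)
print(R_wb @ key_point_body)                  # world-frame offset, before adding t_wb
```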

Then, the world coordinates of the UAS key points will be projected to the camera frame via the following equation:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{5}$$

Denoting camera pan, tilt, and roll by $\lambda$, $\tau$, and $\rho$, respectively, and considering the z axis of the camera frame to be the optical axis pointing outward, with the positive direction of the x axis toward the left and the y axis downward, the rotation matrix $R_{cw}$ and translation vector $t_{cw}$ are given by:

$$R_{cw} = R(\rho)\, R(\tau)\, R(\lambda) \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \tag{6}$$

$$t_{cw} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \tag{7}$$

where the last matrix on the RHS of the first equation changes the coordinate frame from world to camera, and the vector $t_{cw}$ is zero due to considering the camera at the origin of the world frame.

Finally, the key points described in the camera frame will be projected on the camera projection plane using the camera intrinsic parameters.

Let $x_n$ be the normalized image projection:

$$x_n = \begin{bmatrix} X_c / Z_c \\ Y_c / Z_c \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix} \tag{8}$$

Let $r^2 = x^2 + y^2$ and $k_c = [k_c(1)\ k_c(2)\ k_c(3)\ k_c(4)\ k_c(5)]$ be the vector of camera distortion parameters. The effect of the tangential and radial distortion parameters is calculated as:

$$x_d = \left( 1 + k_c(1) r^2 + k_c(2) r^4 + k_c(5) r^6 \right) x_n + \begin{bmatrix} 2 k_c(3) x y + k_c(4) \left( r^2 + 2 x^2 \right) \\ k_c(3) \left( r^2 + 2 y^2 \right) + 2 k_c(4) x y \end{bmatrix} \tag{9}$$

Then, the pixel coordinates of the key points are calculated by the camera intrinsic matrix, as:

$$\begin{bmatrix} x_p \\ y_p \\ 1 \end{bmatrix} = \begin{bmatrix} f_c(1) & \alpha f_c(1) & c_c(1) \\ 0 & f_c(2) & c_c(2) \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_d(1) \\ x_d(2) \\ 1 \end{bmatrix} \tag{10}$$

where $c_c(1)$ and $c_c(2)$ are the pixel coordinates of the camera projection plane principal point, $\alpha$ is the skew coefficient, and $f_c(1)$ and $f_c(2)$ are the focal lengths in the x and y directions expressed in pixels. The focal length in each direction in pixels is given by:

$$f_c = f / p \tag{11}$$

where $p$ is the pixel pitch in that direction.
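A minimal sketch of Eqs. (8)-(10) follows; the intrinsic values in the example call are illustrative rather than the calibrated FLIR parameters, and the distortion vector indexing is zero-based in code.

```python
# Sketch of Eqs. (8)-(10): project a camera-frame point to pixel coordinates
# with the radial/tangential distortion model and intrinsic matrix above.
# Intrinsic values in the example are illustrative, not calibrated values.
import numpy as np

def project_to_pixels(p_cam, fc, cc, kc, alpha=0.0):
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]        # Eq (8): normalize
    r2 = x * x + y * y
    radial = 1.0 + kc[0] * r2 + kc[1] * r2**2 + kc[4] * r2**3
    dx = 2 * kc[2] * x * y + kc[3] * (r2 + 2 * x * x)       # tangential terms
    dy = kc[2] * (r2 + 2 * y * y) + 2 * kc[3] * x * y
    xd, yd = radial * x + dx, radial * y + dy               # Eq (9)
    kk = np.array([[fc[0], alpha * fc[0], cc[0]],           # Eq (10): intrinsics
                   [0.0,   fc[1],        cc[1]],
                   [0.0,   0.0,          1.0]])
    u, v, w = kk @ np.array([xd, yd, 1.0])
    return u / w, v / w

# Point 2 m left of boresight at 100 m range; fc ~1116 px (19 mm lens, below)
print(project_to_pixels([2.0, 0.0, 100.0], fc=(1116.0, 1116.0),
                        cc=(319.5, 255.5), kc=(0.0, 0.0, 0.0, 0.0, 0.0)))
```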

The FLIR LWIR camera used in this study had a focal length of 19 mm, horizontal and vertical fields of view of 32° and 26°, and an image size of 640 × 512 pixels. The relation between the field of view, focal length, and image size in each direction is given by:

$$\mathrm{FOV} = 2 \arctan\left( \frac{\mathrm{img\_sz}}{2 f_c} \right) \tag{12}$$

with "img_sz" being the image size in that direction in pixels.

The result of applying Eqs. (1) to (12) can be seen in part (b) of Fig. 7, where a convex hull is used to connect the projected key points together.
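As a quick consistency check on these optics, the sketch below inverts Eq. (12) to recover the focal length in pixels from each reported FOV and then applies Eq. (11) to infer the pixel pitch; the roughly 17 µm result is an implied value, not a quoted specification.

```python
# Consistency check on the reported FLIR optics using Eqs. (11)-(12):
# recover focal length in pixels from each FOV, then the implied pixel pitch.
import math

def fc_from_fov(fov_deg: float, img_sz_px: int) -> float:
    """Invert Eq. (12): fc = img_sz / (2 tan(FOV/2))."""
    return img_sz_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

fc_h = fc_from_fov(32.0, 640)    # ~1116 px from the horizontal FOV
fc_v = fc_from_fov(26.0, 512)    # ~1109 px from the vertical FOV (consistent)
pitch_um = 19.0e-3 / fc_h * 1e6  # Eq (11): p = f / fc, with f = 19 mm
print(f"fc_h={fc_h:.0f} px, fc_v={fc_v:.0f} px, implied pitch~{pitch_um:.1f} um")
```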

The simulation to recreate flight path and corresponding observability of an sUAS is based upon time coordination of sensors in each segment as follows:

1) flight sensors – a second sUAS is observable based on cooperative sUAS shared navigation data or ground-based observation of a non-cooperative sUAS

2) ADS-B flight-to-ground – target is observable based on camera location, pointing and field-of-view for ADS-B location at corresponding time

3) sUAS navigation – improved knowledge of sUAS location with better temporal and spatial resolution compared to ADS-B

4) ground sensor – target is observable based on camera system location, orientation, and field-of-view over time (fixed or tilt/pan), to determine when a cooperative sUAS should be observable based on ADS-B or sUAS shared navigation

While NASA UTM is pursuing goals to which Drone Net aligns well, we are not aware of a similar comprehensive research effort with open design to explore sUAS detection, tracking and identification with resilience quite like Drone Net. Industry efforts are in progress to build similar system solutions, but our goal is to provide an open reference, a high quality database from testing and to share our results and methods.

7. RELATED RESEARCH

The Drone Net architecture was inspired by previous work on smart camera systems [14, 15] and a previous experiment to assess the viability of using a software defined multi-spectral camera to catalog aerial objects [29]. Results from this prior work, along with NASA UTM operations concepts for rural and urban sUAS [23] as well as a wide variety of industry drone privacy protection and security instrument systems, inspired our group to create the Drone Net open architecture and design. Prior work on acoustic detection of GA and drones [25, 27] as well as products such as acoustic cameras [36] inspired us to consider how acoustic sensing could be combined with EO/IR in networks. Analysis completed by Sandia Labs [28] convinced us that focus on passive sensors, and EO/IR in particular, is a promising approach for the detection of sUAS, whose small cross section makes RADAR less effective; the passive approach also lowers initial cost and cost of operation over long periods of time. Related RADAR research [24] has shown that RADAR requires costly X-band and S-band systems in order to obtain small cross section shape and track information critical to high performance detection (in terms of ROC) and to support the classification and identification goals we are pursuing. Overall, RADAR can detect sUAS, but based on prior work, we believe that comparison of Drone Net EO/IR to RADAR is valuable research, and the passive approach offers significant cost and operational ease-of-use advantages that can be demonstrated with the analysis methods we have presented in this paper. We have consulted many excellent sources for methods of image, sensor, and information fusion, including LWIR with visible image fusion at the pixel and feature level [16], and plan to pursue pixel level fusion within our EO/IR devices based on multi-spectral stimulation and calibration, due to the well documented challenges known for LWIR + visible coarse feature level fusion [17]. Finally, we intend to leverage existing machine learning methods with focus on open source software such as TensorFlow [22], but also want to open up our architecture for simple replication and use by collaborators to explore a wide range of Machine Learning (ML) and Machine Vision (MV) algorithms used in flight and on the ground. Detection and tracking with moving cameras (flight) and gimbaled or tilt/pan fixed cameras (flight and ground) have well known challenges that require more advanced algorithms than the motion detect baseline we have used in this paper [33]. Overall we recognize that Drone Net is a complex and involved project, so our main goals are to produce an open architecture and reference designs as well as a high quality public database of drone images, acoustics, and detection, classification and identification training and validation sets for broad use.

8. LOCALIZATION DRONE DETECTION ANALYSIS AND TRUTH MODEL

Using ADS-B data, OEM navigation, or our own high fidelity navigation systems, we track the flights of our test sUAS and then reconstruct the trajectory in a MATLAB simulation. If the projected UAS falls within the image boundaries, it is claimed as seen in the MATLAB model. The re-simulation of the flight can in fact be used as a method to compare what any ground node should be able to observe in terms of the sUAS of interest and to generate a synthetic view from that ground node, as the Figure 7 example shows. Figure 8 shows a 3D reconstruction of the flight in ground node relative coordinates that can be correlated to the ground track shown in Figure 9. The ground track can likewise be ground node relative or in absolute geodetic latitude and longitude. The geodetic ground track in Figure 9 was collected by the sUAS OEM navigation system. Overall, we have three sources of navigation data: ADS-B (coarse, with accurate and precise locations but relatively low-precision time), OEM navigation (good, but of unknown accuracy), and our own HF navigation, which is best but under development.


Figure 8. 3D visualization of one portion of UAS trajectory that can be seen in the simulation model


Figure 9. (a) Top view of one UAS trajectory generated by the MATLAB model, and (b) top view of the UAS trajectory generated by DJI Mavic software

ADS-B has limitations including bandwidth, a sample rate of 2 Hz or less, and limited digits of precision for geodetic state (Earth fixed compared to relative). As such, we plan to build the HF navigation as a supplement to ADS-B for our experiments.

An Automatic Dependent Surveillance-Broadcast (ADS-B) out system transmits GPS derived position and speed with additional information (such as identification, callsign, and timestamps) at a nominal 1 Hz update rate with a range of approximately 100 miles. A variety of message formats are available; however, the "traffic report message" is the most germane to the localization task, an example of which is shown in Fig. 10.

The message itself does not provide an explicit timestamp; however, based on an assumed 1 Hz update and the "time since last contact/communication" field, a crude time epoch of origination can be deduced (±1 s). Such imprecise timing is a challenge when considering the use of this sensor as a "ground truth" source for a 30 frame per second sensor; however, it is certainly suitable for performing data association based on an array of detections corresponding to a track.
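The sketch below illustrates this ingestion and epoch recovery, assuming the traffic reports arrive as standard MAVLink ADSB_VEHICLE messages decoded by pymavlink; the connection string is a placeholder, and the subtraction of the time-since-last-communication field carries the ±1 s uncertainty noted above.

```python
# Sketch of ingesting ADS-B traffic reports for the ground-truth log, assuming
# a MAVLink transport and pymavlink's standard ADSB_VEHICLE message; the
# connection string is a placeholder, and the epoch recovery is +/- ~1 s.
import time
from pymavlink import mavutil

conn = mavutil.mavlink_connection("udp:0.0.0.0:14550")   # hypothetical endpoint

while True:
    msg = conn.recv_match(type="ADSB_VEHICLE", blocking=True, timeout=5.0)
    if msg is None:
        continue
    # Crude origination epoch: receipt time minus time-since-last-communication
    t_origin = time.time() - msg.tslc
    print(f"ICAO={msg.ICAO_address:06X} callsign={msg.callsign.strip()} "
          f"lat={msg.lat / 1e7:.6f} lon={msg.lon / 1e7:.6f} "
          f"alt={msg.altitude / 1000.0:.1f} m t~{t_origin:.1f}")
```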

In order to provide ground truth for a compliant UAS, a high-precision, high-bandwidth position, velocity, and attitude (PVA) solution will be obtained from the integration of GPS, inertial sensors, and a barometric altimeter. In the event that it is infeasible to instrument the compliant UAS, an alternative approach will be to access the OEM GPS data from the telemetry stream.

9. DRONE NET TRUTH MODELS

Truth models for Drone Net have varying degrees of reliability and are either based upon geometric analysis and re-simulation of compliant/cooperative sUAS navigational data or human review. The geometric truth models are based upon one of three navigational sources:

1) ADS-B transmit/receive data, which is accurate in terms of localization, but has a very low sample rate and time is based upon receipt rather than the sampled state. This is sufficient for de-confliction of sUAS with GA, but may not be sufficient for sUAS to sUAS de-confliction and is of limited value to our truth analysis based on the low sample rate and potential error in time correlation.

2) OEM navigational log data, which is of unknown accuracy and sUAS specific, but generally at a higher sample rate than ADS-B and with clear low-latency time stamping. We have used the OEM navigational data as our truth for this paper.

3) High Fidelity (HF) navigational log data is a planned enhancement to include a low-drift and high precision inertial measurement unit and GPS receiver with the goal to outperform the OEM and ADS-B in terms of localization accuracy over time.

ADS-B is the only navigation truth method that supplies identification as well as localization data. For our testing, identification could be added to the DSRC messaging to supplement ADS-B. In general ADS-B is trusted, but it could be inaccurate or spoofed in real-world scenarios.

Figure 10. ADS-B Log by ICAO Identification with Geodetic Location and Implied Relative Samples over Time

The other major method for truth analysis used in this paper and planned for future Drone Net experiments is human review. Human review leverages human visual intelligence and, with well designed review tools, can take advantage of human behavioral intelligence (e.g., a flight trajectory, shape, and behavior common to an insect rather than an sUAS or GA). This was demonstrated in prior Drone Net related testing [29]. The human review does not rely upon ADS-B identification and therefore could potentially identify ICAO spoofing or inaccuracies and provide classification that includes non-compliant, non-cooperative sUAS and natural aerial objects. Overall, the post flight analysis for Drone Net experiments is shown in Figure 11.

Figure 11. Drone Net Truth Modeling and Analysis Methods


In prior Drone Net related work [29], we have compared salient object detection algorithms to simple motion detection with statistical filtering and adjustable thresholds for statistical change to produce a Receiver Operating Characteristic. This requires repetition of experiments to provide a sensitivity analysis of the False Positive (FP) rate as a function of True Positive (TP) accuracy. Related measures including Precision Recall (PR) and F-measure present the overall TP, FP, True Negative (TN) and False Negative (FN) data for a test or series of tests with variation of parameters used in detection, classification and identification. Figure 12 shows a basic human review analysis of two tests from a feasibility study completed on October 29, 2016. While the Motion Detect (MD) algorithm is a basic detection scheme that does not yet incorporate machine learning, it shows that our method of performance analysis will provide a valid method of comparing detection methods. Our goals for this paper were to establish the feasibility of the methods of analysis and the Drone Net architecture, and we plan to repeat this experiment many times in the future to perfect and fully characterize it and to automate the production of ROC, PR, F-measure and confusion matrix analysis for our open design solution as well as competitive solutions from industry [37]. We believe, with the demonstration of feasibility provided here, that we can extend this work to compare many different types of passive and active sensors, sensor networks, and sensor fusion systems. Likewise, we can hold the sensor configuration constant and test a variety of MV/ML algorithms on flight and ground EO/IR nodes to compare the detection, classification and identification performance of each. Finally, we can also compare our concepts for HF navigation and improved ADS-B, ADS-B++, with the architecture and experimental methods we have prototyped and evaluated for feasibility.

We expect that HF navigation combined with our geometric analysis, MATLAB re-simulation of events, and the HFOV/VFOV observability model will provide our best truth model, but we will analyze all truth models to compare them. Specific scenarios can lead to errors from any of the three navigational truth models (limited sampling rates, spoofing, multiple objects in a field of view at the same time, instrument errors, and signal integrity issues to name a few). Therefore, we believe it is of high value to collect and process data from multiple sources. Human review has high value when comparing results to MV/ML, as humans have irrefutably high visual intelligence [31], but with multi-spectral images (sensing not natural for humans), there are also misconceptions and sources of error in human review. We intend to address this through gamification and automation of human review in order to promote a statistically significant outcome and to build a high quality database of human reviewed images and acoustic samples.

Figure 12. Detection Performance for Motion Based Differencing with Erosion Filter [29]

10. FUTURE PLANNED ACOUSTIC WORK

The idea of combining acoustic data with EO/IR came out of our investigation of acoustic cameras, and based upon related research, we have decided to explore acoustic characterization to determine if an acoustic camera will be of value to Drone Net. Here we present our preliminary results, with focus on spectral analysis used to classify sUAS acoustic sources and to provide a secondary method for azimuth and elevation angle location of sources to guide our narrow field EO/IR pointing.

On one hand, acoustic sensors are passive, have non-line-of-sight capability, and are small, low-power, and inexpensive. Acoustic microphones configured in an array are capable of classifying/identifying and estimating the azimuth and elevation angles to detected targets of interest at distances of several kilometers [1]. Many UAS are relatively small in size, so they can be difficult to detect optically. In addition, they can maneuver at low altitudes and may not have considerable metal parts or a large RADAR cross section, so they can elude RADAR detection. UASs powered by small electric motors might not have sufficient thermal emittance to be detected during daytime against sun glare or hot background objects [2].

On the other hand, there are several limitations to what acoustic sensors can do for drone detection and classification: (1) if the UAS is in the far field, only Direction Of Arrival (DOA) can be detected and the UAS cannot be localized; (2) UAS classification based on acoustic signature requires powerful recording microphones and significant signal processing to separate the UAS acoustic wave from the background noise, and it needs a significant database of acoustic signatures for training any classifier, which is not publicly available; (3) wind and temperature gradients change the direction of travel of acoustic waves, which can be bent upward or scattered and not captured by the microphone; (4) reflectance of acoustic waves from the terrain might interfere destructively with the acoustic waves; (5) absorption of higher frequencies by the atmosphere and interference from the background noise make it difficult to extract the original acoustic wave and reduce the effective distance over which the acoustic event can be recorded, as can be seen in Fig. 13. Here the spectrograms for our UAS flying in real-world conditions with background noise and wind, and for a typical UAS moving in an anechoic chamber, are shown in parts (a) and (b), respectively. A spectrogram is a logarithmic plot of the squared magnitude of the short-time Fourier transform of a signal, which describes the frequencies present in the signal at different time points [38]. While in (b) the change in frequency due to the drone's direction of motion with respect to the microphone (i.e., the Doppler effect) can be seen in the vertical harmonic lines, it is not visible in our UAS acoustic signal spectrogram due to interference from the background noise and wind. Also, as the frequency increases on the horizontal axis in (a), the signal is less observable due to atmospheric absorption.

Figure 13. (a) Spectrogram of the DJI Mavic acoustic wave in outdoor conditions, subject to moderate wind and considerable ambient noise, flying to one side of the microphone; (b) spectrogram of a typical UAS acoustic wave in an anechoic chamber, where the Doppler effect is clearly observed
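For reference, a spectrogram of the kind shown in Figure 13 can be approximated with the short SciPy-based sketch below; the file name and STFT parameters are assumptions, not our actual processing chain.

```python
# Hedged sketch of the spectrogram computation described above: the log of
# the squared STFT magnitude. File name and parameters are illustrative.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("mavic_flyby.wav")     # hypothetical recording
audio = audio.astype(np.float64)
if audio.ndim > 1:
    audio = audio.mean(axis=1)                  # mix down to mono

# mode="psd" (the default) returns power; 10*log10 gives a dB scale
# comparable to Figure 13.
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=4096, noverlap=2048)
Sxx_db = 10.0 * np.log10(Sxx + 1e-12)           # small offset avoids log(0)

# Energy below ~1 kHz holds the blade-passing harmonics; wind noise and
# atmospheric absorption progressively mask the higher frequencies.
print(f.shape, t.shape, Sxx_db.shape)
```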

In Figure 14(a), the rotors were spinning at about 7200 RPM (equivalently, 120 Hz), so the Blade Passing Frequency (BPF) was 240 Hz (each motor shaft carries 2 blades), which can be seen in Figure 14(b) along with its harmonics.

Figure 14. (a) Rotational velocity (rpm) of the four UAS motors (front right, front left, rear right, rear left) vs. time, and (b) spectrogram of the DJI Mavic acoustic wave, where BPF harmonics can be clearly seen
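The BPF arithmetic above is summarized in the following minimal sketch, using the illustrative values from the text.

```python
# Worked example of the blade-passing-frequency arithmetic above,
# using the values quoted in the text (not flight software).

def blade_passing_frequency(rpm, blades_per_rotor):
    """Shaft rate (Hz) times blade count gives the fundamental BPF."""
    return (rpm / 60.0) * blades_per_rotor

rpm = 7200
bpf = blade_passing_frequency(rpm, blades_per_rotor=2)
print(f"shaft rate: {rpm / 60:.0f} Hz, BPF: {bpf:.0f} Hz")  # 120 Hz -> 240 Hz

# Figure 14(b) should then show spectral lines near integer multiples of BPF.
print("expected harmonics (Hz):", [bpf * k for k in range(1, 5)])
```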

11. FUTURE PLANNED ACTIVE SENSING WORK

As shown in Figure 1, our long-term goal is to compare our passive sensing methods to active methods. The chief advantage of RADAR is detection range, which can exceed several kilometers, although this is not well established for sUAS with small cross-sections. Furthermore, RADAR provides track data, and some RADAR systems can provide basic shape information. We want to compare against RADAR in order to test our hypothesis that passive sensing, with a network of sensors that can ring an operating region or be laid out in a mesh, can perform as well as RADAR for detection, and perhaps far better for classification and identification of non-cooperative and non-compliant sUAS and other aerial objects (e.g., birds). ADS-B is expected to work well for GA, but we would like to compare performance for sUAS operating at higher density in urban and rural environments with high ground clutter. For ground nodes, LIDAR is costly and has limited range. For flight Drone Net nodes, we believe LIDAR has much more value than on the ground, especially in urban scenarios that can be GPS-limited or GPS-denied, to support missions such as parcel delivery. Our future plans include adding active sensors either to test our hypothesis (in the case of ground RADAR) or to enhance flight node operations.

12. SUMMARY

We have shown the feasibility of using two truth models to assess the performance of sensor networks that employ MV/ML for detection, tracking, classification, and identification of both compliant and non-compliant sUAS. We have also characterized the acoustics of our test sUAS (DJI Mavic) to establish that acoustic signatures can provide additional information about sUAS type, which could enhance our goals and objectives. Finally, we have provided a reference for the Drone Net hardware, firmware, and software architecture so that we can proceed with the development of reference designs for flight and ground nodes, with an open invitation for other researchers to contribute to, improve, and collaborate on this approach to improving safe sUAS operations in shared Class G airspace, consistent with UTM operational concepts and goals.

ACKNOWLEDGEMENTS

The authors thank the Embry-Riddle Aeronautical University Accelerate Research Initiative program for funding to build and pursue the feasibility experiments presented in this paper.

REFERENCES

[1] Benyamin, Minas, and Geoffrey H. Goldman. Acoustic Detection and Tracking of a Class I UAS with a Small Tetrahedral Microphone Array. No. ARL-TR-7086. ARMY RESEARCH LAB ADELPHI MD, 2014.

[2] Pham, Tien, and Leng Sim. Acoustic Data Collection of Tactical Unmanned Air Vehicles (TUAVs). No. ARL-TR-2749. ARMY RESEARCH LAB ADELPHI MD, 2002.

[3] Zelnio, Anne M., Ellen E. Case, and Brian D. Rigling. "A low-cost acoustic array for detecting and tracking small RC aircraft." Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, 2009. DSP/SPE 2009. IEEE 13th. IEEE, 2009.

[4] Massey, Kevin, and Richard Gaeta. "Noise measurements of tactical UAVs." 16th AIAA 3911 (2010): 1-16.

[5] Hans-Elias de Bree, Guido de Croon, "Acoustic Vector Sensors on Small Unmanned Air Vehicle", the SMi Unmanned Aircraft Systems UK, 2011.

[6] Sadasivan, S., M. Gurubasavaraj, and S. Ravi Sekar. "Acoustic signature of an unmanned air vehicle exploitation for aircraft localisation and parameter estimation." Defence Science Journal 51.3 (2001): 279.

[7] Jeon, Sungho, et al. "Empirical Study of Drone Sound Detection in Real-Life Environment with Deep Neural Networks." arXiv preprint arXiv:1701.05779 (2017).

[8] Kloet, N., S. Watkins, and R. Clothier. "Acoustic signature measurement of small multi-rotor unmanned aircraft systems." International Journal of Micro Air Vehicles 9.1 (2017): 3-14.

[9] Intaratep, Nanyaporn, et al. "Experimental study of quadcopter acoustics and performance at static thrust conditions." 22nd AIAA/CEAS Aeroacoustics Conference. 2016.

[10] Heilmann, Dipl Wi Ing Gunnar, Dirk Doebler, and Magdalena Boeck. "Exploring the limitations and expectations of sound source localization and visualization techniques." INTER-NOISE and NOISE-CON congress and conference proceedings, Melbourne, Australia. Vol. 249. 2014.

[11] Liu, Hao, et al. "Drone Detection Based on an Audio-Assisted Camera Array." Multimedia Big Data (BigMM), 2017 IEEE Third International Conference on. IEEE, 2017.

[12] Bougaiov, N., and Yu Danik. "Hough Transform for UAV's Acoustic Signals Detection."

[13] Shi, Weiqun, et al. "Detecting, Tracking, and Identifying Airborne Threats with Netted Sensor Fence." Sensor Fusion-Foundation and Applications. InTech, 2011.

[14] S. Siewert, V. Angoth, R. Krishnamurthy, K. Mani, K. Mock, S. B. Singh, S. Srivistava, C. Wagner, R. Claus, M. Demi Vis, "Software Defined Multi-Spectral Imaging for Arctic Sensor Networks", SPIE Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXII, Baltimore, Maryland, April 2016.

[15] S. Siewert, J. Shihadeh, Randall Myers, Jay Khandhar, Vitaly Ivanov, “Low Cost, High Performance and Efficiency Computational Photometer Design”, SPIE Sensing Technology and Applications, SPIE Proceedings, Volume 9121, Baltimore, Maryland, May 2014.

[16] Piella, G. (2003). A general framework for multiresolution image fusion: from pixels to regions. Information fusion, 4(4), 259-280.

[17] Blum, R. S., & Liu, Z. (Eds.). (2005). Multi-sensor image fusion and its applications. CRC press.

[18] Sharma, G., Jurie, F., & Schmid, C. (2012, June). Discriminative spatial saliency for image classification. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on (pp. 3506-3513). IEEE.


[19] Richards, Mark A., James A. Scheer, and William A. Holm. Principles of modern radar. SciTech Pub., 2010.

[20] Panagiotakis, Costas, et al. "Segmentation and sampling of moving object trajectories based on representativeness." IEEE Transactions on Knowledge and Data Engineering 24.7 (2012): 1328-1343.

[21] flightradar24.com, ADS-B, primary/secondary RADAR flight localization and aggregation services.

[22] Abadi, Martín, et al. "Tensorflow: Large-scale machine learning on heterogeneous distributed systems." arXiv preprint arXiv:1603.04467 (2016).

[23] Kopardekar, Parimal, et al. "Unmanned aircraft system traffic management (utm) concept of operations." AIAA Aviation Forum. 2016.

[24] Mohajerin, Nima, et al. "Feature extraction and radar track classification for detecting UAVs in civilian airspace." Radar Conference, 2014 IEEE. IEEE, 2014.

[25] de Bree, Hans-Elias, and Guido de Croon. "Acoustic Vector Sensors on Small Unmanned Air Vehicles." the SMi Unmanned Aircraft Systems, UK (2011).

[26] Case, Ellen E., Anne M. Zelnio, and Brian D. Rigling. "Low-cost acoustic array for small UAV detection and tracking." Aerospace and Electronics Conference, 2008. NAECON 2008. IEEE National. IEEE, 2008.

[27] Zelnio, Anne M., Ellen E. Case, and Brian D. Rigling. "A low-cost acoustic array for detecting and tracking small RC aircraft." Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, 2009. DSP/SPE 2009. IEEE 13th. IEEE, 2009.

[28] Birch, Gabriel Carisle, John Clark Griffin, and Matthew Kelly Erdman. UAS Detection Classification and Neutralization: Market Survey 2015. No. SAND2015--6365. Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States), 2015.

[29] S. Siewert, M. Vis, R. Claus, R. Krishnamurthy, S. B. Singh, A. K. Singh, S. Gunasekaran, “Image and Information Fusion Experiments with a Software-Defined Multi-Spectral Imaging System for Aviation and Marine Sensor Networks”, AIAA SciTech 2017, Grapevine, Texas, January 2017.

[30] Geiger, Andreas, et al. "Vision meets robotics: The KITTI dataset." The International Journal of Robotics Research 32.11 (2013): 1231-1237.

[31] Deng, Jia, et al. "Imagenet: A large-scale hierarchical image database." Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on. IEEE, 2009.

[32] Aker, Cemal, and Sinan Kalkan. "Using Deep Networks for Drone Detection." arXiv preprint arXiv:1706.05726 (2017).

[33] Zhu, Yukun, et al. "segdeepm: Exploiting segmentation and context in deep neural networks for object detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015.

[34] Amiri-Kordestani, Mahdi, and Hadj Bourdoucen. "A Survey On Embedded Open Source System Software For The Internet Of Things." (2017).

[35] Pulli, Kari, et al. "Real-time computer vision with OpenCV." Communications of the ACM 55.6 (2012): 61-69.

[36] Hansen, R. K., and P. A. Andersen. "A 3D underwater acoustic camera—properties and applications." Acoustical Imaging. Springer US, 1996. 607-611.

[37] Hearing, Brian, and John Franklin. "Drone detection and classification methods and apparatus." U.S. Patent No. 9,697,850. 4 Jul. 2017.

[38] Fulop, Sean A., and Kelly Fitz. "Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications." Journal of the Acoustical Society of America. Vol. 119, January 2006, pp. 360–371.

BIOGRAPHIES

Sam Siewert has a B.S. in Aerospace and Mechanical Engineering from the University of Notre Dame and M.S. and Ph.D. degrees in Computer Science from the University of Colorado Boulder. He worked in the computer engineering industry for twenty-four years before starting an academic career in 2012, spending half of that time on NASA space exploration programs and the other half on commercial product development for high-performance networking and storage systems. In 2014, Dr. Siewert joined Embry-Riddle Aeronautical University as full-time faculty; he retains an adjunct professor role at the University of Colorado Boulder.


Mehran Andalibi received his Ph.D. degree in Mechanical Engineering from Oklahoma State University in 2014. He is currently an Assistant Professor in the Department of Mechanical Engineering at Embry-Riddle Aeronautical University, AZ. His research interests are the application of computer vision in intelligent systems, including vision-based navigation of autonomous ground robots; detection, tracking, and classification of unmanned aerial vehicles; and the development of vision-based medical devices.

Stephen Bruder, Ph.D., is a subject matter expert in GPS-denied navigation with 20+ years of experience and more than 50 peer-reviewed publications. He is currently an associate professor at Embry-Riddle Aeronautical University, a member of the ICARUS research group, and a consultant in the area of aided navigation systems. Dr. Bruder has served as principal investigator on aided navigation projects for MDA, AFRL, NASA, SNL, USSOCOM, and others, including the development of GPS-denied navigation algorithms for unmanned ground vehicles (SNL) and a satellite-based auto-calibrating inertial measurement system (MDA).

Iacopo Gentilini received his M.S. degree in Mechanical Engineering in 2010 and a Ph.D. in Mechanical Engineering from Carnegie Mellon University in 2012. He is currently an Associate Professor of Aerospace and Mechanical Engineering at Embry-Riddle Aeronautical University. His interests include mixed-integer non-linear programming, robotic path planning in redundant configuration spaces, and cycle-time and energy-consumption minimization for redundant industrial robotic systems.

Jonathan Buchholz is currently an undergraduate student of Mechanical Engineering at Embry-Riddle Aeronautical University. His interests include aerial robotics, machine learning, and autonomous systems.