
UPTEC E 18 001

Degree project, 30 credits (Examensarbete 30 hp) · 30/01/2018

Automatic System Tests on Airborne Radar Systems

David Johansson & Sebastian Lindblom

Faculty of Science and Technology (Teknisk-naturvetenskaplig fakultet), UTH-enheten
Visiting address: Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0
Postal address: Box 536, 751 21 Uppsala
Telephone: 018 – 471 30 03
Fax: 018 – 471 30 00
Website: http://www.teknat.uu.se/student

Abstract

Automatic System Tests on Airborne Radar Systems

David Johansson & Sebastian Lindblom

The aim of this master thesis was to create automated system tests for airborne radar systems. Automated tests can reduce the time spent on repetitive and monotonous work and instead free up time for exploratory testing and customer criteria testing. Nightly builds and well-designed tests can improve robustness and create a more stable system for the user.

The project started with a pre-study, which consisted of researching the system in question, the possibilities of automated tests on said system, and the different tools that could be used in the project. Furthermore, the pre-study included interviews, visits and workshops within SAAB, with topics including automated tests, automation software and the SAAB GlobalEye system.

The solution includes two separate tests: one regression test and one startup test. The regression test verifies that selected standard functions still work after new software/hardware updates, and the startup test verifies that the starting sequences for selected subsystems are executed correctly. Both tests were installed in the test laboratory. Each test is divided into two parts: a control part and an analysis part. By recording the data sent on the different interfaces while the tests are performed, the recorded data can be analyzed afterwards. This method enables many tests to be performed on the same set of data. The control part was solved with a third-party software tool called Squish, from the German company Froglogic. The analysis part was written in MATLAB, where the test results were presented as well.

The chain of events was set up and executed by Jenkins (an open source automation server), which also served as a scheduler to enable nightly builds.

ISSN: 1654-7616, UPTEC E 18 001
Examiner: Tomas Nyberg
Subject reader: Uwe Zimmermann
Supervisors: Niklas Staafgård & Karin Thorvaldsson

Popular Science Summary (Populärvetenskaplig sammanfattning)

The purpose of this degree project was to develop and implement automated tests on an airborne radar system. With continuous integration, software and hardware are updated regularly. By implementing automated regression tests, faulty software and hardware can quickly be debugged and sent back to the constructors for correction. Well-designed tests contribute to a more robust system and leave more room for exploratory testing. The project also helped make verification of the system more efficient by replacing recurring tests with automation.

The project began with a pre-study in which the project members learned to understand the system and investigated the possibilities and limitations of automated tests on the system in question. Various commercial and in-house automation tools within SAAB were also examined, along with how they could be used in the project.

The project then continued with the development and implementation of two different tests: a regression test, which tests selected standard functionality and also gives an overview of how the system behaves in relation to previous runs, and a so-called "startup" test, which verifies that the necessary starting sequences are executed correctly for selected subsystems. Together, the two tests give a good overview of how the current version of the system compares with previous versions. By keeping statistics on the tests, one can also see how the system has evolved over time, which can give an indication of which parts should be tested more thoroughly.

The project concluded with a recommendation on how the employer should proceed with the project. The project covers only one part of the system, and it should be extended to cover more subsystems in order to achieve higher test coverage.

Acknowledgements

To our reviewer, Uwe Zimmermann: we are very grateful that you accepted and reviewed our thesis.

We would like to thank our supervisors at SAAB, Karin Thorvaldsson and Niklas Staafgård, for all the encouragement and help. We would also like to thank all the other great colleagues at SAAB, who have given us advice and guidance along the way and who have made our stay very joyful and inspirational. Being able to work with tomorrow's technology has been a great experience, and we are both truly grateful for this opportunity.

Division of work between project members

The major part of the work in this thesis was done together. Both David and Sebastian can account for everything written in this master thesis. A lot of time was spent on getting to know the system, a task not well suited for dividing.

The research on automation tools in the pre-study was done in parallel, with each project member looking into different tools.

Much of the MATLAB development was done together. David focused on the storing of statistics, while Sebastian focused on the presentation of the results.

Sebastian was in charge of setting up the test system, including setting up the Local Area Network and its configuration. David was in charge of gathering reference data, designing the reference scenario and deciding which tests were to be included.

Sebastian was responsible for the regression test, and David was responsible for the startup test.

Glossary

AEW&C = Airborne Early Warning and Control
Describes the complete system

ADS-B = Automatic Dependent Surveillance – Broadcast
Universal positioning system used for maritime and airborne vessels

AIS = Automatic Identification System
Universal identification system used for maritime vessels

C2 = Command and Control
Describes the real world using a set of sensors and radars

C2-LAN = Command and Control Local Area Network
Communication between the Mission Computer Unit and the graphical HMI C2

CMS = Control and Monitoring System
Controls and monitors the electric power for the rest of the system

DOU = Dorsal Unit
Top-mounted radar containing the Transmit/Receive modules (i.e. antennas)

ELINT = Electronic Intelligence
Used for gathering information, such as emitter identification

EO/IR = Electro Optical/Infrared
Sensor mounted on the hull

ESM = Electronic Support Measures
Electronic self-defense support, such as flares etc.

EXR = Exciter Receiver
Oscillator that outputs the frequencies used in the radar

GUI = Graphical User Interface
Graphical presentation of data

HMI = Human Machine Interface
The interface between humans and machines

IFF = Identification Friend or Foe
Used to broadcast one's own identity and to identify other aircraft

MMSI = Maritime Mobile Service Identity
Identifier for maritime vessels, backed by a universal database containing information about them

NERES = New Erieye Recording and Evaluation System
Recording unit used to capture data packages sent within the system

NAV = Navigational System Data
Navigational data used for positioning

PSR = Primary Surveillance Radar
The entire top-mounted radar system, including the DOU and the EXR

RES = Radar Environment Simulator
Environment simulator used instead of the PSR system

RR-LAN = Radar Local Area Network
Communication between the Signal Data Unit and the Mission Computer Unit

SATCOM = Satellite Communication
Encrypted communication between mission units and ground control

TRM = Transmitter/Receiver Module
Antennas placed inside the DOU

UTC = Coordinated Universal Time
Used to synchronize clocks within the system

Contents

1 Introduction
  1.1 Purpose
  1.2 Background
  1.3 Delimitations
  1.4 Methodology
  1.5 Automated regression test - The main idea
2 Pre-study
  2.1 System under test
    2.1.1 Signal Data Unit
    2.1.2 Mission Computer Unit
    2.1.3 NERES
    2.1.4 Reafil
    2.1.5 Command & Control
  2.2 Simulators
    2.2.1 Sysim
    2.2.2 Radar Environment Simulator
  2.3 Interfaces
  2.4 Tools used for automation
    2.4.1 Squish, Froglogic
    2.4.2 Jenkins
  2.5 Reference Data
  2.6 Interviews & workshops within SAAB
    2.6.1 Interviews regarding automated testing of PS05 & SAFT
    2.6.2 Interviews regarding GUI-testing (Squish)
    2.6.3 Workshop about C2 and The GlobalEye Program
3 Test Development
  3.1 System setup
  3.2 Regression test
    3.2.1 Choosing reference data for regression test
    3.2.2 Analysis of regression test data
  3.3 MATLAB development
  3.4 Squish script development
    3.4.1 Squish script for regression test
  3.5 Startup test
    3.5.1 Choosing reference data for startup test
    3.5.2 Analysis for the Startup test
  3.6 Standard tests
4 Results
  4.1 Setup
  4.2 Chain of events
  4.3 Presentation of Regression Test
  4.4 Presentation of Startup Test
  4.5 Internal Manuals
5 Discussion
  5.1 Why MATLAB?
  5.2 Choosing GUI testing tool
  5.3 Test Laboratory
  5.4 Testing at system level vs. subsystem testing
  5.5 Future Development
References


1 Introduction

1.1 Purpose

This master thesis investigates methods for creating automatic system tests on airborne radar. With continuous integration, new hardware and/or software updates are integrated frequently. The update frequency of each subsystem is largely independent and can vary over time: some subsystems deliver updates on a daily basis, while others deliver updates less frequently. Integrating new updates that are under development often results in interface errors or defects. With automated regression tests, the verifiers can easily find defects and return the faulty updates to the constructors for correction, thus increasing the robustness of the system. By collecting statistics from the different increments, it's easy to see how the system is performing and how it has evolved with the new updates. The purpose of implementing automatic regression tests is to make it easier to find errors and faulty software and/or hardware updates early, and thereby free up more time for the verifiers to do exploratory and experience-based testing as a complement to the scripted testing.

1.2 Background

Testing and verifying a product is a big part of ensuring its quality. By letting the verifiers complement requirement testing with exploratory and experience-based testing, errors and faulty behavior that are not easily discovered can be found. In large systems where continuous integration is used and subsystems can be delivered as often as daily, just getting the system running is very time consuming. Automated regression tests can help decrease that over time.

1.3 Delimitations

In order to keep the project at an acceptable magnitude, it focused on two different types of tests. The first test focused on helping the verifiers find critical errors quickly when new hardware/software updates are implemented. The second test focused on the functionality of the RR-LAN and C2-LAN, which carry the communication between the SDU (Signal Data Unit), described in Section 2.1.1, the MCU (Mission Computer Unit), described in Section 2.1.2, and the C2 (Command and Control).


Figure 1.1: System overview

The project was implemented in one of the test laboratories where the system isn't complete, due to high occupancy and full workload in the fully equipped lab. This means that not all functionality that one would like to test can be tested. The project goal was to include as many test features as possible and make them as suitable as possible for the complete system. There is a future plan to extend the project test lab to a complete system lab, and the project aims to be ready for that.

Since the tests won’t include the actual DOU, it relies heavily on the radarenvironmental simulator working properly, and the tests are limited to theradar environmental simulators capability. This project will be continuedafter this master thesis is concluded, including more parts of the system inthe automated tests.

1.4 Methodology

This project was divided into three phases:

• Pre-study

• Test development

• Test verification


The pre-study focused on getting to know the system: learning its architecture, limitations and basic functionality and, most importantly, deciding which part of the system the project should focus on. Interviews were conducted during this phase of the project, with the purpose of learning from experienced people how tests are designed and implemented on the subsystems. The pre-study also contained research on commercial tools used for automation, looking into GUI-testing tools and other software that could be used in this project, see Section 2.4.

The next phase, test development, contained the development of the test environment, including both software and setting up the hardware (i.e. server & connections). This part focused on designing the tests, gathering reference data, and designing the presentation of the test results.

The last phase, test verification, was where the designed test cases were implemented in the system and verified.

1.5 Automated regression test - The main idea

The idea, as mentioned in Section 1.1, was to increase the robustness of the system. Automated regression tests are widely used in software development. They usually consist of nightly builds and nightly tests to ensure that the work done the day before works properly. The tests can cover anything from being able to compile the software, to basic functionality, to advanced functionality. Having this done automatically saves time for everyone and, if designed correctly, produces a secure and stable procedure.


2 Pre-study

This chapter includes the findings from the pre-study. During this part of the project, time was spent on learning the system in question. In order to gain knowledge about the system and how automated testing is used on the subsystems, interviews were held within SAAB. The pre-study included visits to SAAB in Linköping and Järfälla, where automated testing is used on a daily basis. The pre-study also investigated which kinds of tests were relevant, how the different parts should be tested, how test results should best be presented and which tools could be helpful.

2.1 System under test

To truly ensure territorial integrity and security in today's complex environments, airborne surveillance is crucial. So too is extended range coupled with the ability to detect low-observable air, sea and ground objects. With the new GlobalEye AEW&C solution, you get it all. [6]

The system under test is an advanced system. The purpose is to test and verify the system before it's finalized and merged together in the airplane. The final solution includes many different sensors, two radar systems, self-defense systems and advanced communication systems, as described in [6].

Figure 2.1: The final system including all sensors described in [6]


The system under test includes radar systems and sensors such as:

• Erieye ER Radar - Extended range radar for airborne surveillance

• Maritime Radar - Sea target detection

• IFF - Airborne identification sensor [10]

• AIS - Maritime identification sensor [8]

• ADS-B - Positioning sensor [7]

• ESM/ELINT sensor - Signal intelligence [9]

• EO/IR sensor - Electro optical camera

• SATCOM/Datalinks - Communication system

All of these sensors work together to produce an airborne early warning surveillance solution.

2.1.1 Signal Data Unit

The SDU (Signal Data Unit) is in charge of the signal processing of the incoming video data from the EXR. The SDU translates raw video data (frequency data) into data describing the real-world environment using the Fourier transform. The SDU merges tracks from different sources (i.e. doppler tracks, IFF tracks, maritime tracks etc.) to produce a stable and precise output. The same is done with sea targets, using video data and AIS data. The in-house developed process is called TDFE (Track Data Fusion Engine) and is explained briefly in Figure 2.2.


Figure 2.2: Track Data Fusion Engine used in the SDU (Section 2.1.1)

2.1.2 Mission Computer Unit

The MCU (Mission Computer Unit) is the most centralized part of the system: it's where all the sensors connect. The MCU controls all the sensors and decides what is presented to the operator in C2.

2.1.3 NERES

NERES (New Erieye Recording and Evaluation System) is a recording device. It is used to capture and monitor all computer traffic sent in the system, to ensure that the connections are stable. In the integration and verification phase it serves as a way to verify that data is sent correctly. The possibility to record traffic enables verification to be done after the tests. It also enables several requirements to be verified within the same test, using one recording of data. NERES was used in this project.

2.1.4 Reafil

Reafil is an in-house developed application that controls and interprets data from NERES. Using Reafil, the operator can control NERES to start & stop the recording of data. Reafil is also used to decide where to store the recorded data and which interfaces are to be recorded. Reafil translates the data using the corresponding signal table for each specific interface, and it offers the possibility of real-time analysis of the incoming data. Reafil was used in this project.

2.1.5 Command & Control

Command & Control, also referred to as C2, is a common term within the defense industry. It describes the presentation of information to the mission operators, ground control and mission command. In this project it refers to the graphical interface where the operator controls the different sensors and radar systems. It's through C2 that the operator tells the radar system and the other sensors where to focus their energy and which objects to prioritize.

2.2 Simulators

This section describes the different simulators used in the test environment.

2.2.1 Sysim

Sysim is a scenario generator in which the user can design their own environment. Sysim can handle all types of objects the radar is designed to find, including air targets, sea targets, landmarks etc. The user can specify which kinds of air targets are to be used in the scenario, e.g. commercial, fighter or private aircraft, of all sizes and models. The same setup can be used with sea targets. In Sysim the user can also specify IFF codes and which IFF modes are to be active in each air target. With sea targets the user can specify an MMSI identity for each sea target, or generate one that fulfills the user's requirements. MMSI identities are fetched from a database containing real MMSI identities with real data on the ships, e.g. weight, height, nationality etc.

Sysim sends instructions to the RES, which in turn transforms the scenario into raw video data.

2.2.2 Radar Environment Simulator

The Radar Environment Simulator (referred to as RES in this project), which is a part of Sysim, replaces the EXR and creates raw video data (i.e. frequency data) that describes a scenario created by the user.


2.3 Interfaces

The interfaces mainly consist of Ethernet network traffic. MIL-STD-1553 [11] is used to distribute navigational data within the system; it is a military-grade serial data bus widely used in aircraft. NERES, as described in Section 2.1.3, is connected in parallel to every subsystem, listening in on all traffic sent over the buses.

2.4 Tools used for automation

Automation tools are widely used in testing. Their general purpose is to save time, eliminate the monotonous part of testing and increase the robustness of the test object. Human error is found in everything and is impossible to rule out; automating the process decreases the possibility of human error influencing the test process. There are many different automation tools, specializing in different kinds of tests.

2.4.1 Squish, Froglogic

Squish [1] is a tool for automated testing of GUI applications, developed by Froglogic. It offers extensive possibilities for automated GUI testing. The record-and-playback feature, together with its verification feature, allows users to easily create test scripts with built-in verification points. The verification feature supports several kinds of checks, including data verification, image comparison, object properties and so forth. Verification points are easily created within the record-and-playback feature of the program. Squish generates scripts from the record-and-playback feature in most well-known scripting languages (e.g. Python, JavaScript, Perl, Ruby), which makes it easy to change existing scripts and/or implement your own, for example to add loops, cases or data-driven tests. In this project, Squish was used to control applications that have no other point of entry besides the GUI. Squish simulates an operator using the mouse and keyboard to control an application.
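As an illustration, the minimal sketch below shows what a recorded Squish test script with one verification point can look like in Python. It runs inside Squish's script runner, which provides startApplication, waitForObject, clickButton and the test module implicitly; the application name and object names are hypothetical placeholders, not taken from this project.

# Minimal Squish-style GUI test sketch (Python, runs inside Squish).
# "exampleApp" and the object names below are invented for illustration.
def main():
    # Launch the application under test via the Squish server.
    startApplication("exampleApp")
    # Simulate an operator pressing a button in the GUI.
    clickButton(waitForObject(":MainWindow.Start_QPushButton"))
    # Verification point: compare a GUI property against an expected value.
    statusLabel = waitForObject(":MainWindow.Status_QLabel")
    test.compare(statusLabel.text, "Running",
                 "Status label should report 'Running' after start")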

Squish offers a server feature, called SquishServer, allowing the user to record and execute Squish scripts over a local network. With this feature it's easy to set up a local server on the network and run tests without having physical access to the system under test. Squish does, however, need to be installed on the same nodes as the application under test, in order for SquishRunner to control the application. Squish also allows for easy Jenkins integration.

2.4.2 Jenkins

Jenkins [3] is an open source automation server. It serves as a top layer in many continuous integration projects, where it can for example act as a version handler and/or test suite handler. It's well suited for the role of an "administrator", telling other programs what to do and when to do it. It also works as a presenter, showing the user some of the results from the test suite. Jenkins offers a variety of plugins, for example a Squish plugin which allows Jenkins to run Squish tests on command.

In this project, Jenkins presented the results from performing the tests, i.e. the results gathered from the execution of the Squish scripts.

2.5 Reference Data

Reference data is crucial when testing anything: without a reference it's impossible to separate a successful test from a failed one. The reference data is used to compare the test output to a known output, the reference. When testing a product like a radar, it can be difficult to set up a real deterministic environment to act as a reference. In this project this was solved by simulating the radar, using SAAB's in-house simulators RES and Sysim. The benefit of using a simulated environment is that the user can design any scenario within the ability of the simulator. The downside is that the user is limited by the simulator, i.e. it's impossible to test anything outside the constraints of the simulator. The simulators in this project suffice, since the purpose is to test neither the radar nor the simulator, but the communication between the SDU, described in Section 2.1.1, and the MCU, described in Section 2.1.2. This method does rely heavily on the simulator working properly.
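In this setup, comparing against reference data largely reduces to comparing per-message statistics from the current recording with those of a stored reference run. The thesis performs this comparison in MATLAB; the short Python sketch below only illustrates the principle, and the message names and the 10% tolerance are invented examples.

# Sketch: compare per-message counts of a test run against a reference.
# current/reference map message name -> message count; values are invented.
def compare_to_reference(current, reference, tolerance=0.10):
    verdicts = {}
    for name, ref_count in reference.items():
        count = current.get(name, 0)
        # Relative deviation from the reference count (1.0 if reference is 0).
        deviation = abs(count - ref_count) / ref_count if ref_count else 1.0
        verdicts[name] = "Pass" if deviation <= tolerance else "Fail"
    return verdicts

print(compare_to_reference({"MSG_A": 118, "MSG_B": 0},
                           {"MSG_A": 120, "MSG_B": 37}))
# -> {'MSG_A': 'Pass', 'MSG_B': 'Fail'}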

2.6 Interviews & workshops within SAAB

These interviews were held in an informal manner and were not recorded; therefore no exact quotes can be given.


2.6.1 Interviews regarding automated testing of PS05 & SAFT

An interview was held with two people working with the subsystem PS05, the radar positioned in the front of the fighter aircraft JAS Gripen [2]. The topic of the interview was automatic tests: how they used automatic tests in their work, which tools they used, what they tested and how often the tests were executed.

Automated tests were widely used in their work when developing new software; along with nightly builds and continuous integration, they are necessary to ensure robust and working software. In their work they used a testing framework called FitNesse with an in-house developed plugin called SAFT (SAAB Adaptation of FitNesse Testing). One of them specialized in requirements, telling the other what to test and what qualified as a successful test. The other specialized in test design, designing the tests using the FitNesse framework, SAFT and Java code.

The setup of tests was also discussed. Most tests can be divided into two separate stages. The first stage, data gathering, is where data is gathered from an environment, e.g. a simulator. The second stage, data analysis, is where the data gathered in stage one is analyzed. This structure fits testing where a simulator is used to set up the testing environment very well. Usually it's the first stage, gathering data, that requires the most time, while the analysis is often done very quickly. This structure allows many different tests to be performed on the same set of data, running different analyses on the same data set, thus making testing more efficient.

Regarding test frequency, they said that it varies from test to test. They mostly have regression tests running every night to ensure that the latest build runs properly. These tests are updated when new functionality is implemented in the builds.

2.6.2 Interviews regarding GUI-testing (Squish)

An interview was held with two people working with the C2 application, described in Section 2.1.5. The C2 application is a graphical HMI where the operator controls the system by selecting which sensors should be active and what data to analyze, and also interprets the presented results. Testing a GUI application differs from testing non-interactive applications: the verification process is more complex and often hard to automate. The topic of the interview was how to do automated tests on the C2 application using the automation tool Squish, described in Section 2.4.1. A presentation and a short demo of how Squish can be used for automated tests on a GUI application were given.

A workshop was held at SAAB in Järfälla with focus on how to use Squish. At SAAB in Järfälla, Squish was used to test communication on the interface between the system and the customer's data handling center. During this meeting, tutorials on Squish were held, showing how the Squish features could be used: the concepts of verification points and record-and-playback, how to set up a test suite and how to use Squish on different applications in the same test suite. This workshop was also an opportunity to see whether Squish fulfilled all requirements for the project.

2.6.3 Workshop about C2 and The GlobalEye Program

The project members were invited to a two-day workshop intended to give a better understanding of the capability and possibilities the new GlobalEye [6] program will provide. This workshop is held now and then for SAAB employees to increase knowledge of the product.

The first day consisted of lectures describing the system in detail; each sensor and its purpose was described. The second day consisted of a workshop with the purpose of teaching how to operate the C2 application, described in Section 2.1.5.


3 Test Development

This part of the report describes the setup of the test environment, followed by the development and integration of the two tests. It also describes how the MATLAB development took shape and the thought process behind the GUI development and the presentation of the results.

3.1 System setup

A computer was acquired to act as a server. The "test server" is the master node in this setup: it's where all the evaluation is done, where the Squish scripts are written and executed, and where Jenkins is hosted. The server is set up as a standard Windows 7 computer. The choice of Windows 7 was not ours to make; the computer had recently served as a simulator and was already configured and approved for use on the security-classified network. The master node (called "Test Server" in Figure 3.1) communicates with all the necessary components through two different APIs using a local network. It uses SquishServer to communicate with SquishRunner, which controls the C2 application, see Section 2.1.5. SquishRunner was installed on two computers, Sysim and the C2 operator station. To communicate with Reafil, which in turn communicates with NERES, an API to Reafil (Reafil Controller) was developed by an in-house employee.


Figure 3.1: Block diagram of the system setup including test setup

Due to security reasons, the entire connection chart can't be shown in this report.

3.2 Regression test

This test mostly focuses on the basic functionality that shouldn't be affected when new updates arrive. The main idea is to play the same scenario in Sysim every time and, whenever updates arrive, evaluate the data (sent over the C2- and RR-LAN, see Figure 3.1) recorded with NERES, see Section 2.1.3. Since the scenario is the same for every test, the recording should look the same as well, assuming the new updates work properly.

3.2.1 Choosing reference data for regression test

Choosing reference data is very important when testing, as discussed in Section 2.5. The regression test is supposed to test the basic functionality of the system when new updates arrive. By using an environment simulator, described in Section 2.2, the user can control the environment by setting up scenarios. Choosing what the scenario includes is crucial. To cover the basics, the scenario used in this project contains:

• One fighter plane without IFF active

• One commercial airliner with IFF active on 3 modes

• One small fishing boat without ADS-B data

• One freighter with a 60 m² radar cross-section and ADS-B data active

ADS-B isn’t implemented in the project test laboratory, but should nonethe-less be included in the test to provide the possibility to test it once the labhas been upgraded as mentioned in Section 1.3.

The complexity of the scenario increases as the number of targets increases. Since this project doesn't focus on performance tests, it's unnecessary to use a complex scenario. The only difference it would make is that the files from NERES would contain more of the same data and grow in size, making the analysis slower.

3.2.2 Analysis of regression test data

As mentioned in Section 3.2, the regression test covers basic functionality of the system. It focuses on ensuring that communication is established and valid. In other words, the test verifies that data is sent over the selected channels, that the number of messages is correct and, in some cases, that the correct data is sent. The cases where the test looks at the contents of the messages can be seen in the list below. These tests were selected because they are fundamental in making the system robust and stable.

• Continuous Heart Beat (C2 & RR)

• System Time test (C2 & RR)

• IFF Mode test

• Frequency Band test

• Calibration test


The radar and the C2 each have their own Heart Beat message, which is transmitted to all receivers at a given frequency. This communication function works as an underlying guarantee that communication exists, but ignores whether the content of the communication is correct or not. The Heart Beat test gives the result "Pass" if the communication exists during the whole test scenario; if the communication is lost, the result is "Fail".

It’s important that the subsystems have synchronized clocks when send-ing data between each other. Clock synchronization of the subsystems ismade in the startup of the whole system. It’s also important to make surethe synchronization remains synchronized after the startup. The SystemTime test makes sure that the synchronization never loses connection be-tween the RR-LAN and C2-LAN during the Regression test. Results arepresented as either "Pass" or "Fail".

The IFF Mode test is designed to make sure that C2 sends the correct commands to the MCU, see Section 2.1.2, in order to prepare it for incoming data on the specified IFF mode. This test is done by selecting each IFF mode one by one and verifying that the information C2 sends to the MCU is correct. Results are presented as either "Pass" or "Fail".

The Frequency Band test is designed to confirm that the C2 application sends the correct commands to the MCU in order to change the frequency band of the radar. This test is done by selecting the frequency bands one by one and verifying that the messages arriving at the MCU correspond to the selection made in C2. Results are presented as either "Pass" or "Fail".

The Calibration test is designed to confirm that the MCU sends messages to the C2 when a calibration is done. This happens automatically when the aircraft turns with enough tilt. Results are presented as either "Pass" or "Fail".
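To make the pass/fail logic concrete: the project implements these checks in MATLAB (Section 3.3), but the short Python sketch below illustrates the kind of analysis involved for the Heart Beat test, using an assumed list of (timestamp, message name) records decoded from a NERES recording. The message name, period and slack values are hypothetical.

# Sketch of a Heart Beat-style check on a decoded recording (the thesis
# does this in MATLAB). `records` is an assumed list of
# (timestamp_seconds, message_name) tuples; names/numbers are invented.
def heart_beat_test(records, name="HEART_BEAT", period=1.0, slack=0.5):
    # Pass if the named message is present throughout the recording
    # with no gap larger than the expected period plus some slack.
    times = sorted(t for t, msg in records if msg == name)
    if len(times) < 2:
        return "Fail"
    gaps = (b - a for a, b in zip(times, times[1:]))
    return "Pass" if all(g <= period + slack for g in gaps) else "Fail"

if __name__ == "__main__":
    demo = [(t * 1.0, "HEART_BEAT") for t in range(10)]
    print(heart_beat_test(demo))  # -> Pass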

3.3 MATLAB development

MATLAB [4] was used as the scripting language to analyze the recorded data coming from NERES, see Section 2.1.3. This project uses the built-in GUI functions in MATLAB to present the results.

The analysis method is designed to be as generic as possible, allowing changes in the system without changing the test case. The analysis script covers any output from NERES as long as it's translated according to the corresponding signal table, which Reafil, see Section 2.1.4, takes care of.

MATLAB development started with the design of the GUI. When designing the GUI, a lot of time was spent choosing what to highlight in the presentation. One of the most important pillars in testing is observability: the user should be able to find exactly where the test fails, if it fails. However, when the outcome is positive it doesn't necessarily have to show the exact results, assuming the test is properly written. Filtering abilities were desired as well, allowing the user to filter the visible data with two constraints, Max Difference and Data Type, to more easily find errors and differences between the test outputs. An idea for the GUI structure was drawn, see Figure 3.2.

Figure 3.2: Idea of structure for the GUI

The MATLAB script offers the possibility to store statistics from the tests. This enables the user to compare different test runs and see changes in the system. If the test results are negative, or the test is interrupted mid-run, the user can choose not to store the test results in the statistics.


The statistics were first stored in a single text file, separating each analysis with a new line. The file quickly became large and complex as more tests were made. Storing the statistics this way made importing the file require a lot of CPU power, because each value had to be matched to its corresponding message, and this had to be done for every analysis stored in the file. The required computational power grew rapidly with each test stored. This was solved by storing the statistics as a table in a text file, with each message having its own row and each analysis having its own set of columns with data. This solution sped up the importing process significantly and also made it easier to analyze the files in Excel or similar programs.
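The table layout can be sketched as follows. The project stores this from MATLAB; the Python sketch below shows the same idea of one row per message and one appended column per run, written as a tab-separated text file that spreadsheet tools can open. The file name and message names are hypothetical.

# Sketch of the row-per-message statistics table (the thesis used MATLAB).
# Each message gets one row; each analysis run appends a column, so
# importing no longer requires re-matching every stored value.
import csv

def append_run(path, run_label, counts):
    # counts: dict mapping message name -> number of messages this run.
    try:
        with open(path, newline="") as f:
            rows = {r[0]: r[1:] for r in csv.reader(f, delimiter="\t")}
    except FileNotFoundError:
        rows = {}
    if "message" not in rows:
        rows = {"message": []}
    rows["message"].append(run_label)
    for name, count in counts.items():
        # New messages get zero-padded columns for the older runs.
        row = rows.setdefault(name, ["0"] * (len(rows["message"]) - 1))
        row.append(str(count))
    for name, row in rows.items():
        # Messages absent from this run get a zero column as well.
        if name != "message" and len(row) < len(rows["message"]):
            row.append("0")
    with open(path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["message"] + rows["message"])
        for name, vals in rows.items():
            if name != "message":
                writer.writerow([name] + vals)

append_run("stats.txt", "run_2018_01_30", {"MSG_A": 120, "MSG_B": 37})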

In order to allow Jenkins to open and run the MATLAB scripts, batch scripts had to be written: two simple scripts, one for the regression test and one for the startup test. Each script runs its corresponding test script and changes the current working folder to the given folder. The chosen folder should be the output folder from Reafil, i.e. the recorded data on which the analysis should be made.

3.4 Squish script development

The Squish scripts for these tests were written in Python [5]. Even though most of the scripting was generated by recording mouse and keyboard actions, the scripts generally needed adjustments and fixes. The standard library of Python includes almost everything needed to write powerful and effective scripts with high readability.

3.4.1 Squish script for regression test

The regression test starts by attaching to C2 with the intention of clearing old settings, so that each test starts from some form of "default" setting. This includes shutting down the radar and IFF, removing all current search areas and deleting all manual and PSR tracks (both airborne and maritime targets). When this is done, Squish attaches to Sysim to start the recording, and then attaches back to C2. Squish activates two PSR search areas over the location of the tracks placed in Sysim: one Primary Air Surveillance area to search for the airborne targets and one Sea Target Area to search for the maritime targets. The next step is to step through the frequency bands, having each one active for a small period of time while the others are inactive. The next step is to activate the IFF and have each IFF mode active for a small period of time while the rest of the IFF modes are inactive.

Once the test is done, a final Squish script is executed. The final script clears the C2 of all the variables and search areas needed for the test, restoring C2 to the state it had before the test and allowing normal activity to continue without the test settings interfering.
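A hypothetical outline of this flow in Squish's Python dialect is sketched below. attachToApplication, waitForObject, clickButton and snooze are standard Squish functions; the application names, object names and helper functions stand in for recorded click sequences and are not the project's actual scripts.

# Hypothetical skeleton of the regression-test Squish script (Python).
# Runs inside Squish; all application and object names are invented.
FREQUENCY_BANDS = ["A", "B", "C"]       # placeholder band identifiers
IFF_MODES = ["1", "2", "3", "C", "S"]   # placeholder mode identifiers

def select_band(band):
    # One recorded interaction, reduced to a single hypothetical object.
    clickButton(waitForObject(":RadarControl.Band_%s_QToolButton" % band))

def select_iff_mode(mode):
    clickButton(waitForObject(":IFFControl.Mode_%s_QToolButton" % mode))

def main():
    attachToApplication("c2")
    # ... recorded clean-up clicks: radar/IFF off, areas and tracks removed ...
    attachToApplication("sysim")
    # ... recorded clicks that start the NERES recording via Sysim ...
    attachToApplication("c2")
    # ... recorded clicks that activate one air and one sea search area ...
    for band in FREQUENCY_BANDS:        # sweep frequency bands one at a time
        select_band(band)
        snooze(30)                      # keep each band active briefly
    for mode in IFF_MODES:              # then sweep the IFF modes
        select_iff_mode(mode)
        snooze(30)
    # ... final recorded clicks restore C2 to its pre-test state ...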

3.5 Startup test

This test focuses on the startup of the system. The idea is to use NERES, see Section 2.1.3, to record the communication between subsystems during system startup. When the system is in startup mode, several startup sequences are performed to establish the communication between subsystems; there are also startup sequences that contain setups for different sensors. A startup sequence usually consists of request and response messages between two subsystems. By recording the startup sequence and verifying that all messages are sent and received on time, errors can be found if they occur during this process. The purpose of the test is to make it easier to find critical errors related to the startup of the system.
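The request/response pattern suggests a simple generic check: for every request in the expected sequence, a matching response must appear within a timeout. The project performs this analysis in MATLAB against classified sequence documents; the Python sketch below only illustrates the principle, with invented message names and timeouts.

# Sketch of a startup-sequence check (the project's logic lives in MATLAB).
# `records` is an assumed list of (timestamp_seconds, message_name)
# tuples; the expected request/response pairs below are invented.
EXPECTED_SEQUENCE = [
    ("TIME_SYNC_REQUEST", "TIME_SYNC_RESPONSE", 2.0),
    ("NAV_SELECT_REQUEST", "NAV_SELECT_RESPONSE", 2.0),
]

def startup_test(records):
    results = {}
    for request, response, timeout in EXPECTED_SEQUENCE:
        req_times = [t for t, m in records if m == request]
        rsp_times = [t for t, m in records if m == response]
        # Pass if some response follows some request within the timeout.
        ok = any(0 <= rsp - req <= timeout
                 for req in req_times for rsp in rsp_times)
        results[request] = "Pass" if ok else "Fail"
    return results

print(startup_test([(0.0, "TIME_SYNC_REQUEST"), (0.4, "TIME_SYNC_RESPONSE"),
                    (0.5, "NAV_SELECT_REQUEST"), (1.1, "NAV_SELECT_RESPONSE")]))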

3.5.1 Choosing reference data for startup test

There were some question marks regarding whether NERES would be able to record without a stable system running: it was known that NERES couldn't record some of the connections unless the system was up and running. Tests were therefore first made to make sure that NERES was able to record both the C2-LAN and the RR-LAN from startup, with success.

Several startup recordings were made to get an overview of what was possible to test during the startup and what could go wrong in the startup sequences. Negative tests were recorded as well, where the start sequences were interrupted to ensure a negative result. These tests were made purely to see the results of a faulty startup.

The next step in choosing reference data was to look at the actual starting sequences for both the RR-LAN and the C2-LAN. These were found in internal, classified documents and can't be used as a reference in this report.


3.5.2 Analysis for the Startup test

The Startup test focuses on three different startup states in the system: RR Startup, PSR Startup and IFF Startup. The RR Startup consists of sequences to establish communication and system time synchronization between the Radar and the C2; in this state, the NAV system is also selected. All the sequences in the RR Startup state are fundamental for the system to work properly and have to be completed before the PSR Startup and IFF Startup states can begin. The PSR Startup focuses on the calibration of the DOU and gives information if any errors occur during this process. The IFF Startup consists of sequences to set up the IFF sensor correctly, enabling C2 to start operating the IFF function.

The list below shows the tests for the three different Startup states:

• RR Startup

– Continuous Heart Beat (C2 & Radar)

– Built-In-Test

– System Time Synchronization (C2 & Radar)

– NAV system selection

• PSR Startup

– Calibration permission

– Calibration

– TRM Changes

– PSR Start transmission

– PSR Block Init

• IFF Startup

– IFF Ready

– IFF has operative permission

– IFF Coverage

Below follow explanations of the tests. Radar refers to the radar system, which includes the MCU, the SDU and the EXR.


The Continuous Heart Beat test for the Startup test is the same as the one described for the Regression test in Section 3.2.2.

When the Heart Beat has been established between the C2 and the Radar, the Radar requests a BIT message from the C2, which shall reply with the correct BIT message to the Radar. The BIT message triggers an internal setup test in the Radar; exactly what these Built-In Tests do is classified. The Built-In-Test check ensures that the correct BIT messages are sent over the C2- and RR-LAN.

The System Time Synchronization test verifies that the system times of the C2 and the Radar are synchronized. When the system starts, each unit has its own system time. To synchronize, the Radar requests the system time from the C2, which replies with a message containing its system time. The Radar then corrects its own system time to match that of the C2. This is required for the system to be able to tag data with the correct system time; once it is done, data can be sent freely between the two units. The System Time Synchronization test shows either "Pass" or "Fail". Later, when the NAV system is selected, the Radar synchronizes with the UTC time in the NAV system and then changes the system time for both the C2 and the Radar.

The NAV system selection test verifies that the system has selected the correct NAV system. The Radar requests information from the C2 about which NAV system to select, and the C2 shall reply with the desired NAV system. When the Radar has established communication with the chosen NAV system, data containing NAV data is periodically sent from the Radar to the C2. This can only be done once the C2 and the Radar are synchronized and running on the same system time.

Before the PSR function can start, several conditions have to be fulfilled. First, the Radar indicates that the PSR function is ready; this happens when the RR Startup is finished. This gives the C2 permission to send several initial control messages to the Radar. The C2 can now instruct the Radar to begin a calibration of the DOU. The Calibration permission test confirms that the permission messages between the C2 and the Radar have been sent correctly, and the Calibration test confirms that a calibration has been done. The outcome of these tests shows either "Pass" or "Fail".


When the calibration of the DOU is done, the Radar sends a status update for the TRMs in the DOU. The TRM test verifies that no errors have been detected in the TRMs during the startup. After this, the PSR function can change state from startup to operative, and search areas can be deployed in the C2 application. The PSR Start Transmission test and the Block Init test verify that information about PSR tracks and areas is sent over the RR- and C2-LAN.

Before the IFF function can change state from startup to operative, several conditions have to be fulfilled. When the RR Startup has finished, the Radar indicates to the C2 that the IFF function is ready to be controlled. This means that the C2 can give instructions about which modes shall be activated for IFF identification. The IFF Ready test confirms that the IFF is ready to start, while the IFF has operative permission test confirms that the C2 gives correct mode instructions to the IFF function.

To start the IFF function, the C2 sends task instructions to the Radar, which responds with the performance and status of the commanded IFF tasks. The Coverage test verifies that the Radar has received the task instructions and replies with a status for each instruction.

3.6 Standard tests

A few test cases were designed for one of the coworkers, who specializes in C2 verification. These test cases were pure C2 tests (i.e. no data was recorded), and verification was done using Squish in C2. These tests were not part of the main project but were highly appreciated by the coworker. The tests were quite basic, but they gave the project better insight into and overall knowledge about Squish, and provided an opportunity to introduce Squish to the coworkers.

Performance tests were designed as well. These were based on the customer requirements and included creating X geo-points, linking Y manual tracks etc. These performance tests were pure C2 tests as well.


4 Results

This section covers the results of the project. It first describes the final system setup, followed by the chain of events and the presentation of the test results in MATLAB.

4.1 Setup

One major concern was how to set up a test system that wouldn't impact the system under test. This project wasn't primarily interested in CPU rates or performance tests, but the system had other users working on it at the same time; if the tests were to compromise the system's stability or performance, it would decrease the work capacity for all other users. The other major concern was how to connect everything, since there were several separate networks not supposed to interact with one another.

The two major concerns mentioned above were solved. Running the analysis on a stand-alone computer removed most of the CPU- and memory-dependent parts of the test from the system. SquishRunner was the only application that affected the system (during the regression test), and its CPU and memory usage was negligible. During the startup test the systems were completely separated. Reafil was also located on a stand-alone computer, separating it from the system under test.

4.2 Chain of events

The aim of the regression test was to cover as much basic functionality as possible in one recording. Collecting data from many different tests in one recording saved time and allowed for faster testing. This was achieved, and the solution allows any number of tests to be performed on one recording, as long as the tests are performed during the recording. Figure 4.1 describes the chain of events executed in the regression test. Jenkins acts as an administrator, telling Squish, Reafil and MATLAB what to do and when to do it. The arrows in Figures 4.1 and 4.2 represent the flow of commands, i.e. how communication is set up; an arrow pointing in one direction means that there's no expected acknowledgement going the other way from the receiving application.
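As an illustration of the administrator role, the sketch below shows roughly what a nightly Jenkins job's build step could execute: run the Squish test suite (which clears C2, starts the NERES recording and drives the scenario), then launch the MATLAB analysis on the recorded data. The host name, paths, suite name and exact command-line flags are assumptions for illustration, not the project's actual configuration; squishrunner and the MATLAB command line are real tools, but the flags should be checked against their documentation.

# Hypothetical orchestration of the nightly chain of events (Python).
# Jenkins would trigger something equivalent on a schedule; host, paths,
# suite names and flags below are illustrative assumptions.
import subprocess

def run(cmd):
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # fail the build on a non-zero exit

def nightly_regression():
    # 1. Execute the Squish suite against the C2 operator station.
    run(["squishrunner", "--host", "c2-station",
         "--testsuite", r"C:\tests\suite_regression"])
    # 2. Launch the MATLAB analysis on Reafil's output folder
    #    (in the project this step was wrapped in a small batch script).
    run(["matlab", "-r",
         "cd('D:/reafil_output'); regression_analysis; exit"])

if __name__ == "__main__":
    nightly_regression()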


Figure 4.1: Chain of events in the regression test

The chain of events described in Figure 4.2 is what the process will look like once the CMS system is installed in the test laboratory. At the time of this project there was no CMS system, so the shutdown and startup of the system had to be done manually; steps 2 and 4 in Figure 4.2 were not implemented in the current setup.


Figure 4.2: Chain of events in the start up test

4.3 Presentation of Regression Test

The MATLAB scripts are used to present the results from the tests. The GUI is designed to be as intuitive and easy to understand as possible. Figure 4.3 shows the overview of the regression test, which is also the start page of the script. The total number of messages, the bus load (in both percent and Mbit/s) and the average message length are displayed in the top left corner; this table gives an indication of how much data traffic was sent during the tests and how the system performed. The message table shows the test results and compares them to the reference data. Each message has three columns: its total number of messages, message frequency and message error count. The reference data makes it easy for the user to compare the test results to previous test runs. The reference data can be set using the button "Set Reference Data" located at the top of the screen; pressing the button sets the current test run as the reference. The button "Add to statistics" updates the statistics text file to include the current test run. The button "Write to file" writes the current test run to a text file, enabling the tester to store data from test runs and use it later on. All statistics files can be opened and reviewed in Excel.

All message names in the figures below have been censored for securityreasons.

Figure 4.3: Regression test summary page

The button "Filter", see Figure 4.4, enables filtering of the data set. The usercan choose to filter on any data type displayed (i.e number of messages,frequency or error count) to easily see differences between the current testrun and the reference data. The filtered results are displayed in the tableshown in Figure 4.3


Figure 4.4: Filtering inputs for the regression test

Figure 4.5 displays statistics from every saved run. Mean values, the current run and the reference are displayed next to each other, making it easy to see differences.

Figure 4.5: Regression test, statistics summary

The C2-LAN and the RR-LAN each have their own page (the RR-LAN page is displayed in Figure 4.6), where messages originating from the respective LAN are displayed along with their reference data and the differences in number of messages between the current run and the reference data. The filtering feature is implemented here as well. The second button, "Open *LAN* Stats", displays statistics from previously saved tests so that the user can compare the results. The statistics are displayed in the same window. Figure 4.7 shows how several test runs are displayed on the RR-LAN page; here it's easy to see patterns in the results between test runs. The C2-LAN has its corresponding page.

Figure 4.6: Regression test RR-LAN page


Figure 4.7: Statistics for RR-LAN page

The specific test cases have their own page, containing the results from the tests that can be presented as "Pass" or "Fail". The table displays the name of each test, its result, a small description of the test and the corresponding messages. The tests presented here are listed in Section 3.2.2.


Figure 4.8: Regression test, Specific test page

4.4 Presentation of Startup Test

The MATLAB scripts used to present the results of the Startup test include a GUI. The GUI is designed to give a good overview of the results. Figures 4.9 and 4.10 show the results of the Startup test. The idea of the GUI is to give the result "Pass" or "Fail" to the user; if the result is "Fail", the user is referred to analyze the corresponding messages for that specific test case in Reafil, described in Section 2.1.4.

Figure 4.9: Startup test, RR Startup page


Figure 4.10: Startup test, PSR Startup page

Figures 4.9 and 4.10 show the results for the RR and PSR startup sequences. The first column shows the name of each test, the second column shows the result of the test, which can be either "Pass" or "Fail", the third column gives a short description of the test and the fourth column shows which messages are involved in the test.

As described in Section 4.2, the chain of events for the Startup test was not completed during this project, but once the CMS system is installed the idea is to include this test in a nightly build. Executing the test repeatedly will help ensure that the startup of the system is stable.

4.5 Internal Manuals

Internal manuals for the SAAB personnel have been written. These include installation and user manuals for all necessary applications, including the developed MATLAB scripts, as well as connection charts covering the setup. The manuals cover everything from installation to creating specific test cases.


5 Discussion

Like in every project, problems arose on the road to success. The biggest problem in this project was understanding the system and realizing the possibilities. The complexity of the system under test (described in Section 2.1) is immense, and even learning the basics is a challenging task. With a very limited time frame, areas of focus had to be selected. The project was mostly a proof of concept, which made selecting areas of focus difficult: the supervisor didn't know what was possible, so much of the time during the pre-study had to be spent getting to know the system and finding out what kinds of tests were possible.

5.1 Why MATLAB?

MATLAB was chosen to handle the evaluation part of the testing process. Why was MATLAB chosen over Python or Java? At the start of the project there was an idea of including raw video data (i.e. frequency data) in the tests, for which MATLAB was well suited, and the evaluation design started on that basis. After a while it became clear that analyzing the video data would take up a large amount of time that would be better spent on other things. By then the GUI and most of the analysis design were done, and there wasn't enough time to rewrite them in another language. Had this been known at the beginning of the project, Python would probably have been more suited for the evaluation part. Python or Java would probably also be more applicable than MATLAB for further development; although MATLAB does allow further development, Python or Java would be more versatile, since Python has fewer limitations in its programming structure than MATLAB.

Python would’ve made the project more coherent, since all the Squish scriptswere written in Python. Its light weight and the fact that there’s no need topre-compile Python scripts would have made it more suitable for changes.Java would have made it possible to control the GUI-part of the analysisprogram with Squish, which could become useful when further developedand more features have been added.


5.2 Choosing GUI testing tool

During the pre-study, see Chapter 2, many different GUI-testing tools were researched. The desired features and requirements for the GUI-testing tool took some time to figure out. The research started by looking into what was available rather than what was needed, which was a mistake; time could have been saved by establishing the requirements first. In our defense, the project members had no experience with the system or with GUI testing at the start, and knowing what was possible made it easier to know what was desired.

Among the researched tools were:

• Squish

• EggPlant

• SilkTest

• Maveryx

• Cucumber

• Ranorex

• Rapise

Each of these was evaluated based on features, compatibility, learning curve and price. Not all of the tools in the list above had the features required for the project, and those could be dismissed early in the research. The others were researched further, and in the end Squish was chosen, because of its features, its easy integration with Jenkins and because it was already used within the SAAB group. The batch-testing feature in Squish, which allows tests to be executed on several nodes at the same time, made it fit the project well. The fact that it was already used within the SAAB group opened up the possibility to join a workshop about Squish at another SAAB site (the workshop is described in Section 2.6.2). Squish also offered easy Jenkins integration and low CPU and memory requirements, and was easy to use. The downside of Squish was that it was expensive relative to the other tools.


5.3 Test Laboratory

The laboratory where the tests were conducted and the test environment was implemented was shared with other project groups. The test laboratory wasn't equipped with the complete system, which limited the project; the lack of a proper CMS system (Control & Monitoring System) to control the system power supply complicated the startup test process. Once the test laboratory is complete with all the subsystems, the tests can be expanded to include the added subsystems.

During the late part of the project there were ongoing acceptance tests in the laboratory, which meant limited access to the system and the recording devices.

5.4 Testing at system level vs. subsystem testing

Each subsystem is extensively tested before it's integrated into the system. What separates this project from the tests already conducted on each subsystem? The tests developed in this project test the chain of information sent between the subsystems, not whether each subsystem performs the called function correctly. Some function outputs are tested, as described in Sections 3.2.2 and 3.5.2, but that is not the main focus of the project. Simulating an operator using Squish makes it possible to test the system from a real-life point of view.

5.5 Future Development

This project is in some form a proof of concept. The regression test shows that simple tests combined with nightly builds can easily find malfunctioning software and hardware updates. The startup test shows that it helps troubleshooting and that it can provide a more robust system. Running the startup test repeatedly during nightly hours can help find unusual bugs and errors in the system.

Future development should include more subsystems and interfaces. This project has been limited to the RR-LAN, the C2-LAN and the simulators. Including raw video data from the DOU in the tests is a possibility, but a tricky one that requires some thought before implementation: it's hard to get a reference without simulating the radar input (i.e. using RES, see Section 2.2.2). One way to solve this could be to place some form of transponder, with a GPS receiver, at a given location and test the raw video data that way, making sure the transponder is located at the right coordinates in the raw video. Another possibility is to look at a calibration of the DOU when one is performed. This is implemented at a higher level in the project, meaning it looks for the correct communication but doesn't look at the actual calibration.

As mentioned in Section 5.1, one idea is to rewrite the analysis in either Python or Java. It's not necessary, but it would make the analysis more effective and easier for future developers to work with.

The best solution would be to implement the entire analysis part in Reafil, see Section 2.1.4. Reafil already controls NERES, features a variety of real-time analyses and is able to handle all the data being recorded. Implementing the analysis directly in Reafil would eliminate the need for third-party software to analyze the data, since NERES is already a part of the system.


References

[1] Froglogic. Squish. https://www.froglogic.com/squish/. 2017.

[2] Försvarsmakten. JAS Gripen C/D. http://www.forsvarsmakten.se/sv/information-och-fakta/materiel-och-teknik/luft/jas-39-gripen-cd/. 2017.

[3] Jenkins. Jenkins. https://jenkins.io/index.html. 2017.

[4] MathWorks. MATLAB. https://www.mathworks.com/products/matlab.html. 2017.

[5] Python. Python. https://www.python.org/. 2017.

[6] SAAB. Airborne Surveillance Solutions - GlobalEye. http://saab.com/air/airborne-solutions/airborne-surveillance/globaleye/. 2017.

[7] Wikipedia. Automatic Dependent Surveillance - Broadcast. https://en.wikipedia.org/wiki/Automatic_dependent_surveillance_-_broadcast. 2017.

[8] Wikipedia. Automatic Identification System. https://sv.wikipedia.org/wiki/Automatic_Identification_System. 2017.

[9] Wikipedia. Electronic Intelligence. https://en.wikipedia.org/wiki/Signals_intelligence. 2017.

[10] Wikipedia. Identification, Friend or Foe. https://en.wikipedia.org/wiki/Identification_friend_or_foe. 2017.

[11] Wikipedia. MIL-STD-1553. https://en.wikipedia.org/wiki/MIL-STD-1553. 2017.