

UPTEC F 19028

Degree project (Examensarbete) 30 credits, June 2019

Placement support for signal intelligence units

Olle Frisberg


Abstract

Placement support for signal intelligence units

Olle Frisberg

The goal of this thesis was to develop an optimization model that automatically finds optimal groupings for signal intelligence units, aiming to maximize surveillance capability in a user-defined target area. Consideration was taken to transportation possibilities, type of terrain, and the requirement of radio communication between the direction finders. Three scenarios were tested, each providing its own topographical challenges. Several derivative-free optimization methods were implemented and evaluated, including global methods that find approximate groupings using a geometrical model developed in this work, and the local method pattern search, which used the already existing wave propagation model. Particle swarm and a genetic algorithm turned out to be the best global solvers. A grouping found by a global method was then improved by pattern search, which evaluates possible groupings nearby. The greatest practical challenge for particle swarm and pattern search was finding feasible placement points given a desired direction and step length; workarounds were developed, allowing for more dynamic search patterns. For future use, the placement support should be tested on more scenarios with different prerequisites, and the approved terrain types have to be adjusted according to the kind of vehicle carrying the direction finder.

ISSN: 1401-5757, UPTEC F 19028
Examiner: Tomas Nyberg
Subject reviewer: Di Yuan
Supervisors: Petter Bivall and Leif Festin


Popular science summary (Populärvetenskaplig sammanfattning)

When signal intelligence units are deployed to listen for radio traffic or radar signals, in order to position a transmitter in a given target area, the deployment involves a great deal of manual work based on an experience-driven approach. Consideration must be given to whether the deployment sites can be reached by transport; typically two to four units are used. The terrain must also permit deployment; a car, for example, cannot be placed in water.

The aim of this work was to create a placement support tool for the Swedish Defence Research Agency (FOI) that automatically places the signal intelligence units so that the transmitter being monitored can be positioned within as large an area as possible. Several different optimization methods have been developed and investigated: global search methods that find good groupings of signal intelligence units from a geometrical perspective, and a local search method that starts from a geometrically good grouping and then tries to improve the solution by evaluating possible nearby points with a real wave propagation model.

In the three test scenarios that were used, the fast geometrical model that was developed turned out to agree relatively well with the real wave propagation model. The parameters of the methods were optimized so that the results improved and the running time decreased. For future work, the methods and the geometrical model should be tested on more complex scenarios, and the terrain needs to be classified as usable or not based on the type of vehicle the sensors are mounted on.


Contents

1 Introduction
  1.1 Background
  1.2 Requirements
  1.3 About the project
  1.4 Determining a position

2 Problem description
  2.1 Optimization problem formulation
  2.2 Size of solution space
  2.3 Computing a result
  2.4 Test scenarios
  2.5 Experience-based techniques

3 Theory
  3.1 Derivative-free optimization
    3.1.1 Generalized pattern search
    3.1.2 Random search
    3.1.3 Particle swarm
    3.1.4 Genetic algorithm
  3.2 Surrogate model
  3.3 Flood fill

4 Method
  4.1 Flood fill for generating the placement grid
  4.2 Finding feasible points
    4.2.1 Square neighborhood
    4.2.2 Choosing the highest feasible point
  4.3 Initial positions
    4.3.1 Particle swarm implementation
    4.3.2 Surrogate model implementation
  4.4 Recursive random search implementation
  4.5 Pattern search implementation
    4.5.1 Pattern search with constant recursive square size
    4.5.2 Pattern search with dynamic square size
  4.6 Genetic algorithm implementation
  4.7 Combined solvers

5 Results
  5.1 Feasible placement points
  5.2 Surrogate model
  5.3 Recursive random search
  5.4 Particle swarm
  5.5 Genetic algorithm
  5.6 Initial positions
  5.7 Pattern search
    5.7.1 Different methods of choosing a feasible point
    5.7.2 Opportunistic run
  5.8 Combined solvers
  5.9 Global optimum in smaller scenario

6 Analysis
  6.1 Feasible placement points
  6.2 Surrogate model versus real model
  6.3 Optimization methods
    6.3.1 Particle swarm
    6.3.2 Recursive random search and genetic algorithm
    6.3.3 Function calls per iteration for RRS, PSO and GA
    6.3.4 Pattern search
    6.3.5 Combined solvers
  6.4 Global optimum
  6.5 Future work

7 Conclusions

Appendices

A Result tables
  A.1 Data for initial positions
  A.2 Data for selection types
  A.3 Data for polling types
  A.4 Data for combined solvers


Terminology

EW - Electronic warfare

EA - Electronic attack

EP - Electronic protection

ES - Electronic support

C2 - Command and Control

SIGINT - Signal intelligence

DF - Direction finder

FU - Fusion unit

Skip zone - Region where a transmission cannot be received.

Direction of Arrival (DOA) - A radio or radar signal's direction of origin, also called Angle of Arrival (AOA).

Jammer - Signal blocking device

Brute force search - Evaluation of all possible solutions.

Hyperparameter - Parameter for a method and not for the problem itself.

Meta-optimization - Optimization performed to find optimal hyperparameters of a method.

Cardinal points/directions - North, east, south, and west (N, E, S and W).

Intercardinal points/directions - NE, SE, SW and NW.

Positive basis - Positively independent vectors that span R^n.

GPS - Generalized pattern search

PSO - Particle swarm (optimization)

RRS - Recursive random search

GA - Genetic algorithm


Acknowledgment

I would like to thank everyone who has been involved in this project and helped me along the way. A special thanks to my supervisor at FOI, Petter Bivall, who has helped me stick to the main goal of the project and done all the proofreading of this report. A big thanks to Magnus Dahlberg and Hanna Lindell, who have helped me with the simulation framework and shared their practical expertise; to Leif Festin for the practical insights into signal intelligence; and to my subject reviewer Di Yuan at Uppsala University for the constructive feedback concerning the report and the optimization methods.

Olle Frisberg
Linköping, May 2019


1 Introduction

Electronic warfare (EW) comprises military operations that use the electromagnetic spectrum to discover, exploit, influence, obstruct or prevent the enemy's usage of the spectrum. EW has been used since the early 1900s and is an increasingly important part of our weapons and Command and Control (C2) systems [5]. EW is usually divided into three subgroups: electronic attack (EA), electronic protection (EP) and electronic support (ES). One important part of ES is to be able to position the enemy's transmitters, e.g. radio and radar, with signal intelligence (SIGINT). A receiver designed to determine a signal's direction of origin is usually referred to as a direction finder (DF). With one stationary DF it is possible to get a direction (or bearing) to the transmitter but not a location. If the DF is moving (for instance on an airplane), it is possible to determine a position with only one DF by taking several measurements at different times and positions. With (at least) two directions, the transmitter location will be in the intersection between them. One common method to compute this intersection when the directions are straight lines (bearings) is triangulation [1].

1.1 Background

Today, when SIGINT is used for locating transmitters, two to four DFs are positioned in patterns based on experience, often involving a lot of manual work. Consideration must be taken to the type of terrain the DF should be placed on; a car antenna cannot be placed in a lake or in a forest with tall trees, for example. It must also be possible to transport the DF to the desired location.

1.2 Requirements

The goal of this project was to develop an optimization model that automatically finds the optimal positions of the DFs. The user should be able to input a placement area A_G where it might be possible to place the DFs, a target area A_T in which the model should maximize surveillance capability, and also a region of interest A_ROI with the stronger constraint that it has to be covered by the sensors and should be prioritized. Another requirement was the ability to drop the coverage constraint in A_ROI if it could not be fully covered. The model should consider terrain type and accessibility at the proposed locations of the DFs.

1.3 About the project

The project was conducted at the Swedish Defence Research Agency (FOI), at the department of Electronic Warfare Assessment (Telekrigvärdering). All work was implemented as a new sub-module in the existing simulation framework EWSim (Electronic Warfare Simulation interface model) developed by FOI. The scenario planning part of EWSim is called NetScene, which is a Geographic Information System (GIS). Screenshots from the GIS are included to provide a basic understanding of the problem and to show the different test scenarios used.

Due to both information security and intellectual property rights, this report does not include any of the implemented code. However, all methods are described thoroughly and should be possible for the reader to implement.


1.4 Determining a position

In order to position a transmitter, the DFs must be able to communicate their bearings to a common unit for fusion, here called a fusion unit (FU). When the FU receives information from the DFs, the data is combined to determine the transmitter's position. The level of accuracy depends on the angle (φ) between the DFs with respect to the transmitter, and on several properties of the DFs. The quality of the determined position is represented with an uncertainty ellipse, see Figure 1. When the angle φ is 90 degrees, the uncertainty ellipse will be a circle with the smallest possible positioning uncertainty. When φ is 0 degrees, one of the axes of the uncertainty ellipse will be infinitely long, i.e. positioning is not possible [1].

Figure 1: The yellow line with bi-directional arrows shows that a two-way communication link exists between the DFs. The blue lines show the bearings on the transmitter in red.
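As an illustration of how two bearings are combined, the sketch below intersects two bearing lines in a flat, local coordinate system. It is only a geometric toy example, not the fusion performed by the FU in EWSim; the function name, the clockwise-from-north bearing convention and the planar coordinates are assumptions made here.

    import math

    def triangulate(p1, b1, p2, b2):
        # p1, p2: DF positions (x, y); b1, b2: bearings in degrees, clockwise from north.
        # Returns the estimated transmitter position, or None when the bearings are
        # (nearly) parallel, i.e. phi is close to 0 or 180 degrees.
        d1 = (math.sin(math.radians(b1)), math.cos(math.radians(b1)))  # direction of line 1
        d2 = (math.sin(math.radians(b2)), math.cos(math.radians(b2)))  # direction of line 2
        # Solve p1 + s*d1 = p2 + t*d2 for s with Cramer's rule on the 2x2 system.
        det = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
        if abs(det) < 1e-9:
            return None
        rx, ry = p2[0] - p1[0], p2[1] - p1[1]
        s = (rx * (-d2[1]) - ry * (-d2[0])) / det
        return (p1[0] + s * d1[0], p1[1] + s * d1[1])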


2 Problem description

2.1 Optimization problem formulation

When reformulating the assignment as a constrained optimization problem, one obtains the following:

maximize_X    C(X)|_{A_T}
subject to    X_i ∉ Ω_{BT},
              C(X)|_{A_ROI} = 100%,
              Transportation is possible,
              Communication exists.                        (1)

where X is a vector with the DF positions, C(X)|_A the coverage in area A and Ω_{BT} the set of points with bad terrain type. The coverage C(X) in an area A is calculated as

C(X)|_A = ∑_{i=1}^{N_A} C_i(X)                             (2)

where C_i(X) is the coverage in one grid point of area A. C_i(X) was calculated from existing models for electromagnetic wave propagation w.r.t. terrain, DF and transmitter equipment. That communication must exist simply means that bearing data can be communicated via radio to the FU. The third constraint in Equation 1, that transportation is possible, means that the DFs can be transported to the desired locations via roads and approved terrain types connected to the roads. The different coverages and regions can be seen in Figure 2.

Figure 2: The top rectangular coverage shows where it is possible to place the DFs: red corresponds to feasible points and the polygon defines a constraining area that the units have to be placed within. The bottom coverage shows the positioning quality in the target area. The outer polygon is A_G and the inner polygon A_ROI. Red corresponds to a successful positioning, green to bearing, blue to detection and transparent to no detection.


The last constraint, that communication must exist, is indirectly fulfilled: without communication, C(X) will have a smaller value, so the constraint can be dropped. Since the approved placement points do not change, it is possible to precompute a list or table, Ω, with feasible points that have both an accepted terrain type and a transportation possibility. One could also make C(X)|_{A_ROI} = 100% part of the objective function to be maximized by assigning a large weight W_ROI to it; this also makes it possible to drop the constraint by setting W_ROI = 0, or to control how much it should be prioritized.

With these three modifications to Equation 1 we have

maximize_X    f(X) = C(X)|_{A_T} + W_ROI C(X)|_{A_ROI}
subject to    X_i ∈ Ω                                      (3)

where f(X) is the final objective function to be maximized.

2.2 Size of solution space

The geographical data that was used contained terrain type (forest/sea/marsh etc.) and height every 25 meters. It is not an unrealistic scenario that the approved placement area of the DFs is a 50 x 50 km area, corresponding to 4 000 000 placement points. Let us assume that only 0.1% of these, i.e. 4000 points, have an allowed terrain type and are close to a road. With three DFs, this would correspond to a feasible region of 4000^3 = 6.4 * 10^10 positioning possibilities. If a low-resolution coverage diagram takes one second to calculate, the exact solution found with a brute force search would take about 2029 years to compute.
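For reference, the estimate above is reproduced by the short calculation below; it is plain arithmetic, with the 0.1% feasibility figure taken from the assumption in the text.

    grid_points = (50_000 // 25) ** 2        # 50 x 50 km sampled every 25 m -> 4 000 000 points
    feasible = int(grid_points * 0.001)      # assume 0.1% of the points are feasible -> 4000
    groupings = feasible ** 3                # three DFs -> 6.4e10 possible groupings
    years = groupings * 1.0 / (3600 * 24 * 365)   # one second per coverage diagram
    print(f"{groupings:.1e} groupings, about {years:.0f} years")  # 6.4e+10 groupings, about 2029 years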

2.3 Computing a result

A common metric in optimization is to measure how many function calls are needed before the algorithm converges. In the present work, a function call corresponded to the computation of a coverage diagram in A_T and A_ROI, see Equation 3 and the bottom rectangle in Figure 2. Every time a DF is moved, the coverage has to be recalculated. Every grid point in the diagram holds a value C_i(X) that represents the positioning quality with a decimal number:

C_i(X) = { 0,                               if undetected
           0.1,                             if detected
           0.3,                             if bearing
           0.3 + 0.6 e^{-(v - t)/10000},    if positioned and v > t
           0.9,                             if positioned and v ≤ t }          (4)

where v is the positioning variance and t the positioning threshold. The values between 0.3 and 0.9 are continuous and depend on the positioning variance: with high variance the value is close to a bearing (0.3), and with low variance the value is close to the best possible positioning quality, 0.9.
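A minimal sketch of the per-grid-point score in Equation 4 is given below. The status strings and function name are illustrative assumptions; in EWSim the detection status, the variance v and the threshold t come from the wave propagation model.

    import math

    def grid_point_score(status, v=None, t=None):
        # Positioning quality for one grid point, Equation 4.
        if status == "undetected":
            return 0.0
        if status == "detected":
            return 0.1
        if status == "bearing":
            return 0.3
        if status == "positioned":
            if v <= t:
                return 0.9                                   # best possible quality
            return 0.3 + 0.6 * math.exp(-(v - t) / 10000.0)  # decays towards a bearing score
        raise ValueError(f"unknown status: {status}")

The objective in Equation 3 is then the sum of these values over the grid points in A_T, plus W_ROI times the corresponding sum over A_ROI.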

Computing the objective function value, i.e. summing up the coverage diagram values, can take from 0.1 seconds in a small area with a low-resolution grid to minutes in a large area with high resolution. The positioning quality calculation was already parallelized with CPU threads, so parallelizing an optimization method would probably not utilize any more cores, just give rise to more overhead from swapping threads in and out. One alternative to speed up the process could be to parallelize the coverage calculation on the GPU, but such approaches are outside the scope of this thesis.

2.4 Test scenarios

Three different scenarios, labeled S1-S3 in Figure 3 and Table 1, were used in order to compare the methods under different conditions with respect to problem size, distribution of the constraining areas, topology, terrain types and road accessibility. Testing in different scenarios is important in order not to over-fit the optimization methods' parameters to one specific terrain, since the placement support should perform equally well in all parts of Sweden.

Table 1: Specifications for the three different scenarios. Feasible points is the number of points in A_G that are available for placement. Total points is the number of points in A_G that exist in total.

      Feasible points   Total points   Percent feasible
S1    99306             486080         20.4%
S2    240961            448840         53.7%
S3    57652             335844         17.2%

In Figure 3a (S1) the terrain is close to the ideal case, where the elevation is constant (or very smooth) everywhere and no concrete obstacles prevent the line of sight to the different positions in A_T and A_ROI. Hence the smooth, non-noisy circles of bearings in blue from the DFs, and the intersecting regions of bearings that make up the successful positioning regions in red. For S2 and S3 the theoretical bearing and positioning regions are not as easy to see anymore. The result is much more noisy, and the positioning quality varies in an unpredictable and discontinuous way. Bear in mind that only the coverage from one possible grouping per scenario is visible in Figure 3.

Figure 3: The three different scenarios that were tested: (a) Scenario 1 (S1), (b) Scenario 2 (S2), (c) Scenario 3 (S3).

2.5 Experience-based techniques

There are a few experience-based conditions that could be used to steer the optimization process.

Firstly, the angle between the DFs, as seen from the center of A_ROI, should be 90 degrees. However, this is no guarantee that the solution will be optimal, or even good. For instance, there could be high forest or even a mountain between the DFs, preventing communication, or a DF could be positioned in a hollow, which also results in a skip zone. In the same way, there could be obstacles between the DFs and the transmitter. However, solutions with a too small φ, meaning a short distance between the DFs, do not have to be tested, as they would provide a result with too low accuracy, see Section 1.4.

Secondly, in order to maximize range and reduce obstacles between a DF and the transmitter, a grid with the height of the terrain in meters (a.k.a. a height coverage) is often used (today manually) to place the DFs at the highest possible altitude, so the signals can propagate as freely as possible. Having a high antenna achieves the same result.

Thirdly, to get a bearing on a maximum number of points in A_T and A_ROI, the DFs should be placed as close to the target area(s) as possible. This is not always possible, but the user can choose the shape of A_G freely and where it should be located. Practically, this means that the DFs in S1 should be positioned to the south, in S2 to the south east and in S3 to the west.


3 Theory

3.1 Derivative-free optimization

Derivative-free (also called gradient-free, model-based, black-box or direct search) optimization has become a huge research area. It is used in many real-world engineering problems where computing the gradient or Hessian is too expensive or simply not possible at all. The objective function could, for example, be hidden inside a binary file, come from physical experiments or come from a complex computer simulation [7][14]. In this case, f(X) is very noisy (due to the heavy terrain dependency) and too expensive for ∇f(X) to be computed.

Derivative-free methods have been around since the early 1960s, when Spendley et al. proposed their simplex-based method (later refined by Nelder and Mead) and Hooke and Jeeves introduced the pattern search method in 1961 [14]. In 1997 the term generalized pattern search was coined by Torczon as a subfamily of derivative-free methods, in order to distinguish the deterministic pattern search methods (which have similar convergence properties) from stochastic methods like genetic algorithms, random search algorithms and others that were developed without convergence analysis in mind [16].

The advantage of pattern search methods is that they do not require a lot of function calls and are, under mild conditions, guaranteed to converge to a stationary point [16][9]. However, other well-known derivative-free methods like genetic algorithms and particle swarm cover a greater part of the search space and often find a better solution, at the cost of a significant increase in the number of function calls [9]. Many practical solvers are a combination of multiple methods, to get both global coverage and local convergence [14].

3.1.1 Generalized pattern search

Generalized pattern search methods include all methods that poll points a step length ∆ away from the current point, according to some pattern, in order to find a better solution. If a better solution is found, it is chosen as the current point in the next iteration and ∆ is increased by a factor k1 ≥ 1. The number of points is the same as the number of directions. If a better solution cannot be found, the step size is decreased by a factor k2 such that 0 < k2 < 1. A poll that computes the objective function value for all points and chooses the best is called complete. A poll that chooses the first evaluated point that is better is called opportunistic [14][12].

The pattern consists of multiple direction vectors v_i that create a positive basis in the space R^n, where n is the number of dimensions of the problem. The fact that the pattern is a positive basis means that no direction vector can be a positive combination of the other direction vectors, and that all points in R^n can be expressed as a positive linear combination of the direction vectors. One can prove that the number of vectors, |v_i|, in a positive basis must satisfy the inequality n + 1 ≤ |v_i| ≤ 2n [3]. A positive basis with the lower bound (n + 1) is called a minimal basis and one with the upper bound (2n) a maximal basis. These two are the most commonly used bases in practice [14][12].
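A minimal sketch of a generalized pattern search with a maximal (coordinate) basis and a complete poll is shown below, for a generic objective f over R^n. It is not the thesis implementation, which additionally has to map the polled points onto feasible grid points.

    def pattern_search(f, x0, delta0=1.0, k1=2.0, k2=0.5, delta_min=1e-3):
        # Maximize f by complete polling along +e_i and -e_i for every coordinate.
        x, fx, delta = list(x0), f(x0), delta0
        n = len(x0)
        directions = [[(1 if j == i else 0) * s for j in range(n)]
                      for i in range(n) for s in (+1, -1)]        # maximal basis, 2n vectors
        while delta > delta_min:
            candidates = [[xi + delta * di for xi, di in zip(x, d)] for d in directions]
            values = [f(c) for c in candidates]                   # complete poll
            best = max(range(len(values)), key=lambda i: values[i])
            if values[best] > fx:
                x, fx, delta = candidates[best], values[best], k1 * delta   # success: expand
            else:
                delta = k2 * delta                                          # failure: shrink
        return x, fx

An opportunistic poll would instead accept the first improving candidate as soon as it is found, rather than evaluating all directions.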

3.1.2 Random search

One of the simplest methods for derivative-free global optimization is random search (RS), sometimes called pure random search. RS chooses a number of random points in the whole search space and returns the point that resulted in the highest function value. Many other global optimization methods try to improve upon RS, and the method is therefore often used for comparison when benchmarking [19].

An alternative to RS is recursive random search (RRS), also called pure adaptive search. After a number of points in the whole search space have been evaluated and the best point has been found, the search space is shrunk to a hyper-rectangle centered at the best point. RS performs only exploration of the search space, while RRS performs both exploration and exploitation by reducing the search space to interesting regions [19][18].

3.1.3 Particle swarm

Particle swarm optimization was introduced in 1995 and works by the principle of swarm intelligence [6]. A number of particles (NP) are generated that search the n-dimensional problem space in order to find a better solution.

The velocity of a particle is updated from three main components, see Equation 5: the current velocity v_i, the best solution p_i known to the particle, and the global best solution p_g that the swarm has found so far [15].

v_i = w v_i + c_1 r_1 (p_i - x_i) + c_2 r_2 (p_g - x_i)                    (5)

The factor w is the inertia and controls what impact the last velocity of a particle has on the update. The particle will search more locally for smaller values of w and explore more of the search space for larger values of w. The value is typically decreased during a run, from 0.9 to 0.4 according to [4] and from 1.4 to 0 according to [15]. r_1 and r_2 are two randomly generated variables such that r_1, r_2 ∈ [0, 1]. c_1 and c_2 steer how much influence the local and global best solutions have on the new velocity. Low values of c_1 and c_2 let the particle travel far away from the best known positions until it is pulled back [4]. Typical values are around 2, to get a mean of 1 when multiplied with r_1 and r_2 [6]. The values mentioned above are just rules of thumb for general problems and should be meta-optimized for a specific problem.

Since each generation (or time iteration) of particle swarm evaluates NP objective function values, the total number of function calls will be G * NP, where G is the number of generations. It has been shown that imposing a maximum number of function calls on particle swarm results in poor performance. A better stopping criterion is to analyze the distribution of the particles: the maximum distance to the best solution is small if the swarm has converged [20][8].
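The sketch below shows one particle swarm generation built around the velocity update in Equation 5. The parameter values and the unbounded search space are simplifying assumptions; the thesis implementation additionally has to snap positions to feasible placement points.

    import random

    def pso_generation(positions, velocities, pbest, pbest_val, gbest, gbest_val, f,
                       w=0.7, c1=2.0, c2=2.0):
        # Advance all particles one generation and return the updated global best.
        for i, (x, v) in enumerate(zip(positions, velocities)):
            for d in range(len(x)):
                r1, r2 = random.random(), random.random()
                # Equation 5: inertia + pull towards personal best + pull towards global best.
                v[d] = w * v[d] + c1 * r1 * (pbest[i][d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
                x[d] += v[d]
            val = f(x)
            if val > pbest_val[i]:
                pbest[i], pbest_val[i] = list(x), val
                if val > gbest_val:
                    gbest, gbest_val = list(x), val
        return gbest, gbest_val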

3.1.4 Genetic algorithm

Another commonly used heuristic derivative-free optimization method is the genetic algorithm (GA), which dates back to the beginning of the 1950s, when scientists first started to study artificial intelligence by trying to mimic natural reproduction and mutation [2][10]. Similar to PSO and RRS, GA has an initial population of randomly generated solutions (referred to as individuals), and in every iteration (generation) the method tries to modify the population with the hope of finding a new best individual. GA does this with the following steps in every generation [11]:

• Compute the objective function value (fitness) for each individual.

• Copy some of the best individuals (referred to as elites) unchanged to the new generation, in order not to forget the "best found so far" solutions.

• Select parents for the next generation based on the fitness values.

• Fill the rest of the new generation by performing mutation on one parent or doing a crossover with two parents.

Equation 6 shows how many individuals of each type a population contains,

N_tot = N_elites + N_crossover + N_mutations                               (6)

where the values of N_tot and N_elites are parameters chosen by the practitioner. Whether the rest of the new population (once the elite individuals have been copied) consists of crossover or mutation individuals is determined by the crossover probability P_cross. For example, P_cross = 0.8 means that on average 80% of these individuals will be crossovers and 20% mutations.

One way of selecting which individuals should be used as parents is to sort the population in descending order according to fitness and then generate a probability distribution according to Equation 7,

P(X_i) = fitness(X_i) / ∑_{j=1}^{N_tot} fitness(X_j)                       (7)

where X_i is an individual. When the probabilities have been computed, a random number 0 ≤ r ≤ 1 is generated for each new individual, and the first individual i in the old population such that r ≤ ∑_{j=1}^{i} P(X_j) is used as a parent for the new individual. Observe that the sum of all P(X_i) equals one, so the partial sums form a cumulative distribution. This approach of selecting parents ensures that individuals with higher fitness are more likely to be selected [2].

What mainly distinguishes GA from RRS and PSO is the crossover operation. The crossover philosophy is that combining good parts (genes) from the parents should produce an even better child.
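The parent selection in Equation 7, often called fitness-proportionate or roulette-wheel selection, can be sketched as follows (non-negative fitness values are assumed):

    import random

    def select_parent(population, fitness):
        # Pick one parent with probability proportional to its fitness (Equation 7).
        total = sum(fitness)
        r = random.random()
        cumulative = 0.0
        for individual, fit in zip(population, fitness):
            cumulative += fit / total
            if r <= cumulative:          # first individual whose cumulative sum reaches r
                return individual
        return population[-1]            # guard against floating point round-off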

3.2 Surrogate model

In order to minimize the number of expensive function calls, a surrogate function can be used that has some insight into the real model and is more lightweight to compute [17]. This surrogate can then be used to perform a global search that finds out approximately, but due to the lower accuracy not exactly, where the global optimum might be [17][14].

3.3 Flood fill

A common strategy for finding connected components inside an image is the flood-fill algorithm [13]. Given a starting pixel with a certain color, flood-fill finds all connected pixels with the same color (or a similar color up to some threshold). This is done by adding surrounding pixels with the same color to a stack and then repeating the same process for each pixel on the stack. When the stack is empty, all connected pixels have been found. There are two variants of this algorithm: flood-fill4, where only the cardinal points (N, E, S, and W) are considered as surrounding points, and flood-fill8, where the intercardinal points (NE, SE, SW and NW) are also considered [13].
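A sketch of flood-fill8 on a binary grid is shown below; the grid is assumed to be a list of lists of 0/1 values, and an explicit stack is used instead of recursion so that large grids do not hit the recursion limit.

    def flood_fill8(grid, start, visited):
        # Return the 8-connected region of ones that contains `start`.
        rows, cols = len(grid), len(grid[0])
        region, stack = [], [start]
        while stack:
            r, c = stack.pop()
            if (r, c) in visited or not (0 <= r < rows and 0 <= c < cols) or grid[r][c] != 1:
                continue
            visited.add((r, c))
            region.append((r, c))
            for dr in (-1, 0, 1):                 # push the four cardinal and
                for dc in (-1, 0, 1):             # four intercardinal neighbors
                    if dr or dc:
                        stack.append((r + dr, c + dc))
        return region

Calling this for every one-valued point that is not yet in a region yields all regions, which is how the placement grid in Section 4.1 is built.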


4 Method

4.1 Flood fill for generating the placement grid

For a point to be a feasible candidate for DF placement, it has to be inside the placement polygon, have an allowed terrain type and also be connected to a road via points of allowed terrain type (i.e. be accessible). One way of solving this problem is to first find all regions comprised of only road points or points with allowed terrain types, and then ignore regions not containing any roads, as such regions would be inaccessible.

When applying flood-fill from Section 3.3 to merge roads with approved terrain type, a pixel corresponds to a grid point and its value is binary, either one or zero. A grid point has the value one if it represents an allowed terrain type or is part of a road, and lies inside the placement polygon. If the conditions are not met, the point is given the value zero. To find all regions instead of just one, flood-fill8 was applied to every point that had the value one and did not already belong to another region.

4.2 Finding feasible points

Regardless of optimization method, the problem of finding a feasible grid point (x_i, y_i) ∈ Ω for a DF i remains. The simplest way of choosing only grid points in Ω would be to store the feasible points in a list. This method would, however, lose the whole geometric relation between the points, a property on which the objective function is highly dependent.

4.2.1 Square neighborhood

One approach for finding a nearby feasible point is to search an area around the selected point (x, y), for example a square ranging from (x - w/2, y - w/2) in the north west to (x + w/2, y + w/2) in the south east, where w is the square width. One question that arose was how large the square should be when searching for feasible points. Dividing the grid into too many squares would result in very few feasible directions, especially in S3, which had very few approved placement points. On the other hand, dividing the grid into too few squares would result in very few iterations, and the same optimization problem would have to be solved again inside the large squares.

Another approach is to dynamically increase the square width by a factor k_s > 1 if no feasible point was found in the area, see Algorithm 1. In this way a feasible point will always be found for a sufficiently high maximum width w_max, unless Ω is empty.

Algorithm 1: Dynamic square size

Input: infeasible grid point p = (x_c, y_c)
Output: feasible grid point p_f = (x_f, y_f)

if p is feasible then
    return p
end
w = 2
while w ≤ w_max do
    S = square with center at p and width w
    for each point q in S do
        if q is feasible then
            return q
        end
    end
    w = k_s * w
end

4.2.2 Choosing the highest feasible point

Instead of choosing the first feasible point in each square (see Algorithm 1), another possibility is to choose the highest feasible point in the square and in that respect try to mimic how the positioning is done manually, by taking the terrain height information into account (see Section 2.5). Notice the importance of forbidding multiple DFs in one square when choosing the highest feasible point, since otherwise several DFs would get the same position.

4.3 Initial positions

Six different types of initial positions were tested to investigate their performance.

The first approach was to generate a large number of random groupings and choose the combination with the largest minimum distance between all possible pairs of DFs.

The second approach also employed random position generation, but selection favored the largest minimum angle between the possible position pairs and the geometric center of the region of interest polygon A_ROI, in order to use the first experience-based condition in Section 2.5.

The third and fourth approaches used RS and RRS, described in Section 3.1.2, along with the surrogate model.

In the fifth approach, particle swarm was used along with the surrogate model. Pattern search with the surrogate function was added for comparison and was expected to perform worse than the more globally spanning search of particle swarm.

4.3.1 Particle swarm implementation

For the problem in Equation 3, a particle has two dimensions per DF. The particles' initial positions were chosen randomly from the available feasible points. The initial velocity for a particle i in the x-direction was chosen as v_{i,x} in Equation 8,

v_{i,x} = k_init N_x (2r - 1)                                              (8)

where N_x is the total number of grid points in the x-direction, k_init ∈ [0, 1] and r is a random number in the range [0, 1]. v_{i,y} was chosen in the same way.

With two DFs, the position and velocity vectors for a particle i were structured as

v_i = (v_{i,DF1,x}  v_{i,DF1,y}  v_{i,DF2,x}  v_{i,DF2,y})                 (9)

which makes it easy to compute the Euclidean distance in Equation 10 between a particle i and the best known solution v_gbest, in order to evaluate a distribution-based stopping criterion,

d(v_i, v_gbest) = √( ∑_{j=1}^{N} (v_{i,j} - v_{gbest,j})^2 )               (10)

where N is equal to the number of DFs times two. A maximum limit on the number of iterations without improvement (stall iterations) was set to 20, in case the swarm should still not converge.

4.3.2 Surrogate model implementation

A more lightweight model was implemented, since global methods like particle swarm require a lot of function calls, which would take too long to evaluate with the real model. The surrogate model only considered the 2D geometry and ignored obstacles (terrain height) that might exist between the DFs and the target and between the DFs and the FU. Neither did it take jammers into account. It did, however, take feasible points into account, since such data were pre-computed. Figure 4 presents the different regions along with the corresponding scores. A bearing in region 1 had one score and positioning in region 2 a higher score. The score for every point in the target grid was summed in the same way as with the real wave propagation model.

Figure 4: The different score regions for the surrogate model. Points in R1 provided a bearing score, points in R2 a bearing score plus a fraction of the positioning score constant according to Equation 11. The bottom rectangle corresponds to the target area polygon (A_T or A_ROI).


The surrogate model also took the angle into account by computing the score in region 2 as

Score_R2 = Score_Bearing + Score_Position * (φ / 90)                       (11)

where angles with φ > 90 were mirrored to φ = 180 - φ. Equation 11 makes sure that when φ = 0 only a bearing score is returned. If there were more than two DFs, the best score over all possible pairs of DFs was chosen. Notice that the surrogate model makes use of both the first and the third experience-based condition in Section 2.5.
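A sketch of the score in Equation 11 for one target point in region 2 and one pair of DFs is given below; the angle is taken at the target point between the directions to the two DFs, and the two score constants are placeholders rather than the values used in the thesis.

    import math

    def surrogate_pair_score(target, df1, df2, score_bearing=1.0, score_position=2.0):
        # Score for one target point and one DF pair according to Equation 11.
        a1 = math.atan2(df1[1] - target[1], df1[0] - target[0])
        a2 = math.atan2(df2[1] - target[1], df2[0] - target[0])
        phi = abs(math.degrees(a1 - a2)) % 360.0
        if phi > 180.0:
            phi = 360.0 - phi
        if phi > 90.0:                      # angles above 90 degrees are mirrored
            phi = 180.0 - phi
        return score_bearing + score_position * phi / 90.0

With more than two DFs, the score for the point would be the maximum of this value over all DF pairs, as described above.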

4.4 Recursive random search implementation

Algorithm 2: Recursive random search

Input: N_candidates, k_decrease
Output: bestGrouping

bestGrouping = RandomSearchProcedure(N_candidates)
w = w_initial
while w > w_threshold do
    for i = 1, ..., N_candidates do
        grouping = RandomGroupingProcedure(bestGrouping, w)
        if grouping is better than bestGrouping then
            bestGrouping = grouping
        end
    end
    w = k_decrease * w
end
return bestGrouping

The recursive random search method was implemented as in Algorithm 2. It starts off by performing a regular random search (RandomSearchProcedure) in the whole search space A_G to get an initial grouping, and then the square width w is initialized. The method then evaluates random feasible groupings with RandomGroupingProcedure, by generating a random feasible point in a square of width w centered at each current best DF position. This is repeated N_candidates times per iteration, and then the square width w is reduced by a factor 0 < k_decrease < 1 until some minimum threshold width w_threshold is reached. An example iteration is illustrated in Figure 5.

Figure 5: Example iteration of RRS with N_candidates = 5 and three DFs. The white circles are the best DF positions from the current best grouping and the black dots are random points generated in a square neighborhood (of width w) around the best positions. After the five groupings have been evaluated, the squares shrink and five new groupings are generated inside the smaller squares.

Since k_decrease, w_initial and N_candidates are constant, fixed parameters, the total number of function calls will always be the same and linearly proportional to N_candidates. It is also possible to see from Algorithm 2 that the total number of iterations will be equal to

#Iterations = #Function calls / N_candidates = log(w_threshold / w_initial) / log(k_decrease)        (12)

and since w_threshold < w_initial, the number of function calls is proportional to

#Function calls ∝ -1 / log(k_decrease)                                     (13)

which lets the method automatically set k_decrease given a user-defined value for the total number of function calls or iterations.
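Equation 12 can be inverted to pick k_decrease from a user-defined budget; a small sketch, with the variable names chosen here:

    def k_decrease_for_budget(total_function_calls, n_candidates, w_initial, w_threshold):
        # From Equation 12: iterations = log(w_threshold / w_initial) / log(k_decrease),
        # so k_decrease = (w_threshold / w_initial) ** (1 / iterations).
        iterations = max(1, total_function_calls // n_candidates)
        return (w_threshold / w_initial) ** (1.0 / iterations)

    # Example: a budget of 2000 surrogate calls with 40 candidates per iteration
    # and widths going from 1000 down to 10 gives k_decrease of roughly 0.91.
    k = k_decrease_for_budget(2000, 40, w_initial=1000, w_threshold=10)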

An alternative to generating random x- and y-values inside a square (Figure 5) would be to generate a random angle and radius, and to decrease the maximum radius instead of the square width.

4.5 Pattern search implementation

With three DFs, each having two search dimensions (latitude and longitude), the theoretical number of direction vectors must satisfy 3 * 2 + 1 = 7 ≤ |v_i| ≤ 3 * 2 * 2 = 12 [3], see Section 3.1.1. In this problem, however, each DF must have at least three directions to be movable to all points, i.e. the number of directions must be at least nine. In this implementation, the maximal basis was used with all cardinal directions per DF, i.e. one direction per cardinal point per DF, which (with three DFs) corresponds to the upper limit |v_i| = 12.

The initial value of ∆ was chosen according to Equation 14,

∆ = k_∆ * min(N_x, N_y)                                                    (14)

where N_x is the total number of grid points in the x-direction, and similarly for N_y. The factor k_∆ had a value between 0 and 1. For low values of k_∆ the method begins the search close to the initial grouping, whereas high values of k_∆ initially make the method search far away from the initial grouping.


4.5.1 Pattern search with constant recursive square size

In Figure 6, one DF tries to find a feasible point a step size d = ∆ away from the current solution by searching through a square of points for the first or highest feasible one. If none of the four new solutions along the cardinal directions yielded an improvement, the step size was decreased and therefore also the square size. The DF moved if an improvement was found, and the new position became the current solution in the next iteration. When ∆ = 1, the neighboring points were tested.

Figure 6: Example iteration of pattern search with constant recursive square size with one DF (circle). The squares are the new positions, centered d = ∆ points away from the current solution, inside which feasible points are searched for; their size is marked d/2. At most one feasible point in every square was tested, either the highest or the first one.

4.5.2 Pattern search with dynamic square size

Two different approaches to using a dynamic square size (see Algorithm 1) were tested, first with no limit on the maximum square size and later with an upper limit equal to the static square size, marked d/2 in Figure 6.

4.6 Genetic algorithm implementation

An individual was structured in the same way as the position and velocity vectors in PSO (see Equation 9). The probability for an individual in the old population to be selected was computed according to Equation 7.

The mutation operation was performed by selecting a random point in a square neighborhood centered at the selected individual (chosen by Equation 7), similar to how random nearby groupings were generated in RRS (see Section 4.4), and the width was likewise decreased by a factor k_decrease for every population. The same stopping condition as for RRS was applied, i.e. when the square width condition w < w_threshold was fulfilled.

The crossover operation was implemented by cloning the DF positions from parent 1 and then replacing one of these DFs (chosen randomly) with the closest DF in parent 2. Both parents were chosen by Equation 7 but were forbidden to be the same. The new crossover grouping was considered successful if the replaced point from parent 2 was further away than some distance limit from the remaining points of parent 1, and if the child was not equal to either of the parents. Mutation was performed on unsuccessful crossover groupings. Two unsuccessful crossover children are shown in Figure 8.

This approach of producing a crossover child could, in the best case, lead to a result like the one shown in Figure 7 if the scenario looks like the one in Figure 2. Notice that it is not possible to do a crossover with only one x- or y-coordinate, since this would yield an infeasible grouping in most cases.

Figure 7: An example of how a successful crossover grouping can be produced by combining the DF positions to the left in parent 1 (green circles) with the DF position to the right in parent 2 (red circle).

Figure 8: Two examples of failed crossover groupings when the replaced DF in parent 1 is the green circle to the right: (a) the distance is smaller than the limit (d < d_min); (b) the child is equal to parent 1.
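The crossover operation described above can be sketched as below. A grouping is taken to be a list of DF positions, d_min is the distance limit from Figure 8, and rng is a random.Random instance; these names are choices made here, not the thesis code.

    import math, random

    def crossover(parent1, parent2, d_min, rng):
        # Clone parent 1 and replace one random DF with the closest DF from parent 2.
        child = list(parent1)
        i = rng.randrange(len(child))
        replaced = min(parent2, key=lambda p: math.dist(p, child[i]))
        child[i] = replaced
        others = [p for j, p in enumerate(child) if j != i]
        too_close = any(math.dist(replaced, p) < d_min for p in others)
        success = (not too_close) and child != list(parent1) and child != list(parent2)
        return child, success            # on failure the caller falls back to mutation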

4.7 Combined solvers

Depending on the amount of resources available in terms of the number of function calls, the solvers and the models can be combined in two ways: one fast combination and one slow.

The fast combination first runs a global search using the surrogate model. The top 100 groupings can then be evaluated with the real model, since maximizing the surrogate does not necessarily mean that the real model is maximized. The grouping with the highest value for the real model can then be used as the initial grouping for a local search. The result would most likely not be as accurate as with the slow combination, but the number of function calls would decrease drastically.

The slow combination can use the real model for both the global search and the local search; this would most probably lead to the most accurate result, at a cost of thousands of function calls.
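The fast combination could be wired together roughly as below. All names (global_search, surrogate_f, real_f, local_search) are placeholders for the solvers and models described in this chapter, not actual EWSim interfaces.

    def fast_combined_solver(global_search, surrogate_f, real_f, local_search, top_n=100):
        # Global search on the surrogate, re-rank the best groupings on the real model,
        # then refine the winner with a local search that uses the real model.
        ranked = global_search(surrogate_f)       # e.g. PSO or GA; groupings sorted by
                                                  # surrogate value, best first
        best = max(ranked[:top_n], key=real_f)    # the surrogate optimum need not maximize
                                                  # the real model, so re-evaluate the top 100
        return local_search(real_f, best)         # e.g. pattern search with the real model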


5 Results

The methods described in the last section have been compared, and different values of the hyperparameters have been tested. Since all methods are stochastic or depend on randomly generated initial groupings, many runs were executed per method and per parameter value to see how they performed on average and in the worst case. The value of W_ROI was set to 10 for both f(X) and the surrogate model in all tests.

5.1 Feasible placement points

Figure 9: (a) Approved terrain type points; (b) feasible points. The approved terrain type points in (a) and the roads in the area have been joined into the feasible points in (b) using flood-fill8.

Note how the approved terrain type points in Figure 9a have been filtered against the road-accessible areas to give the result in (b). As required by the given constraints, there are no feasible points outside the placement polygon.


5.2 Surrogate model

Figure 10: Surrogate function value versus real objective function value in (a) S1, (b) S2 and (c) S3. 1000 possible groupings were randomly generated and evaluated on both the real objective function and the surrogate function with a grid size of 20.

Based on the data presented in Figure 10, there seems to be a good correlation between the lightweight model and the real wave propagation model in S1. In S2 and S3 there is still a correlation, but not as strong. If the models correlated perfectly, all points in Figure 10 would lie on a straight line.

Averaged over 4160 function evaluations, the surrogate model was 243 times faster than the real objective function when both of them were parallelized on CPU threads.

5.3 Recursive random search

The result when N_candidates was swept from 6 to 165 with a constant value of k_decrease = 0.9 (recall Algorithm 2) can be seen in Figure 11. The total number of function calls increases linearly with more candidates, see Equation 12, but there is not much difference in the final result when N_candidates > 40, only around 2%, compared to 10% when N_candidates = 10.

Figure 11: Normalized surrogate value versus the number of candidates per iteration for S1-S3. Average result for ten runs with recursive random search using the surrogate model with a grid size of 50.

The number of function calls in Figure 12b increased according to Equation 13 when varying k_decrease, and was the same for all three scenarios since w_initial and w_threshold were the same.

Figure 12: (a) Surrogate value and (b) number of function calls versus the width decrease factor k_decrease for S1-S3. Average result over 20 runs with a grid size of 50.

5.4 Particle swarm

The distribution-based stopping criterion from Equation 10, with the thresholds in Equation 15,

d_avg,position(v_i, v_gbest) < 10
d_avg,velocity(v_i, v_gbest) < 10                                          (15)

was used to define when the swarm had converged, as a trade-off between execution time and function value improvement. Values below 10 rarely improved the result. A maximum number of function calls was also imposed in case the swarm did not converge.

Figure 13: Normalized surrogate value versus the number of particles for S1-S3. Average result for five runs with particle swarm using the surrogate model with a grid size of 40.

With a varying number of particles NP and fixed parameters k_init = 0.05, w = 0.5, c_1 = 1, c_2 = 1, the result in Figure 13 was obtained. The surrogate value seems to stabilize for NP > 25 and barely increases at all when NP > 80, for all three scenarios.

Figure 14: (a) Surrogate value and (b) execution time in milliseconds versus the square width increase factor k_s for S1-S3. Average result over ten runs with a grid size of 30.

The value of k_s does not affect the final result of particle swarm (see Figure 14a), but it does affect the execution time (Figure 14b).

Particle swarm did not converge if a constant square size was used, forcing the use of a dynamic square size.

Since particle swarm used the very fast surrogate model, it turned out that finding the highest point in each square took too long, as it involves looping through all points in a square to search for the highest one. That time could instead be spent on more function calls and thus more generations.


5.5 Genetic algorithm

For the measurements in Figure 15, the static parameters were set to N_individuals = 60, N_elites = 10, k_crossover = 0.5 and k_decrease = 0.95.

Figure 15: Normalized surrogate value for S1-S3 when sweeping (a) the number of individuals, (b) the number of elites and (c) the crossover ratio. Average result over 20 runs with a grid size of 50. Observe that the vertical axes are normalized and do not start from 0.

The number of function calls

• increases linearly with the number of individuals,

• decreases linearly with the number of elites,

• is independent of the crossover ratio, since mutation is performed after a failed crossover operation.

Experiments were made with shrinking the population size instead of performing mutation on a failed crossover, but this led to a large variance in the final surrogate function value.


5.6 Initial positions

The different ways of choosing initial positions are summarized in Table 2. The methods had a maximum time limit of three seconds per run, which was always reached by MD, MA and RS. The other four methods may or may not have converged before reaching the time limit.

Table 2: Initial position methods

MD Minimum distance Largest minimum distance between the possible DF pairs

MA Minimum angle Largest minimum angle between the possible DF pairs

RS Random search Largest surrogate value

RRS Recursive random search Largest surrogate value when shrinking search space

PSO Particle swarm Particle swarm with the surrogate model

GA Genetic algorithm Genetic algorithm with the surrogate model

GPS Pattern search Pattern search with the surrogate model and k∆ = 0.5
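
As an illustration of how the geometric MD criterion in Table 2 can be realized, the sketch below draws random feasible groupings and keeps the one whose smallest pairwise DF distance is largest. It is a minimal sketch, not the thesis implementation; the function names, the list-of-(x, y)-tuples representation of a grouping, and the fixed candidate count (instead of the three-second time limit) are assumptions.

```python
import itertools
import math
import random

def min_pairwise_distance(grouping):
    """Smallest distance between any two DF positions in a grouping."""
    return min(math.dist(p, q) for p, q in itertools.combinations(grouping, 2))

def md_initial_grouping(feasible_points, n_dfs, n_candidates=1000, rng=random):
    """MD in Table 2: sample random feasible groupings and keep the one
    with the largest minimum pairwise DF distance."""
    best, best_score = None, -math.inf
    for _ in range(n_candidates):
        grouping = rng.sample(feasible_points, n_dfs)
        score = min_pairwise_distance(grouping)
        if score > best_score:
            best, best_score = grouping, score
    return best

# Toy usage on a 20 x 20 grid with 100 m spacing and three DFs.
points = [(x * 100.0, y * 100.0) for x in range(20) for y in range(20)]
print(md_initial_grouping(points, n_dfs=3, n_candidates=200))
```

MA can be scored analogously by replacing the distance measure with the smallest pairwise angle.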

[Plots over the initial position methods MD, MA, RS, RRS, PSO, GA and GPS (S1, S2, S3): (a) average surrogate value, (b) worst case surrogate value, (c) average function calls, (d) maximum function calls.]

Figure 16: Worst case and average result of 10 runs when choosing initial positions using the surrogate model and the different methods in Table 2. The minimum distance limit was fixed to 1000 meters when generating the initial groupings. A grid size of 50 was used and the data can be found in Appendix A.1.


Figure 16 presents the surrogate value after an initial position was chosen using the different methods. PSO had the highest function value in all three scenarios. The run-times for RRS and GA are highly adjustable by choosing different values of kdecrease. The parameter values for RRS were chosen as Ncandidates = 40 and kdecrease = 0.95 in order to get approximately the same average run-time as PSO and therefore a fairer comparison. The parameters for GA were set to Nindividuals = 60, Nelites = 20, kcrossover = 0.7 and kdecrease = 0.95. The initial delta factor for GPS was set to k∆ = 0.5 in order to make the method test solutions far away, since the initial grouping was randomized. Observe that MD and MA do not perform any surrogate function calls except one when computing the final value, and that only a few function calls are made for GPS compared to RS, RRS, PSO and GA.


5.7 Pattern search

5.7.1 Different methods of choosing a feasible point

The different approaches to selecting feasible points were tested by first letting particle swarm run with the surrogate model for two seconds; the best grouping was then used as the initial position for pattern search.

Table 3: Selection types

SH Static highest Highest point with square size proportional to the step length ∆

SF Static first First point with square size proportional to the step length ∆

DH Dynamic highest Highest point with the smallest possible square size

DF Dynamic first First point with the smallest possible square size

[Plots over the selection types SH, SF, DH and DF (S1, S2, S3): (a) objective function value f(X) and (b) function calls without wmax; (c) objective function value f(X) and (d) function calls with wmax set to half of the step length.]

Figure 17: The different methods listed in Table 3 for selecting a feasible point with pattern search. (a) and (b) are without any maximum width (wmax) for the dynamic method, and (c) and (d) are with wmax set to the same value as the static square width. The result was averaged over 10 runs and a grid size of 30 was used. The data can be found in Appendix A.2.


5.7.2 Opportunistic run

Particle swarm was again used to find the initial positions for pattern search when the different polling types were tested. The complete run evaluated (at most) all 12 surrounding points and chose the best improvement. The opportunistic run evaluated one direction at a time and stopped when the first improvement was found (if any). The sorted opportunistic run first evaluated all 12 surrounding points with the surrogate function and sorted them in descending order, then evaluated them one at a time with the real objective function and stopped when the first real improvement was made.
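
A minimal Python sketch of the sorted opportunistic polling step (the function names, the grouping representation and the maximization convention are assumptions, not the thesis code):

```python
def sorted_opportunistic_poll(current, f_current, candidates, surrogate, objective):
    """Rank the surrounding candidate groupings by the cheap surrogate model,
    then evaluate the expensive real objective one at a time and accept the
    first real improvement, if any."""
    ranked = sorted(candidates, key=surrogate, reverse=True)  # descending surrogate value
    for candidate in ranked:
        f_candidate = objective(candidate)   # real wave-propagation model (expensive)
        if f_candidate > f_current:          # maximization: first real improvement wins
            return candidate, f_candidate
    return current, f_current                # no improvement: pattern search shrinks the step
```

The complete polling type would instead evaluate every candidate with the real objective and keep the best one, while the plain opportunistic type skips the surrogate ranking.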

[Plots over the polling types complete, opportunistic and sorted opportunistic (S1, S2, S3): (a) average objective function value f(X), (b) average function calls, (c) worst case (minimum) objective function value, (d) worst case (maximum) function calls.]

Figure 18: Average result ((a) and (b)) and worst case result ((c) and (d)) for the three tested polling types, over ten runs per type and scenario with a grid size of 30.

5.8 Combined solvers

Since the real objective function was very expensive to compute, a decision was made to only check the final result for the three most promising initial position methods in Figure 16. The fast combination used the surrogate model for the global search and then GPS for the local search with the real model and the sorted opportunistic polling type. The slow combination used the real model for both the global and the local search, with the complete polling type for GPS. The same parameter values as in Section 5.6 were used.


[Plots of the number of function calls over the initial position methods RRS, PSO and GA (S1, S2, S3): (a) fast combination excluding surrogate function calls, (b) slow combination.]

Figure 19: Average number of function calls for the fast and slow combination with a grid size of 50. The fast combination only made around 100 function calls with the real model, while the slow combination made around 2500.

Table 4: Average objective function value with the fast combination

S1 S2 S3

RRS 201.28 193.19 85.26

PSO 201.68 193.9 84.99

GA 201.27 195.17 87.07

Table 5: Average objective function value with the slow combination

S1 S2 S3

RRS 200.7 198.98 89.1

PSO 201.67 201.94 90.84

GA 200.88 200.52 91.36

Table 6: Average difference in percent between the fast and slow combination (positive values mean the slow combination was better)

S1 S2 S3

RRS -0.3 3.0 4.5

PSO 0.0 4.2 6.9

GA -0.2 2.7 4.9

The results in Figure 19 and Tables 4 and 5 were averaged over 10 runs per initial position method, and a grid size of 50 was used. All data (including minimum function values and maximum function calls) can be found in Appendix A.4. Observe that all initial position methods performed better in S1 with the fast combination even though it used the surrogate model; see the first column of the two tables.


5.9 Global optimum in smaller scenario

A brute-force search was made in a smaller scenario with only 65 feasible points and two DFs, i.e. 4225 combinations, to see if the optimization method could find the global optimum. A fast combination with particle swarm and the surrogate model found the global optimum before pattern search began in 100 out of 100 runs. When pattern search was tested with random initial positions, it found the global optimum in 5 out of 100 runs.
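
The brute-force reference can be sketched as below (a sketch only; `objective` stands in for the real model and the toy point list for the 65 feasible points):

```python
import itertools

def brute_force_best(feasible_points, objective):
    """Evaluate every pair of placement points (65 points -> 65^2 = 4225
    groupings when order is kept) and return the best one."""
    best_grouping, best_value = None, float("-inf")
    for grouping in itertools.product(feasible_points, repeat=2):
        value = objective(grouping)
        if value > best_value:
            best_grouping, best_value = grouping, value
    return best_grouping, best_value

# Toy usage: 65 points, objective = Manhattan distance between the two DFs.
points = [(x, y) for x in range(5) for y in range(13)]
best, value = brute_force_best(points, lambda g: abs(g[0][0] - g[1][0]) + abs(g[0][1] - g[1][1]))
print(best, value)
```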


6 Analysis

6.1 Feasible placement points

Generating a grid of feasible points given a placement polygon, terrain types and roads with the flood-fill algorithm gave a good result when the input data was in the correct size and format. Some practical problems with converting the roads to the same grid size as the height and terrain-type data took some time and are still not completely free from error; see the missing road part to the left in the grouping area in Figure 3c. In the future, one feasible grid per DF should be generated, because different vehicles can travel across different terrain types and not all vehicles require roads for transportation to be possible, e.g. boats can be positioned in water and aircraft everywhere.
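
A minimal sketch of the flood-fill idea on a boolean grid is shown below. The seeding from road cells, the 4-connectivity and the `passable` test (which in the real tool would combine the placement polygon, approved terrain types and road data) are assumptions, not the actual implementation.

```python
from collections import deque

def feasible_grid(passable, seeds):
    """Flood-fill from the seed cells (e.g. road cells reachable by the vehicle)
    over all passable cells; the result marks the feasible placement points."""
    rows, cols = len(passable), len(passable[0])
    reachable = [[False] * cols for _ in range(rows)]
    queue = deque((r, c) for r, c in seeds if passable[r][c])
    for r, c in queue:
        reachable[r][c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-connected neighbours
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and passable[nr][nc] and not reachable[nr][nc]:
                reachable[nr][nc] = True
                queue.append((nr, nc))
    return reachable

# Toy usage: True = approved terrain, False = forbidden; seed on the road cell (0, 0).
passable = [[True, True, False],
            [False, True, True],
            [True, False, True]]
print(feasible_grid(passable, seeds=[(0, 0)]))  # the isolated cell (2, 0) stays infeasible
```

Generating one such grid per vehicle type, each with its own `passable` test, is exactly the extension suggested above.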

6.2 Surrogate model versus real model

Using a surrogate model for the global search resulted in a great speedup (see Section 5.2). The huge speedup is probably because the real wave propagation calculation was not completely isolated from the rest of the environment: every function call with the real model requires that the DFs in the GUI actually move. One alternative to the surrogate model could be to separate the computations from the user interface to achieve a fairer comparison between the surrogate and the real model.

The outliers in the top left of Figure 10 (c) were a problem, because when a global search found one of them as an initial guess for a local solver, the final result was far away from the global optimum. The problem was solved by evaluating the real model on the best 100 solutions from the global solver and picking the best one as the initial guess. Another problem with the surrogate model is that a maximum range has to be specified manually for each DF such that the radii (at least approximately) match the circles in Figure 4. It would be possible to set these radii automatically by brute-force searching for the closest point to the DF that has a bearing, or by using the signal-to-noise ratio and checking at which distance it reaches some threshold value. A third limitation is that the surrogate model does not take jammers into account.

6.3 Optimization methods

Comparing the different initial position methods turned out to be a real challenge. The first three methods in Table 2 did not converge, so a decision had to be made on how long they should be allowed to run. Setting the time limit too far from the convergence time of the last three methods would give the methods different preconditions and therefore an unfair result. Setting the same number of surrogate function calls for the last four methods would also be unfair, since PSO and GA (in addition to the function calls) perform a lot more computations than RRS and GPS. Testing different variants of the local search also led to questions about how the measurements should be performed, since GPS is highly dependent on its initial grouping. A decision was made to let PSO find an initial grouping, since this is how the local search will be used in practice, i.e. a global solver finds a good initial grouping for the local solver. The drawback of this is that the initial groupings were all very similar, and the result may therefore be strongly dependent on the tested scenarios.


6.3.1 Particle swarm

The best method for finding an initial grouping with the surrogate model proved to be particle swarm (see Figure 16 and Appendix A.1), but the improvement compared to RRS and GA was not large. The convergence criterion for particle swarm in Equation 15 was seldom fulfilled, i.e. the swarm often did not converge. The reason for this was probably that the particles could not move freely in the desired direction, because no feasible point was found for small values of the square width, and some of the particles got stuck far away from the swarm. A better criterion was to analyze the distance from the best top percent of particles to the swarm's global best solution instead of computing the average distance over all particles (see MaxDistQuick in [20]), but the swarm still did not converge in S1 during some of the runs. One could also let the particles move to infeasible points and penalize these solutions, e.g. with a function value of zero. However, since there were so many infeasible points, this was never tried, as the probability of a particle finding a feasible solution (i.e. a possible grouping) with this approach would be very low.
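
The mentioned criterion can be sketched as follows (a sketch of the idea only; the exact MaxDistQuick definition in [20] may differ, and representing a particle as one flat coordinate tuple over all DF positions is an assumption):

```python
import math

def top_fraction_converged(particles, values, global_best, tol, fraction=0.1):
    """Convergence test: keep only the best `fraction` of the particles (by
    surrogate value) and require that their largest distance to the swarm's
    global best position is below `tol`."""
    n_keep = max(1, int(len(particles) * fraction))
    ranked = sorted(zip(values, particles), key=lambda vp: vp[0], reverse=True)
    best_particles = [p for _, p in ranked[:n_keep]]
    return max(math.dist(p, global_best) for p in best_particles) < tol
```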

It was a big challenge to analyze how the swarm behaved and to see where the particles were located in each time step, mostly due to the difficulty of visualizing one particle, since it consists of multiple DF positions, see Equation 9. One interesting approach would be to plot the position of each particle as the average of its DF positions in each time step, to see approximately how the particles move.

The values of the parameters c1, c2, kinit and w were adjusted manually to achieve a good trade-off between a fast convergence rate and a high function value, but more structured tests remain to be done.

6.3.2 Recursive random search and genetic algorithm

The number of function calls in RRS and GA is proportional to the number of iterations and increases in proportion to −1/log(kdecrease), see Equation 12 and Figure 12b. Therefore, the value of kdecrease can be determined automatically by the method, but this requires that the user specifies how large a resource budget is available for the method, in terms of time, function calls or iterations. The advantage is that the convergence rates are deterministic (even though the methods are stochastic) and will be the same every run, unlike PSO where the run-time varies a lot, see Figure 16d.
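
To make the proportionality concrete, here is a small sketch under the assumption that the method shrinks its search radius by the factor kdecrease each iteration until a minimum radius is reached (the actual stopping rule is given by Equation 12):

```python
import math

def iterations_until_converged(k_decrease, r_init, r_min):
    """Shrink steps needed before r_init * k_decrease**n falls below r_min;
    for a fixed radius ratio this grows like -1 / log(k_decrease)."""
    return math.ceil(math.log(r_min / r_init) / math.log(k_decrease))

for k in (0.90, 0.95, 0.99):
    print(k, iterations_until_converged(k, r_init=5000.0, r_min=50.0))
# 0.90 -> 44, 0.95 -> 90, 0.99 -> 459 iterations for a 100x reduction
```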

When the number of elite individuals was varied in GA (see Figure 15), the result did not change much, just a 1–2% increase when choosing Nelites = 20 instead of Nelites = 2, but the number of function calls per iteration decreases from Nindividuals − 2 to Nindividuals − 20, since the fitness of elite individuals does not have to be recomputed. The crossover operation barely improved the result at all (< 1%) and did not have any impact on the number of function calls, since mutation was performed on failed crossover groupings. The reason for the poor crossover improvement was probably that the number of dimensions in the problem is too low and that the implementation in Section 4.6 was too restricted.

6.3.3 Function calls per iteration for RRS, PSO and GA

When increasing the number of function calls per iteration, the function value first increased a lot, up to a certain limit. The number of function calls per iteration corresponds to the number of particles for PSO in Figure 13, the number of candidates for RRS in Figure 11 and the number of individuals (minus elites) for GA in Figure 15a. Above this limit the result did not change much, but the total number of function calls increased linearly. It is up to the practitioner to choose the exact parameters for a good trade-off between accuracy and time.

6.3.4 Pattern search

The convergence value for pattern search seems to be independent of which selection type is used, as can be seen in Figure 17 (a) and (c). The small variance is more likely to come from the stochastic behavior of particle swarm when the initial positions were retrieved. It is also difficult to draw any conclusion regarding whether or not to choose the highest position. Earlier experiments in the project, when pattern search was also used as a global solver with random initial positions, showed that choosing the highest point in every square was preferable. But now that particle swarm already finds a relatively good solution, the advantage of choosing the highest point can no longer be seen. Instead it is better to choose the first point, since it does not require looping through all points in the square. With no upper limit on the dynamic square width, a lot more function calls are made (see Figure 17 (b)) but the result is not improved. One explanation is that with a too large square size, the chosen point is far away from the wanted one, which makes pattern search ignore both the desired direction and the desired step length. With an upper limit, the number of function calls is similar to the method with static square width (see Figure 17 (d)), except in S2 with the highest point and dynamic square width, which evaluates more function calls. The result does not differ much between the point selection methods in Table 3. However, I think static first is preferable due to its low complexity: just one square is searched and the first feasible point returned. If the objective function becomes faster to evaluate in the future, a dynamic square width with the highest feasible point could be a bottleneck, even though it is no problem today.
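
The recommended static-first selection, and the dynamic variant that grows the square until a feasible point is found, can be sketched as follows (a sketch only; the grid representation, the centring of the square and the doubling growth factor are assumptions):

```python
def first_feasible_in_square(grid, center, width):
    """Return the first feasible grid point inside a square of the given width
    centred on `center`, or None if the square contains no feasible point."""
    cr, cc = center
    half = width // 2
    for r in range(max(0, cr - half), min(len(grid), cr + half + 1)):
        for c in range(max(0, cc - half), min(len(grid[0]), cc + half + 1)):
            if grid[r][c]:
                return r, c
    return None

def first_feasible_dynamic(grid, center, width_init, width_max):
    """Dynamic variant: grow the square until a feasible point is found or the
    maximum width wmax is reached (cf. Figure 17)."""
    width = width_init
    while width <= width_max:
        point = first_feasible_in_square(grid, center, width)
        if point is not None:
            return point
        width *= 2  # growth factor is an assumption
    return None
```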

When an opportunistic polling strategy was used with pattern search, the number of function calls required decreased dramatically for both the average and the worst case result, see Figure 18 (b) and (d). However, the final objective function value was worse if no sorting with the surrogate function was used: even though there was no noticeable difference in the average run (Figure 18 (a)), the worst case deterioration in S1 and S3 (Figure 18 (c)) was substantial. For the third polling type, sorted opportunistic, the number of function calls required decreased with no noticeable negative change in the objective function value.

The step size increase factor k1 was fixed to k1 = 1 during all tests, but higher values should be tested in the future. The initial delta ratio k∆ was not tested under structured circumstances either and needs to be investigated further or supplied by the user.

6.3.5 Combined solvers

Concerning the fast and slow combined solvers (Sections 4.7 and 5.8), the reason why the fast solver that used the surrogate model was better than the slow one in S1 for all three methods could be that the parameters were tuned for the surrogate model. If this is the reason and a decision is made to choose the slow model for accuracy, new parameter tests for the chosen global method should be performed. Another reason may be that the surrogate model is more continuous than the real model and does not contain small areas without coverage, like the blue dots close to AROI in Figure 3b. The most probable cause, however, is that the correlation between the surrogate model and the real model was very high in S1 (see Figure 10a) and that too few runs were made to draw a statistically reliable conclusion that the fast combined version actually was better.

When inspecting the results in Sections 5.6 and 5.8 and Appendix A.4, there is no clear answer to which one of the global methods to use. PSO did indeed yield a better result for all scenarios when finding an initial grouping with the surrogate model (Section 5.6), and in S1 and S2 for the slow combination at a lower number of function calls (Section 5.8), but the final result is strongly dependent on the choice of hyper-parameters for the methods, so the accuracy of RRS and GA could be improved if more function calls were allowed. PSO also required workarounds for finding a feasible point given a velocity, as opposed to RRS and GA, which did not have this problem since they only require that a random feasible point inside a square area can be generated. Controlling the convergence rate for PSO also proved to be hard, since it involves adjusting multiple parameters (c1, c2 and w) instead of just one for GA and RRS (kdecrease, which can be set automatically when the amount of resources is specified). Given these disadvantages of PSO, and the fact that GA yields a better worst case result than RRS with a smaller maximum number of function calls (see Appendix A.4), my suggestion would be to choose GA as the global method.

6.4 Global optimum

In the smaller scenario that was tested, with only 65 possible points, PSO found the global optimum in 100 out of 100 runs. This is no guarantee that the globally best grouping will be found in real scenarios. The region was geographically very small, only part of a road and some approved terrain nearby. In real scenarios, the feasible points are not connected in the same way, and there are infeasible regions in between that the methods have to pass. The fact that the final result varied between the combined solvers (see Section 5.8) strongly speaks against the final grouping being globally optimal, even though it may be good enough in practice. However, if it is important to find the global optimum, one could still try to choose the parameters such that more accuracy is achieved at the cost of more function calls.

6.5 Future work

If the placement support that was developed is to be used in the future, the following needs to be done:

• Extend approved terrain to different vehicle types.

• Either extend the surrogate model so that jamming and communication limits are taken into account, or use an optimized version of the real model.

• Check the robustness of the methods on more scenarios and on scenarios with different geometries of AG, AT and AROI.

• Find new parameter values that are optimal for the real objective function if the slow combined solver is to be used.

• Compare GPS with other local search methods such as Nelder–Mead.


7 Conclusions

The main objective of this project was to develop a placement support for SIGINT units. A flood-fill algorithm for generating a grid of feasible placement points was successfully developed, but it remains to be extended to handle different vehicle types. Particle swarm turned out to be the best way to find a good initial grouping with the surrogate model, and it also found the global optimum when tested on a smaller scenario. The global search methods became very time efficient when using the surrogate model. However, plots corresponding to Figure 10 should be created and analyzed for future scenarios to ensure a good enough correlation with the real model. When an initial grouping had been found, the result was fine-tuned with a local search performed by pattern search and the real wave propagation model. An opportunistic polling strategy sorted by the surrogate value for pattern search resulted in the same objective function value but required far fewer function calls. Finding feasible points given a desired direction and step length turned out to be a huge challenge and required workarounds for both particle swarm and pattern search. Therefore the genetic algorithm, which is free of this problem, seems to be the best choice among the global methods.


References

[1] Lars Berglund and Göran Kindvall. Telekrig. 2005.

[2] Jenna Carr. An introduction to genetic algorithms. 2014.

[3] Ian D. Coope and Christopher J. Price. Positive bases in numerical optimization. Computational Optimization and Applications, 21(2):169–175, 2002.

[4] Russell C. Eberhart and Yuhui Shi. Particle swarm optimization: developments, applications and resources. In Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No.01TH8546). IEEE, 2001.

[5] Försvarsmakten. Telekrig: lärobok för armén. Stockholm, 1997.

[6] J. Kennedy and R. Eberhart. Particle swarm optimization. IEEE, 1995.

[7] Oliver Kramer, David Echeverría Ciaurri, and Slawomir Koziel. Derivative-free optimization. In Computational Optimization, Methods and Algorithms, pages 61–83. Springer Berlin Heidelberg, 2011.

[8] Tarig Faisal, Mahmud Iwan, Rini Akmeliawati, and Hayder M.A.A. Al-Assadi. Performance comparison of differential evolution and particle swarm optimization in constrained optimization. Procedia Engineering, 41:1323–1328, 2012.

[9] Mathworks. Comparison of six solvers, 2019. https://se.mathworks.com/help/gads/example-comparing-several-solvers.html, last accessed on 2019-03-28.

[10] Mathworks. Find global minima for highly nonlinear problems, 2019. https://www.mathworks.com/discovery/genetic-algorithm.html, last accessed on 2019-05-17.

[11] Mathworks. How the genetic algorithm works, 2019. https://se.mathworks.com/help/gads/how-the-genetic-algorithm-works.html, last accessed on 2019-05-17.

[12] Mathworks. Pattern search terminology, 2019. https://se.mathworks.com/help/gads/pattern-search-terminology.html, last accessed on 2019-03-28.

[13] Eva-Marie Nosal. Flood-fill algorithms used for passive acoustic detection and tracking. In 2008 New Trends for Environmental Monitoring Using Passive Systems. IEEE, October 2008.

[14] Luis Miguel Rios and Nikolaos V. Sahinidis. Derivative-free optimization: a review of algorithms and comparison of software implementations. Journal of Global Optimization, 56(3):1247–1293, July 2012.

[15] Yuhui Shi and Russell C. Eberhart. Parameter selection in particle swarm optimization. In V. W. Porto, N. Saravanan, D. Waagen, and A. E. Eiben, editors, Evolutionary Programming VII, pages 591–600, Berlin, Heidelberg, 1998. Springer Berlin Heidelberg.

[16] Virginia Torczon. On the convergence of pattern search algorithms. SIAM Journal on Optimization, 7(1):1–25, February 1997.


[17] Felipe A. C. Viana, Christian Gogu, and Raphael T. Haftka. Making the most out of surrogate models: Tricks of the trade. In Volume 1: 36th Design Automation Conference, Parts A and B. ASME, 2010.

[18] Tao Ye and Shivkumar Kalyanaraman. A recursive random search algorithm for large-scale network parameter configuration. ACM Press, 2003.

[19] Zelda B. Zabinsky. Random search algorithms. Technical report, Department of Industrial and Systems Engineering, University of Washington, 2009.

[20] Karin Zielinski, Dagmar Peters, and Rainer Laur. Run time analysis regarding stopping criteria for differential evolution and particle swarm optimization. April 2019.


Appendices

A Result tables

A.1 Data for initial positions

Table 7: Worst surrogate value

S1 S2 S3

MD 201.26 280.45 53.68

MA 146.44 223.6 86.86

RS 441.89 493.74 246.94

RRS 470.72 519.67 262.85

PSO 482.2 528.0 266.29

GA 477.3 526.38 265.95

GPS 173.25 219.47 122.05

Table 8: Average surrogate value

S1 S2 S3

MD 245.52 332.59 136.05

MA 269.94 309.01 146.61

RS 452.92 503.03 253.1

RRS 477.13 524.3 266.04

PSO 482.44 531.65 268.22

GA 480.14 529.55 266.8

GPS 266.76 339.98 160.82

A.2 Data for selection types

Table 9: Objective function values with maximum dynamic square size

S1 S2 S3

STATICHIGHEST 475.192724609375 516.327880859375 178.16180419921875

STATICFIRST 480.127392578125 499.125439453125 175.14981689453126

DYNAMICHIGHEST 473.16796875 512.890234375 177.2662109375

DYNAMICFIRST 480.39716796875 513.538623046875 185.84488525390626

Table 10: Number of function calls with maximum dynamic square size

S1 S2 S3

STATICHIGHEST 85.6 170.0 99.4

STATICFIRST 65.0 173.1 116.9

DYNAMICHIGHEST 107.5 248.2 108.0

DYNAMICFIRST 118.6 160.9 126.7


Table 11: Objective function values without maximum dynamic square size

S1 S2 S3

STATICHIGHEST 480.987109375 516.07255859375 176.79981689453126

STATICFIRST 480.1703125 499.4400390625 171.89041748046876

DYNAMICHIGHEST 481.08232421875 504.737060546875 186.94041748046874

DYNAMICFIRST 475.538623046875 515.172021484375 194.84019775390624

Table 12: Number of function calls without maximum dynamic square size

S1 S2 S3

STATICHIGHEST 94.2 169.4 120.8

STATICFIRST 90.3 168.4 104.3

DYNAMICHIGHEST 167.8 224.2 195.4

DYNAMICFIRST 169.0 283.0 223.0

A.3 Data for polling types

Table 13: Average objective function value

S1 S2 S3

COMPLETE 479.324755859375 499.44697265625 186.03438720703124

OPPORTUNISTIC 474.349267578125 493.3087890625 173.97115478515624

OPPORTUNISTICSORTED 479.470458984375 505.5677734375 182.82581787109376

Table 14: Average number of function calls

S1 S2 S3

COMPLETE 115.7 177.9 111.9

OPPORTUNISTIC 92.5 117.8 73.0

OPPORTUNISTICSORTED 59.9 159.5 80.5

Table 15: Worst objective function value

S1 S2 S3

COMPLETE 476.5274123954267 498.96407389632463 172.06654988244438

OPPORTUNISTIC 426.16769465667414 492.79610083136754 102.38588609595331

OPPORTUNISTICSORTED 476.5274123954267 504.67658175815336 170.79407804255675

Table 16: Maximum number of function calls

S1 S2 S3

COMPLETE 164.0 204.0 141.0

OPPORTUNISTIC 110.0 169.0 130.0

OPPORTUNISTICSORTED 83.0 167.0 120.0


A.4 Data for combined solvers

Table 17: Function values for the fast combination, minimum over 10 runs

S1 S2 S3

RRS 200.69 189.04 79.85

PSO 200.74 188.4 73.19

GA 200.7 189.34 81.54

Table 18: Function calls for the fast combination, maximum over 10 runs

S1 S2 S3

RRS 111.0 193.0 142.0

PSO 111.0 197.0 120.0

GA 101.0 196.0 138.0

Table 19: Function values for the slow combination, minimum over 10 runs

S1 S2 S3

RRS 197.58 192.55 83.0

PSO 201.61 200.33 86.83

GA 199.06 198.9 90.1

Table 20: Function calls for the slow combination, maximum over 10 runs

S1 S2 S3

RRS 2813.0 2780.0 2692.0

PSO 3452.0 3365.0 2336.0

GA 2717.0 2713.0 2674.0
