
Collaborative Autonomy for Manned/Unmanned Teams

Steve Jameson and Jerry Franke
Lockheed Martin - Advanced Technology Laboratories
[email protected]

Robert Szczerba and Sandy Stockdale
Lockheed Martin Systems Integration – Owego
[email protected]

Abstract UAVs offer tantalizing capabilities to the warfighter, such as tireless observation, quick recognition, and rapid reaction to today’s changing battlespace. These capabilities matter because they aid warfighters in their duties. Today, unmanned systems exist that extend the vision and the reach of the warfighter. However, warfighters spend so much time managing these assets that their own effectiveness suffers. This is a particular problem when the warfighter’s role is one demanding continuous sensory and mental workload, such as the Co-Pilot/Gunner (CPG) of an Apache Longbow attack helicopter. Autonomy, the ability of vehicles to conduct most of their operation without human supervision, can help relieve the burden of providing continuous oversight of the UAV’s operation. It moves the warfighter’s role from control to command, enabling warfighters to perform their duties more effectively and successfully. Collaboration, the ability of teams of vehicles to coordinate their activities without human oversight, moves unmanned systems to the level of a true force multiplier, giving a single human warfighter the power of multiple coordinated, intelligent platforms.

Introduction1 Lockheed Martin has developed a general architecture for Collaborative Autonomy that provides both the Autonomy and the Collaboration necessary to achieve this force multiplication. This architecture provides the capability for individual unmanned vehicles to operate with unparalleled degrees of intelligence and autonomy, and for groups of unmanned vehicles to operate together effectively as a team, providing greater effectiveness than an equal number of vehicles operating independently. Collaborative Autonomy allows the human warfighter to command the unmanned vehicles as an active member of a warfighting team, rather than as a detached controller (Figure 1).

Central to the architecture are state-of-the-art software components for Mission Planning, Collaboration, Contingency Management, Situational Awareness, Communications Management, Resource Meta-Controller, and Vehicle Management. Lockheed Martin is currently employing and expanding this architecture to turn state-of-the-art unmanned vehicles into transformational warfighting teams.

Background The U.S. Military relies heavily on the use of unmanned vehicles (UVs) for a variety of tasks, including surveillance and reconnaissance, explosive ordnance disposal, and, to an increasing degree, strike against terrorist and other targets. In all cases, the unmanned vehicle operates under the direct supervision or control of a human warfighter. The goal of the unmanned vehicles is to provide a force multiplier for the human warfighter that enables the human and the unmanned vehicle – the Manned/Unmanned Team – to perform tasks more effectively or more safely than a human warfighter can alone. This trend is certain to continue, since UVs have proven their effectiveness repeatedly in conflicts from Bosnia and Kosovo to Afghanistan and Iraq. For a variety of reasons, it is not likely that we will see unmanned vehicles operating with full autonomy in most military applications in the foreseeable future, and so the paradigm of Manned/Unmanned Teaming will continue to be the dominant approach to the deployment of UVs in military applications. One of the domains of particular interest is the teaming of Unmanned Air Vehicles (UAVs) with human pilots in a scout or attack helicopter such as the Apache Longbow. The US Army, US Navy, and DARPA have pursued the development of manned/unmanned teaming with human helicopter pilots on several programs, including the Army’s Airborne Manned/Unmanned System Technology Demonstration (AMUST-D) and Hunter Standoff Killer Team (HSKT) ACTD, the Navy’s Intelligent Autonomy Future Naval Capability (IA-FNC) program, and the DARPA/Army Unmanned Combat Armed Rotorcraft (UCAR) program, with Lockheed Martin as a participant.

1 Presented at the American Helicopter Society 61st Annual Forum, Grapevine, TX, June 1-3, 2005. Copyright © 2005 by the American Helicopter Society International, Inc. All Rights Reserved. Distribution Statement A: Approved for Public Release, Distribution Unlimited.

Figure 1. Unmanned Vehicle Teams on the digital battlefield can act as a force multiplier if they have the autonomy and collaboration capabilities necessary to operate in teams without extensive human supervision.

To meet the demanding requirements of achieving a robust force multiplier capability while limiting human workload demands, Lockheed Martin has developed an architecture and a set of technologies for Collaborative Autonomy which provides:

• A high degree of autonomy for each individual vehicle, enabling robust and sophisticated capabilities with limited human intervention

• Collaborative team operations, enabling multiple vehicles to operate as a team with the human warfighter and allowing a single human to command multiple vehicles with no more workload than commanding a single vehicle.

In this paper, we provide an overview of the Collaborative Autonomy approach, describe the details of the Collaborative Autonomy components, and describe a prototype implementation of the Collaborative Autonomy architecture as a Manned/Unmanned teaming demonstration.

Approach to Collaborative Autonomy Lockheed Martin has developed and demonstrated a revolutionary approach to Collaborative Autonomy for heterogeneous teams of manned and unmanned vehicles [1]. The approach is specifically oriented to allow a team of unmanned vehicles to be commanded by a warfighter such as the CPG (Co-Pilot Gunner) of an Apache Longbow, who already has a demanding workload. This collaborative autonomy approach enables an unmanned vehicle team to be truly transformational by enabling the following five critical attributes (Figure 2):

• Intelligent – Autonomous Mission Planning and Execution rapidly finds and implements the best solution to complex tactical problems, ensuring mission success on a dynamic battlefield.

• Collaborative – Collaboration and Teaming capabilities produce a lethal warfighting team that shares information, responsibilities, and tasks, and that interacts with human warfighters and other systems as a team rather than as separate individuals.

• Aware – Comprehensive, shared, and predictive Situational Awareness overcomes the “fog of war” to enable precision engagements with precision information.

• Responsive – Holistic Contingency Management ensures survival and mission effectiveness of UV teams in the face of the unexpected, including a “reflexive response” capability that allows intelligent “short-circuiting” of higher level planning functionality for rapidly changing battlefield conditions.

• Agile – Tactical Maneuvering exploits terrain and avoids obstacles, enabling the unmanned vehicle to survive and surprise.

Collaborative Autonomy is implemented by a Mission Management system that provides the high levels of intelligence necessary for autonomous and collaborative mission operations. Autonomy lets teams of unmanned vehicles operate with only top-level human guidance and no need for detailed supervision. Collaboration is essential for team effectiveness.

Figure 2. The Five Key Attributes of Mission Management.

Collaborative Autonomy Architecture The Collaborative Autonomy architecture is segmented into seven major components (Figure 3):

• Mission Planning – develops plans for the team and for individual vehicles

• Collaboration – manages team formation and interaction among team members

• Contingency Management – detects, assesses, and responds to unexpected events

• Situational Awareness – creates the Common Relevant Operating Picture (CROP) for the team

• Communications Management – manages the interaction with the vehicle’s communications systems

• Air Vehicle Management – manages the air vehicle’s flight systems, sensors, and weapons

• Resource Meta-Controller – manages processing resources and dynamically allocates them to different components as necessary.

These components work in concert to achieve mission objectives without violating constraints. This system architecture offers substantial advantages over existing approaches: it partitions functionality into components that require distinct disciplines for analysis, development, and operation, and it treats autonomy as inherently collaborative, both with the other autonomous systems of the team and with systems external to the team. As a result, the approach is more extensible and scalable.

The architecture is extensible because the components are decoupled; analysis and development can be performed by different disciplines with relative independence, and novel algorithms can be added with minimal disturbance to existing components. Collaboration is so integral to the system architecture that an entire component is dedicated to it and many other components have collaborative concepts at their core. For example, mission planning is hierarchical in nature, so teams can be formed and reformed, with tasks allocated and reallocated to team members. Contingency management is likewise hierarchical, supporting the concept of issues being escalated to and addressed at the team level. Such team-level issues are handled poorly in conventional approaches.

The architecture is scalable because collaboration has been incorporated at the core of key components. The architecture is designed to work with multiple instances of itself, so that the team’s vehicles operate synergistically and autonomously, and intermittent communication between instances, or the complete loss of an instance, is handled gracefully.
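To make the decoupling concrete, the following sketch shows, in illustrative Python (ours, not the fielded implementation; the class, topic, and field names are all hypothetical), how components could interact through a simple publish/subscribe bus so that a new component or algorithm can be added without modifying existing ones.

```python
from collections import defaultdict
from typing import Callable, Dict, List


class MessageBus:
    """Minimal publish/subscribe bus decoupling Mission Management components."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)


class ContingencyManagement:
    """Watches situation updates and raises an alert when a plan assumption breaks."""

    def __init__(self, bus: MessageBus) -> None:
        self.bus = bus
        bus.subscribe("situation.update", self.on_situation_update)

    def on_situation_update(self, msg: dict) -> None:
        if msg.get("threat_in_corridor"):  # hypothetical trigger condition
            self.bus.publish("contingency.alert", {"reason": "pop-up threat"})


class MissionPlanning:
    """Replans when alerted; no component calls another directly."""

    def __init__(self, bus: MessageBus) -> None:
        self.bus = bus
        bus.subscribe("contingency.alert", self.on_alert)

    def on_alert(self, msg: dict) -> None:
        self.bus.publish("plan.update", {"replanned_because": msg["reason"]})


bus = MessageBus()
ContingencyManagement(bus)
MissionPlanning(bus)
bus.subscribe("plan.update", lambda m: print("new plan:", m))
bus.publish("situation.update", {"threat_in_corridor": True})
```

Because components only share topics, intermittent communication or the loss of one instance degrades the exchange of messages rather than breaking direct call chains, which is the property the scalability argument above relies on.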

DARRS025..ppt

Air Vehicle

Collaboration

Mission Planning

Situational Awareness

Contingency Management

Communications

Functional Modules

Intelligent Agents

Knowledge/DataModels

Resource Meta-Controller

Weapons/Sensors

Air Vehicle Management

Flight Control System

Collaborative AutonomyMission Management

Communications Management

Figure 3. Collaborative Autonomy is achieved through a Mission Management system made up of a set of intelligent components that implement higher-level functions on top of the vehicle systems.

Mission Planning Mission Planning onboard the autonomous system performs pre-mission and dynamic in-mission replanning for the collaborative team. Mission planning develops collaborative synchronized plans for sensor employment, flight paths, communications, and engagements.

Generally, most existing mission planning systems are monolithic in nature. These systems are very good at planning for specific situations that are predetermined, but are poor at reacting to unforeseen events. Unfortunately, it is unrealistic for a mission planning system to have planners to handle all situations and all contingencies. To address these shortcomings, Lockheed Martin has developed a revolutionary approach to this problem via the Mission Planning Toolkit [2]. In this toolkit, planning algorithms are broken down into their smallest functional subcomponents, called “primitives”. The algorithm primitives are then collected into a library of modules, each with specific inputs, outputs, and functionality. The toolkit is used to construct a specific planning system on the fly, based on the current situational awareness. This allows for the dynamic construction of mission planners, as opposed to just mission plans, to handle unpredicted events. For anticipated or common mission scenarios, pre-constructed planner templates can be used for even faster response times.
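A minimal sketch of the “planner built from primitives” idea follows (illustrative Python; the primitive names, the sequential composition, and the situation fields are assumptions made for brevity, and the actual Mission Planning Toolkit [2] is far richer):

```python
from typing import Callable, Dict, List

# A planning "primitive" transforms a partial plan given the current situational picture.
Primitive = Callable[[dict, dict], dict]


def select_search_area(plan: dict, situation: dict) -> dict:
    plan["area"] = situation.get("named_area", "ZEBRA")
    return plan


def route_around_threats(plan: dict, situation: dict) -> dict:
    plan["route"] = [wp for wp in situation.get("waypoints", [])
                     if wp not in situation.get("threatened", [])]
    return plan


def schedule_sensor_scans(plan: dict, situation: dict) -> dict:
    plan["scans"] = [{"waypoint": wp, "sensor": "EO/IR"} for wp in plan.get("route", [])]
    return plan


# Library of primitives, each with declared inputs, outputs, and functionality.
LIBRARY: Dict[str, Primitive] = {
    "select_search_area": select_search_area,
    "route_around_threats": route_around_threats,
    "schedule_sensor_scans": schedule_sensor_scans,
}


def construct_planner(primitive_names: List[str]) -> Callable[[dict], dict]:
    """Assemble a mission planner on the fly from named primitives in the library."""
    chain = [LIBRARY[name] for name in primitive_names]

    def planner(situation: dict) -> dict:
        plan: dict = {}
        for primitive in chain:
            plan = primitive(plan, situation)
        return plan

    return planner


# A pre-constructed "planner template" for a common reconnaissance scenario.
recon_planner = construct_planner(
    ["select_search_area", "route_around_threats", "schedule_sensor_scans"])
print(recon_planner({"named_area": "ZEBRA",
                     "waypoints": ["WP1", "WP2", "WP3"],
                     "threatened": ["WP2"]}))
```

The point of the sketch is the distinction the text draws: the output of `construct_planner` is a planner, assembled to suit the current situation, not a fixed plan.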

During operation, the Mission Planning Toolkit works in a hierarchical fashion, with mission plans at the highest level (e.g., Teams A and B recon area ZEBRA), team plans at the next level, and individual vehicle plans at the lowest level. These plans optimize and/or account for factors such as:

• High-level mission objectives and constraints

• Resource allocation for the number of vehicles

• Payload configuration for different mission objectives

• Collaborative use of onboard sensors and external ISR assets to detect, identify, and geo-locate vehicle and dismounted infantry targets of interest

• Communication events that support the teams' information dissemination and synchronization requirements

• Routes that support the planned use of sensors and communications while minimizing threat exposure

• Target engagement planning and weapon deployment sequencing.

Mission planning accepts objectives and constraints for planning missions, as well as alerts indicating that replanning is required. It uses geographic information (e.g., terrain, obstacles, and cultural features), environmental information (e.g., weather), situational information (e.g., threat locations and capabilities), and vehicle/team and external asset capability information (e.g., payload availability and mobility models). Mission planning generates mission plans including travel plans, sensor plans, communications plans, and weapon plans. At the team level, it generates task objectives and constraints for lower-level mission planning to honor; it then accepts, combines, and deconflicts those plans when lower-level mission planning responds.
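A highly simplified sketch of that team-to-vehicle planning contract follows (illustrative Python; the task fields, the single “time on station” resource, and the scaling deconfliction are assumptions made for brevity, not the toolkit’s actual algorithms):

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Task:
    objective: str                                   # e.g. "recon sector N of area ZEBRA"
    constraints: Dict[str, float] = field(default_factory=dict)


@dataclass
class VehiclePlan:
    vehicle_id: str
    task: Task
    time_on_station: float                           # single resource to deconflict


def vehicle_level_plan(vehicle_id: str, task: Task) -> VehiclePlan:
    """Each vehicle plans its own task within the constraints it was handed."""
    requested = min(task.constraints.get("max_minutes", 30.0), 30.0)
    return VehiclePlan(vehicle_id, task, time_on_station=requested)


def team_level_plan(tasks: List[Task], vehicles: List[str]) -> List[VehiclePlan]:
    """Allocate tasks to vehicles, gather vehicle plans, then combine and deconflict them."""
    plans = [vehicle_level_plan(v, t) for v, t in zip(vehicles, tasks)]
    window = 60.0                                    # shared mission window (minutes)
    total = sum(p.time_on_station for p in plans)
    if total > window:                               # trivial deconfliction: scale to fit
        for p in plans:
            p.time_on_station *= window / total
    return plans


team = team_level_plan(
    [Task("recon sector N", {"max_minutes": 40}), Task("recon sector S", {"max_minutes": 40})],
    ["UAV-1", "UAV-2"])
for p in team:
    print(p.vehicle_id, p.task.objective, round(p.time_on_station, 1))
```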

Collaboration Collaboration, i.e. the ability of multiple vehicles to interact to carry out a team mission, is inherent in the Collaborative Autonomy architecture. Most components, including Mission Planning, Contingency Management, Situational Awareness, and Communications Management are designed to facilitate the collaborative operations of a team of vehicles.

The Collaboration component embodies several functions (Figure 4) that are uniquely required in order to support this operation. These include:

• Sharing information and tasks

• Allocating roles and responsibilities

• Coordinating task execution

• Dynamically forming teams

• Interacting with external assets

• Interacting with the human Warfighter.

Two main technology elements of the Collaboration component are the Grapevine information sharing technology and dynamic team formation and management.

Grapevine information sharing [3] handles the aspects of collaboration that deal with information sharing and coordination between unmanned team members, between unmanned systems and the human Warfighter, and between the unmanned team and external systems such as C4ISR and Networked Fires. On every unmanned vehicle, the Collaboration component sets up intelligent agents known as Proxies to represent each of the other manned or unmanned entities with which that vehicle needs to communicate. Each Proxy agent contains a set of criteria that are used to select and prioritize information for dissemination to the entity represented by the Proxy, known as the Client. The set of Proxy agents continually evaluates the information available to the Collaborative Autonomy system, selecting and prioritizing information for dissemination to other manned and unmanned team members. The Proxy agent’s criteria are updated in response to changing conditions, such as new team members, changes in team member roles, or changes in mission tasking, and can also be updated to reflect explicit requests for information from a human Warfighter or external system.
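To make the Proxy concept concrete, the sketch below (illustrative Python, not the Grapevine implementation described in [3]; the information categories, criteria, and client names are hypothetical) maintains one Proxy per Client, scores each new piece of information against that Client’s criteria, and queues the relevant items in priority order for dissemination:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class InfoItem:
    kind: str      # e.g. "track", "status", "plan"
    region: str    # e.g. "ZEBRA"
    payload: dict


@dataclass
class Proxy:
    """Represents one Client: a teammate, the human warfighter, or an external system."""
    client_id: str
    criteria: Callable[[InfoItem], float]  # returns a priority; 0 means "not relevant"
    outbox: List[InfoItem] = field(default_factory=list)

    def evaluate(self, item: InfoItem) -> None:
        if self.criteria(item) > 0:
            self.outbox.append(item)
            self.outbox.sort(key=self.criteria, reverse=True)  # highest priority first


class Grapevine:
    def __init__(self) -> None:
        self.proxies: Dict[str, Proxy] = {}

    def add_proxy(self, proxy: Proxy) -> None:
        self.proxies[proxy.client_id] = proxy

    def update_criteria(self, client_id: str, criteria: Callable[[InfoItem], float]) -> None:
        """Criteria change with new teammates, role changes, tasking, or explicit requests."""
        self.proxies[client_id].criteria = criteria

    def on_new_information(self, item: InfoItem) -> None:
        for proxy in self.proxies.values():
            proxy.evaluate(item)


gv = Grapevine()
# Hypothetical criteria: the CPG only wants threat tracks in the team's assigned area.
gv.add_proxy(Proxy("CPG", lambda i: 2.0 if i.kind == "track" and i.region == "ZEBRA" else 0.0))
gv.on_new_information(InfoItem("track", "ZEBRA", {"type": "SA-8"}))
gv.on_new_information(InfoItem("status", "ZEBRA", {"fuel": 0.7}))
print([i.payload for i in gv.proxies["CPG"].outbox])  # only the track is queued
```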

One important aspect of the Grapevine is the sharing of Situational Awareness information to form a Common Relevant Operational Picture (CROP) across the team. The Collaboration component handles the information sharing operations needed to construct the CROP, and the Situational Awareness component does the information fusion and deconfliction necessary to assemble the shared information into a CROP.

Dynamic Team Formation accommodates the formation and reformation of unmanned vehicle teams as required to meet the mission requirements. At the beginning of the mission, the Collaboration component identifies the set of team members required to meet the mission requirements, and these vehicles exchange information to set up a team. Setting up a team includes determination and distribution to all team members of the team membership, team structure, and allocation of roles within the team. An important element of the team is the allocation of roles to team members to perform responsibilities on behalf of the team, such as coordinating interaction with the human warfighter. When a team member is lost or damaged, new team members become available, or when the mission changes, the team members interact to reform the team and reallocate roles. Reforming the team can include splitting the team into two smaller teams to accomplish separate mission tasks, or merging two or more teams into a single combined team.
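The team formation and role reallocation behavior described above can be pictured, very roughly, as follows (an illustrative Python sketch; the role names are drawn from Figure 4, while the suitability scores and the greedy assignment are assumptions, not the fielded algorithm):

```python
from typing import Dict, List

ROLES = ["team_lead", "sa_lead", "warfighter_gateway"]  # role names drawn from Figure 4


def allocate_roles(members: List[str],
                   suitability: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """Greedily assign each team role to the most suitable remaining member."""
    assignment: Dict[str, str] = {}
    available = list(members)
    for role in ROLES:
        if not available:
            break
        best = max(available, key=lambda m: suitability[m].get(role, 0.0))
        assignment[role] = best
        available.remove(best)
    return assignment


def reform_team(members: List[str], lost: str,
                suitability: Dict[str, Dict[str, float]]) -> Dict[str, str]:
    """When a teammate is lost, rebuild the membership list and reallocate roles."""
    remaining = [m for m in members if m != lost]
    return allocate_roles(remaining, suitability)


suitability = {
    "UAV-1": {"team_lead": 0.9, "sa_lead": 0.4, "warfighter_gateway": 0.6},
    "UAV-2": {"team_lead": 0.5, "sa_lead": 0.8, "warfighter_gateway": 0.3},
    "UAV-3": {"team_lead": 0.2, "sa_lead": 0.5, "warfighter_gateway": 0.9},
}
print("initial roles:", allocate_roles(list(suitability), suitability))
print("after losing UAV-1:", reform_team(list(suitability), "UAV-1", suitability))
```

Splitting or merging teams, as described above, would amount to running the same allocation over different membership lists.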

Contingency Management A key challenge to successful autonomous operations is detection and reaction to unplanned events that affect the execution of the vehicle system’s mission. Contingency Management watches for unexpected influences that affect team plan success, such as payload failure, modified orders, new operational constraints, changing environmental conditions and other unexpected changes in the battlespace (see Figure 5). It works with the Mission Planning component to generate an effective response to the contingency so the mission can be continued.

The Contingency Management component is implemented based on Lockheed Martin’s MENSA technology [4]. MENSA takes each new or updated mission plan and applies algorithms to identify plan dependencies and constraints. Based on these dependencies and constraints, it sets up monitoring agents to check for conditions that violate those dependencies and constraints. During execution of the plan, these agents continually monitor available information to determine if their assigned conditions are met. If the conditions are met, the agent signals that the contingency has occurred and then reasons about the impact of that contingency on the mission plan. If necessary, the Mission Planning component is requested to modify the plan to take the contingency into account.

Figure 4. Collaboration performs the functions necessary to enable a team of unmanned vehicles to function as a team of human warfighters.

For example, vehicle health updates are related to vehicle operational capabilities (such as maximum endurance) before being compared to the requirements of the executing plan to determine if the vehicle can perform its mission as planned. Pop-up threats are assessed with respect to their influence on the planned route that the vehicles will take through threatened territory. This mission-centric approach to contingency management focuses computational resources toward those problems that have real mission impact and reduces the number of false alarms and unnecessary replans that occur.
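The monitoring-agent behavior described for MENSA can be sketched as follows (illustrative Python; the constraint forms, state fields, and thresholds are assumptions, not MENSA [4] itself). Each dependency or constraint extracted from the plan becomes a small monitor that fires only when the condition it guards is violated:

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Monitor:
    """One agent watching a single plan dependency or constraint."""
    name: str
    violated: Callable[[dict], bool]


def monitors_from_plan(plan: dict) -> List[Monitor]:
    """Derive monitors from the plan's dependencies and constraints."""
    return [
        Monitor("endurance",
                lambda state: state["endurance_min"] < plan["required_endurance_min"]),
        Monitor("route_threat",
                lambda state: any(wp in state["threatened_waypoints"] for wp in plan["route"])),
    ]


def check(monitors: List[Monitor], state: dict) -> List[str]:
    """Return only the contingencies that actually impact the executing plan."""
    return [m.name for m in monitors if m.violated(state)]


plan = {"required_endurance_min": 45, "route": ["WP1", "WP2", "WP3"]}
state = {"endurance_min": 50, "threatened_waypoints": ["WP2"]}  # pop-up threat on WP2
print("contingencies requiring replan:", check(monitors_from_plan(plan), state))
```

Only the route-threat monitor fires here, which mirrors the mission-centric filtering described above: a health update that still satisfies the plan’s endurance requirement triggers no replan.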

Contingency Management implements contingency monitoring and plan impact analysis for most contingency types, including air vehicle flight capability degradation, pop-up threats and targets of opportunity, friendly and neutral movement within the battle space, loss of team members, and mission equipment failures. Contingency Management can also determine when an emergency mission abort is required and provides the controlling element with control over the level/type of contingency monitoring performed. Contingency Management takes in mission plans and information regarding the changing situation (e.g. new objectives, new constraints, new obstacles, new threats, new targets, and changes in vehicle/team capabilities). It issues alerts when plans will no longer satisfy objectives and constraints. At the team level, it takes in alerts of contingencies that cannot be handled at a vehicle level and issues alerts to team mission planning for replanning.

Our contingency management approach features a team-wide contingency resolution escalation process in which Contingency Management detects a contingency, assesses the impact, and identifies a plan violation, and then (see the sketch after this list):

1. The affected vehicle locally performs a replan which may resolve the problem

2. If there are tasks that could not be re-planned locally, contingency management then collaborates with other team members to reallocate tasks

3. If there is a reallocation failure, a team replan is triggered

4. If a team replan cannot resolve the situation, contingency management alerts the controlling element (typically a manned asset) of a team planning failure and awaits updated guidance.
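A compact sketch of this escalation ladder follows (illustrative Python; the replanning hooks are hypothetical callables that report success or failure):

```python
from typing import Callable


def resolve_contingency(local_replan: Callable[[], bool],
                        reallocate_tasks: Callable[[], bool],
                        team_replan: Callable[[], bool],
                        alert_controller: Callable[[], None]) -> str:
    """Escalate a contingency through the four levels listed above."""
    if local_replan():             # 1. the affected vehicle replans locally
        return "resolved locally"
    if reallocate_tasks():         # 2. unplannable tasks reallocated among teammates
        return "resolved by task reallocation"
    if team_replan():              # 3. full team replan
        return "resolved by team replan"
    alert_controller()             # 4. report the team planning failure, await guidance
    return "awaiting controlling element guidance"


# Example: the local replan fails but task reallocation succeeds.
print(resolve_contingency(lambda: False, lambda: True,
                          lambda: True, lambda: print("alerting controlling element")))
```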

Situational Awareness The Situational Awareness (SA) component gathers data on the external tactical and environmental situation and processes it into a CROP, which the other Mission Management components use to make their decisions. A pilot or crewmember needs good situational awareness to perform effectively in a manned system. Intelligent autonomous systems also require complete, timely, specific, and relevant information to make good “decisions”.

Figure 5. Contingency Management handles unexpected influences that affect mission plan success, including vehicle system or payload failures, failures by needed external assets, loss or failure of a teammate, unexpected developments in the battlespace, weather and other environmental factors, changes in orders and operational constraints, and loss of contact with the operator(s).

The Situational Awareness (SA) module is implemented by leveraging Lockheed Martin’s technology for Level 1 Data Fusion [6] and Battlefield Assessment, originally developed on the Rotorcraft Pilot’s Associate program. SA performs multiple levels of assessment of the data [5] from onboard sensors and external data sources (Figure 6) to produce the CROP. Level 1 Object Assessment consists of fusing data from onboard sensors, teammate sensors, and external data sources such as C4ISR networks to produce a set of tracks representing friendly and threat entities in the battlespace. In addition to this fusion, SA deconflicts data from each of the teammates to ensure that each vehicle’s CROP is consistent.

Level 2 Situation Assessment consists of evaluating the fused track picture in the CROP to assess friendly and threat sensor coverage and intervisibility, potential threat organizations, and the priority associated with different threats. Level 3 Predictive Battlespace Awareness [7] determines likely threat mobility and future locations, and assesses likely threat intent. Level 4 Process Refinement determines when the information being produced by Situational Awareness does not meet the requirements of the mission and takes action to generate additional information, such as requesting sensor tasking from Mission Planning, from other vehicles, or from the C4ISR networks.
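The four assessment levels can be viewed as a pipeline in which each stage enriches the CROP and the final stage decides whether additional collection is needed. The sketch below is illustrative only; the stage functions are simple stand-ins for the fusion and assessment algorithms referenced in [5], [6], and [7]:

```python
from typing import Dict, List


def level1_object_assessment(reports: List[dict]) -> List[dict]:
    """Fuse onboard, teammate, and C4ISR reports into a deconflicted track list."""
    tracks: Dict[str, dict] = {}
    for r in reports:
        tracks.setdefault(r["track_id"], {}).update(r)  # naive merge by track id
    return list(tracks.values())


def level2_situation_assessment(tracks: List[dict]) -> dict:
    """Assess the fused picture: here, just pull out the hostile tracks as priorities."""
    return {"tracks": tracks,
            "priority_threats": [t for t in tracks if t.get("hostile")]}


def level3_predictive_awareness(situation: dict) -> dict:
    """Project likely threat mobility and intent (placeholder prediction)."""
    situation["predicted"] = [dict(t, predicted_move="toward planned route")
                              for t in situation["priority_threats"]]
    return situation


def level4_process_refinement(situation: dict, required_tracks: int) -> List[str]:
    """Request more collection when the CROP does not meet mission needs."""
    if len(situation["tracks"]) < required_tracks:
        return ["request sensor tasking via Mission Planning"]
    return []


reports = [{"track_id": "T1", "hostile": True, "source": "onboard"},
           {"track_id": "T1", "source": "teammate"},
           {"track_id": "T2", "hostile": False, "source": "C4ISR"}]
crop = level3_predictive_awareness(level2_situation_assessment(level1_object_assessment(reports)))
print(crop["predicted"], level4_process_refinement(crop, required_tracks=5))
```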

In addition to this multi-level processing of sensor information, Situational Awareness collects and maintains other types of information such as weather data, environmental information, and obstacle maps. This information is also used by Mission Planning and other components to make autonomous decisions that guide vehicle behavior.

Air Vehicle Management Air vehicle management (AVM) provides the link between the Collaborative Autonomy components and the vehicle systems. It translates tasks from the Mission Planner into commands for the vehicle sensors, weapons, and flight systems and acts as the point of entry for information from these vehicle systems into Mission Management.

AVM refines route plans to minimize overall exposure to threats, factoring in terrain masking, collision risks, and vehicle dynamics. AVM provides a reflexive obstacle and threat response capability to enhance overall system survivability: AVM quickly maneuvers the vehicle out of harm’s way while the more deliberative system autonomy generates a re-plan to achieve mission objectives. AVM generates trajectory commands based on a library of maneuver primitives, including agile maneuvers that fully span the available flight envelope, providing enhanced maneuvering effectiveness for survivable threat response. AVM accepts travel plans (e.g. flight plans), threat warnings from onboard sensors, and obstacle warnings from obstacle sensors, and issues maneuver commands to the vehicle actuator systems.


Figure 6. Situational Awareness provides a comprehensive assessment of all battlespace information to enable the Collaborative Autonomy functions to operate with precision information.

As part of plan refinement, Mission Planning generates routes between mission “hard points” that minimize the total exposure to known external threats by factoring in the threat type, location, lethality radius, terrain elevation profiles, and the vehicle’s exposure given its position, speed, and attitude. The route plan is then refined within the route plan constraints to further reduce threat exposure, generating detailed flight trajectories that adjust speed, aspect angles, and altitude above the local terrain to reduce the total risk of exposure to external threats. The plan refinement component of AVM produces trajectories that meet the route plan goals and constraints while reducing the exposure to risk from external threats and terrain collisions.
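As an illustration of the trade-off the plan-refinement step makes, the sketch below scores candidate trajectory options by a simple exposure cost (the waypoints, lethality radii, altitude options, and altitude-scaled cost model are all assumptions for illustration; the fielded refinement works over full terrain and vehicle dynamics models):

```python
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def exposure_cost(path: List[Point], threats: List[Dict], altitude_m: float) -> float:
    """Sum a toy exposure penalty over waypoints inside each threat's lethal radius.

    In this model, flying lower (terrain masking) scales the penalty down."""
    cost = 0.0
    for x, y in path:
        for t in threats:
            d = math.hypot(x - t["x"], y - t["y"])
            if d < t["radius"]:
                cost += (t["radius"] - d) * (altitude_m / 100.0)
    return cost


def refine(path: List[Point], threats: List[Dict]) -> Dict:
    """Pick the altitude option (within route-plan constraints) with the least exposure."""
    options = [50.0, 100.0, 200.0]  # candidate altitudes above local terrain, in meters
    best = min(options, key=lambda alt: exposure_cost(path, threats, alt))
    return {"altitude_m": best, "cost": exposure_cost(path, threats, best)}


route = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
threats = [{"x": 5.0, "y": 2.0, "radius": 4.0}]
print(refine(route, threats))  # the lowest altitude wins: masking reduces exposure
```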

Communications Management Communications Management provides and manages data links to connect team members with each other and with external assets (e.g., ISR and Networked Fires) over battlefield networks. Communications software manages this system by implementing the communications plan provided by Mission Planning using available system data links, predicting and monitoring communication Quality of Service (QoS), and optimizing performance of the data links. Communications Management may also request Mission Planning to modify plans to keep QoS at effective levels. The relationship between Collaboration and Communications Management is shown in Figure 7.

Figure 7. Communications Management and Collaboration interact to ensure that needed information is exchanged among team members and with external systems.
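A minimal sketch of the QoS-monitoring loop described above (illustrative Python; the metrics, thresholds, and the form of the replan request are assumptions):

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LinkStatus:
    link_id: str
    latency_ms: float
    loss_rate: float   # fraction of messages lost


def qos_ok(link: LinkStatus, max_latency_ms: float = 500.0, max_loss: float = 0.05) -> bool:
    return link.latency_ms <= max_latency_ms and link.loss_rate <= max_loss


def manage_links(links: List[LinkStatus]) -> List[str]:
    """Keep using links that meet QoS; ask Mission Planning to replan around the rest."""
    actions = []
    for link in links:
        if qos_ok(link):
            actions.append(f"{link.link_id}: continue per communications plan")
        else:
            actions.append(f"{link.link_id}: request Mission Planning modify the plan "
                           "(e.g. relay through a teammate or adjust the route)")
    return actions


print(manage_links([LinkStatus("UAV-1<->CPG", 120.0, 0.01),
                    LinkStatus("UAV-2<->C4ISR", 900.0, 0.20)]))
```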

Resource Meta-Controller The Resource Meta-Controller (RMC) is a software infrastructure component providing processing and memory resources for other components. RMC operates in concert with operating-system-level resource management functions. RMC performs system management functions such as processor switchover, memory zeroize, pre- and post-mission data exchange, and fault isolation. RMC manages computational resources by performing resource utilization monitoring, resource allocation to agents, resource reclamation and reallocation, resource tracking, and resource scheduling and optimization. The RMC Agent Supervisor manages agents through agent creation and destruction, agent registration and monitoring, job assignment and status reporting, and agent suspension and resumption. RMC provides other components with access to data by managing publication/subscription interchanges, managing data retention, and performing structured queries upon request.
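The RMC’s budget-and-allocation role might be pictured roughly as follows (a toy Python sketch; the single CPU budget, the numbers, and the admission policy are assumptions, not the actual RMC):

```python
from typing import Dict


class ResourceMetaController:
    """Tracks a processing budget and allocates it to registered agents."""

    def __init__(self, cpu_budget: float) -> None:
        self.cpu_budget = cpu_budget
        self.allocations: Dict[str, float] = {}

    def register(self, agent_id: str, cpu_request: float) -> bool:
        """Grant the request only if it fits within the remaining budget."""
        used = sum(self.allocations.values())
        if used + cpu_request <= self.cpu_budget:
            self.allocations[agent_id] = cpu_request
            return True
        return False

    def reclaim(self, agent_id: str) -> None:
        """Reclaim resources when an agent is suspended or destroyed."""
        self.allocations.pop(agent_id, None)


rmc = ResourceMetaController(cpu_budget=1.0)
print(rmc.register("proxy_CPG", 0.3))        # True
print(rmc.register("route_planner", 0.8))    # False: exceeds the remaining budget
rmc.reclaim("proxy_CPG")
print(rmc.register("route_planner", 0.8))    # True after reclamation
```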

Implementation Mission Management is the component that provides the intelligence for the unmanned team to make collaborative and autonomous decisions. The architecture for the Mission Management segment was instantiated in the Manned/Unmanned (MUM) Teaming Demonstration, which verified that the high levels of intelligence necessary for autonomous and collaborative mission operations can be achieved. Autonomy lets the vehicle operate with only top-level human guidance and no need for detailed supervision. Collaboration is essential for team effectiveness.

The MUM Teaming demonstration showcased critical concepts including autonomy and collaborative operations, Human Machine Interface (HMI) and workload management approaches and technologies, and manned/unmanned system interaction. The MUM Demonstration created a high fidelity simulation-driven test bed to develop and evolve MUM concepts and to mature the autonomy and collaboration technology. The simulation showcased the robustness of the MUM approach by responding to contingency behaviors and by providing a new level of situational awareness and operational flexibility. The simulation was an evolutionary development of increasing fidelity over time to demonstrate autonomous and collaborative operations and assess the technical feasibility of achieving this capability.

The MUM Teaming demonstration contained a team of simulated rotary-wing vehicles commanded from either a ground command console or a manned aircraft, represented as a simulated Apache Longbow. The testbed also incorporated command and control nodes, a Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance (C4ISR) network, and threats, in addition to the human machine interfaces and information exchanges between these simulated components. The simulation environment provides a representative system for evaluations and performance analysis, forming a robust test bed that accommodates changes injected into the environment and allows assessment of the manned/unmanned team’s ability to respond to those changes.

A representation of the MUM teaming demonstration is shown in Figure 8.

The demonstration culminated in an exercise where an independent team injected a number of changes into the simulation environment and assessed the capability of the manned/unmanned team to respond in an autonomous and collaborative manner to the changes. The results of the MUM teaming demonstration were exciting and convincing. The HMI approach, including spoken language system voice command and response, provided a natural communication modality for the MUM team commander. The team-based mission management, collaborative autonomy algorithms, and system implementation clearly showed the advantages and possibilities of team-based operations with highly intelligent vehicles. The Advanced Tactical Combat Model (ATCOM) simulation results utilized the dynamic re-planning and contingency response available in the MUM Teaming Demonstration. Finally, the demonstration showed that the combination of manned and unmanned team attack assets provided a new level of situational awareness and operational flexibility not currently available.

Figure 8. MUM Teaming Demonstration: a team of unmanned vehicles operating against threats, commanded by an Air Team Commander and a Ground Team Commander and connected to HQ through the C4ISR network / GIG.

The Air Team Commander is a member of a team containing unmanned vehicles (UVs): he or she provides mission redirects to the UVs, receives and dispositions target engagement requests, and receives vehicle status and mission plans. The team commander had high-fidelity controls and on-board displays to control the UV team, including a tactile vest, a spoken language system, and multi-purpose display pages with a digital map. The Ground Team Commander works with the unmanned vehicles to ensure that the vehicles are prepared for the mission and to execute the initial and final portions of the mission.

The simulation included software-in-the-loop execution of critical team-based mission planning, autonomy, collaboration and contingency management functions. The system included a six degree-of-freedom air vehicle model. The system implemented reflexive maneuver elements of the air vehicle management system and collaborative team searches and target engagements with weapons release authority provided by the manned element. The system explored key aspects of the HMI solution, which included spoken language system voice command and response, tactile vest for alerting, mission controls and displays integrated with existing systems, and workload management functions, including negotiated intervention.

A distributed team was responsible for the demonstration: Lockheed Martin Systems Integration - Owego, Lockheed Martin – Advanced Technology Laboratory, Draper Laboratory, Lockheed Martin – Simulation Training and Support, Naval Aeromedical Research Lab (NAMRL), and Cepstral.

Conclusion Through this work, an architecture and a set of technologies have been developed with the potential for tremendous warfighter benefit as they mature and transition into operational use. Our analysis work and the MUM Demonstration have shown that our approach to Collaborative Autonomy is highly effective in demanding military scenarios. Our work with Army Subject Matter Experts has validated that the concept is operationally appropriate and that it is feasible for a warfighter to employ within the limited workload available in a demanding role such as the copilot/gunner of an Apache attack helicopter.

Both the architecture and the technologies are scalable and extensible in terms of the size of the unmanned teams, the degree of capability of the unmanned vehicles, the type of manned platforms involved, and the required interaction with external systems. This extensibility has the potential to provide significant benefit in a wide range of domains, including both rotary- and fixed-wing UAVs, Unmanned Ground Vehicles (UGVs) teamed with humans, and heterogeneous air/ground teams. We are currently pursuing numerous avenues to extend, mature, and transition this technology so that America’s warfighters can get maximum benefit from the promise of intelligent unmanned systems.

Acknowledgments Some of the work described in this paper was funded under the OTA portion of the DARPA Unmanned Combat Armed Rotorcraft (UCAR) program (MDA972-02-9-0011). The authors wish to recognize the following individuals who have made recent contributions to this system:

Draper Laboratory – Brent Appleby, Mark Homer, Leena Singh, Lee Yang, and their team

Lockheed Martin Advanced Technology Laboratory - David Cooper, Rich Dickinson, Chris Garrett, Adria Hughes, Mike Orr, Brian Satterfield, Mike Thomas, and Vera Zaychik

Lockheed Martin Simulation Training and Support – Ken Stricker and Brian Vanderlaan

Lockheed Martin Systems Integration – Owego – Erin Accettullo, Rick Crist, Steve DeMarco, Dave Garrison, Carl Herman, Adam Jung, Ateen Khatekhate, John Moody, Donn Powers, Greg Scanlon, Mike Scarangella, Keith Sheppard, Tom Spura, Peter Stiles, and Joel Tleon

UCAR Government Team - Bob Boyd, Marsh Cagle-West, Steve MacWillie, Steve Rast, Randy Scrocca, CW4 Matt Thomas, and Don Woodbury

References

[1] R. Szczerba, D. Garrison, and N. Ternullo, "Autonomous UAV Team Planning for Reconnaissance Missions," Proceedings of the 59th Annual Forum of the American Helicopter Society, Phoenix, AZ, May 2003.

[2] R. J. Szczerba, P. Galkowski, I. Glickstein, and N. Ternullo, "Robust Algorithm for Real-Time Route Planning," IEEE Transactions on Aerospace and Electronic Systems, Vol. 36, No. 3, pp. 869-878, July 2000.

[3] J. Franke, B. Satterfield, and S. Jameson, "Information Sharing in Teams of Self-Aware Entities," 2003 Multi-Robot Systems Workshop, Naval Research Laboratory, Washington, DC, March 2003.

[4] J. Franke, B. Satterfield, M. Czajkowski, and S. Jameson, "Self-Awareness for Vehicle Safety and Mission Success," 2002 Unmanned Vehicle Systems Technology Conference, Brussels, Belgium, December 2002.

[5] F. E. White, "A Model for Data Fusion," Proceedings of the 1st National Symposium on Sensor Fusion, 1988.

[6] D. Malkoff and A. Pawlowski, "RPA Data Fusion," 9th National Symposium on Sensor Fusion, Vol. 1, Infrared Information Analysis Center, September 1996.

[7] A. Pawlowski, S. Gigli, and F. Vetesi, "Situation and Threat Refinement Approach for Combating the Asymmetric Threat," Military Sensing Symposia, National Symposium on Sensor and Data Fusion 2002, San Diego, CA, August 13-15, 2002.