TRANSCRIPT
An ontology-based approach to improve the accessibility of ROS-based robotic systems
Ilaria Tiddi, Emanuele Bastianelli, Gianluca Bardaro, Mathieu d’Aquin, Enrico Motta
Knowledge Capture (K-CAP2017) Austin, Texas, USA 5/12/2017
@IlaTiddi
Robots (because they are cool)
Capabilities (because we want to play with robots)
and a bit of ontologies (because we want to make our lives easier)
Today’s talk
Robots are becoming more popular
‣ advances in Computer Vision/AI/Navigation & Planning
‣ new hardware and software components
‣ cheaper platforms (roomba, drones…)
New users approach robots
‣ no interest in low-level capabilities (drivers, controllers…)
‣ interest in high-level capabilities (NLG, navigation, vision…)
Context
Providers do not fully expose a robot’s capabilities
‣ robots become end products (unless you are a robot developer)
e.g. drones for photography / roombas for cleaning
MK:Smart++ [1]: integrating robots in cities
‣ data collectors (drones for parking monitoring)
‣ data consumers (adaptive self-driving cars)
Example : a team of robots for green space maintenance
‣ available capabilities : e.g. teleoperation, video recording
‣ expertise required to program trajectories/object recognition
‣ different platforms require different experts
Motivation
[1] www.mksmart.org
How to
‣ exploit the high-level capabilities of heterogeneous platforms
‣ while reducing development costs?
Can we use ontologies? ‣ they allow interoperability ‣ they allow domain abstraction
Can an ontology of capabilities ‣ help non-experts in programming robots ‣ facilitate the integration of robots in various (city) applications?
Research questions
ROS [2] : the Robot Operating System
‣ collaborative middleware
‣ management of low-level components (share, reuse)
‣ requires a fine-grained understanding of the robot architecture
Robot Operating System
[2] www.ros.org
Assisting non-experts in robot development through :
‣ an ontology of robots’ high-level capabilities
‣ a mapping of low-level ROS functionalities to high-level capabilities
‣ a system that can understand what a robot can do based on these
Steps
1. Understanding and formalizing ROS
2. Mapping capabilities to ROS
3. Defining a taxonomy of capabilities
4. Wrapping these in a system
Proposed approach
Tools, libraries & conventions for collaborative robot development
‣ open and shareable
‣ promoting robust general-purpose robot software
Understanding ROS
A network of data processes**
Understanding ROS
**simplified version
Understanding ROS
‣ nodes : low-level functions
move_base (navigation), kobuki_node (wheel control), map_server (map management)
‣ messages : exchanged data
move_base & kobuki_node exchange a Twist message
move_base & map_server exchange an OccupancyGrid message
Understanding ROS
‣ topics : communication channels (asynchronous)
Twist is exchanged via the topic /cmd_vel
Understanding ROS
‣ services : communication channels (synchronous)
OccupancyGrid is exchanged between move_base and map_server via the service /map
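These concepts can be made concrete with a minimal sketch, in plain Python rather than the actual ROS client libraries: nodes exchange typed messages over named topics, mirroring the move_base/kobuki_node example above. The classes and message contents here are illustrative, not real ROS APIs.

```python
# Minimal model of a ROS computation graph: nodes exchange typed
# messages over named topics (asynchronous publish/subscribe).
# Purely illustrative -- not the rospy/rclpy API.

class Topic:
    def __init__(self, name, msg_type):
        self.name = name
        self.msg_type = msg_type
        self.subscribers = []  # callbacks of subscribing nodes

    def publish(self, msg):
        # asynchronous fan-out: every subscriber receives the message
        for callback in self.subscribers:
            callback(msg)

class Node:
    def __init__(self, name):
        self.name = name
        self.received = []

    def subscribe(self, topic):
        topic.subscribers.append(self.received.append)

# The navigation example from the slides:
cmd_vel = Topic("/cmd_vel", "geometry_msgs/Twist")
kobuki_node = Node("kobuki_node")   # wheel control
kobuki_node.subscribe(cmd_vel)

# move_base publishes a velocity command (here just a dict):
cmd_vel.publish({"linear": 0.2, "angular": 0.0})
print(kobuki_node.received)  # the wheel controller got the Twist
```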
Understanding ROS
Formalizing ROS
Representing a general communication
Formalizing ROS
Representing topics and services
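A formalization of this kind boils down to subject–predicate–object statements about nodes, channels and messages. The sketch below expresses the running example as plain triples; the property names (publishesOn, carriesMessage, …) are placeholders, not the vocabulary actually used in the paper's ontology.

```python
# Illustrative triples describing the ROS graph from the slides.
# Property names are placeholders, not the paper's actual ontology terms.
triples = [
    ("move_base",   "publishesOn",    "/cmd_vel"),
    ("kobuki_node", "subscribesTo",   "/cmd_vel"),
    ("/cmd_vel",    "carriesMessage", "Twist"),
    # services are synchronous: a caller requests, a provider responds
    ("move_base",   "calls",          "/map"),
    ("map_server",  "provides",       "/map"),
    ("/map",        "carriesMessage", "OccupancyGrid"),
]

def objects(subject, predicate):
    """All objects asserted for a given subject and predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects("move_base", "publishesOn"))   # ['/cmd_vel']
print(objects("/map", "carriesMessage"))     # ['OccupancyGrid']
```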
Hypothesis
‣ identify capabilities through sets of { nodes, topics/services, messages }
{ move_base, /cmd_vel, Twist } → directional movement
Problem
‣ nodes, services and topics are not standardized… but messages (sort of) are
Solution
‣ focus on messages to identify capabilities
Twist message evokes a directional movement
‣ and the modality in which they are exchanged (as publishers or subscribers)
a publisher of Twist evokes self perception
a subscriber of Twist evokes autonomous navigation **
Mapping capabilities
**simplified version
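The rule above can be sketched as a lookup keyed on (message type, modality): a publisher of Twist evokes self perception, a subscriber of Twist evokes autonomous navigation. The table below contains only the talk's own examples; the function name is an assumption for illustration.

```python
# Capability lookup keyed on (message type, modality), following the
# slide's simplified rule. Only the examples from the talk are listed.
CAPABILITY_MAP = {
    ("Twist", "publisher"):  "self perception",
    ("Twist", "subscriber"): "autonomous navigation",
}

def capabilities_of(components):
    """Derive a robot's capabilities from its (message, modality) pairs."""
    found = set()
    for message, modality in components:
        capability = CAPABILITY_MAP.get((message, modality))
        if capability:
            found.add(capability)
    return found

# A robot whose graph both publishes and subscribes to Twist:
robot = [("Twist", "publisher"), ("Twist", "subscriber")]
print(capabilities_of(robot))
```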
Mapping capabilities
Messages and components evoke capabilities
Taxonomy of capabilities
Specifying capabilities
An ontology-based system
‣ robot : where ROS is running
‣ KB : where ROS components are mapped into capabilities
‣ server : the bridge between the two
Analyzer (at boot) : ‣ translates robot components into capabilities
Dynamic node (upon user input) : ‣ translates capabilities into robot components
The system
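The two translation directions can be sketched together: the analyzer turns ROS components into capabilities at boot, and the dynamic node resolves a requested capability back into components. The mapping table and names here are illustrative; the real system derives them from the ontology rather than a hard-coded dict.

```python
# Sketch of the system's two translation directions (illustrative
# mapping; the actual system queries the ontology instead).
MAPPING = {
    ("Twist", "subscriber"): "autonomous navigation",
    ("Image", "publisher"):  "video recording",
}

def analyze(ros_components):
    """Analyzer (at boot): ROS components -> capabilities."""
    return {MAPPING[c] for c in ros_components if c in MAPPING}

def resolve(capability):
    """Dynamic node (upon user input): capability -> ROS components."""
    return [c for c, cap in MAPPING.items() if cap == capability]

components = [("Twist", "subscriber"), ("Image", "publisher")]
print(analyze(components))
print(resolve("video recording"))  # [('Image', 'publisher')]
```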
User-based evaluation
‣ UI to wrap the system
‣ a basic imperative language : capabilities + constructs (if-then-else, repeat…)
‣ 14 users without robot expertise
‣ 2 robots with different capabilities (ground, flying)
‣ 2 settings (1 simulated, 1 real)
‣ 4 exercises : single command, command sequence, condition-based halt, object recognition
Evaluation
‣ A few minutes to understand what a robot can do and how to use it (possessed capabilities and invocation)
‣ vs. hours of practice to master ROS (node implementation, pub & sub management, specific platforms)
‣ Compared with the effort of an expert (lines of code, message types, ROS components required)
Evaluation

               Simulated ground robot          Real flying robot
               #1      #2      #3      #4      #1      #2      #3      #4
users
progr. blocks  1       2       4       9.5     1       2       4       8
capabilities   1       1       1       2       1       2       4       4
time           1’22’’  1’04’’  1’15’’  6’52’’  1’16’’  1’16’’  4’05’’  5’47’’
var(time)      ±42’’   ±23’’   ±16’’   ±1’46’’ ±3’’    ±8’’    ±15’’   ±1’49’’
expert
#lines         34      39      56      59      34      39      56      59
#ROScomp       1       2       4       4       1       2       4       4
#msg           1       2       3       3       1       2       3       3
Wrapping-up…
‣ robots are cool, but we do not know how to use them properly
‣ ontologies can allow non-experts to access different robots effortlessly
‣ an ontology-based approach deriving capabilities from ROS components
Conclusions
Future work
‣ Refine/improve the taxonomy
autonomous navigation = sensing + localization + planning
‣ Include robots with manipulators
grasping, moving objects (fine-grained capabilities)
‣ Expose the system as APIs in a development workflow to allow reusability!
Conclusions
Bloopers