Project Title Here
IEEE UCSD

Overview
Robo-Magellan is a robotics competition emphasizing autonomous navigation and obstacle avoidance over varied, outdoor terrain. At the start of the competition, a set of GPS waypoints is programmed into the robot, and it must find its way from start to finish without human intervention. Because of the irregularity of the terrain, the robot must use a combination of its sensory capabilities, including GPS, machine vision, optical sensors, and ultrasonic sensors, to navigate around the obstacles. The number and complexity of the subsystems make power and integration two of the largest challenges. The variety of sensor data being fed into the robot requires the use of sophisticated control techniques, including fuzzy logic and artificial intelligence.

Systems Integration
The interdisciplinary nature of this project and the complexity of each of the subsystems present a management and integration challenge. Each subteam (mechanical, electrical, software) started working on its part of the problem and was allowed to meet on different days. Now that the subsystems are more developed, we have moved to a single weekly meeting to bring the different components of the robot together.

Results

• Reverse engineered the stock motor controller so that we can manually command the car to drive at a given speed or turn by a given percentage. We analyzed each channel of the controller and determined which signal frequency corresponds to which operation of the controller.

• Wrote C code to interface with the hardware. This code typically opens a connection to a device over serial or USB, retrieves and parses the raw data the hardware outputs, and sends our own commands back to the device (a sketch of this pattern appears after this list). So far we have accomplished this for a CMOS camera and a GPS unit. The code works well for the CMOS camera; however, the raw data sent by the GPS is harder to parse, and that interface is still under development.

• Designed the car itself and the layout that will house all of the robot's components. We maintain a CAD model of the robot, which we continuously update as components and plans change.
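
As an illustration of the hardware-interface pattern described above, here is a minimal C++ sketch that opens a serial device, configures it with POSIX termios, sends a command string, and reads back raw bytes for parsing. The device path, baud rate, and command are placeholders, not the settings of any particular sensor on the robot.

    // Minimal sketch of our serial hardware-interface pattern (POSIX termios).
    // The device path, baud rate, and command string are placeholders.
    #include <cstdio>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int open_serial(const char *path, speed_t baud)
    {
        int fd = open(path, O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return -1; }

        termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);              // raw bytes: no echo, no line buffering
        cfsetispeed(&tio, baud);
        cfsetospeed(&tio, baud);
        tio.c_cc[VMIN]  = 1;          // block until at least one byte arrives
        tio.c_cc[VTIME] = 10;         // ...or one second of silence
        tcsetattr(fd, TCSANOW, &tio);
        return fd;
    }

    int main()
    {
        int fd = open_serial("/dev/ttyS0", B115200);    // placeholder device
        if (fd < 0) return 1;

        const char cmd[] = "GV\r";                      // placeholder command
        write(fd, cmd, sizeof(cmd) - 1);

        char buf[256];
        ssize_t n = read(fd, buf, sizeof(buf) - 1);     // raw response bytes
        if (n > 0) {
            buf[n] = '\0';
            printf("raw response: %s\n", buf);          // parsing happens here
        }
        close(fd);
        return 0;
    }

The same open/configure/read/write skeleton is reused for each device; only the port settings, command set, and parsing logic differ between the camera and the GPS unit.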

Work to be Done

• Determining how we will power the car. Each component currently runs off either a laptop computer or its own external power source. Designing circuits to deliver the correct voltage and current to each component will probably be our biggest challenge.

• Writing the main program that will synchronize all of the hardware. From an object-oriented perspective, each component of the robot can be treated as an object: camera, GPS, sensors. Interaction with the hardware will be written in C, and C++ will be used to encapsulate each component as an object and integrate the different subsystems (see the sketch after this list).

• Sonar sensors will determine distance to obstructions: cones, trees, buildings, etc. Since we will be using multiple sonar sensors around the robot, synchronizing all of them will be a challenge.
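
To make the integration plan concrete, the sketch below shows one way the C++ layer could wrap the C hardware interfaces: each device becomes a class behind a common base class, and the main loop polls them in turn. The function names camera_read_frame(), gps_read_fix(), and sonar_read_range() are hypothetical stand-ins for our real C interface code, and the class names are illustrative only.

    // Illustrative-only sketch of the planned C++ integration layer.
    #include <memory>
    #include <vector>

    static int camera_read_frame()      { return 0; }   // stub: grab a frame
    static int gps_read_fix()           { return 0; }   // stub: read a GPS fix
    static int sonar_read_range(int id) { return id; }  // stub: range from sonar id

    class Component {                    // common interface for every subsystem
    public:
        virtual ~Component() = default;
        virtual void update() = 0;       // poll the hardware once
    };

    class Camera : public Component {
    public:
        void update() override { camera_read_frame(); }
    };

    class Gps : public Component {
    public:
        void update() override { gps_read_fix(); }
    };

    class Sonar : public Component {
    public:
        explicit Sonar(int id) : id_(id) {}
        void update() override { sonar_read_range(id_); }
    private:
        int id_;
    };

    int main()
    {
        std::vector<std::unique_ptr<Component>> robot;
        robot.push_back(std::make_unique<Camera>());
        robot.push_back(std::make_unique<Gps>());
        robot.push_back(std::make_unique<Sonar>(0));
        robot.push_back(std::make_unique<Sonar>(1));

        // The real program would loop forever; a few iterations show the idea.
        for (int tick = 0; tick < 100; ++tick)
            for (auto &c : robot)
                c->update();             // poll each subsystem in turn
        return 0;
    }

Polling every device through one Component interface is also how the multiple sonar sensors would be kept in step: each sensor is just another object visited on every pass of the control loop.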

Object Tracking
Problem: Track arbitrary colors over time and position.

Solution: Programmed a CMUcam2 real-time camera over RS232 to track colors. First, we tested the camera in different scenarios by varying the room lighting, the RGB color format, and the object color, and we were able to get correct tracking using the camera's demo mode. Next, we created our own program to interact with the camera: we set up a terminal connection to determine the serial port settings, then, after researching the Linux serial port C APIs, wrote a program that lets us send the camera our own commands and receive the correct responses. We are still working on reducing the camera's jittery movement while it tracks a given object.
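
For reference, a rough sketch of that interaction is shown below: the serial port is configured as in the earlier sketch, a Track Color command is sent, and the tracking packets the camera streams back are parsed for the tracked centroid. The command and packet formats follow our reading of the CMUcam2 manual, and the color bounds, port path, and baud rate are example values only.

    // Sketch of commanding the CMUcam2 over RS232. Color bounds, port path,
    // and baud rate are example values; formats are per the CMUcam2 manual.
    #include <cstdio>
    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int main()
    {
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);    // example port
        if (fd < 0) { perror("open"); return 1; }

        termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);
        cfsetispeed(&tio, B115200);                         // assumed default baud
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        // Track Color: TC Rmin Rmax Gmin Gmax Bmin Bmax (example bounds)
        const char cmd[] = "TC 200 255 50 120 0 60\r";
        write(fd, cmd, sizeof(cmd) - 1);

        // The camera acknowledges and then streams T packets of the form
        // "T mx my x1 y1 x2 y2 pixels confidence"; (mx, my) is the centroid.
        char line[128];
        for (int pkt = 0; pkt < 10; ++pkt) {
            ssize_t n = read(fd, line, sizeof(line) - 1);
            if (n <= 0) break;
            line[n] = '\0';
            int mx, my;
            if (sscanf(line, "T %d %d", &mx, &my) == 2)
                printf("centroid: (%d, %d)\n", mx, my);     // feed into steering
        }
        close(fd);
        return 0;
    }

Smoothing the centroid stream (for example, averaging it over several packets) is one option we are considering for reducing the jittery tracking movement.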

http://ieee.ucsd.edu | [email protected]