
Update 2/7/13

February 8, 2013

In this post I’d like to give an overview of what we’ve done up to this point, broken down by mechanical, electrical, embedded, and software development, along with a taste of what we’re working on right now. I’ll also describe what we plan to test tomorrow afternoon.

Computer

We were given $3,000 from Intel for use on this project under the condition that we use their Intel Atom processor. Although we haven’t rigorously profiled it, the Atom appears significantly slower than the i5 we used in the 2011 competition (at least when compiling packages). We bought an Atom motherboard from Zotac with an on-board GPU, and we hope that by offloading some of the more intense processing, such as our vision algorithms, to the GPU, the slowdown won’t be noticeable when actively running code. We have also built a custom Lexan case for the computer and added a 128 GB solid-state drive.
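
To give a flavor of what that offload might look like, here is a minimal sketch using OpenCV’s CUDA bindings from Python. This assumes an OpenCV build compiled with CUDA support; the function and file names are illustrative, not our actual pipeline.

```python
# Minimal sketch of offloading an image-processing step to the GPU with
# OpenCV's CUDA module (requires an OpenCV build compiled with CUDA support).
import cv2

def gpu_grayscale(frame_bgr):
    """Upload a BGR frame to the GPU, convert it to grayscale there, download the result."""
    gpu_frame = cv2.cuda_GpuMat()
    gpu_frame.upload(frame_bgr)                                   # host -> device copy
    gpu_gray = cv2.cuda.cvtColor(gpu_frame, cv2.COLOR_BGR2GRAY)   # runs on the GPU
    return gpu_gray.download()                                    # device -> host copy

if __name__ == "__main__":
    img = cv2.imread("test_frame.png")   # hypothetical test image
    if img is not None:
        gray = gpu_grayscale(img)
        print("grayscale shape:", gray.shape)
```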

Hardware

Our physical platform consists of a differential-drive wheelchair base with a detachable custom-made aluminum frame. We’re still working on a mast/stand for our camera; it is currently made from a wooden board and an aluminum bar, which, according to our tests last week, is incredibly unstable. We’ve put our Hokuyo sensor on a pivoting platform about half a foot in front of the rest of the frame to take advantage of its 270-degree field of view. The pivoting platform is still under development, but should be completed by tomorrow afternoon.

Power Distribution

When it isn’t plugged into the wall, the robot’s electronics are powered by two heavy 12 V lead-acid batteries sitting near the driving wheels of the wheelchair (which gives the robot a low center of gravity). We have a circuit that lets the robot’s computer and sensors be hot-swapped between wall power and the batteries. The wheelchair manufacturer intended the batteries to power only the motors and wheelchair circuitry. The power circuit for the robot’s other components works best at 12 V, so we’re currently running all of them off a single battery. It would be possible (though less efficient) to use both batteries to better distribute the power draw between them; instead, we are considering a load-balancing circuit that draws 12 V equally from the two batteries, which is apparently a nontrivial task. The power distribution system also includes both a physical and a remote kill switch.

Components

Our current sensors are: a Hokuyo LIDAR and a Ublox 4 GPS receiver and antenna, both handed down to us through RAS and used in our 2011 IGVC entry; a Pololu UM6 IMU, used in a previous senior design project and donated by our faculty mentor; an HD USB webcam we got a while back in RAS; and two encoders purchased early last semester. After testing over the past couple of weeks, we quickly realized that our IMU, GPS, and camera are of very low quality. The Ublox GPS is generally accurate to within 5 meters, but the error distribution is by no means Gaussian, and it seems to become especially inaccurate when we’re not moving. The IMU’s roll/pitch/yaw output drifts and “flickers” significantly even at rest (this may be a result of incomplete calibration; we would need to do more tests). And though we tested mostly in shadow at around 4 PM, the sun glare off the grass, barrels, and cones was still enough to ruin the quality of the camera’s data.
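
For reference, the kind of stationary-GPS check behind that “within 5 meters” figure looks roughly like the sketch below: log a batch of fixes while the robot sits still, convert them to local meters, and measure the spread. The fixes in the example are made-up placeholders, not our logged data.

```python
# Rough sketch of quantifying stationary GPS scatter: convert lat/lon fixes
# to local meters about their mean and measure the spread. The fixes below
# are made-up placeholders, not real logged data.
import math

def fixes_to_meters(fixes):
    """Convert (lat, lon) fixes in degrees to (x, y) offsets in meters about their mean."""
    lat0 = sum(lat for lat, _ in fixes) / len(fixes)
    lon0 = sum(lon for _, lon in fixes) / len(fixes)
    m_per_deg_lat = 111320.0                                   # approximate
    m_per_deg_lon = 111320.0 * math.cos(math.radians(lat0))
    return [((lon - lon0) * m_per_deg_lon, (lat - lat0) * m_per_deg_lat)
            for lat, lon in fixes]

def scatter_stats(points):
    """Return RMS and maximum distance from the mean position."""
    dists = [math.hypot(x, y) for x, y in points]
    rms = math.sqrt(sum(d * d for d in dists) / len(dists))
    return rms, max(dists)

if __name__ == "__main__":
    fixes = [(30.28610, -97.73940), (30.28612, -97.73937), (30.28608, -97.73943)]
    rms, worst = scatter_stats(fixes_to_meters(fixes))
    print("RMS error: %.2f m, worst fix: %.2f m" % (rms, worst))
```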

So, we’ve been looking into getting better sensors. We plan to get a VectorNav VN-200 integrated IMU, a NovAtel GPS-702L antenna and receiver, and an OmniSTAR differential GPS subscription. Combined, these should give us sub-meter positional accuracy, assuming that teams correctly recorded their results in the 2012 IGVC design reports. We have also been able to borrow a Handycam camcorder from one of our team members and will soon be working on getting it to publish data in a usable format. Currently, we have only two barrels and a few traffic cones for testing, but we plan on buying a few more traffic obstacles from Grainger.

Embedded

We’re using a PSoC microcontroller as our embedded platform. It takes motor commands from the computer (in the form of linear and angular speeds) and sends back movement data derived from the encoder feedback. A PID loop on the encoder readings keeps the wheels tracking the computer’s commands. The PSoC will also be responsible for controlling the servo that tilts the LIDAR mount up and down and for reporting the LIDAR’s current pitch back to the computer.
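
To make the division of labor concrete, here is a rough sketch (in Python rather than the PSoC’s C, purely for illustration) of the math involved: the commanded linear and angular speeds are converted to per-wheel targets, and a PID loop drives each wheel’s measured encoder speed toward its target. The track width, gains, and names are placeholders, not our firmware’s values.

```python
# Illustrative sketch of the PSoC's job: turn (linear, angular) commands into
# per-wheel speed targets and close the loop with PID on encoder feedback.
# Geometry and gains are placeholders, not the real firmware values.

TRACK_WIDTH = 0.60  # meters between the drive wheels (placeholder)

def wheel_targets(linear, angular):
    """Differential-drive kinematics: commanded body speeds -> (left, right) wheel speeds in m/s."""
    left = linear - angular * TRACK_WIDTH / 2.0
    right = linear + angular * TRACK_WIDTH / 2.0
    return left, right

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        """One control update; returns the motor effort to apply."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

if __name__ == "__main__":
    left_pid, right_pid = PID(1.2, 0.1, 0.05), PID(1.2, 0.1, 0.05)
    left_target, right_target = wheel_targets(linear=0.5, angular=0.2)
    # Pretend the encoders currently report 0.40 and 0.45 m/s.
    print(left_pid.step(left_target, 0.40, dt=0.02))
    print(right_pid.step(right_target, 0.45, dt=0.02))
```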

Software

We are building our software on the ROS platform, which lets us easily modularize programs and data streams using its node/topic/service paradigm. ROS also makes it easy to record data from tests and play it back later for analysis; I believe this in particular will be invaluable. ROS also integrates with Gazebo, a 3D simulator, which we have been trying to get working over the last few weeks (we’re still trying to understand the TF framework). We’re also looking into the newly open-sourced V-REP simulator, which boasts support for ROS.
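
As a taste of the node/topic paradigm, a minimal rospy node that relays velocity commands looks something like the sketch below; the topic names are hypothetical, not the ones our stack actually uses.

```python
#!/usr/bin/env python
# Minimal example of ROS's node/topic paradigm with rospy: subscribe to one
# topic, apply a trivial transformation, and republish on another.
# Topic names are hypothetical placeholders.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("cmd_relay")
    pub = rospy.Publisher("/wheelchair/cmd_vel", Twist, queue_size=10)

    def on_cmd(msg):
        # Clamp the forward speed before passing the command along.
        msg.linear.x = max(min(msg.linear.x, 1.0), -1.0)
        pub.publish(msg)

    rospy.Subscriber("/planner/cmd_vel", Twist, on_cmd)
    rospy.spin()  # hand control to ROS until the node is shut down

if __name__ == "__main__":
    main()
```

Recording a run for later playback is then just a matter of running `rosbag record` on the topics of interest during the test and `rosbag play` on the resulting bag file afterwards.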

On the software side, we have written drivers for all of our current sensors, an extended Kalman filter for fusing the sensors into a state estimate, and basic proof-of-concept vision algorithms. We are writing the vision algorithms with OpenCV and CUDA. We can currently perform a perspective transformation on images (calibrated using a chessboard), clearly detect the orange barrels and cones, and somewhat detect white lines (a sketch of the color-thresholding idea appears at the end of this section). We have also written code to convert readings from the Hokuyo into a point cloud, but it has yet to be adequately tested with the Hokuyo tilting.

We are also using the main computer as a server to broadcast data over the internet, in the form of a “web portal”: a single page with a Google map showing raw GPS readings, a canvas showing raw LIDAR data oriented by the IMU’s raw yaw value, an MJPEG stream from the webcam, and a canvas showing the odometry estimated by the extended Kalman filter. Everything is presented on one page, viewable from any computer that can connect to the robot’s computer. We built it in the hope that it will aid debugging.
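
As an example of the kind of proof-of-concept vision code described above, barrel and cone detection can be done with a simple HSV color threshold. The sketch below shows the general idea; the HSV bounds, area cutoff, and file names are illustrative placeholders rather than our tuned values.

```python
# Sketch of simple color-based obstacle detection with OpenCV: threshold for
# "traffic orange" in HSV and keep the large blobs. The HSV bounds and area
# cutoff are illustrative placeholders, not our tuned values.
import cv2
import numpy as np

def find_orange_blobs(frame_bgr, min_area=500):
    """Return bounding boxes of large orange regions (candidate barrels/cones)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([5, 120, 120], dtype=np.uint8)    # lower HSV bound (placeholder)
    upper = np.array([20, 255, 255], dtype=np.uint8)   # upper HSV bound (placeholder)
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    res = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[0] if len(res) == 2 else res[1]     # OpenCV 3.x vs 4.x return styles
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

if __name__ == "__main__":
    img = cv2.imread("barrel_test.png")                # hypothetical test image
    if img is not None:
        for x, y, w, h in find_orange_blobs(img):
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite("barrel_test_annotated.png", img)
```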

Testing

Tomorrow, we hope to test the robot out in the sun, bag image data of painted white lines on grass, test the stability of our camera and IMU mount, and observe the Hokuyo tilting. Testing will involve remote-controlling the robot around a soccer field with traffic cones and barrels strewn about. If we get the new Handycam working by tomorrow, we will test it as well. We will also be testing the PSoC’s ability to publish the Hokuyo’s pitch, as well as our code to convert LIDAR data into a 3D point cloud. Before the run we will also time how long it takes us to find the perspective transformation. It may also be worth plugging in the old Ocean Server IMU (which we used before getting the UM6) to better test the Kalman filter’s odometry output.

If nothing else, we will at least test the logistics of taking the robot out of the lab and onto an outdoor field.
