
Update 2/16/13

February 17, 2013

Our state

We’re still working on getting our GPS sensor situation figured out. We’re looking to get a VN200 integrated GPS/IMU sensor from VectorNav. The NovAtel antenna might not work with it, but it should be able to decode corrections from OmniSTAR’s L-band subscription service.

We’re looking to get this camera. There are no Linux drivers for it yet, but hey, after writing our own drivers for the GPS, PSoC, Ocean Server, and UM6 IMU sensors [all in Python, btw], how hard could it be?

We’ve been trying to figure out the best way to translate the image from the camera into a perspective-corrected image. This is slightly challenging because the camera mount can’t be assumed to be fixed until moments before we run the robot. Apparently doing this is called “homography,” and there are a couple of OpenCV calls that do it for us. We just need a big-enough checkerboard. Here’s an example of it working nicely.

[Image: homography example. Notice how the floor tiles look more like squares in the transformed image.]
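For anyone curious, the OpenCV side is short enough to sketch here. This is a minimal sketch rather than our actual code: the pattern size, the pixels-per-square constant, and the function names are placeholders, and it assumes the checkerboard corners come back from OpenCV in row-major order.

```python
import cv2
import numpy as np

PATTERN = (8, 6)    # inner corners per row / per column -- placeholder values
SQUARE_PX = 40      # how many output pixels one checkerboard square should span

def ground_homography(frame):
    """Estimate the homography that maps the camera view to a top-down view."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None

    # Where the corners *should* land if we were looking straight down:
    # a regular grid, one square every SQUARE_PX pixels.
    target = np.array(
        [[(x * SQUARE_PX, y * SQUARE_PX)]
         for y in range(PATTERN[1])
         for x in range(PATTERN[0])],
        dtype=np.float32)

    H, _ = cv2.findHomography(corners, target)
    return H

def birds_eye(frame, H, out_size=(800, 600)):
    """Warp a camera frame into the perspective-corrected (top-down) view."""
    return cv2.warpPerspective(frame, H, out_size)
```

Since the mount can shift, the idea is to hold the checkerboard in view and recompute H whenever the camera gets bumped, then reuse that H for every frame afterward.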

Testing

Yesterday, we took the robot out to a soccer field to test the new camera mount, bag some data with traffic barricades, cones, barrels, and white stripes, and test some reactive decision-making code written the night prior. Here are a couple pictures from that day thanks to Kristen, our awesome historian/photographer:

[Photo: Frank, left, and Robby (me), right. Frank is debugging the remote kill switch. This picture was taken a few moments before Frank took a big electric shock and decided to take the day off from robots.]

[Photo: From left to right: Josh B., me, Robert, Lucas, and Cruz. We’re pushing the robot because the controller was jerky. In hindsight, this was a silly idea: since the motors weren’t moving, we got no encoder data. We did get some GPS data, though.]

A few weeks ago we read some of the IGVC reports from 2012. The one from the Naval Academy stuck out to me. They claimed to do everything in MATLAB, with a purely reactive agent (no memory from iteration to iteration), and they ended up taking 2nd place at the competition (src). The simplicity, and apparent success, of their approach was inspiring, especially since at the time we were trying to work out a method of mapping the course and spending many hours arguing over how to do it. Instead, we decided to build a purely reactive agent first, and if we get that working, we’ll move on to a more deliberative approach.

For fun, I wrote a reactive agent that mimicked the one described by the Navy team in a browser-based simulator, seen here. This past Thursday night we ported this decision-maker to a couple of ROS nodes; see the details of that implementation in the figure here. The gist of their method is to run a loop and, on each iteration, find all viable directions, then act on the best one.
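The core of that loop is small, so here’s roughly what it looks like as a ROS node. This is a sketch, not our actual node: the topic names, the clearance threshold, and the gains are placeholders, and it ignores invalid LIDAR returns for brevity.

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

CLEARANCE = 2.0       # meters of free space needed to call a direction "viable"
goal_heading = 0.0    # radians relative to the robot; would really come from GPS waypoints

def on_scan(scan):
    # 1. Find all viable directions: beams with enough free range in front of them.
    viable = []
    for i, r in enumerate(scan.ranges):
        angle = scan.angle_min + i * scan.angle_increment
        if r > CLEARANCE:
            viable.append(angle)

    cmd = Twist()
    if not viable:
        # Nothing looks passable, so just rotate in place (this is where it can get stuck).
        cmd.angular.z = 0.5
    else:
        # 2. Act on the best one: the viable direction closest to the goal heading.
        best = min(viable, key=lambda a: abs(a - goal_heading))
        cmd.linear.x = 0.5
        cmd.angular.z = 1.0 * best  # crude proportional turn toward the chosen direction
    pub.publish(cmd)

rospy.init_node('reactive_agent')
pub = rospy.Publisher('cmd_vel', Twist)
rospy.Subscriber('scan', LaserScan, on_scan)
rospy.spin()
```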

There are some problems with this approach. First, the robot can get stuck when its LIDAR scan shows no viable directions. The best thing to do at that point is to try turning around, at which point it might decide the best move is to head right back into the place where it got stuck. I’d call that thrashing. It can also thrash at a smaller scale. For example, say the robot heads toward a goal and then encounters an obstacle in the middle of its path. It turns a little to the right, only to immediately discover a lane and realize the better way to go is to the left. But after turning a little to the left, it discovers a different lane there and decides the best way to go is really to the right… and so it keeps thrashing between these options, inching forward, until it eventually hits the obstacle in front of it. Another problem is getting the robot to move toward the goal when the goal isn’t in its current field of view. Luckily, these problems are solvable; right now we’re working out the best solutions.
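For what it’s worth, one textbook way to damp the left/right thrashing (not necessarily what we’ll end up doing) is to add a little hysteresis: keep last iteration’s direction unless the new best direction beats it by some margin. A minimal sketch, where the margin and tolerance values are just placeholders:

```python
HYSTERESIS = 0.3   # radians; how much better a new direction must be before we switch
last_choice = None

def pick_direction(viable, goal_heading):
    """Choose a viable direction, preferring to stick with the previous choice."""
    global last_choice
    best = min(viable, key=lambda a: abs(a - goal_heading))
    if last_choice is not None:
        # Find the viable direction closest to what we picked last time.
        nearest_old = min(viable, key=lambda a: abs(a - last_choice))
        still_open = abs(nearest_old - last_choice) < 0.1
        not_much_worse = abs(nearest_old - goal_heading) < abs(best - goal_heading) + HYSTERESIS
        if still_open and not_much_worse:
            best = nearest_old
    last_choice = best
    return best
```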

We tried to test our code on some obstacles yesterday, but it didn’t work too well, and we lacked the means to debug it properly (namely sleep, a chair, a graphical display of the viable directions, and thicker coats, because it was freezing). We hope to perfect and test it more this coming week. Hopefully we’ll have a real simulator up some time soon so we don’t have to test the structural integrity of barrels and cones as much.
