
Update 5/27/13

May 28, 2013

A few things have happened in the past month. Since we're not all in Austin together, DoloRAS will likely not see much progress between now and the last few hours before the competition (although I sincerely hope that won't be the case). Here is an unorganized mess of some thoughts and updates:

1. Here’s the video of us navigating autonomously outside when we were testing the EKF: http://vimeo.com/64577047

2. We have switched to the Dynamic Window reactive technique. During the Senior Design Project open house, I noticed that Granny performed pretty badly in cluttered environments that contained obstacles other than barrels. Many of its errors (hitting things) seemed to stem from the incorrect clearance calculation it used to evaluate gaps in the obstacles around it. So, I did some research and discovered that the Dynamic Window technique is a robust alternative. It is described in detail in a paper co-authored by Sebastian Thrun, which can be found on his website. Basically, it consists of sampling linear and angular velocities reachable from the robot's current state, simulating the short trajectory each sample produces, scoring those trajectories with a weighted combination of criteria (heading toward the goal, clearance from obstacles, and speed), and then executing the best one. There is a small sketch of this loop after the demo link below. I will make an additional post soon that describes several implementation details we had to figure out and summarizes the advice Joshua James gave us on the subject.

In the meantime, here is our prototype/simulation of the DW technique:

https://googledrive.com/host/0B-U60Ca9V5jnOVZWMWFMZnlReWM/test.html
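To make the sampling-and-scoring loop concrete, here is a minimal Python sketch of one Dynamic Window iteration. This is not DoloRAS's actual code (our real prototype is the Javascript demo above), and the acceleration limits, collision radius, and weights here are made-up placeholders:

```python
import math

def dynamic_window_step(x, y, theta, v, w, obstacles, goal,
                        dt=0.1, horizon=1.0,
                        max_accel=0.5, max_ang_accel=1.0,
                        weights=(1.0, 2.0, 0.5)):
    """One Dynamic Window iteration: sample (v, w) pairs reachable within dt
    given the acceleration limits, roll each pair forward for `horizon`
    seconds, score the resulting trajectory, and return the best command."""
    w_heading, w_clearance, w_velocity = weights
    best_score, best_cmd = -float("inf"), (0.0, 0.0)

    # The "dynamic window": velocities reachable from the current (v, w)
    # in one control cycle, sampled on a small grid.
    for i in range(-5, 6):
        for j in range(-10, 11):
            v_s = v + (i / 5.0) * max_accel * dt
            w_s = w + (j / 10.0) * max_ang_accel * dt
            if v_s < 0.0:
                continue  # this sketch only considers driving forward

            # Forward-simulate the candidate trajectory.
            px, py, pt = x, y, theta
            clearance = float("inf")
            for _ in range(int(horizon / dt)):
                pt += w_s * dt
                px += v_s * math.cos(pt) * dt
                py += v_s * math.sin(pt) * dt
                for ox, oy in obstacles:
                    clearance = min(clearance, math.hypot(ox - px, oy - py))
            if clearance < 0.3:  # placeholder robot radius: would collide
                continue

            # Weighted score: head toward the goal, stay clear, keep moving.
            desired = math.atan2(goal[1] - py, goal[0] - px)
            heading = -abs(math.atan2(math.sin(desired - pt), math.cos(desired - pt)))
            score = w_heading * heading + w_clearance * clearance + w_velocity * v_s
            if score > best_score:
                best_score, best_cmd = score, (v_s, w_s)

    return best_cmd  # (linear, angular) velocity to command for the next cycle
```

In short: the window is the set of velocities reachable within one control cycle; every candidate is rolled forward for a short horizon, discarded if it would collide, and otherwise scored on heading, clearance, and speed.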

3. We probably won't be able to use the GoPro camera to stream image data because the HDMI capture device we bought does not have sufficient Linux support. If it were the beginning of the semester, we might be ambitious enough to write our own drivers, but unfortunately it is not the beginning of the semester and we are out of time and out of money to buy an additional capture card. This is a huge bummer, because for a good part of this semester we had been blaming the poor output of our image processing code on the quality of our webcam.

4. The realization that we will probably have to stick with our current webcam, coupled with the fact that we are quickly running out of time, has led us to make a lot of progress in vision. In particular, Frank and Sagar have been hard at work on two new nodes, one to threshold images based on red, blue, green, hue, saturation, and value, and one to run Hough line detection. Their efforts seem to be fruitful. One important step they made was to create a GUI to easily find the best thresholds for certain obstacles.
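As a rough illustration of what those two nodes do (the real ones are ROS nodes, and the HSV bounds and Hough parameters below are placeholders of the kind you would pick with the thresholding GUI, not our actual values), the core of that pipeline in OpenCV looks something like this:

```python
import cv2
import numpy as np

def find_lane_lines(frame, hsv_lo=(0, 0, 200), hsv_hi=(180, 40, 255)):
    """Threshold an image for a color range, then run Hough line detection.
    The HSV bounds default to a placeholder 'white-ish' range; in practice
    they are chosen per obstacle with the thresholding GUI."""
    # Threshold in HSV space; BGR thresholds work the same way with inRange.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))

    # Clean up the mask a little before edge/line detection.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Probabilistic Hough transform on the thresholded image.
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return mask, lines  # lines is None or an array of (x1, y1, x2, y2) segments

# Example: grab one frame from the webcam and look for lane lines.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    mask, lines = find_lane_lines(frame)
    print(0 if lines is None else len(lines), "line segments found")
cap.release()
```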

5. Cruz has been working on getting the EnterpRAS up and running, which would be really great since it has an Intel i5 processor, capable of handling a larger load than the Intel Atom processor that DoloRAS uses. Unfortunately, it seems likely that he will not be able to finish getting everything up and running before the competition.

6. Just before I left for the summer a week and a half ago, I was able to briefly test our DW implementation using our vision pipeline in a simplified environment. It seemed to perform as I would expect. I wanted to eliminate errors introduced by any other system, so I went to the 3rd floor of ENS and only used lane detection with our white planks (so as not to introduce any errors from lighting conditions) to generate obstacle-avoidance scans. Here is a video of that:

<video I will eventually upload>

7. Unfortunately, I didn’t have time before I left to give the code memory, so it is still purely reactive, making decisions only on the most current scan. This means that Granny is likely to hit obstacles or run over lanes that have sharp corners, since they will end up being out of its field of view. We had not focused on this problem, since we assumed that we would be able to use the GoPro, which has a 180 degree view (as opposed to the webcam, which has a 30-40 degree viewing angle). It turns out that depending on something that hasn’t been seen to work is a bad idea.

8. One major implementation choice we made for the DW code was to run it in Javascript in a browser, rather than as a node in ROS. This is partly because of my recent obsession with Javascript. It also has to do with the fact that Cruz will probably not be able to get our code running on EnterpRAS, so we need to offload as much processing as possible (much of our vision code has already been offloaded to the GPU using OpenCV's CUDA support). By using a package called rosbridge, and a Javascript library of the same name, a Javascript client can subscribe and publish to topics, as well as call services running on DoloRAS. (There is a rough sketch of what this looks like at the end of this item.)

This means that rather than running the DW code in a browser on DoloRAS, we can connect another computer over the network and use its browser to run the code. Which computer we use doesn't matter; it just needs to be able to reach DoloRAS over a network and have a browser. Nothing else is required. Just before I left, I unified all of the services that the DataServiceProvider node provides, so only one service call is needed to obtain all of the most recent data. I wasn't able to directly measure the latency after that change, but I did observe the decision loop publishing back to DoloRAS at around 40 Hz, which is certainly sufficient.

Personally, I think this method of using a browser to control our robot is really quite cute. It has numerous benefits. It makes absolutely no assumptions about the connected computer. For example, we could use a laptop that runs off of its own power, and if one laptop dies, we can replace it with another very quickly. Another benefit is that our prototype code (in the demo linked above) and the code that actually makes the robot move are exactly the same; we would have gotten this benefit as well if we had ever integrated V-REP or Gazebo with our code. It also makes visualization easy. In our experience, graphics libraries in Python and graphing tools in ROS put an unpleasant amount of load on the system, whereas depicting data in a browser is easy, and most modern browsers optimize drawing on the HTML5 canvas. By offloading the DW code to an external browser, we can afford to render a verbose, realtime depiction of the decision-making process.
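For what it's worth, rosbridge is at its core just a JSON protocol over a websocket, which is exactly why any machine with a browser can drive the robot. Our browser code uses the Javascript client library, but any websocket client can speak the same protocol. Here is a rough Python sketch using the websocket-client package; the hostname, topic names, and service name are stand-ins, not our actual interface:

```python
import json
from websocket import create_connection  # pip install websocket-client

# Connect to the rosbridge websocket server on the robot
# (9090 is rosbridge's default port; the hostname is a stand-in).
ws = create_connection("ws://doloras.local:9090")

# Call a service and read back the JSON response. "/get_all_data" is a
# hypothetical name standing in for the unified DataServiceProvider call.
ws.send(json.dumps({"op": "call_service", "service": "/get_all_data", "args": {}}))
print(json.loads(ws.recv()))

# Publish a velocity command back to the robot as a geometry_msgs/Twist.
ws.send(json.dumps({
    "op": "publish",
    "topic": "/cmd_vel",
    "msg": {"linear": {"x": 0.3, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": 0.1}},
}))

# Subscribe to a topic; rosbridge then pushes JSON-encoded messages to us.
ws.send(json.dumps({
    "op": "subscribe",
    "topic": "/scan",
    "type": "sensor_msgs/LaserScan",
}))
print(json.loads(ws.recv())["msg"]["ranges"][:5])  # first few range readings

ws.close()
```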

9. We've started to use the Hokuyo range finder again. Chris H. took it apart a little while ago to see what was inside, and when he put it back together and plugged it in, it worked on the first try. It's still temperamental, but I think it might just last until the competition. I have become very frustrated with the sonar array because of the noise it introduces into the system. (See a relevant note in the "Things to do next year" page.)

10. Our IGVC design doc is here:

https://docs.google.com/document/d/1DESwco5fg9WB3DKzQ51f1nYTJshM9ttxalOdC05EGjQ/edit
