
Update 4/9/13

Here are some details on recent events.

Last week we received a product donation from GoPro: a Hero 3 camera (Black Edition). Some of its notable features are the “ultra” wide-angle lens, waterproof case, and attachable lenses. Unfortunately, there doesn’t seem to be much support for streaming pictures over USB, or much of anything for Linux. All we’ve been able to do so far is stream it to GoPro’s iPhone app and take pictures and video using a micro SD card. Since the camera is set up to stream over HDMI, our current plan is to get an HDMI capture device and plug it into our PCI Express slot. We’re in the process of contacting companies that sell HDMI capture devices, since we’re pretty much out of money at this point.

We finished some initial work on getting the sonar array publishing into the system: a node has been created to act as a driver for an Arduino Mega board that is connected to the 12 sonars. I’m not sure I’ve explicitly explained our reason for using a sonar array. We originally conceived of the idea when the Hokuyo started to malfunction, so we already had a plan to execute when it finally died. Inspired by what RAS did for IGVC in 2009, we have connected a bunch of sonar sensors together in a half-ring. They were about $3 each, but are actually supposed to be pretty good. The data isn’t scaled properly at the moment, so we haven’t been able to bag anything to analyze yet. Here’s a picture of the sonar array taken with the GoPro:

[Photo: the sonar array]
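
While that gets sorted out, here’s a minimal sketch of the kind of driver node we’re talking about, assuming the Arduino prints one comma-separated line of twelve ranges (in centimeters) per cycle over serial. The port, baud rate, topic name, and the half-ring geometry below are placeholders, not our real configuration.

    #!/usr/bin/env python
    # Sketch of a sonar-array driver node (not the actual code). Assumes the
    # Arduino prints one comma-separated line of 12 ranges in centimeters per
    # cycle; the port, baud rate, topic name, and frame_id are placeholders.
    import math
    import serial
    import rospy
    from sensor_msgs.msg import LaserScan

    NUM_SONARS = 12

    def main():
        rospy.init_node('sonar_array_driver')
        pub = rospy.Publisher('/sonar_scan', LaserScan)
        port = serial.Serial('/dev/ttyACM0', 115200, timeout=1.0)

        scan = LaserScan()
        scan.header.frame_id = 'sonar_array'
        scan.angle_min = -math.pi / 2                     # half-ring: -90 deg ...
        scan.angle_max = math.pi / 2                      # ... to +90 deg
        scan.angle_increment = math.pi / (NUM_SONARS - 1)
        scan.range_min = 0.02
        scan.range_max = 6.0

        while not rospy.is_shutdown():
            line = port.readline().decode('ascii', 'ignore').strip()
            fields = line.split(',')
            if len(fields) != NUM_SONARS:
                continue                                  # skip malformed lines
            try:
                ranges = [float(f) / 100.0 for f in fields]   # cm -> m
            except ValueError:
                continue
            scan.header.stamp = rospy.Time.now()
            scan.ranges = ranges
            pub.publish(scan)

    if __name__ == '__main__':
        main()

Publishing the whole array as a single LaserScan is appealing because that’s the message type the reactive agent already consumes.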

The VN 200 does not support OmniSTAR’s subscription service. The GPS data we’re getting back from it is accurate to within about 5 meters when stationary and about 2 meters when moving, which will probably not be good enough: the competition requires that we reach waypoints within 2 meters. So we’ve begun contacting companies that produce GPS receivers explicitly compatible with OmniSTAR to see if we can get another product donation.

We’ve also been able to analyze the results of using messages from the VN 200 Inertial Navigation System (INS) in our Extended Kalman Filter (EKF). Unfortunately, from looking at some data we recorded the other day at the intramural fields, our EKF’s orientation estimates are better when we ignore the INS messages and instead work directly with the accelerometer and magnetometer messages (even with proper covariances for each). The INS yaw value appears to drift over time, whereas our own roll, pitch, and yaw calculations (using this as a reference) show no drift. We have not yet done any hard- or soft-iron calibration.
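
For reference, the tilt-compensated roll/pitch/yaw computation goes roughly like the sketch below. The sign conventions assume an x-forward, y-right, z-down body frame and depend on how the sensor is mounted, so treat this as an illustration rather than our exact equations; as noted, no hard/soft iron correction is applied.

    # Sketch of a tilt-compensated roll/pitch/yaw computation from raw
    # accelerometer (ax, ay, az) and magnetometer (mx, my, mz) readings.
    # Assumes an x-forward / y-right / z-down body frame; exact signs depend
    # on mounting, and no hard/soft iron calibration is applied.
    import math

    def roll_pitch_yaw(ax, ay, az, mx, my, mz):
        roll = math.atan2(ay, az)
        pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))

        # Rotate the magnetic field vector back into the horizontal plane.
        mx_h = (mx * math.cos(pitch)
                + my * math.sin(roll) * math.sin(pitch)
                + mz * math.cos(roll) * math.sin(pitch))
        my_h = my * math.cos(roll) - mz * math.sin(roll)

        yaw = math.atan2(-my_h, mx_h)    # heading relative to magnetic north
        return roll, pitch, yaw

Since the yaw here comes directly from the magnetic field rather than from integrating rates, it shouldn’t drift the way the INS estimate appears to.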

This weekend we hope to go back to the intramural fields to see if we can autonomously navigate to GPS waypoints using both the sonar array and the camera. This would be a big milestone for us: so far we have only navigated to local waypoints around obstacles using the encoders (feedback from the wheels) as the sole input to the EKF for localization. We have never navigated to actual GPS waypoints, and we haven’t done anything autonomous that incorporates the GPS and IMU sensors. Using the Hokuyo, the robot was able to navigate around obstacles very well; the same code run with the camera has proved decent, but it still scrapes the edges of orange obstacles. Combining the sonar array scans with the camera image scans is still untested, and some work remains before it will function. So, it would be a huge step forward if we’re able to observe robust navigation with all of these components running at once.
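
When we do get to combining the two, one simple approach is to keep the most recent sonar scan around and, each time a camera scan arrives, publish a merged /scan that takes the nearer return in each beam direction. A rough sketch of that idea follows; the topic names and the nearest-beam matching are assumptions, not a settled design.

    # Rough sketch of merging the camera-derived scan with the sonar scan
    # into a single /scan by taking the nearer return per beam. Topic names
    # and the nearest-beam matching are assumptions, not the final design.
    import rospy
    from sensor_msgs.msg import LaserScan

    latest_sonar = None

    def beam_angles(scan):
        return [scan.angle_min + i * scan.angle_increment
                for i in range(len(scan.ranges))]

    def nearest_range(scan, angle):
        # Range of the scan beam whose angle is closest to `angle`.
        angles = beam_angles(scan)
        i = min(range(len(angles)), key=lambda k: abs(angles[k] - angle))
        return scan.ranges[i]

    def on_sonar(scan):
        global latest_sonar
        latest_sonar = scan

    def on_camera(scan):
        merged = LaserScan()
        merged.header = scan.header
        merged.angle_min = scan.angle_min
        merged.angle_max = scan.angle_max
        merged.angle_increment = scan.angle_increment
        merged.range_min = scan.range_min
        merged.range_max = scan.range_max
        if latest_sonar is None:
            merged.ranges = list(scan.ranges)
        else:
            merged.ranges = [min(r, nearest_range(latest_sonar, a))
                             for r, a in zip(scan.ranges, beam_angles(scan))]
        pub.publish(merged)

    rospy.init_node('scan_merger')
    pub = rospy.Publisher('/scan', LaserScan)
    rospy.Subscriber('/sonar_scan', LaserScan, on_sonar)
    rospy.Subscriber('/camera_scan', LaserScan, on_camera)
    rospy.spin()

Taking the minimum per beam means an obstacle that drops out of the camera’s view should still be caught by a sonar return, which is exactly the failure case we keep hitting with vision alone.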

Here is a dataflow diagram with the nodes that are currently running in the system:

[Diagram: node dataflow]

 


Meeting Notes 4/7/13

By the way, we’ve got 59 days before the competition!

Discussed:
* V-REP
    – Installed on Granny; needs ROS integration
* Hardware stuff
    – A few things were finished (see spreadsheet), but there’s still a lot of work to do
* Sonar array
    – Cruz working on a driver for Linux
* Vision
    – Lucas: still no progress
    – Orange filters need some work; see the “barrels_cones_4-5-13_*” bags
* Camera
    – Can record video to the micro SD card and view it on a computer
    – Can’t stream
* VN 200
    – Now integrated with the EKF
    – Initial tests show that GPS is disappointing (see this report for more details)
    – Emailed VectorNav about OmniSTAR compatibility
* PATW competition on the 27th
    – Cancel it. Instead, we’ll go to A&M the following weekend to test, as practice for going to Michigan
* SparkFun competition in Boulder, Colorado
    – Whoa, it only costs $30 to enter. Let’s get ’em.

To do:
* Acquire tall orange cones
* Get micro SD cord and HDMI recorder
* Test VN 200 in an open field
* Hardware stuff!!!
* Get a GPS receiver that is compatible with OmniSTAR

Updates and Comments 4/4/13

We have been testing over the past few days, and some things have come up. Here are some updates and comments:

1. Running PSoC_Listener after restarting the computer always fails the first time and must be restarted. Here is what’s displayed on the console:

[INFO] [WallTime: 1365072413.598298] PSoC Listener is running on /dev/PSoC
[INFO] [WallTime: 1365072413.609293] Info message from PSoC: INVALID START CHARACTER:

2. On the walkway the robot can only make it over the ramp if it goes at around max speed. It might be easier to do this on grass. It also has issues trying to make it up the hill between RLM and ENS, but this is due to slipping on the wet grass.

3. The remote kill switch is ON if it doesn’t have power. This means that if the remote kill switch gets unplugged or dies, the robot could go charging forward, and the only way to stop it will be to hit the emergency stop. Speaking of which…

4. The emergency stop is still mounted on the front of the robot. This makes the situation of the robot getting out of control even more fun!

5. The vision scanning can now run at a comfortable 10 Hz. Our robot can navigate autonomously around red/orange things using only the camera. However, since the decision-making code is still reactive, it sometimes hits cones when they drop below the view of the camera.

6. We really need to have fences and colored drums to test with.

7. We’re blocked on:

  • No bagged VN 200 INS data to test EKF integration. Apparently it won’t publish INS data without a GPS lock, and we haven’t been able to get a lock around ENS.

  • No sonar driver written for Granny yet. The sonar array also needs to be mounted.

  • No driver yet for the GoPro camera.

8. We really need to mount the monitor. We also need to mount the power converter for it.

9. I added a page to the wiki on GitHub describing the reactive decision making nodes. We should add more pages on that wiki that describe other sets of nodes and parts of the project. It would be good to have a single place for all documentation.

Things to do 4/1/13

I can see the end of this project on the horizon. We’re close. Here is everything I can think of that needs to be addressed before we can consider ourselves about ready for Michigan:

1. Hardware stuff, see spreadsheet

2. Vision

  • Need to port logpolar plot processor to C++
  • Convert rays to sonar’s coordinate frame
  • Need to move from detecting orange/red to detecting everything but grass [& shadow]
  • Finish lane detection

3. Sonar

  • Get node working to publish data into ROS
  • Publish data as a LaserScan

4. VN 200

  • Need to publish GPS data and yaw in a local-area reference frame (see the sketch after this list)
  • Need to incorporate into EKF (just try bare minimum x/y/yaw at first)
  • Need to complete EKF compass fix
  • Activate OmniSTAR subscription

5. Infrastructure for Reactive agent

  • Need to combine rays from vision and sonar into /scan
  • Should test simulator for benefits of converting rays into robot’s coordinate frame
  • Needs to be able to handle LaserScans that are not 180 degrees wide
  • Need to define waypoints in local reference frame in GoalMaker or parameter server

6. Testing infrastructure

  • Bag camera data because of the new mount
  • Bag VN 200 data to see GPS/compass accuracy
  • Bag sonar data to visualize and confirm accuracy as well as robustness of mount
  • Calibrate sonar & camera under conditions where they should give almost equal results

7. Testing reactive navigation on grass getting to multiple waypoints…

  • Using only encoders for telemetry and no obstacles
  • With most combinations from the cross product of the following three sets:
  1. {red/orange cones/barrels/fences, ramp, all cones/barrels, white painted lanes, everything}
  2. {sonar, camera, both sonar and camera}
  3. {encoders, VN 200, both encoders and VN 200}

8. Test navigation in simulation under scenarios that we can’t construct in real life (like complex barrel arrangements)
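
On the VN 200 item about a local reference frame: for distances on the scale of an IGVC course, an equirectangular approximation around a fixed datum is probably sufficient. A sketch of that conversion is below; the datum handling and the east/north axis convention are assumptions, and the example coordinates are arbitrary.

    # Sketch of converting GPS fixes into a local x/y (east/north) frame
    # using an equirectangular approximation around a fixed datum. The datum
    # handling and axis convention are assumptions; accuracy is fine over the
    # few hundred meters of a course.
    import math

    EARTH_RADIUS_M = 6371000.0

    class LocalFrame(object):
        def __init__(self, datum_lat_deg, datum_lon_deg):
            self.lat0 = math.radians(datum_lat_deg)
            self.lon0 = math.radians(datum_lon_deg)

        def to_xy(self, lat_deg, lon_deg):
            lat = math.radians(lat_deg)
            lon = math.radians(lon_deg)
            x = EARTH_RADIUS_M * (lon - self.lon0) * math.cos(self.lat0)  # east
            y = EARTH_RADIUS_M * (lat - self.lat0)                        # north
            return x, y

    # Arbitrary example: offset of a nearby point from the datum, in meters.
    frame = LocalFrame(30.0, -97.0)
    print(frame.to_xy(30.0002, -97.0003))

Waypoints defined in GoalMaker (or on the parameter server) could then be stored directly in this frame, which keeps the EKF and the reactive agent working in plain meters.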

Update 3/26/13

A few updates, summarizing our meeting on Sunday:

1. The Hokuyo is dead for all intents and purposes. I’ve sent an email to the SICK Group requesting a donation. I will also be emailing the guy that the RAS IGVC group from 2010 talked to about getting their Hokuyo fixed. In the meantime, Frank will be assembling an array of inexpensive sonar sensors in case we can’t get the Hokuyo fixed or replaced. It has been shown in simulation that the robot does not require very many beams to perform well.

2. Cruz sent me a list of five companies to talk to about getting a camera. Of those, GoPro and Allied Vision have responded with a hopeful note. Still talking with them.

3. There are a number of things to do that don’t involve software. I’ve compiled a spreadsheet of them. The three people mainly contributing to these are Chris Davis, Cruz, and Josh Bryant. Some other folks have started helping, including Blake and Han. If anyone else would like to help, please check out that list and then ask Chris D, Cruz, Josh B, Frank, or me for details.

4. For the last two weeks Lucas has made no progress on vision, as seen here.

5. I’ve been testing out some vision-only obstacle avoidance using the same reactive agent that was used to process the Hokuyo scan. The gist is to take a binary image that contains obstacles (output from Frank and Lucas’s work in vision), transform it to correct for perspective, transform it again into a log-polar image (which is basically a plot of angle versus ln(distance)), then divide that into zones and determine the distance from the front of the robot to obstacles detected in the image. The results are published to /scan. Originally we were considering using ray tracing to simulate scans (inspired by these guys), but our method is easy to code and can use optimized OpenCV/CUDA function calls.

The process of taking an image and creating 10 simulated Hokuyo beams is currently very slow, running at 2 Hz. It was written in Python, so hopefully porting it to C++ will improve this. From testing, the reactive agent still performs reasonably well despite the slow image processing, although its overall performance is very dependent on the camera’s mounting and lens angle. The reactive agent has no memory, so the robot may run over an obstacle if it falls out of the camera’s view. The camera needs to be pointed almost directly downward, with the Hokuyo just below the picture.
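
For anyone who wants to poke at this before the C++ port, the sketch below captures the shape of the pipeline: warp the binary obstacle mask to a bird’s-eye view, then walk a fan of beams outward at logarithmically spaced radii and report the first obstacle hit per beam. The homography, beam count, and ground-plane scale here are placeholders rather than our calibrated values, and it samples single pixels instead of full zones, so treat it as an illustration of the log-polar idea, not the node itself.

    # Illustration of the camera-to-/scan idea: perspective-correct the binary
    # obstacle mask, then sample along a fan of beams at logarithmically
    # spaced radii and take the first obstacle hit per beam. The homography,
    # beam count, and meters-per-pixel scale are placeholders.
    import math
    import numpy as np
    import cv2

    NUM_BEAMS = 10
    METERS_PER_PIXEL = 0.02          # placeholder ground-plane scale
    H = np.eye(3)                    # placeholder perspective homography

    def image_to_ranges(obstacle_mask):
        """obstacle_mask: uint8 image, nonzero where an obstacle was detected."""
        h, w = obstacle_mask.shape[:2]
        birdseye = cv2.warpPerspective(obstacle_mask, H, (w, h))
        origin = (w // 2, h - 1)                    # robot at bottom-center
        max_r = math.hypot(w, h)

        ranges = []
        for i in range(NUM_BEAMS):
            theta = math.pi * i / (NUM_BEAMS - 1)   # beams spread over 180 deg
            hit = float('inf')
            for k in np.arange(0.0, math.log(max_r), 0.05):
                r = math.exp(k)                     # log-spaced sample radius
                x = int(origin[0] + r * math.cos(theta))
                y = int(origin[1] - r * math.sin(theta))
                if not (0 <= x < w and 0 <= y < h):
                    break
                if birdseye[y, x]:
                    hit = r * METERS_PER_PIXEL
                    break
            ranges.append(hit)
        return ranges   # one simulated beam per angle, ready to pack into /scan

Most of the per-pixel work here is what the C++ port (or leaning harder on optimized OpenCV/CUDA calls) should speed up.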

Update 3/8/13

Hey guys. I want to update everyone before we disperse into the wind for Spring Break.

0. We will not have a meeting this Sunday, but there will be one on the 17th.

1. Looks like we’re not getting any discount on the $450 camera we were looking at. Time to start looking for a decent camera, and fast. We really can’t put this off. It’s probably best to distribute the process. Everyone, do research and find cameras that might work with our application, and then send me links. I don’t want one option. Give me ten, and I will email every one of the companies that sell them.

2. I’ve been experiencing some issues with the Hokuyo. Sometimes it starts up, runs for a few seconds, but then crashes with an error code that isn’t documented. This is very concerning. I will probably begin emailing companies for a replacement over the break.

2b. Turns out that the Hokuyo spins inside if it is being powered at all. From now on, we must keep it powered off unless it’s being used. We really need a switch on the power line to make this more convenient.

3. Tested the robot outside yesterday. Here’s a video:

 

4. Unfortunately, the VN 200 driver node was not complete yesterday, so we couldn’t test it. We need to get that done soon!

5. For those not in the senior design group, see our Testing & Evaluation Plan report. Feel free to comment if something isn’t clear.

6. Lucas has begun to document his explorations into vision, mostly trying to fix issues we’ve observed in bagged data that are not a function of the quality of our current camera. Frank has also begun to diagram our developing vision pipeline.

Meeting Notes 2/17/13

To-Do:

  1. Vision Pipeline clean-up
  2. Print Checkerboard. Nice and big -> Neel
  3. Image -> features.
  4. LIDAR -> features.
  5. Raytrace the perspective transform for the image by comparing it to the Hokuyo scan.
  6. Get IMU drivers.
  7. Test reactive agent and tweak.
  8. Kalman filter compass update.
  9. Verify GPS offseter.
  10. Google Maps doesn’t work for some reason. Fix it.