
Things to do next year

We’ve made a lot of progress on this project, and we’ll continue to make improvements, but there’s also some stuff that probably won’t get done before the competition this June.

Naming

Right now, we’ve got all sorts of names for our ROS nodes and topics; we probably use every possible naming convention, and it’s gotten a bit out of hand. It would be nice if we settled on a single convention, along with a standard nomenclature for nodes and topics. This isn’t something we really need to worry about, because it doesn’t block any functionality. It’d just be nice to have defined ways of naming things: it makes nodes and topics a lot easier to read and understand, and it saves time during debugging because it takes less brain power to figure out what isn’t running or being published to.
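For instance (these names are hypothetical, not our actual topics), a convention like lowercase topic names namespaced by subsystem would make a node’s purpose obvious at a glance:

```
/sensors/imu/raw        # raw IMU readings
/sensors/lidar/scan     # laser scans
/localization/pose      # fused pose estimate
/navigation/cmd_vel     # velocity commands to the base
```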

SLAM Integration

We may implement some kind of mapping before the competition this June, but it probably won’t be very fancy. It’ll likely just be something we humans look at offline so we can add waypoints or tweak things between runs. It would be really cool to use an existing SLAM implementation, like the gmapping package, but integration like that would likely require us to use the TF package extensively, and right now we’re practically ignoring it. By itself, SLAM isn’t really required for IGVC, beyond giving us the robot’s perspective on the field.
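To give a sense of what that integration would involve, here’s a minimal sketch of broadcasting an odom → base_link transform with the ROS 1 tf package. The frame names and the placeholder pose are assumptions for illustration, not existing code:

```python
#!/usr/bin/env python
# Minimal tf broadcaster sketch. Frame names ("odom", "base_link")
# and the placeholder pose are assumptions for illustration.
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("odom_tf_broadcaster")
    br = tf.TransformBroadcaster()
    rate = rospy.Rate(10)
    x, y, theta = 0.0, 0.0, 0.0  # would really come from wheel odometry
    while not rospy.is_shutdown():
        # sendTransform takes a translation, a quaternion, a timestamp,
        # and the child and parent frame ids.
        br.sendTransform(
            (x, y, 0.0),
            tf.transformations.quaternion_from_euler(0.0, 0.0, theta),
            rospy.Time.now(),
            "base_link",  # child frame
            "odom")       # parent frame
        rate.sleep()
```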

Deliberative Agent

At the moment we’re exclusively using a reactive agent for navigation. This has inherent limitations: it is not guaranteed to take a fast path to the goal, and it might never take the robot to the goal at all. If we were mapping out the field and using that map to make decisions, the robot could become much smarter. What we envision is the GoalMaker node acting as a deliberative agent, setting intermediate goals to lead the reactive agent along a planned path. This contrasts with our current approach of setting goals statically ahead of time.
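Here’s the idea in its simplest runnable form: instead of one static goal, break the trip into intermediate goals. A real deliberative agent would plan around obstacles on a map; straight-line interpolation is just a stand-in assumption here.

```python
# Break one far-away goal into a series of nearby sub-goals, so the
# reactive agent only ever has to solve a short, local problem.
import math

def intermediate_goals(start, goal, spacing=2.0):
    """Yield waypoints every `spacing` meters from start to goal."""
    dx, dy = goal[0] - start[0], goal[1] - start[1]
    dist = math.hypot(dx, dy)
    steps = max(1, int(dist / spacing))
    for i in range(1, steps + 1):
        t = i / float(steps)
        yield (start[0] + t * dx, start[1] + t * dy)

# Example: a 10 m trip becomes five 2 m sub-goals for the reactive agent.
for waypoint in intermediate_goals((0.0, 0.0), (10.0, 0.0)):
    print(waypoint)
```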

Computer-System Integration

On numerous occasions we have hit the kill switch, forgetting that the computer isn’t aware the motors have lost power and is still sending commands to the PSoC. The PSoC isn’t aware the motors have lost power either, so its PID loop goes nuts: it tells the motor controllers to go forward, reads no forward motion from the encoders, and pushes harder. Then we ignorantly release the kill switch, and the robot goes flying forward. Another flavor of this problem occurs when the wheelchair’s motors are not engaged. Two levers in the front of the robot engage the left and right motors; when they are not lifted, the motors may spin, but the wheels will not turn. The computer and PSoC, of course, have no knowledge that the robot isn’t actually moving forward.

So it would be nice if the computer and the PSoC knew when the robot was capable of movement. The more general problem is that there are things about the robot the computer is completely blind to. For example, the computer has no concept of the underlying power system. Ideally, the robot would alert nearby humans when any of its batteries run low, and initiate a shutdown if they get too low. It would monitor its own power consumption and detect connection problems, like the safety light not being plugged in or the kill switch being wired improperly. None of this blocks the progress of the project, but it would add a lot of safety and convenience while testing and debugging.
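Here’s a sketch of the kind of self-check we mean. The thresholds and names are invented for illustration, and the real loop lives on the PSoC in C, but the idea is language-independent:

```python
# Stall check for the motor-control loop: if we're commanding lots of
# effort but the encoders report no motion, assume the motors are
# unpowered or disengaged and stop pushing. Thresholds are made up.
STALL_EFFORT = 0.8   # commanded effort considered "pushing hard"
STALL_TICKS = 50     # consecutive loop ticks with no encoder movement

stalled_ticks = 0

def check_stall(effort, encoder_speed):
    """Return True when the robot seems unable to move."""
    global stalled_ticks
    if abs(effort) > STALL_EFFORT and abs(encoder_speed) < 1e-3:
        stalled_ticks += 1
    else:
        stalled_ticks = 0
    return stalled_ticks > STALL_TICKS
```

When the check trips, the loop would zero its output and reset the PID integrator, so restored power doesn’t send the robot flying forward.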

Graphing Data

Visualizing data has turned out to be a hugely important part of this project. Amazingly, ROS does not make it easy to view data on graphs that don’t have time as an axis. The fact that we had to spend at least an hour building a Pygame plot from scratch just to view raw magnetometer output and see hard- and soft-iron calibration errors speaks to this.

Up till now we have relied on custom-made charts. We’ve literally coded plot interfaces in JavaScript and Python, using HTML5 canvas in the browser and Pygame in real time. It would be much better if we had a convenient, generalizable, MATLAB-like way of displaying things. Think a terminal command that accepts X- and Y-axis topics and message types and plots one against the other in real time, NOT as a function of time. That would be generic enough to be very useful, and there must be some ROS package that does this.
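Here’s a sketch of what such a command could look like. Restricting it to std_msgs/Float64 topics is a simplifying assumption; a real tool would introspect message types:

```python
#!/usr/bin/env python
# Sketch of an XY plotter: `./xy_plot.py /topic_x /topic_y` plots one
# topic against the other in real time. Limiting input to
# std_msgs/Float64 is a simplifying assumption.
import sys
import rospy
import matplotlib.pyplot as plt
from std_msgs.msg import Float64

xs, ys = [], []
latest = {"x": None, "y": None}

def make_callback(axis):
    def callback(msg):
        latest[axis] = msg.data
        if latest["x"] is not None and latest["y"] is not None:
            xs.append(latest["x"])
            ys.append(latest["y"])
    return callback

if __name__ == "__main__":
    rospy.init_node("xy_plot")
    rospy.Subscriber(sys.argv[1], Float64, make_callback("x"))
    rospy.Subscriber(sys.argv[2], Float64, make_callback("y"))
    plt.ion()
    while not rospy.is_shutdown():
        plt.cla()
        plt.plot(xs, ys, ".")
        plt.pause(0.1)  # redraw and let callbacks accumulate points
```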

Combining Environment Data

I’ve come to realize that our method of using a unified scan as the structure for merged data about the environment has a few flaws. First, since we switched to the DW technique, we don’t use a scan in the decision-making loop; converting the scan to a point cloud is one of the first things the DW code does. Second, if we used memory in the DW loop, it wouldn’t be made up of scans, it would be made of a point cloud. Third, our method can’t do with environment data what we do with localization data: incorporate belief into the combination. At the moment, our process of combining scans is a simple union operation, and the equivalent naive method of combining point clouds would suffer from the same inability to incorporate belief.

If we had time, I would switch to creating an occupancy grid in a window around the robot. Basically, each cell holds a belief counter that is incremented when data from any sensor claims the cell is occupied by an obstacle, and decremented when data from a sensor suggests it is not. How much the belief in a cell changes depends on a predefined trust in that sensor’s data. This would address all three flaws in our current setup. It essentially boils down to generating a local map of the world around the robot, which could then be used to develop a more deliberative approach to obstacle avoidance using path finding.
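A minimal sketch of that update rule, where the grid size, per-sensor trust weights, and clamping bounds are made-up parameters rather than tuned values:

```python
import numpy as np

# Belief-counter occupancy grid in a window around the robot. The
# window size, trust weights, and clamping bounds here are made up.
grid = np.zeros((200, 200))            # robot-centered window of cells
TRUST = {"lidar": 3.0, "camera": 1.0}  # how far one reading moves belief

def update_cell(i, j, sensor, says_occupied):
    """Nudge a cell's belief up or down based on one sensor reading."""
    delta = TRUST[sensor] if says_occupied else -TRUST[sensor]
    grid[i, j] = np.clip(grid[i, j] + delta, -10.0, 10.0)

def is_obstacle(i, j, threshold=2.0):
    return grid[i, j] > threshold
```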

Of course, this occupancy grid is very reminiscent of Joshua James’ method, as is the DW technique. It seems this whole project has been slowly, inadvertently veering towards being a simple duplication of his work. In hindsight, it would have been a much better idea to try to build on what he had already finished. At the time, we didn’t fully understand it, and wanted to do our own thing.
