Ravi Mohan's Blog
Saturday, October 06, 2007
Snapshots from my Robot Simulator
I've built a tiny robot simulator for my ongoing research. Like the (absolutely brilliant) Stage simulator, mine enables the simulation of robots in a 2D world. Of course my simulator is much simpler than Stage, primarily because it doesn't try to do as much as Stage does. On the other hand, it has a couple of differences that are important to my research: for example, it has a built-in sensor and motion error model and (currently being implemented) a sophisticated sonar model. Anyway, here are some early snapshots of a simple environment.

A simple world for a robot to navigate. The world is 10 m x 20 m with a resolution of 2 cm, and the corridor is 2.5 m wide. A robot (the little blue dot) stands in the middle of the corridor.

The world overlaid with a quadtree, which recursively decomposes the 2D world and enables efficient ray projection.

The world illuminated with a laser. The illuminated points are rendered in yellow. The scan has a resolution of 1 degree, a "pan" of 360 degrees, and a range of 3 m. I was a bit concerned that this would be slow, but it takes no appreciable time, even when the angular resolution is 0.1 degree. All hail the quadtree! Of course the robot sees only the illuminated points, not the complete length of the walls.

This is what the robot "sees". There are other sensors simulated (e.g. an odometer), but they are not rendered here.

You'll notice that the laser sensor senses the environment "perfectly", i.e. the range detected is the actual range to the obstacle that intercepts the laser beam. No real sensor works like that. To simulate this imperfection, a probabilistic error model filters each reading, so the robot sees an approximation of the real range. Here we've added a bivariate Gaussian error (with a standard deviation of 10 cm) to the laser sensor. If you squint, you can see that the readings now only approximate the wall surface.
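The scan-plus-error idea above can be sketched in a few lines. This is a minimal, hypothetical sketch, not the simulator's actual code: `true_range()` stands in for the quadtree ray projection, using an idealised corridor (walls 1.25 m away on either side) so the example is self-contained, and the error is an independent circular bivariate Gaussian on each illuminated point.

```python
import math
import random

MAX_RANGE = 3.0   # laser range, metres
SIGMA = 0.10      # std. deviation of the bivariate Gaussian error, metres

def true_range(theta):
    """Stand-in for the quadtree ray cast: distance to a corridor wall
    1.25 m away on either side, capped at the sensor's max range."""
    s = abs(math.sin(theta))
    return min(MAX_RANGE, 1.25 / s) if s > 1e-9 else MAX_RANGE

def scan(pan_deg=360, res_deg=1.0, sigma=SIGMA):
    """Return the noisy illuminated points the robot 'sees'."""
    points = []
    for i in range(int(pan_deg / res_deg)):
        theta = math.radians(i * res_deg)
        r = true_range(theta)
        if r >= MAX_RANGE:
            continue  # no obstacle within range: nothing illuminated
        # Perfect endpoint of the beam...
        x, y = r * math.cos(theta), r * math.sin(theta)
        # ...perturbed by an independent (circular) bivariate Gaussian.
        points.append((random.gauss(x, sigma), random.gauss(y, sigma)))
    return points
```

Rays that run nearly parallel to the corridor never hit a wall within 3 m, which is why a real scan of this world illuminates only parts of the walls.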
To see this more clearly, we'll remove the unseen parts of the corridor walls and the overlaid quadtree to get a picture of what the robot sees with the simple error model in place.

This is what the robot "sees" through the error filter. Note the "scatter" of the readings. Now the robot doesn't know for certain where the walls are (or, more generally, whatever objects obstruct the laser; different parts of the world react differently to different sensors, and some surfaces absorb light, for example, returning no readings to the laser sensor).

The robot simulator is not the central thrust of my research effort, but it helps to have an environment in which one can plug in various sensor models, motion models, and error models and see what the robot would see. I thoroughly enjoyed coding this up.

The next step is to simulate a realistic sonar (vs. laser) sensor. There are a few interesting issues to be resolved to get this to work properly. Sonar works differently from lasers: cones of ultrasound have different propagation and reflection characteristics from rays of light. Stage simulates sonar with the same "ray trace" approach as is used for lasers; for my research I need a more accurate model (hence the need for a custom simulator).

Early next week, I will be visiting the Robotics Lab of the DRDO (the Defence Research and Development Organisation; for non-Indians reading this, they are the local equivalent of DARPA) to talk to some scientists working on similar problems. I have a basic sonar model working already. The key challenge was to layer different continuous probability distributions and sample data from them in reasonably efficient fashion, but to describe that would make an already long post longer, so I'll stop now.
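To give a flavour of what "layering continuous probability distributions" can look like (this is a common textbook-style sonar range model, not necessarily the one in my simulator, and the weights and parameters below are purely illustrative): a reading is drawn from a Gaussian around the true range, an exponential for early echoes, a uniform floor for spurious readings, or a spike at max range for missed echoes.

```python
import random

MAX_RANGE = 3.0  # sonar range, metres

def sample_sonar(true_r,
                 w_hit=0.7, w_short=0.1, w_rand=0.1, w_max=0.1,
                 sigma=0.05, lam=1.0):
    """Sample one range reading from a layered mixture of distributions.
    The weights w_* must sum to 1; each layer models one failure mode."""
    u = random.random()
    if u < w_hit:
        # Normal case: Gaussian around the true range, clamped to sensor limits.
        return min(MAX_RANGE, max(0.0, random.gauss(true_r, sigma)))
    elif u < w_hit + w_short:
        # Early echo from an unexpected obstacle: exponential, truncated
        # at the true range (an echo can't come from beyond the obstacle).
        return min(random.expovariate(lam), true_r)
    elif u < w_hit + w_short + w_rand:
        # Spurious reading anywhere in the sensor's range.
        return random.uniform(0.0, MAX_RANGE)
    else:
        # Missed echo, reported as max range.
        return MAX_RANGE
```

Sampling this way is cheap: one uniform draw picks the layer, then one draw from that layer's distribution produces the reading.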