Ravi Mohan's Blog

Saturday, October 06, 2007

Snapshots from my Robot Simulator

I've built a tiny robot simulator for my ongoing research. Like the (absolutely brilliant) Stage Simulator, my simulator enables the simulation of robots in a 2D world. Of course my simulator is much simpler than Stage, primarily because it doesn't try to do as much as Stage does. On the other hand, it has a couple of differences that are important to my research: for example, it has a built-in sensor and motion error model and (still being implemented) a more sophisticated sonar model.

Anyway, here are some early snapshots of a simple environment.

A simple world for a robot to navigate - the world is 10 m x 20 m with a resolution of 2 cm. The corridor is 2.5 m wide.

A robot (the little blue dot) stands in the middle of a corridor.

The world overlaid with the quadtree that recursively decomposes the 2D world and enables efficient ray projection.
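To give a flavour of what that decomposition looks like in code, here is a minimal region quadtree over a boolean occupancy grid. This is an illustrative Java sketch, not the simulator's actual code; the names and layout are mine. A region is a leaf if it is uniformly occupied or uniformly free; otherwise it splits into four quadrants.

```java
// Sketch of a region quadtree over an occupancy grid (illustrative only).
public class QuadTree {
    final int x, y, size;        // region origin and side length, in grid cells
    final Boolean occupied;      // non-null only for uniform leaves
    final QuadTree[] children;   // null for leaves

    QuadTree(boolean[][] grid, int x, int y, int size) {
        this.x = x; this.y = y; this.size = size;
        if (uniform(grid, x, y, size)) {
            occupied = grid[y][x];   // whole region has this value
            children = null;
        } else {
            occupied = null;
            int h = size / 2;        // recursively decompose into quadrants
            children = new QuadTree[] {
                new QuadTree(grid, x,     y,     h),
                new QuadTree(grid, x + h, y,     h),
                new QuadTree(grid, x,     y + h, h),
                new QuadTree(grid, x + h, y + h, h)
            };
        }
    }

    // True if every cell in the region holds the same value.
    static boolean uniform(boolean[][] g, int x, int y, int size) {
        boolean v = g[y][x];
        for (int j = y; j < y + size; j++)
            for (int i = x; i < x + size; i++)
                if (g[j][i] != v) return false;
        return true;
    }

    int leafCount() {
        if (children == null) return 1;
        int n = 0;
        for (QuadTree c : children) n += c.leafCount();
        return n;
    }

    public static void main(String[] args) {
        boolean[][] grid = new boolean[8][8];
        for (int i = 0; i < 8; i++) grid[0][i] = true;  // one wall row
        QuadTree root = new QuadTree(grid, 0, 0, 8);
        System.out.println("leaves: " + root.leafCount());  // 22 for this map
    }
}
```

The payoff is that large empty regions collapse into single leaves, which is exactly what makes ray projection cheap.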

The world illuminated with a laser. The illuminated points are rendered in yellow. The scan has a resolution of 1 degree, a "pan" of 360 degrees and a range of 3 m. I was a bit concerned that this would be slow, but it takes no appreciable time, even when the angular resolution is 0.1 degree. All hail the quadtree!
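Conceptually, the scan is just this (a naive Java sketch that marches each ray outward in 2 cm steps; the whole point of the quadtree in the real simulator is to skip large free regions instead of checking every step, which this simplified version does not do - the world model here is also a made-up single wall, purely for illustration):

```java
// Naive laser scan: sweep 360 degrees, march each ray out to 3 m.
public class LaserScan {
    static final double STEP = 0.02;   // 2 cm march step, the world resolution

    // Toy world model (assumed for this sketch): a wall at x >= 2.5 m.
    static boolean occupied(double x, double y) {
        return x >= 2.5;
    }

    // Range along one ray, or maxRange if nothing is hit.
    static double castRay(double ox, double oy, double angleRad, double maxRange) {
        for (double r = STEP; r <= maxRange; r += STEP) {
            double x = ox + r * Math.cos(angleRad);
            double y = oy + r * Math.sin(angleRad);
            if (occupied(x, y)) return r;
        }
        return maxRange;   // beam escaped: report max range
    }

    // Full pan: one reading per degree of resolution.
    static double[] scan(double ox, double oy, double panDeg, double resDeg,
                         double maxRange) {
        int n = (int) (panDeg / resDeg);
        double[] ranges = new double[n];
        for (int i = 0; i < n; i++)
            ranges[i] = castRay(ox, oy, Math.toRadians(i * resDeg), maxRange);
        return ranges;
    }

    public static void main(String[] args) {
        double[] r = scan(0, 0, 360, 1, 3.0);
        // The ray pointing straight at the wall reads roughly 2.5 m.
        System.out.printf("reading at 0 deg: %.2f m%n", r[0]);
    }
}
```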

Of course the robot sees only the illuminated points, not the complete length of the walls etc.

This is what the robot "sees". There are other sensors simulated (e.g. an odometer) but they are not rendered here. You'll notice that the laser sensor senses the environment "perfectly", i.e. the range detected is the actual range to an obstacle that intercepts the laser beam. No real sensor works like that. To simulate this imperfection, a probabilistic error model filters each reading, so the robot sees an approximation of the real range.

Here we've added a bivariate Gaussian error (with a standard deviation of 10 cm) to the laser sensor. If you squint, you can see that the readings now only approximate the wall surface. To see this more clearly, we'll remove the unseen parts of the corridor walls and the overlaid quadtree to get a picture of what the robot sees, with the simple error model in place.
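Applying the error is the easy part. One way to read "bivariate Gaussian error with a standard deviation of 10 cm" is independent zero-mean Gaussian noise on each coordinate of the detected point; here is a sketch in Java (the class and method names are mine, not the simulator's):

```java
import java.util.Random;

// Illustrative error filter: perturb a perfect (x, y) laser return with
// independent Gaussian noise of the given standard deviation on each axis.
public class NoisyLaser {
    static final Random RNG = new Random(42);   // fixed seed for repeatability

    static double[] addNoise(double x, double y, double sigma) {
        return new double[] { x + sigma * RNG.nextGaussian(),
                              y + sigma * RNG.nextGaussian() };
    }

    public static void main(String[] args) {
        double[] p = addNoise(1.0, 2.0, 0.10);   // sigma = 10 cm
        System.out.printf("noisy reading: (%.3f, %.3f)%n", p[0], p[1]);
    }
}
```

Averaged over many readings the noise cancels out, which is why the scatter in the picture still traces the wall.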

This is what the robot "sees" through the error filter. Note the "scatter" of the readings. Now the robot doesn't know for certain where the walls are (or whatever objects obstruct the laser; different parts of the world react differently to different sensors, and some surfaces absorb light, for example, returning no readings to the laser sensor).

The robot simulator is not the central thrust of my research effort, but it helps to have an environment in which one can plug in various sensor models, motion models and error models and see what the robot would see. I thoroughly enjoyed coding this up.

The next step is to replicate a realistic sonar (vs. laser) sensor. There are a few interesting issues to be resolved to get this to work properly. Sonar works differently from lasers: cones of ultrasound have different propagation and reflection characteristics from rays of light. Stage simulates sonar with the same "ray trace" approach as is used for lasers; for my research I need a more accurate model (hence the need for a custom simulator).

Early next week, I will be visiting the Robotics Lab of the DRDO (the Defence Research and Development Organisation; for non-Indians reading this, they are the local equivalent of DARPA) to talk to some scientists working on similar problems. I have a basic sonar model working already, the key challenge of which was to layer different continuous probability distributions and sample data from them in reasonably efficient fashion, but describing that would make an already long post longer, so I'll stop now.
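To give just a hint of what "layering distributions" can mean without making the post much longer: a standard way to model a range sensor (the well-known beam model from the probabilistic robotics literature, which is in the spirit of, though not necessarily identical to, what my simulator does) is a weighted mixture of a Gaussian around the true range, an exponential for unexpected short returns, and a point mass at max range. Sampling picks a component by weight, then samples within it. All the weights and scales below are assumed values for illustration:

```java
import java.util.Random;

// Illustrative beam-model sampler for a sonar-like range sensor.
public class SonarSample {
    static final Random RNG = new Random(7);

    // Mixture weights (assumed; must sum to 1): correct hit,
    // unexpected short reading, max-range failure.
    static final double W_HIT = 0.8, W_SHORT = 0.1, W_MAX = 0.1;

    static double sample(double trueRange, double maxRange) {
        double u = RNG.nextDouble();
        if (u < W_HIT) {
            // Gaussian around the true range (sigma = 5 cm, assumed)
            return clamp(trueRange + 0.05 * RNG.nextGaussian(), 0, maxRange);
        } else if (u < W_HIT + W_SHORT) {
            // Exponential short return, truncated at the true range
            double lambda = 1.0;   // assumed rate
            double r = -Math.log(1 - RNG.nextDouble()) / lambda;
            return Math.min(r, trueRange);
        } else {
            return maxRange;   // echo never came back
        }
    }

    static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++)
            System.out.printf("%.2f%n", sample(2.0, 5.0));
    }
}
```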

8 comments:

Anonymous said...

Interesting stuff ...

Can we have a look at the sources ?

Ravi said...

@anonymous

>Can we have a look at the sources ?

I have no plans at present to open source the code if that is what you are asking.

As I mentioned in the blog entry, the simulator is (a) not the main thrust of the research effort but (b) very customized to my specific demands and thus unlikely to be of use to others.


If you need a generic robot simulator, get Player/Stage. The only scenario in which I can imagine the code being open sourced is if my research results in a published paper. That is a little ways off at present (but getting there).


Having said that, there is nothing particularly "secret" about the code. I've already shown the code to the "Coding Bodies" (our monthly "iteration planning" + dinner type events) folks.

If you know me personally (say, you've worked with me before) feel free to give me a call and set up a coding session (read that sentence carefully :-D) .


If you are working on similar problems and believe we should correspond/collaborate, send me email.

Anonymous said...

what language is this written in ?

Ravi said...

@anonymous(2).

Java, plus a custom variant of Scheme I am working on. The final version will be in C (bootstrapping from Java).

Anonymous said...

Hello,
I'd written to your email id. Since I didn't get a reply, I thought I'd try here. Could you give me a good reference for quadtrees? The material available on the internet isn't very satisfying.

Thanks in Advance

Ravi said...

Joe,
Sorry for the non-reply. I had planned to reply, but things were so hectic that I didn't get around to it.

The reference I used was "Analysis of Spatial Data Structures" by Hanan Samet (see Sections 2.3 - 2.9 for quadtrees).

daneel said...

Really interesting stuff, Ravi!

A small question. You mentioned motion models. Are you planning to model the dynamics of the robot (with its controls and its associated noise), or are you mainly focusing on sensing and planning?

I am interested in knowing more about your sonar model. You mentioned layering different continuous probability distributions. Do they represent uncertainties at various stages of the sonar sensor (e.g. transmitter, reflection model, receiver, etc.)?

Incidentally, the aquatic boat I am working on has a pretty expensive aquatic profiling sonar onboard. You can check out some bathymetry maps we generated for a local marina and a lake.

http://robotics.usc.edu/~namos/Sonar/Data/RedondoHarbor/20070613_22/

and

http://robotics.usc.edu/~namos/Sonar/Data/JamesReserve/20070724_26/
You will find the raw datasets, along with corresponding GPS and IMU data as well. The work is by Amit Dhariwal, a PhD student in our lab.

Ravi said...

JD
I already talked to you on Ymessenger, but for anyone else reading this blog:

"Are you planning to model the dynamics of the robot (with its controls and its associated noise), or are you mainly focusing on sensing and planning?"

I am modelling the dynamics of the robot explicitly. I find that most existing sims have shaky or non-existent support for this modelling. I am planning to implement sensing/planning on top of an explicitly laid-out motion/sensor model.
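As a rough illustration of what I mean by explicit dynamics (a toy Java sketch, not the simulator's code; the noise scales are made up): a differential-drive style pose update where the commanded velocities are corrupted by Gaussian noise before being integrated, so the same command sequence fans out over repeated runs instead of landing on a single point:

```java
import java.util.Random;

// Toy motion model: noisy velocity commands integrated into a 2D pose.
public class MotionModel {
    static final Random RNG = new Random(3);

    // pose = {x, y, theta}; v in m/s, w in rad/s, dt in seconds.
    static double[] step(double[] pose, double v, double w, double dt) {
        double vn = v + 0.05 * RNG.nextGaussian();  // assumed noise scales
        double wn = w + 0.02 * RNG.nextGaussian();
        double theta = pose[2] + wn * dt;           // turn first, then move
        return new double[] {
            pose[0] + vn * dt * Math.cos(theta),
            pose[1] + vn * dt * Math.sin(theta),
            theta
        };
    }

    public static void main(String[] args) {
        double[] p = { 0, 0, 0 };
        // Drive straight at 0.5 m/s for 1 s; the pose ends near (0.5, 0)
        // but never exactly there, because of the injected noise.
        for (int i = 0; i < 10; i++) p = step(p, 0.5, 0.0, 0.1);
        System.out.printf("pose after 1 s: (%.2f, %.2f, %.2f)%n", p[0], p[1], p[2]);
    }
}
```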

"You mentioned layering of different continuous prob. distributions. Do they represent uncertainties at various stages of the sonar sensor (eg. transmitter, reflection model, receiver, etc...)?"

Yes, but not only sensor uncertainties: also things like people moving around the world, increased sensor failure towards the extreme range of the sensors, etc.

Thanks for the links to sensor data. I am now in the middle of some compiler hacks but will get back to the simulator soon.

Regds,