A team of researchers led by Professor Tim Barfoot (UTIAS) has applied a new strategy for robots to predict the future location of dynamic obstacles, allowing them to navigate spaces without colliding with people.
The project, which is supported by Apple Machine Learning, will be presented at the International Conference on Robotics and Automation in Philadelphia at the end of May. The results from simulation have also been published in an article on arXiv.
“The principle of our work is to have a robot predict what people are going to do in the immediate future,” says Dr. Hugues Thomas (UTIAS), a postdoctoral fellow in Barfoot’s lab. “This allows the robot to anticipate the movement of people it encounters rather than react once confronted with those obstacles.”
To decide where to move, the robot makes use of Spatiotemporal Occupancy Grid Maps (SOGMs). These are 3D grid maps maintained in the robot’s processor, in which each 2D grid cell contains predicted information about the activity in that space at a specific time. The robot chooses its future actions by processing these maps through existing trajectory-planning algorithms.
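The SOGM idea can be sketched as a simple data structure: a stack of 2D occupancy grids, one per future time slice. The cell size, time step, and horizon below are illustrative assumptions, not values from the paper, and the code is a minimal sketch rather than the authors' implementation.

```python
import numpy as np

# Illustrative parameters (assumptions, not the authors' values).
CELL_SIZE = 0.1   # metres per grid cell
TIME_STEP = 0.1   # seconds per time slice
GRID_DIM = 64     # cells per grid side
HORIZON = 20      # number of future time slices (2 s ahead)

# sogm[t, x, y] holds the predicted probability that cell (x, y)
# is occupied at time t * TIME_STEP in the future.
sogm = np.zeros((HORIZON, GRID_DIM, GRID_DIM), dtype=np.float32)

def occupancy_at(sogm, t_seconds, x_metres, y_metres):
    """Look up predicted occupancy of a point at a future time."""
    t = int(t_seconds / TIME_STEP)
    x = int(x_metres / CELL_SIZE)
    y = int(y_metres / CELL_SIZE)
    return sogm[t, x, y]
```

A trajectory planner consuming such a map would reject candidate paths that pass through cells whose predicted occupancy exceeds a threshold at the matching time slice.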
Another key tool used by the team is light detection and ranging (lidar), a remote sensing technology similar to radar, except that it uses light instead of radio waves. Each ‘ping’ of the lidar creates a point stored in the robot’s memory, and previous work by the team has focused on labeling these points based on their dynamic properties. This helps the robot recognize different types of objects within its surroundings.
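One simple way to see what "labeling points based on their dynamic properties" can mean is to compare consecutive scans: a point with no nearby counterpart in the previous scan suggests the surface there has moved. The sketch below is a toy illustration of that idea, not the authors' method, and the function name, labels, and radius are assumptions.

```python
import math

def label_points(prev_scan, curr_scan, radius=0.2):
    """Toy dynamic/static labeling: a point in the current scan is
    'dynamic' if no point in the previous scan lies within `radius`
    of it, hinting that the surface it hit has moved."""
    labels = []
    for p in curr_scan:
        moved = all(math.dist(p, q) > radius for q in prev_scan)
        labels.append("dynamic" if moved else "static")
    return labels

prev = [(0.0, 0.0), (1.0, 0.0)]
curr = [(0.0, 0.0), (1.5, 0.0)]  # second point has shifted
print(label_points(prev, curr))  # ['static', 'dynamic']
```

A real system would work on dense 3D point clouds and use learned classifiers rather than a nearest-neighbour test, but the output is the same in spirit: per-point labels that let the robot treat people differently from walls.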