Researchers at the University of Michigan are studying various aspects of human motion to teach self-driving cars how to recognise and predict pedestrian movements with greater precision.
Data on the gait, body symmetry and foot placement of humans was collected using cameras, LiDAR and GPS, and then recreated in a 3D computer simulation. Using this, the researchers have created a ‘biomechanically inspired recurrent neural network’ that catalogues human movements. The network is used to predict poses and future locations for one or several pedestrians up to about 50 yards from a vehicle.
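The article gives no implementation details, but as a rough sketch of the kind of recurrent pose predictor described, a model that consumes a sequence of observed 3D poses and outputs the next pose plus a future location might look like the following (the joint count, GRU cell and layer sizes are assumptions, not the researchers' published architecture):

```python
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    """Minimal recurrent sketch: past 3D poses in, next pose and location out.
    Sizes and layer choices are illustrative, not the published architecture."""

    def __init__(self, num_joints=17, hidden_size=128):
        super().__init__()
        self.input_size = num_joints * 3                   # x, y, z per joint
        self.rnn = nn.GRU(self.input_size, hidden_size, batch_first=True)
        self.pose_head = nn.Linear(hidden_size, self.input_size)  # next pose
        self.loc_head = nn.Linear(hidden_size, 3)                 # next root position

    def forward(self, pose_seq):
        # pose_seq: (batch, time, num_joints * 3) of observed frames
        _, hidden = self.rnn(pose_seq)     # final hidden state summarises the motion
        hidden = hidden[-1]                # (batch, hidden_size)
        return self.pose_head(hidden), self.loc_head(hidden)

# Example: predict the next frame for 4 pedestrians observed over 30 frames
model = PosePredictor()
observed = torch.randn(4, 30, 17 * 3)
next_pose, next_location = model(observed)
print(next_pose.shape, next_location.shape)   # torch.Size([4, 51]) torch.Size([4, 3])
```

The researchers describe their network as biomechanically inspired, so the real system would encode constraints such as gait symmetry rather than relying on the plain recurrent cell used in this sketch.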
According to Ram Vasudevan, assistant professor of mechanical engineering at the University of Michigan, previous research in this area relied only on still images and was not really concerned with how people move in three dimensions.
However, if these vehicles are going to operate and interact in the real world, it is important to ensure that the predicted position of a pedestrian does not coincide with where the vehicle is going next, he added.
For vehicles to have the required predictive power, the network needs to capture the smallest details of human movement, including the pace of a person's gait, the mirror symmetry of the limbs, and the way in which foot placement affects stability during walking.
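The article does not say how these cues are measured. Purely as a loose illustration, simple proxies for gait pace and limb symmetry could be computed from a tracked 3D pose sequence along the following lines (the joint indices, vertical-axis convention and peak-counting heuristic are all assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def gait_cues(poses, fps=30.0, left_ankle=15, right_ankle=16):
    """Rough proxies for gait pace and limb symmetry from a pose track.

    poses: array of shape (time, joints, 3). The joint indices and the
    assumption that axis 2 is vertical are placeholders, not the study's.
    """
    left_z = poses[:, left_ankle, 2]
    right_z = poses[:, right_ankle, 2]

    # Gait pace: count local minima of ankle height (roughly one per foot
    # strike of that leg) and convert to strikes per second.
    strikes, _ = find_peaks(-left_z)
    pace = len(strikes) * fps / len(poses)

    # Mirror symmetry: correlation of the two ankles' vertical trajectories,
    # a crude stand-in for comparing left and right limb motion.
    symmetry = float(np.corrcoef(left_z, right_z)[0, 1])
    return pace, symmetry

# Synthetic walking-like signal, purely to show the expected shapes
t = np.arange(90) / 30.0
demo = np.zeros((90, 17, 3))
demo[:, 15, 2] = 0.1 * np.abs(np.sin(np.pi * t))                # left ankle height
demo[:, 16, 2] = 0.1 * np.abs(np.sin(np.pi * t + np.pi / 2))    # right ankle, out of phase
print(gait_cues(demo))
```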
Matthew Johnson-Roberson, associate professor in the naval architecture and marine engineering department, explains that the system is being trained to recognise motion and to predict where a pedestrian's body will be at the next step.
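The article describes this training target only at a high level. A minimal sketch of such a next-step objective, under the same assumed pose representation as above (the window length, optimiser and loss are illustrative guesses, not details reported by the researchers), might look like this:

```python
import torch
import torch.nn as nn

# Illustrative next-step objective: from the frames observed so far, regress
# the pose in the frame that follows. All hyperparameters are assumptions.
num_joints = 17
rnn = nn.GRU(num_joints * 3, 128, batch_first=True)
head = nn.Linear(128, num_joints * 3)
optimiser = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

def training_step(observed, next_frame):
    # observed: (batch, time, joints * 3); next_frame: (batch, joints * 3)
    _, hidden = rnn(observed)
    prediction = head(hidden[-1])        # predicted pose for the following frame
    loss = loss_fn(prediction, next_frame)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Dummy tensors, purely to show the shapes the step expects
print(training_step(torch.randn(8, 30, num_joints * 3),
                    torch.randn(8, num_joints * 3)))
```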
For instance, a pedestrian playing with their phone is distracted. How they are moving and where they are looking reveal a great deal about their level of attentiveness, and that in turn indicates what they are capable of doing next.
The new system improves a driverless vehicle's ability to recognise what is most likely to happen next.
Source: University of Michigan