- New machine learning-based model makes driverless vehicles safer.
- It detects instances in which an AI has learned from training examples in ways that could cause dangerous errors in the real world.
Recent advances in the field of artificial intelligence have made self-driving vehicles and autonomous robots smarter. Though still in their infancy, driverless cars are becoming increasingly common and could radically transform our transportation system in the coming years.
Recently, researchers at MIT and Microsoft developed a model that can uncover ‘blind spots’ of autonomous systems with the help of human input. It identifies instances where what these systems learn (from training examples or simulations) could lead them to make mistakes in real-world environments.
The AI powering self-driving cars, for instance, is trained extensively in simulation to prepare the vehicle for nearly every scenario it might encounter on the road. However, the system sometimes makes errors in the real world: it fails to change its behavior in certain scenarios where it should.
For example, if a driverless car that has not been extensively trained is cruising down the highway and an ambulance flicks on its siren, the car may perceive the ambulance only as a big white car and may not pull over or give way to it or other emergency vehicles.
Researchers want to bridge the gap between simulation and the real world by integrating human input, helping autonomous systems better know what they don’t know.
How Does The Model Take Human Feedback?
The autonomous system is first trained in a virtual simulation, where it learns to map each situation to the best action. It is then deployed in the real world, where humans interrupt the system whenever it takes an incorrect action.
Humans can feed in data either through corrections or demonstrations. To provide corrections, a person can sit in the driver’s seat while the vehicle drives itself along a planned route. If the system takes an inappropriate action, the human can take the wheel; this sends a signal to the AI that it was acting incorrectly and shows what it should do in that particular situation.
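To make the corrections channel concrete, here is a minimal Python sketch of how a takeover could be recorded as labeled feedback. The names (`CorrectionLog`, `planned_action`, `human_action`) are illustrative assumptions, not the researchers’ actual code.

```python
from dataclasses import dataclass, field
from typing import Any, List, Tuple


@dataclass
class CorrectionLog:
    """Collects (state, planned_action, label) tuples from human takeovers."""
    records: List[Tuple[Any, Any, str]] = field(default_factory=list)

    def step(self, state, planned_action, human_action=None):
        """Record one timestep of supervised driving.

        If the human does not intervene, the planned action is implicitly
        acceptable; if they take the wheel, the planned action is labeled
        unacceptable and the human's action shows what should have been done.
        """
        if human_action is None:
            self.records.append((state, planned_action, "acceptable"))
            return planned_action   # execute the system's own action
        self.records.append((state, planned_action, "unacceptable"))
        return human_action         # defer to the human correction
```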
Reference: arXiv:1805.08966 | MIT
Alternatively, humans can train the system through demonstration, by driving the vehicle in the real world themselves. The system analyzes and compares every human action to what it would have done under the same conditions. Each mismatch, if there is any, points out an unacceptable action by the system.
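The demonstration channel can be sketched in the same spirit: replay what the person did and label any state where the system’s own choice would have differed. Again, the function and variable names here are assumptions made for illustration.

```python
def label_from_demonstration(policy, demo):
    """Compare a human demonstration to the policy's own choices.

    `demo` is a sequence of (state, human_action) pairs recorded while a
    person drives; `policy(state)` returns the action the system would have
    taken. A mismatch marks the system's action as unacceptable in that state.
    (Sketch only; a real comparison would allow for near-equivalent actions.)
    """
    labels = []
    for state, human_action in demo:
        system_action = policy(state)
        ok = system_action == human_action
        labels.append((state, system_action,
                       "acceptable" if ok else "unacceptable"))
    return labels
```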
Handling Blind Spots
Once the manual training is over, the system essentially has a list of acceptable and unacceptable actions. The goal is to detect the ambiguous situations, or blind spots, that the AI finds difficult to tell apart.
For example, the autonomous system may have cruised alongside a big vehicle several times without pulling over. However, if it does the same with an ambulance (which looks exactly the same to the AI), it receives a feedback signal marking the action as unacceptable.
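The subtlety is that this feedback has to be grouped by what the system perceives rather than by the true situation, since the ambulance and the big white car may map to the same internal representation. A hypothetical grouping step might look like this:

```python
from collections import defaultdict


def group_by_perception(records, perceive):
    """Bucket feedback by what the system *perceives*, not by ground truth.

    `perceive(state)` is the system's (possibly lossy, hashable)
    representation of a state, e.g. one that maps both "large white van" and
    "ambulance" to the same feature vector. Conflicting labels inside one
    bucket are exactly the ambiguity a blind-spot detector must resolve.
    """
    buckets = defaultdict(list)
    for state, _action, label in records:
        buckets[perceive(state)].append(label)
    return buckets
```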
To handle this type of situation, the team used a machine learning method known as the Dawid-Skene algorithm. It takes all the blind-spot labels, marked ‘acceptable’ or ‘unacceptable’, aggregates them, and uses probability calculations to detect patterns in those labels.
The algorithm then yields a single aggregated blind-spot label, along with a confidence level, for each situation. It also generates a heatmap that assigns each situation a low-to-high probability of being a blind spot.
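As a rough illustration of this aggregation step, below is a compact two-class Dawid-Skene EM written with NumPy. It assumes the grouped labels have been arranged into an items-by-sources matrix, treats each column as an independent feedback source, and returns, for each perceived situation, the posterior probability of being a blind spot; the paper’s exact formulation and notation may differ.

```python
import numpy as np


def dawid_skene_binary(label_matrix, n_iter=50):
    """Minimal two-class Dawid-Skene EM (a sketch, not the paper's code).

    `label_matrix` is an (items x sources) integer array with entries
    0 (acceptable), 1 (unacceptable), or -1 (no label from that source).
    Returns the posterior probability that each item is a blind spot.
    """
    observed = label_matrix >= 0

    # Initialize posteriors with the per-item fraction of "unacceptable" votes.
    votes = np.where(observed, label_matrix, 0).sum(axis=1)
    counts = observed.sum(axis=1).clip(min=1)
    q = votes / counts                      # P(item is a blind spot)

    for _ in range(n_iter):
        # M-step: class prior and each source's error rates per true class.
        prior = q.mean()
        says1 = np.where(observed, label_matrix, 0)
        tp = (q[:, None] * says1).sum(0) / ((q[:, None] * observed).sum(0) + 1e-9)
        fp = (((1 - q)[:, None] * says1).sum(0) /
              (((1 - q)[:, None] * observed).sum(0) + 1e-9))

        # E-step: recompute item posteriors from the source reliabilities.
        log_p1 = np.log(prior + 1e-9)
        log_p0 = np.log(1 - prior + 1e-9)
        ll1 = np.where(observed, np.where(label_matrix == 1,
                       np.log(tp + 1e-9), np.log(1 - tp + 1e-9)), 0).sum(1)
        ll0 = np.where(observed, np.where(label_matrix == 1,
                       np.log(fp + 1e-9), np.log(1 - fp + 1e-9)), 0).sum(1)
        delta = np.clip((log_p0 + ll0) - (log_p1 + ll1), -30, 30)
        q = 1.0 / (1.0 + np.exp(delta))

    return q
```

The heatmap described above would, in effect, visualize these per-situation posteriors.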
Read: New Self-Driving Vehicle Algorithm Can Change Lane Aggressively
In the real world, if the model maps a situation to a blind spot with high probability, it can ask a human for the appropriate action, enabling safer execution. This type of model could also help autonomous robots predict when they might take inappropriate actions in novel conditions.
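At deployment time, the aggregated probabilities can gate when the system acts on its own. Here is a minimal sketch, assuming a dictionary of blind-spot probabilities and some channel for querying a human; the names and the 0.8 threshold are illustrative, not values from the paper.

```python
def safe_act(policy, blind_spot_prob, state, perceive, ask_human, threshold=0.8):
    """Run the policy, but defer to a human in likely blind spots.

    `blind_spot_prob` maps a perceived situation to the aggregated probability
    produced above; `ask_human` is whatever query channel is available
    (remote operator, safety driver).
    """
    if blind_spot_prob.get(perceive(state), 0.0) > threshold:
        return ask_human(state)   # uncertain region: request a safe action
    return policy(state)          # trusted region: act autonomously
```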