- Self-driving cars are 5% less accurate in detecting dark-skinned people.
- This happens because most object-detection algorithms have mostly been trained on datasets containing images of light-skinned people.
Machine learning models have found homes throughout our everyday lives. The field of autonomous driving, in particular, has gone from ‘may be possible’ to ‘now commercially available’ in the last decade.
However, these advances in automated systems have raised a lot of concerns about self-driving cars in recent years, and the list of concerns just got longer. Along with worrying about their safety and their ability to handle obstacles on the road, we now also have to ask whether self-driving vehicles pose a greater risk to people of color.
Now, researchers at the Georgia Institute of Technology have conducted a study concluding that algorithms used in autonomous driving systems are 5% less accurate at detecting dark-skinned pedestrians.
Higher Error Rates For Certain Demographic Groups Than Others
The team began by investigating the accuracy of state-of-the-art object-detection models that are mostly used in autonomous vehicles. They wanted to find out how exactly these models detect people from different demographic groups.
They analyzed a massive dataset containing pictures of pedestrians and split the people based on their skin tones. They then looked at how often such models precisely identified the presence of people in the dark-skinned group as well as people in the light-skinned group.
Reference: arXiv:1902.11097 | Georgia Institute of Technology
The researchers discovered that these models were, on average, 5% less accurate at detecting people in the dark-skinned group. The disparity persisted even after controlling for crucial variables such as partial occlusion of pedestrians and the time of day in the images.
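The per-group comparison described above can be sketched in a few lines of Python. Everything here is illustrative: the group labels, the toy records, and the helper function are assumptions for demonstration, not the study's actual dataset or evaluation pipeline.

```python
# Minimal sketch: compare pedestrian-detection recall across skin-tone groups.
# Each record is (group, detected): one ground-truth pedestrian and whether
# the detector found them. The data below is hypothetical.

def recall_by_group(records):
    """Return the fraction of pedestrians detected, per group."""
    totals, hits = {}, {}
    for group, detected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical annotated detections, one entry per ground-truth pedestrian.
records = [
    ("light", True), ("light", True), ("light", True), ("light", False),
    ("dark", True), ("dark", True), ("dark", False), ("dark", False),
]
rates = recall_by_group(records)
gap = rates["light"] - rates["dark"]  # positive gap = worse on dark group
```

To approximate the study's controls, the same comparison would be repeated within strata (e.g. occluded vs. non-occluded pedestrians, daytime vs. nighttime images) and checked for persistence.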
The study only considers models used for research purposes, which are trained on publicly available datasets; it did not analyze any model actually deployed in commercial autonomous vehicles. But that doesn't mean the findings are without value: studies like these provide strong insight into real flaws and risks.
Reasons Behind Biased/Racist Algorithms
This is not the first time someone has published a report on biased algorithms. Last year, a study found that three face-recognition systems developed by tech giants (Microsoft, IBM, and Megvii) were more likely to misidentify the gender of dark-skinned people than of light-skinned people.
Image credit: Iyad Rahwan
Since artificial intelligence models — especially machine learning and deep learning algorithms — learn from the training datasets they are fed, if you do not provide enough variety of data, these models will not work accurately when deployed in the real world.
The same goes for self-driving cars: the object-detection algorithms were mostly trained on datasets dominated by images of light-skinned people, and training placed little weight on the underrepresented examples of dark-skinned pedestrians.
The researchers believe these models can be improved by including racially diverse examples in the training data and by placing more weight on underrepresented examples during training.
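One common way to place more weight on underrepresented examples is inverse-frequency sample weighting, sketched below with NumPy. This is a standard rebalancing technique offered as an illustration, not the specific remedy the researchers implemented; the group labels are toy data.

```python
import numpy as np

# Sketch of inverse-frequency sample weighting: examples from the
# underrepresented group get a proportionally larger loss weight, so the
# model cannot minimize the training loss by fitting only the majority group.

def inverse_frequency_weights(groups):
    """Weight each sample by n_total / (n_groups * n_in_its_group)."""
    groups = np.asarray(groups)
    uniq, counts = np.unique(groups, return_counts=True)
    freq = dict(zip(uniq.tolist(), counts.tolist()))
    n, k = len(groups), len(uniq)
    return np.array([n / (k * freq[g]) for g in groups])

groups = ["light"] * 8 + ["dark"] * 2   # imbalanced toy labels
w = inverse_frequency_weights(groups)
# Each group now contributes equal total weight to the loss:
# 8 * 0.625 = 5.0 for "light", 2 * 2.5 = 5.0 for "dark".
```

These weights would then multiply each example's term in the detector's training loss, so gradient updates no longer favor the majority group.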