Originally published on WilliamNakulski.org
The recent Neural Information Processing Systems conference in Long Beach, California, drew more than 8,000 attendees. Amid discussions of new technologies and emerging trends, the topic of artificial intelligence (AI) struck a more somber note.
The Moral Issues Surrounding AI Tech
Kate Crawford delivered the keynote speech, drawing on her experience as a Microsoft researcher, and her chosen topic was the direction in which AI technology is headed. She noted that AI has already had unexpected and negative impacts on people's lives, whether accidental or intentional, and argued that researchers have an obligation to address this growing problem.
By way of example, Crawford pointed to a 2015 incident in which Google's photo service labeled Black people as gorillas. Since then, incidents of AI systems learning and applying stereotypes have become more frequent. These failures are especially troubling given that organizations in sectors from finance to criminal justice have started to use AI technology.
Ms. Crawford, who also co-founded the AI Now Institute at NYU, is using her resources to examine more deeply how AI interacts with society.
Broader Concerns Over What AI Learns
While artificial intelligence is by its very nature intended to learn and adapt, researchers from Cornell University and UC Berkeley also raised alarms. They noted that AI systems will infer race and gender even when those attributes are deliberately withheld from the system, extrapolating a person's ethnicity and economic status from proxy variables such as residence. This suggests that simply removing sensitive fields from the data does not make the analysis neutral.
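The proxy effect the researchers describe can be illustrated with a minimal sketch. The data, ZIP codes, and correlation strength below are entirely hypothetical, invented for illustration: a model that never sees a protected attribute can still recover it when another feature, such as residence, correlates with it.

```python
import random

random.seed(0)

# Hypothetical synthetic data: each record carries a ZIP code and a hidden
# group label. The label is never given to the "model"; it only generates
# the ZIP code, which correlates with the group about 90% of the time.
def make_record():
    group = random.choice(["A", "B"])
    correlated = random.random() < 0.9
    zip_code = "90001" if (group == "A") == correlated else "90002"
    return {"zip": zip_code, "group_hidden": group}

records = [make_record() for _ in range(1000)]

# A trivial rule that looks only at residence, never at the group itself:
def infer_group(zip_code):
    return "A" if zip_code == "90001" else "B"

# How often residence alone recovers the withheld attribute:
accuracy = sum(
    infer_group(r["zip"]) == r["group_hidden"] for r in records
) / len(records)
print(f"hidden group recovered from ZIP code alone: {accuracy:.0%}")
```

Because the proxy carries most of the signal, the recovery rate tracks the correlation strength chosen above: excluding a sensitive column does not exclude its influence.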
Professionals in the field of AI research admit that the central problem is that AI has become the new black box: they know that it works, but they cannot explain exactly how it reaches its conclusions. Some suggest the key lies in who builds these systems, reasoning that a more diverse group of programmers may produce AI that reflects a wider range of perspectives.
In essence, the perspectives of the developers, whether deliberate or subconscious, shape how an AI system views society. And given the learning capabilities of these systems, everyone in society and in government bears some responsibility for what AI learns about us. In that sense, AI is only mirroring our own shortcomings.