On Friday, November 2nd, Newhouse hosted a panel called Artificial Intelligence & Journalism: Consequences and Opportunities in Emerging Tech. The panelists discussed how artificial intelligence (AI) is moving from the technology industry into media organizations, as seen in the launch of magazine chatbots and daily news organizations' use of AI to automate articles.

As an Information Management and Technology major in the iSchool, this topic not only piqued my interest but also exposed me to research, researchers, and systems that I am excited to explore.

What is a Techno-Chauvinist?

In her presentation titled Artificial Unintelligence, Meredith Broussard highlighted the effects of homogeneity in the tech industry, which is made up mostly of white men. This majority determines society's ideas about technology and embeds its biases into the technology it builds.

Although homogeneity is a problem across countless industries, she referenced a very important term that I had never heard before in discussions of autonomous technology versus human-in-the-loop systems. She mentioned that white men in the tech industry are often “techno-chauvinists.” A techno-chauvinist is a person who thinks an autonomous system is better than a human-in-the-loop system because there is “less” error.

The Tradeoff Between Accuracy and Interpretability

The iSchool’s own Daniel Acuna presented on Biases in Artificial Intelligence Models. Acuna said that accuracy is everything for customers, companies, and the government. The accuracy of systems has increased dramatically, but at the cost of interpretability.

I didn’t understand how interpretability relates to AI until Acuna explained the fundamental tradeoff between accuracy and interpretability. In other words, the more accurate a computer system becomes, the harder it is for a human to understand or interpret. And when these systems make mistakes, it is hard to understand why.
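To make that tradeoff concrete for myself, I sketched a quick comparison in Python with scikit-learn (my own illustration, not something from Acuna's talk): a two-level decision tree whose entire logic can be printed and read, versus a random forest that typically scores higher but offers no single rule you can inspect.

```python
# A minimal sketch of the accuracy/interpretability tradeoff (my own example,
# not from the presentation): a shallow decision tree we can read end to end
# versus a large random forest that is usually more accurate but opaque.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: every prediction can be traced through a handful of rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print("tree accuracy:  ", tree.score(X_test, y_test))
print(export_text(tree))  # the entire model, readable as if/else rules

# More accurate, less interpretable: 300 trees voting, no single rule to read.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("forest accuracy:", forest.score(X_test, y_test))
# When the forest misclassifies someone, there is no short explanation of why.
```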

Techno-chauvinists prefer less error and no human in the loop, but this means that systems will be harder to fix when they break. Even the most accurate systems show gender and racial bias in tasks such as occupation classification or simple detection.

Artificial Intelligence IS Political.

One of the most important concepts I learned came from Rediet Abebe’s presentation: AI is deeply political. It is political in the sense that it is shaped by the question you want to answer and the community you want to help. I was so excited when she highlighted systems and projects created by people of color in AI.

Ubenwa is a Nigerian startup founded by Charles Onu that has developed a machine learning system to detect birth asphyxia earlier. The founders say the AI solution has achieved over 95% prediction accuracy in trials with nearly 1,400 pre-recorded baby cries.

GenderShades.org, a project by Joy Buolamwini, evaluated three major commercial facial analysis systems, using a skin-type scale borrowed from dermatology to measure bias by gender and skin shade. Bias in this context is defined as practical differences in gender classification error rates between groups.

Her results were not shocking, but they were disheartening. Across intersectional subgroups – darker males, darker females, lighter males, and lighter females – she found that all three systems performed worst on darker females.
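To spell that definition out for myself, here is a small Python sketch (with made-up numbers, not the actual Gender Shades data) of how error rates per intersectional subgroup, and the gap between them, might be computed:

```python
# A hedged sketch of the bias metric described above (illustrative numbers
# only, not the Gender Shades results): compare gender classification error
# rates across intersectional subgroups and report the largest gap.
from collections import defaultdict

# Hypothetical predictions: (subgroup, true_gender, predicted_gender)
results = [
    ("darker_female",  "F", "M"), ("darker_female",  "F", "F"),
    ("darker_male",    "M", "M"), ("darker_male",    "M", "M"),
    ("lighter_female", "F", "F"), ("lighter_female", "F", "F"),
    ("lighter_male",   "M", "M"), ("lighter_male",   "M", "M"),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    totals[group] += 1
    errors[group] += truth != pred  # booleans count as 0 or 1

rates = {g: errors[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    print(f"{group:15s} error rate: {rate:.0%}")

# "Bias" here is the practical difference in error rates between subgroups.
print(f"largest gap: {max(rates.values()) - min(rates.values()):.0%}")
```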

Solving Bias in AI

Overall, I learned that while techno-chauvinists believe that algorithms don’t have bias, the literature and research show that these algorithms are biased against women and people of color.

This shed light on three key points: First, Abebe calls out the lack of diversity within the AI community. Second, she notes that issues of greed and incentive misalignment remain within the industry. Finally, Broussard issues a call to action for computer scientists to collaborate and spend more time with social scientists in an effort to improve AI.