Artificial intelligence can do many things these days: suggest videos you’d like to watch on YouTube, scour the internet for helpful websites based on your searches and even predict what kinds of music you’d probably love. 

Lizhen Liang was struck by artificial intelligence’s ability to analyze his interests and recommend things he’d like, so he decided to harness AI to study problems that interest researchers in fields such as social science. Currently a doctoral candidate at Syracuse University’s School of Information Studies, Liang studies information science and technology, focusing on computational social science. He plans to graduate in 2025. 

Even before he studied AI formally, Liang was impressed by how companies could use AI, machine learning and big data to recommend wonderful music he wished he had known about. That discovery inspired him to use computational methods to gain better insight into society and make the world a better place. His goal is to detect and reduce inequality in science.

“Using scientific methods such as natural language processing, statistical analysis, and network science, I strive to understand the intricate relationships between papers, researchers, institutions, and groundbreaking ideas,” he said. 

For his dissertation, he is studying the differences between how academia and companies conduct artificial intelligence research. He says recent advances in AI have had a significant impact on society beyond scientific research communities. 

His hypothesis that inequality exists in AI research, especially between industry-led and university-led teams, grew out of his previous research investigating social bias hidden in AI models.

In a 2020 TEDx Talk at Syracuse, Liang explained how he identifies social bias, such as gender-related bias, in AI models. 

“Governments, corporations and other organizations have been using AI to get available information, actionable insight and applications that can change our life,” he said. “We have autopilot cars, auto transcribers that grant more individuals access to information and AI models that can be the world champion of chess. It can change how we interact with and work with other people.”
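To give readers a sense of what bias detection in AI models can look like, here is a minimal, purely illustrative sketch (not Liang’s actual method) of one common approach researchers use: measuring whether a word’s embedding sits closer to masculine or feminine terms via cosine similarity. The tiny hand-made vectors below are invented for demonstration; real studies use embeddings trained on large text corpora, such as word2vec or GloVe.

```python
import numpy as np

# Toy word vectors, invented for illustration only.
# Real analyses load embeddings trained on large corpora.
vectors = {
    "he":       np.array([0.9, 0.1, 0.0]),
    "she":      np.array([0.1, 0.9, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def gender_bias(word):
    """Positive score: closer to 'he'; negative: closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

print(gender_bias("engineer"))  # positive in this toy data
print(gender_bias("nurse"))     # negative in this toy data
```

If a model trained on real-world text consistently places occupation words nearer to one gendered term than the other, that asymmetry is one measurable signal of the kind of hidden social bias Liang describes.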

Large companies such as Google, Meta and Microsoft have invested heavily in AI research, Liang says, leaving public-sector researchers and resource-limited teams unable to develop, reproduce and critique those research contributions.

“The absence of public participation in the development of these products could make conversations in AI safety and ethics challenging, obscuring the vision that AI could be beneficial to humankind and society,” he said. 

Liang’s study is investigating how industry dominance in the field of AI is driving out resource-limited research teams and whether the field’s trends are hindering those teams from working on a variety of research topics. 

“This study is meaningful to me because, as a researcher from academia, I am concerned about what industry could bring to the community and how that might impact our life as researchers,” he said.