SILS researcher rethinks AI responsibility
By challenging bias and teaching ethics, Francesca Tripodi puts people at the heart of artificial intelligence.

As a socio-technologist, Francesca Tripodi studies how society and technology shape one another.
Tripodi is an associate professor at the UNC School of Information and Library Science and lead faculty at the UNC School of Data Science and Society. She studies how artificial intelligence is reshaping search engines like Google — and how it can amplify the biases already present in the data used to train these systems. She also teaches a master’s-level ethics course at the data science school. Through her research and teaching, she unpacks how AI is changing how we access and understand information.
AI is rapidly expanding, with the market projected to reach $1.34 trillion by 2030. It’s already being used in self-driving cars, surgical tools and health apps. And while AI offers benefits like increased efficiency and improved decision-making, it also raises serious concerns about energy use, data privacy, algorithmic bias and workforce disruption.
UNC Research Stories sat down with Tripodi to discuss these issues and why ethics must be at the heart of AI development.
You teach a master’s-level course on AI ethics. How do ethical considerations shape the way we collect and use data in AI systems?
Ethics are messy. You can’t just “do” ethics; you have to keep incorporating them. Plus, ethical frameworks are often at odds with one another. In AI and data science, there’s this idea of creating unbiased automated decision-making. But I try to teach students that everything, from how you define the problem to the data you use, is shaped by human choices…
So what concerns me is how we’re getting the data. Are we getting access to data from places with more lax consent procedures? Are we creating agreements with other countries where citizens don’t have the same data rights? What are the larger societal consequences?
What are the pros and cons of using AI tools in everyday life?
I think all tools have the potential to help or harm. Take ChatGPT. I used to make camping lists and always forgot something. ChatGPT generated a checklist in seconds and saved me hours. AI can save time and increase clarity.
But let’s look at other applications. For example, there are new e-triage tools being used to determine when someone sees a doctor. In theory, they reduce bias: patients might otherwise be seen out of order because of how they look or act, or because of underlying social biases. But those same biases may already be embedded in the AI’s training data. And nurses, doctors and patients can’t really override the algorithm if their experience or “gut instinct” tells them otherwise.
What worries me is that we’re investing heavily in machines and not in people. These systems are marketed as neutral, but they’re really about cutting costs — and what’s being cut is investment in human beings. For every task we automate, could we instead invest in human infrastructure?
What role should private companies, governments and universities play in setting the ethical boundaries for AI development and deployment?
The corporate development of AI is key. Companies have a responsibility to approach it with integrity and caution — not just rush to monetize it without understanding the long-term impacts.
Governments also have a role to play. It’s disappointing that we still lack real legislation around data privacy and governance. The federal budget reconciliation bill is especially concerning: it removes states’ ability to regulate data, which runs counter to the U.S. federal structure.
At the education level, we need to teach students how to use these tools responsibly, think ethically and help improve them.