Stanford researchers focus on fairness and trustworthiness in artificial intelligence development

Jonathan Levin, President — Stanford University

Stanford University researchers are working to design artificial intelligence systems that are not only intelligent but also fair and trustworthy. As AI becomes more widely used, concerns about bias, reliability, and safety have come to the forefront.

Mykel Kochenderfer, associate professor of aeronautics and astronautics at Stanford Engineering and director of the Stanford Intelligent Systems Laboratory, highlighted the risks posed by rare but possible “edge cases” in AI applications. He said, “For many applications of AI, whether it is for aerospace, medicine, or financial systems, there are unlikely but possible ‘edge cases.’ If these edge cases are not anticipated, the system could fail, and any one failure can be catastrophic. A lack of trust in these AI systems often stands in the way of their deployment.”

Stanford researchers aim to go beyond identifying such issues. Their goal is to create AI that improves reliability and fairness compared to traditional approaches.

Sanmi Koyejo, assistant professor of computer science at Stanford Engineering and leader of the Stanford Trustworthy AI Research Lab, emphasized a scientific approach: “AI’s capabilities aren’t magic, they’re measurable phenomena that we can study scientifically. Understanding this helps us make more informed decisions about AI deployment rather than being swayed by hype or fear. Scientific measurement, not speculation, should guide how we integrate AI into society.”

Daniel E. Ho, William Benjamin Scott and Luna M. Scott Professor of Law at Stanford Law School and director of Stanford's RegLab, recounted his personal experience with discriminatory property deed language: "When we purchased our home, I had to sign a document that said that the 'property shall not be used or occupied by any person of African, Japanese or Chinese or any Mongolian descent.'" Such racially restrictive covenants have been unenforceable since 1948, and California law requires counties to remove the language from property deeds, prompting efforts like those led by RegLab. Ho explained how the lab partnered with Santa Clara County to address the issue: "For Santa Clara County, this meant revising about 84 million pages of records dating back to the 1800s. We developed an AI system to enable the county to spot, map, and redact deed records, saving some 86,500 person hours."

RegLab has also worked on reducing procedural inefficiencies in government using AI tools such as its Statutory Research Assistant (STARA) system. Ho noted bipartisan agreement on government inefficiency: “One of the odd areas of consensus right now is that government processes aren’t working terribly well,” he said. San Francisco City Attorney David Chiu introduced a resolution based on RegLab’s work aimed at cutting obsolete requirements.

“Implemented responsibly, I think AI has tremendous potential to improve government programs and access to justice,” said Ho.

Koyejo’s lab addresses medical bias by developing algorithms ensuring diagnostic tools work across diverse populations: “My lab’s algorithms help ensure that AI systems diagnosing diseases from chest X-rays work equally well for patients of all racial backgrounds, preventing health care disparities,” he said. The team has also created techniques allowing AI systems to forget harmful training data without losing performance.

Beyond its domestic health care research, Koyejo's group has also collaborated internationally: graduate student Sang Truong worked with Vietnamese researchers to fine-tune large language models for Vietnamese speakers, a step toward making the technology accessible to underserved communities.

“These applications matter because AI failures can lead to harm and exclude entire populations from technological benefits,” Koyejo stated.

Kochenderfer's research focuses on safety-critical decision-making in domains such as air traffic control and automated vehicles, where real-world failures could be severe. He pointed to the challenges posed by efficiency demands and sensor limitations: "Ensuring the safety of AI systems that interact with the real world is harder than many people realize," Kochenderfer said. His ongoing projects include a forthcoming book, *Algorithms for Validation* (to be available online), and a new course featuring free lectures on validating safety-critical systems.

“I am very excited to understand to what extent language models can help monitor safety critical systems,” Kochenderfer added. “Language models seem to be able to encode a wealth of commonsense knowledge. If so, they could enhance safety when automated subsystems…fail in unexpected ways.”

Research continues into how certain communities may be excluded from current generations of AI tools, raising concerns about missed opportunities as well as risks stemming from bias or misinformation.

Koyejo summarized his view on responsible progress: “Perfect AI is neither possible nor the right goal… Instead we should aim for AI systems that are worthy of the trust placed in them by society.”

Ho is also a professor of political science in the Stanford School of Humanities and Sciences, a professor of computer science (by courtesy) in the School of Engineering, and a senior fellow at both the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the Stanford Institute for Economic Policy Research (SIEPR). Kochenderfer is also an associate professor of computer science (by courtesy), a senior fellow at HAI, and a member of Stanford Bio-X, the Wu Tsai Human Performance Alliance, and the Wu Tsai Neurosciences Institute. Koyejo is affiliated with HAI and Bio-X.



