South SFV Today

Friday, January 10, 2025

Stanford outlines principles for responsible use of artificial intelligence

John Taylor, Professor of Economics at Stanford University and developer of the "Taylor Rule" for setting interest rates | Stanford University

Stanford University has released a report from its AI at Stanford Advisory Committee, which emphasizes the need to balance innovation with responsibility in the use of artificial intelligence (AI) across education, research, and administration. The report outlines guiding principles aimed at encouraging experimentation while addressing challenges such as plagiarism and questions of ethical use.

Provost Jenny Martinez stated, "The growth of AI technologies has huge implications for higher education, from the classroom to the research lab." She highlighted the importance of assessing current AI usage at Stanford and identifying any policy gaps. The advisory committee was tasked with evaluating AI's role in various university functions and recommending responsible practices.

Committee Chair Russ Altman emphasized the importance of encouraging safe experimentation while establishing areas where caution is necessary. He stated, "We wanted to first encourage experimentation in safe spaces to learn what it can do and how it might help us pursue our mission."

The report also stresses that existing laws and policies should apply to AI use. Altman warned against "AI exceptionalism," where people assume traditional regulations don't apply to AI. He advocated for an "AI golden rule": using AI with others as one would want it used with them.

In education, the committee examined how students' adoption of AI tools like ChatGPT affects academic integrity. Dan Schwartz, dean of the Graduate School of Education (GSE), noted that faculty may lack experience with AI compared to students. He suggested frameworks tailored to classroom needs could help clarify permissible uses.

The GSE has established the AI Tinkery within the Stanford Accelerator for Learning to explore educational applications of AI. In research, concerns include authorship credit for AI and potential copyright issues in large language model outputs.

The report identifies areas needing more guidance on administrative uses of AI, such as hiring and admissions processes. It also recommends expanding computing resources to maintain Stanford's leadership in productive technology use.

Altman acknowledged that not all campus uses of AI have been identified but expressed hope that the guiding principles will aid in evaluating new opportunities: "In the end, these principles are probably more useful, and general purpose."

Stanford continues integrating AI into its operations through resources like University IT's Stanford AI Playground and support from initiatives such as the Stanford Institute for Human-Centered Artificial Intelligence.
