Morehshin Allahyari, an assistant professor of art and art history in Stanford University's School of Humanities and Sciences, uses her work to highlight the cultural biases embedded in artificial intelligence tools. Allahyari specializes in creating art with technology while critiquing the very systems she employs.
In a recent interview, Allahyari discussed her project “Moon-faced,” which explores concepts of gender and beauty from Iran’s Qajar dynasty. She explained: “I did a project called Moon-faced that was informed by the history of gender, gender orientation, and the concept of beauty in Iran during the Qajar dynasty, which existed 350 years ago.
When you look at portraits from this time, you cannot really tell if the subject is a woman or a man. The notion of beauty was not what we have now. For instance, women who had mustaches or unibrows and men who had thinner waists and looked more feminine were considered more beautiful. That shifted toward the end of the 19th century, when Iran was going through Westernization, which basically ended this queer visual culture.”
Allahyari attempted to train AI systems to recreate these historical portraits but encountered limitations: “First, the machine didn’t understand the whole Qajar dynasty, which was a very important era within Iranian history and the Middle East in general. It wouldn’t create material from that time. That has changed now somewhat with the newer programs. But then, the other problem was the word queer. When I used that word, it would always create images with the rainbow – which obviously is not from that dynasty.”
She described her process as collaborative: “It was a bit of collaboration with the machine. I had to find ways to outsmart this tool, to find shortcuts or other ways to get to what I wanted. It’s always about understanding what kind of digital libraries these tools are pulling information from. I often talk in my classes about introducing your own materials, creating a type of ‘decentralized’ library, one that is not from the dominant culture. This is one of the examples where I was trying to introduce other material to AI.”
The resulting images reflect both restoration and imperfection: “If you view the images in this work, you can see that they look restored; the image is not a crystal-clear image that you often get with AI now. I like this notion that things are not perfect. You also get to see how the machine is thinking through something, and that imperfection is part of that process.”
In her Stanford teaching on video and AI technologies, Allahyari emphasizes critical engagement: “In my class, there are two sections. One is to study an AI tool like Midjourney… There’s no technology that is neutral… So all injustices… are reflected within these technologies.” She encourages students to build their own archives for use with AI tools as a way to challenge dominant narratives.
Allahyari believes artists can play an important role in shaping future developments in artificial intelligence but notes challenges remain: “There have been some attempts with things like artists-in-residence… but we’re not sitting in meetings where tools actually get created.” She highlighted concerns about inclusivity efforts being co-opted for surveillance purposes.
“My hope as an educator is to get younger students…to really think about these issues,” she said.
This article first appeared courtesy of the Stanford School of Humanities and Sciences.

