Hollywood director Valerie Veatch tried OpenAI’s new Sora video-generation tool and discovered something troubling. The AI seemed to hold strong biases about what people should look like, producing videos that echoed eugenics-era ideas about human genetics.
Veatch joined online communities where artists share their AI-generated videos, expecting to explore creative possibilities. Instead, she found patterns that left her uneasy about how these tools actually work.
AI’s Beauty Standards Problem
Sora and similar AI video tools learn from massive amounts of existing content to create new videos. But this means they also absorb whatever biases existed in their training material. When Veatch experimented with the tool, she noticed it consistently generated people who fit narrow beauty standards and narrow demographic patterns.
The director realized these AI systems aren’t neutral creative tools. They’re trained on decades of media that already contained biases about race, gender, and physical appearance. Now these same biases are being amplified and automated through AI.
This connects to a darker historical pattern. The same ideas about genetic superiority that drove eugenics movements are now embedded in AI systems that millions of people use daily. The technology may be new, but the underlying assumptions about human worth aren’t.
What Creators Are Doing Next
Some artists are pushing back by deliberately creating content that challenges these AI biases. Others are calling for more diverse training data and better oversight of AI development. As these tools become mainstream in entertainment and social media, the stakes for getting this right keep getting higher.
The creative community is now grappling with whether to embrace these powerful new tools or demand they be fixed first.