OpenAI video-generator Sora risks fueling propaganda and bias, experts say


(NEW YORK) — A sunbathed Dalmatian tiptoes across a windowsill, a Chinese New Year parade engulfs a city street, an archeologist digs up a chair from desert sand.

Videos posted online display these events, but none of them happened. They make up the first publicly available work created by OpenAI’s newly unveiled video-generation tool Sora.

Sora composes videos up to one minute long based on user prompts, just as ChatGPT responds to input with written responses and Dall-E offers up images.

The video generator is currently in use by a group of product testers but is not available to the public, OpenAI said in a statement on Thursday.

These products carry the potential to improve and ease video storytelling, but they could also supercharge internet misinformation and enhance government propaganda, blurring the already-faint line between real and fake content online, experts told ABC News.

AI-generated videos, meanwhile, threaten to reinforce hateful or biased perspectives picked up from the underlying training materials that make their creation possible, they added.

“The clarity of truth we thought we had with recorded photography and video is gone,” Kristian Hammond, a professor of computer science at Northwestern University who studies AI, told ABC News. “We’ve inadvertently built a world of propaganda engines.”

In response to ABC News’ request for comment, OpenAI pointed to a webpage that outlines measures taken by the company to prevent abuse of Sora.

“We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products,” the company website says. “We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model.”

The company plans to use some safety features already in place for its image generator Dall-E, the website says, including a tool that polices text prompts to ensure they do not violate rules against “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others.”

Experts who spoke to ABC News emphasized the difficulty of evaluating a demo product that has yet to be released to the general public. They sounded alarm, however, over the opportunities for misuse of the video generator and the challenges of implementing fully effective safeguards.

“Realistic images of events play into people’s assumptions about what’s going on in the real world and can be used to deceive people,” Sam Gregory, executive director of Witness, an advocacy group that aims to ensure the use of video to protect human rights, told ABC News.

The risks posed by AI-generated content have stoked wide concern in recent weeks.

Fake, sexually explicit AI-generated images of pop star Taylor Swift went viral on social media in late January, garnering millions of views. A fake robocall impersonating President Joe Biden’s voice discouraged individuals from voting in the New Hampshire primary last month.

Experts commended the steps taken by OpenAI to prohibit abuses of Sora along these lines. They warned, though, of the product’s likely capability to create deepfakes and the difficulty of preventing such videos.

“They can probably put in a filter that says, ‘Don’t generate any videos with Taylor Swift,’ but people will find ways around it,” Gary Marcus, an emeritus professor at New York University and author of the book “Rebooting AI,” told ABC News.

Sora, like other generative AI products, is trained on troves of online data, leaving it susceptible to widely reproduced biases, such as racial and gender stereotypes.

“There are biases in society and those biases will be reflected in these systems,” Hammond said.

In addition to moderating video prompts and the resulting content, OpenAI plans to implement a “detection classifier” that can identify when a video has been produced by Sora, the online statement said. The company said it will also include a widely recognized digital tag, which essentially amounts to a digital watermark.

Such precautions drew applause from experts, though they warned that videos could potentially be reproduced or altered as a means of removing the labels.

“People will be trying to get around the guardrails put in place,” Hammond said. “It’s an arms race.”

Copyright © 2024, ABC Audio. All rights reserved.