ChatGPT might be helping you with your homework, but artificial intelligence (AI) as a whole is progressing far too fast for legislation to keep up, according to US lawmakers. A new AI video tool named Sora, created by OpenAI (the same company behind ChatGPT), stands as yet another example of an industry evolving beyond our ability to regulate it.
Sora can create videos up to 60 seconds long from text prompts, or from prompts that combine text and images. Unlike other video generators, Sora is capable of producing complex scenes with visual and character continuity. It can also include multiple characters within a scene and depict them conveying deep emotions. Its detail and fluidity are impressive but also troubling to many.
For one, Sora presents a threat to many digital artists and filmmakers, who fear that Sora (and the similar technologies sure to follow) will put them out of a job. Further, designers worry that originality will cease to exist in the creative industries. Not only are those who create films and other visual arts at risk; those employed to animate and illustrate such works are also vulnerable.
Beyond the field of visual arts, others are disturbed by the likelihood of harmful ramifications of AI technology, including AI-driven deepfakes. Deepfakes are artificial images or videos “generated by a special kind of machine learning called ‘deep’ learning.” This technology can create synthetic media that is manipulated to appear incredibly convincing (for example, replacing one person’s likeness with another’s). Deepfakes are used in many acts of fraud, such as fake election campaigns, and in pornography, victimizing countless people, including celebrities. As the technology progresses, it is becoming increasingly difficult to tell real videos from deepfakes. Many worry that Sora will only amplify these kinds of counterfeit media, especially given how detailed its output can be.
For now, Sora is only available to “red teamers” (professional hackers and cybersecurity experts tasked with identifying potential areas of exploitation in the program) and a select few visual artists, designers, and filmmakers. In addition, OpenAI plans to embed “identifying metadata” (essentially a watermark) in videos that Sora produces once it is publicly released. The company will also implement safeguards to reject user prompts that include sexual content, extreme violence, or celebrity likenesses. Despite these safety measures, it is clear that AI is progressing much faster than our ability to pass legislation that protects citizens and companies. Only twelve states so far have implemented legislation ensuring that companies utilizing AI comply with industry rules. Compared to how quickly AI companies are rolling out new technologies like Sora, the bureaucracy of state and federal institutions is far too slow. Unless lawmakers move faster to enact new AI regulations, AI will continue to outpace us in many dimensions of our lives.
The Student Movement is the official student newspaper of Andrews University. Opinions expressed in the Student Movement are those of the authors and do not necessarily reflect the opinions of the editors, Andrews University or the Seventh-day Adventist church.