By now, you might have heard about OpenAI’s new darling, Sora, a prompt-based text-to-video creator that is set to change the entertainment industry, the content creation world, and every industry that relies on design and video.
I have been playing with Runway, Pika and Invideo for the past year, each with its own advantages in video creation, but they pale in comparison to the results Sora shows on OpenAI’s website. The level of realism, and the ability to extend, reverse or change the background of a scene, is a first of its kind.
Remember Will Smith eating spaghetti? Well, that was only a year ago.
Though OpenAI didn’t release an updated version of Will eating spaghetti, Sora did generate other footage that was just as alarming.
Prompt: A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.
Apparently Sora had already achieved this quality of video by May 2023; OpenAI simply never released it to the public out of concern over safety testing. And this makes sense, since Google just announced it is pulling back Gemini’s image creator after users started sharing images in which people of color appeared in historical scenes that involved only white people.
I have been writing for a while about bias in AI generators and the lack of training data featuring people of color. Big tech companies acknowledge this problem, but they may have tried to reverse the trend a little too hard. I think we will see this happen frequently in the future – the release and pullback of all kinds of AI tools – because it is very hard to predict how a model will behave in front of millions of queries.
Where will Sora be used?
In the realm of content creation, both individuals and businesses can utilize Sora to generate engaging video content for social media platforms like Instagram, TikTok, and YouTube, bypassing the need for extensive editing expertise or expensive equipment. This can benefit influencers and creators by allowing them to experiment with diverse content formats, explore different styles, and potentially increase their output and audience engagement.
Marketing and advertising professionals can leverage Sora to develop captivating video ads, product demonstrations, and explainer videos efficiently, potentially reducing production costs and enabling faster iteration. Marketing materials can be personalized for specific audiences or regions by generating variations of the same video with different visuals or languages.
Beyond content creation, Sora holds potential in the media and entertainment industry. Filmmakers and animation studios can utilize Sora for storyboarding and pre-visualization, generating visual representations of scripts and storyboards to streamline the pre-production process.
Sora’s impact on Hollywood
But the media has been saying that Sora will obliterate Hollywood. I would push back on that thought. I don’t think Sora will make Hollywood obsolete; it will simply enable more everyday people without film knowledge to create new works. My bet is that the content creation space will explode, with many more YouTubers and TikTokers coming online, and with them, probably new types of platforms too – potentially something in between a UGC (user-generated content) platform and a PGC (professionally generated content) platform like Netflix.
This situation mirrors the impact of Twitter on journalism. When Twitter emerged, it fostered the concept of "micro-journalism," enabling individuals to share news and insights directly. This led to a shift in attention, with audiences increasingly favoring the voices of individual reporters over traditional media outlets. Similarly, by democratizing the ability to create high-quality videos, this platform empowers individuals to build their own audiences and establish themselves as independent creators.
It’s also similar to what Canva has done to the design world. By putting millions of templates and color schemes into the hands of non-design-oriented professionals, Canva has given power to the masses instead of leaving them reliant on designers. As long as you have the vision and the copy for how a pitch deck, one-pager or document should look, you have the ability to produce a high-quality piece of work. We can’t say that Canva has caused designers to lose their jobs. Instead, designers have shifted their focus to creativity, problem-solving, and the ability to deliver strategic solutions beyond simple aesthetics.
What can we do to prepare before this new tool is released?
If you want to use this for work, personal content creation, or just to test the technology, start thinking about where it could fit into your day-to-day life. Which areas could benefit from becoming more visual and story-driven, where video was previously too expensive to implement?
Get better at describing things – aka prompting – but at a deeper level. Get better at asking questions and writing commands. If you were bad at asking questions before, you will be at an even greater disadvantage in the future. Get better at writing. Sometimes we get exasperated with a piece of AI tech because it isn’t doing what we intended, when in fact we simply didn’t give the command we meant to give.
The internet is going to become a whole lot more visual.