Google announced a 5-second AI video generator that’s scary good
Better AI diffusion models also mean the potential for better deepfakes.
Google’s Lumiere is one of the most advanced diffusion models out there.
Credit: Google
Google’s new AI video generator is its most advanced yet, and it could also lead to more convincing deepfakes.
Google Research just unveiled Lumiere, an AI video generator capable of creating five-second photorealistic videos from simple text prompts. What makes it so advanced, according to the research paper, is a “Space-Time U-Net architecture” that “generates the entire temporal duration of the video at once, through a single pass in the model.”
Previous AI models created videos by generating individual images, frame by frame.
Lumiere will in theory make it easier for users to create and edit videos without technical expertise. Prompts such as “panda playing ukulele at home” or “Sunset timelapse at the beach” generate detailed photorealistic videos. It can also generate videos based on the style of a single image, such as a child’s watercolor painting of flowers.
The editing capabilities are where it gets crazy. Lumiere can animate targeted parts of an image, and fill in blank areas from image prompts with “video inpainting.” It can even edit specific parts of the video using follow-up text prompts, like changing a woman’s dress or adding accessories to videos of owls and chicks.
“Our primary goal … is to enable novice users to generate visual content,” the paper concludes. “However, there is a risk of misuse for creating fake or harmful content with our technology, and we believe that it is crucial to develop and apply tools for detecting biases and malicious use cases in order to ensure a safe and fair use.”
What the paper doesn’t mention are the tools Google has already developed and supposedly put in place.
At Google I/O last May, the company put its safety and responsibility measures front and center. Google DeepMind launched a beta version of an AI watermarking tool called SynthID in August, and in November, YouTube (owned by Google) announced a policy forcing users to disclose whether videos have been AI-generated.
Lumiere is just research at this point, and there’s no mention of how or when it could be used as a consumer-facing tool. But for a company that claims “being bold on AI means being responsible from the start” — presuming the start includes research — this is a surprising omission from the Lumiere team.
Google has not yet responded to a request for comment.