As Toys”R”Us continues to seek a successful post-brick-and-mortar future, it’s hoping a new AI-generated video helps portray the dreams and magic of its founder.
Using Sora, OpenAI’s text-to-video platform, Toys”R”Us created a video depicting an AI-generated version of a young Charles Lazarus, who founded the company in 1948, and his father in an old-time bicycle shop. The one-minute film, which debuted this week, transitions to a dreamscape filled with toys and the brand’s mascot, Geoffrey the Giraffe. As Lazarus falls asleep, he is shown walking through the dreamscape as a narrator says, “Toys ‘R’ Us was the dream of Charles Lazarus. May all of your dreams come true too.”
The goal was to modernize the brand while also honoring Lazarus, who was born a century ago and died in 2018, said Kim Miller Olko, Toys”R”Us global CMO. Following the video’s private premiere during last week’s Cannes Lions Festival, it’s unclear if Toys “R” Us plans to use it in any paid media. For now, the video appears on various online channels, including YouTube and Instagram. The financial details of the partnership with Sora were not made available.
“We’re in the 2.0 of what I call ‘same magic, new method’ era,” Miller Olko, who is also president of Toys”R”Us Studios, told Digiday. “Where we want the emotion and all the excitement that everybody has always felt for Toys ‘R’ Us, and that all the millennial parents remember from childhood and now share with their kids.”
The efforts are part of a broader turnaround plan for Toys “R” Us, which filed for bankruptcy in 2018. After shuttering its stores, the brand was acquired in 2021 by WHP Global, a private equity firm that also owns brands like Express, Bonobos and Rag & Bone. Since then, it’s opened new flagship stores in Minnesota’s Mall of America and in New Jersey’s American Dream mall. According to Toys “R” Us, the brand now generates more than $2 billion in annual retail sales through 1,400 stores and e-commerce sites. (The new AI film also mentions Toys “R” Us having a presence in every Macy’s store.)
This isn’t the first time Miller Olko has been an early adopter of new platforms; she was among the first to try Facebook Live and helped create Martha Stewart’s potluck dinner party with Snoop Dogg. When asked about the decision to use AI for the film, Miller Olko said the team wanted to push the boundaries of tech in ways that would not have been possible in a traditional studio lot — especially with a limited budget and lack of a time machine. However, she added there’s still room for human creativity, emotion and traditional storytelling.
“It’s that visceral experience throughout the whole thing that still needs to connect it and still needs to also just have the logic of the storytelling,” she said. “There were some things in some of the iterations that were exactly logically what we said, but when you see it played out, it didn’t experientially fit into the human experience like it would in real life.”
The film was created by Native Foreign, a creative agency that got early access to Sora and invited Toys”R”Us to be the first brand to use it with them. To make the film, the team generated hundreds of video shots with Sora before narrowing them down to a few dozen.
For each scene, the team wrote lengthy prompts for Sora, according to Nik Kleverov, chief creative officer of Native Foreign, who described the tech as “like a new camera and a post [production] in one.” That required telling Sora who was in each scene, what people and objects should be doing, when it took place, where it took place and why something was happening. To tie the past to the present, they also made sure Sora rendered early scenes to look like the 1920s and 1930s while later scenes appeared more modern, so kids today wouldn’t be confused or put off by toys from nearly a century ago.
“As with anything, if you just come into something and you don’t have the context of how things are supposed to work, you may get a somewhat less than desirable — dare I say maybe bland — result,” Kleverov said. “But once you start to pepper in all of the filmmaker terminology and you’re really setting the scene, setting the mood, you start to get a better result.”
This doesn’t mean Toys”R”Us plans to use Sora for all of its video production. Miller Olko said she doesn’t plan to use it to supercharge content volume, but also mentioned the company’s parent, WHP Global, has embraced AI as one of its key initiatives. However, she does see the potential of using AI to create content for a sister brand, Babies “R” Us. That could be helpful with placing babies — especially newborns — in different settings and with different products beyond a single photo shoot.
Sora, revealed in February, isn’t yet publicly available, but OpenAI has been working with filmmakers to test the platform and recently collaborated with the Tribeca Film Festival. Despite all the publicity — both negative and positive — that Sora attracts, it’s not the only AI video tool on the market. Others include Runway’s newly released Gen-3 Alpha model and Stable Diffusion. It’s also important to look at the full landscape of AI tools across various types of content, noted Forrester analyst Rowan Curran, who added it’s still “very early” for AI video.
“These tools are very, very good at creating some sort of visual content,” Curran said. “But there are still going to be a significant amount of human hands needed to turn this into something that’s commercially ready and brand safe.”
Since its release, the video has received mixed reviews, with some calling it “creepy” and others noticing inconsistencies in the AI-generated versions of Lazarus. Still others saw it as an innovative and creative way to tell stories that aren’t possible without AI. Greg Swan, senior partner at FINN Partners, acknowledged Sora’s challenges with creating lifelike videos and its inconsistency with visual styles. But by taking an imaginary approach, Swan said Toys”R”Us invites viewers to suspend reality — not fall into the uncanny valley.
“The fact that the Toys ‘R’ Us concept is rooted in a child’s dream leveraged that weakness as a strength — knowing audiences expect ethereal experiences to be more fluid and, well, dreamlike,” Swan said. “Smart. And that’s really what made it tell the story so well, despite the unnatural realism of the main character. If this was a more lifelike concept, it simply wouldn’t have worked. At least not with the tools available today.”
There are still plenty of unresolved issues related to AI-generated content, including ethical standards, legal requirements, training data transparency and AI disclosures. Swan said those are all areas that are a mutual responsibility for brands, tech companies, educators and governments. (He also noted the importance of keeping humans in the loop for editing, ethics and promotion.)
“Toys ‘R’ Us kids don’t have to grow up,” Swan said. “But as the AI industry matures, early examples like this one serve as good fodder for all of these stakeholders to test, practice, and learn from.”