How I fooled 25M people into believing Disney+ made a Harry Potter remake

Tony Aubé
Artificial Intelligence in Plain English
Nov 29, 2023


For 3 years, I’ve been pushing the limits of AI art on TikTok and Instagram. In my latest project, I challenged myself to recreate the trailer for the first Harry Potter movie, shot for shot, in the style of Pixar.

The video went hyper-viral and was even fact-checked by multiple websites. Here’s a full guide on how I made it! 👇

1. Generating the Images

The images were made with Midjourney, using prompts of the form: subject + “in the style of Pixar, adorable cute 3D render”. Adding “cosy light” helped keep a consistently dark, warm mood and tied the images together.
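To make the formula concrete, here’s a tiny sketch of how the prompts were assembled. The subject below is just an illustrative stand-in, not one of my actual prompts:

```python
# Illustrative sketch of the prompt formula described above; swap the subject
# for whatever the shot needs.
def pixar_prompt(subject: str) -> str:
    # subject + Pixar style suffix + "cosy light" for the warm, consistent mood
    return f"{subject}, in the style of Pixar, adorable cute 3D render, cosy light"

print(pixar_prompt("Hagrid holding a lantern at the hut door"))
# -> Hagrid holding a lantern at the hut door, in the style of Pixar, adorable cute 3D render, cosy light
```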

Midjourney nails the Pixar style, but it’s not great at following instructions. For specific shots with action, text, or complex poses, OpenAI’s DALL-E 3 works better. However, it tends to generate a cartoonish style that doesn’t look like Pixar.

So you need to be strategic about which tool to use and when. Sometimes you can combine both: create a first image in one tool, then reference it in the other.

Finally, Adobe Photoshop’s Generative Expand and Fill features were a godsend for fine-tuning and fixing details as needed. I used them to change the wands, fix deformed hands, and reframe some of the shots.

2. Animation

The images were animated with RunwayML. This was the most time-intensive part of the process, as it still requires a ton of takes to get viable shots. Thankfully, they now have an unlimited tier.

The talking characters were made with D-ID. You upload an image of the character and an audio file extracted from the movie, and it animates the character saying the lines.
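If you’d rather script this step than click through the web UI, D-ID also exposes a REST API. Here’s a rough sketch; the endpoint and field names reflect my reading of their docs and may have changed, and the API key and URLs are placeholders:

```python
# Rough sketch of the D-ID step as an API call. The /talks endpoint and field
# names follow D-ID's public docs as I understand them; verify against the
# current docs before relying on this. API key and URLs are placeholders.
import requests

API_KEY = "YOUR_D_ID_API_KEY"  # placeholder

resp = requests.post(
    "https://api.d-id.com/talks",
    headers={"Authorization": f"Basic {API_KEY}"},
    json={
        # Hosted image of the character (ideally the cropped close-up, see below)
        "source_url": "https://example.com/hagrid_face.png",
        # Audio line extracted from the movie
        "script": {"type": "audio", "audio_url": "https://example.com/hagrid_line.mp3"},
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])  # poll GET /talks/{id} until the render is ready
```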

D-ID is great at animating photos, but getting it to animate stylized characters is tricky: you might get an “Error no face found”. After some trial and error, I found that a cropped square close-up works best. Once I have the animation, I combine it back with the larger background in After Effects.
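Here’s a minimal sketch of that square close-up crop using Pillow. The centre coordinates and crop size are hypothetical; in practice I eyeball them per shot:

```python
# Minimal sketch of the square close-up crop that helps D-ID find the face.
# (cx, cy) is the rough centre of the face, picked by eye; size is the side
# length of the square. All concrete values here are hypothetical.
from PIL import Image

def square_face_crop(path: str, cx: int, cy: int, size: int) -> Image.Image:
    img = Image.open(path)
    half = size // 2
    # Clamp the square so the crop box stays inside the image bounds.
    left = min(max(cx - half, 0), img.width - size)
    top = min(max(cy - half, 0), img.height - size)
    return img.crop((left, top, left + size, top + size))

square_face_crop("hagrid_shot.png", cx=540, cy=320, size=600).save("hagrid_face.png")
```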

You can watch the final result in HD here!

But as you can see from the low view count, the project still needed a viral component to wrap it up.

3. Making it Viral

The key to making it viral is to get people talking about it and wondering whether it’s real. Here, I was inspired by the hyper-viral Shmonguss video, in which someone pretends Netflix has a bunch of weird new shows. I assumed it was made as a Figma prototype.

The best part of this idea: the distance from the TV, along with the camera movement, hides a lot of the AI-generated imperfections in the video.

So I set out to recreate the Disney+ UI in Figma!

To do so, I Photoshopped screenshots of the UI, added videos extracted from the Disney+ website, and turned it all into an interactive prototype.

👉 You can play with the prototype here.

Finally, I brought it all together with friends in a living room, to create this incredibly viral masterpiece. Enjoy! 👇

If you enjoyed or learned from this, drop a follow, as I plan to push the limits of these AI tools in 2024. I’ll share more tutorials along the way!

Also, for anyone working at the companies mentioned in this guide: as a product designer, I’d love to provide UX design advice on how to improve these tools for complex projects like this one!
