How NFTs will save us from the AI Fake News Apocalypse

Tony Aubé
3 min read · Feb 17, 2024

With this week’s latest advancement in AI-generated video, a lot of people are now waking up to the fact that yes, fairly soon, anyone will be able to generate videos of you doing anything they want, including committing crimes and disgraceful acts. This is going to fuel massive political misinformation and create serious problems for the justice system. If someone generates a perfectly realistic video of you committing a crime, how can you prove you’re innocent?

As far as I know, I was one of the first people in the tech industry to bring up this issue, seven years ago, when I addressed a lot of these concerns in this article, in this post on Medium, and in my tech talks (in French, unfortunately). Back when I was at Google, this was also something we discussed frequently.

Today, I have a few more answers for you.

At this time, all the big tech companies have entire departments working on this. In the last few years, they have agreed on technological standards to ensure the verifiability and trustworthiness of content.

One of them is the Content Authenticity Initiative. It covers both AI-generated and real content.

AI-Generated Content:

Tech companies have developed invisible watermarks encoded within AI-generated content to label it as such. Any AI content generated by Google, Microsoft, OpenAI, or TikTok should be verifiably labeled as AI. To be fair, these companies have strong incentives to develop this technology: they want to be able to disregard fake AI content, for example in their search engine results, or in the data they hoard, so they can prevent feedback loops while training their AIs.
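To make the idea concrete, here is a toy sketch in Python of what an “invisible encoded watermark” can look like: a short tag hidden in the least significant bits of an image’s pixels. This is not how Google’s SynthID or any production system actually works (real watermarks are designed to survive compression, cropping, and re-encoding); it only illustrates the principle of a label that travels invisibly inside the content itself.

```python
# Toy invisible watermark: hide a short tag in the least significant bits
# of an image's pixels. Illustrative only; real systems are far more robust.
import numpy as np

TAG = "AI-GENERATED"

def embed_tag(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the original is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(pixels.shape)

def read_tag(pixels: np.ndarray, length: int = len(TAG)) -> str:
    bits = pixels.flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# Usage: tag a random "image"; to the eye it is unchanged, but the label is there.
image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(read_tag(embed_tag(image)))  # -> "AI-GENERATED"
```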

Real Content:

This is the best part IMO. How do we prove real content is real? The funny answer is Crypto, and more specifically, NFTs.

Believe it or not: the answer to all of our problems.

That’s right, the same technology y’all have been dunking on for the past few years might be the key to saving us from the AI fake news apocalypse.

Camera makers like Apple, Canon and Nikon have agreed on standards so that when you take a photo, record a video, or capture audio, the file is cryptographically signed by your device. So the same way that, when you buy a Bitcoin or an NFT, you can cryptographically prove it is yours, in the future you will be able to prove that a piece of content came from a real, specific camera or phone.
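Here is a minimal sketch of that principle, assuming an Ed25519 key pair standing in for a key baked into the camera’s secure hardware. This is not the actual C2PA / Content Authenticity Initiative format, just the sign-and-verify idea behind it.

```python
# Minimal sketch: a device signs the bytes it captures, and anyone can later
# verify the signature against the maker's published public key.
# Illustrative only; requires the 'cryptography' package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real camera, this key pair would live in a secure chip.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

photo_bytes = b"raw image data straight off the sensor"

# At capture time: the device attaches a signature to the file.
signature = device_key.sign(photo_bytes)

# Later: anyone can check that the bytes came from that device, unaltered.
try:
    device_public_key.verify(signature, photo_bytes)
    print("Authentic: signed by the claimed camera and unmodified since capture.")
except InvalidSignature:
    print("Not verifiable: edited, AI-generated, or from an unknown source.")
```

In the actual standard, that signature (plus a record of any later edits) travels with the file as metadata, so a newsroom or a court can check where a piece of content came from without taking anyone’s word for it.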

You can watch a little video about this here:
https://www.youtube.com/watch?v=Xd6vtHMlse4

It’s interesting how technologies that cause problems tend to grow alongside technologies that resolve them, isn’t it? Life is well designed in that sense.

Ultimately, we’ll need to adapt as a society. But it won’t be that different from how we had to adapt to easily Photoshopped images and photorealistic CGI from Hollywood.

However, and importantly, keep a healthy dose of skepticism about what you see on the internet going forward. Particularly this year, when 64 countries will be holding democratic elections, including the USA. I believe this will be the first year where we see adversarial countries like Russia and potentially China mass-weaponizing AI to manipulate elections, the same way they did with Facebook in the 2016 election.
