
Why Detection Isn’t The Answer Anymore
We have long relied on detecting AI-generated content such as images and videos. With the pace of AI advancement, we need new tools.
8/16/2025 · 1 min read
Generative AI allows us to generate content that looks increasingly real. Anything can be faked and anything can be altered. To counter this, we train detection AI to help us separate real from fake.
But is detection of fake content a suitable approach?
Probably not.
Detection Fails
Detectors analyse content (with AI) and hunt for "artifacts": subtle flaws like odd pixels or strange text patterns. Based on the number of these flaws, the detector will say something like “there is a 96% chance of this image being real”. Let’s ignore for now the 4% of manipulations that we let through, and focus on the bigger problem: can the detector guarantee this?
GenAI is getting increasingly good at avoiding artifacts. In fact, in a method called “adversarial training”, the generator is trained by pitting it against a detector. The better the detector, the better the generator that learns to fool it!
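To make that concrete, here is a minimal sketch of the adversarial training loop, assuming PyTorch; the tiny networks and toy data are illustrative stand-ins, not any real generator or detector:

```python
# A minimal sketch of adversarial training, assuming PyTorch.
# The tiny networks and toy data are illustrative stand-ins, not a real system.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: noise in, "content" out. Detector: content in, realness logit out.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
detector = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for real data; in practice these would be real images.
    return torch.randn(n, data_dim) + 2.0

for step in range(1000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Detector step: learn to score real content high and fakes low.
    d_opt.zero_grad()
    d_loss = loss_fn(detector(real), ones) + loss_fn(detector(fake.detach()), zeros)
    d_loss.backward()
    d_opt.step()

    # Generator step: adjust the fakes until the detector calls them real.
    # The detector's own gradient tells the generator which flaws to erase.
    g_opt.zero_grad()
    g_loss = loss_fn(detector(fake), ones)
    g_loss.backward()
    g_opt.step()
```

Every artifact the detector learns to spot becomes a gradient signal the generator uses to remove it. Scaled up, this is the arms race that erases artifacts.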
We have no theoretical reason to believe that artifacts will remain in AI generated content. At some point, genAI may simply be so good at faking reality that there are no artifacts. There is no difference between AI images and real images. There is nothing to detect.
We Already Know This
OpenAI’s ChatGPT detector, launched in 2023, was scrapped within months for poor accuracy. The 2023 paper “Can AI-Generated Text Be Reliably Detected?” openly questioned whether detection of AI-generated content is possible at all.
Text detection is already doomed, and images and video are next.
While detection algorithms usually showcase “good” accuracies in the high 90s, these are usually measured in a lab against a known generator. Accuracy drops sharply when the detector faces new generators. We can’t expect the adversaries of the internet to tell us which methods they use to produce misinformation and fraud; detection is always a step behind.
Generative AI is quickly closing the gap with reality, leaving no artifacts to catch.
Everyone in tech dreams of an “oracle” to spot fakes, but such an oracle is unfortunately unrealistic.
The Solution: Prove Authenticity
Instead of detecting fakes, we turn the problem upside down: protect and guarantee real content.
Create a technology-enforced safe space for reality online, where no manipulation can exist.
Everything outside the safe space can be real or it can be fake - nobody knows.
We know the necessary technology for creating this safe space: cryptography.
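As a tiny preview (a sketch of the principle, assuming Python’s cryptography package, not our actual design): a trusted device signs content the moment it is captured, and any later alteration, however pixel-perfect, breaks the signature.

```python
# A sketch of the principle, assuming Python's "cryptography" package.
# A trusted capture device signs content at creation; any alteration breaks it.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()  # would live inside the trusted device
public_key = device_key.public_key()       # published so anyone can verify

photo = b"...raw image bytes..."           # placeholder content
signature = device_key.sign(photo)         # attached to the photo as provenance

# Verification succeeds only if the bytes are exactly what was signed.
try:
    public_key.verify(signature, photo)
    print("authentic: untouched since capture")
except InvalidSignature:
    print("tampered with, or never signed")
```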
But that’s for another post!
Curious? Follow us on social media for the latest!