
Reality Online: Certifying Content
We have already discussed why detection can no longer keep up with ever-advancing AI capabilities. So what is the answer?
Jeppe Nørregaard
12/12/2025 · 3 min read
AI is very cool. It writes stories, paints portraits, fakes voices, and generates videos that feel indistinguishable from 'the real thing'. But as impressive as this is, we’re now looking at a dangerous blur: real content and AI content are increasingly impossible to separate. Humans can hardly tell the difference anymore.
That’s why industries lean on detection companies. They promise to flag fake content with claims like:
"We can catch 97% of AI-generated media."
Unfortunately, everything is wrong with that claim.
Not a Real Score
The 97% was achieved in a controlled test against known deepfake engines - that is how these models are evaluated. The detection tool is trained and tested against specific generators, so it learns to look for precisely those models. Real-world conditions are different: content comes from many deepfake engines, some unknown to the detection developers, mixed with photoshopping and look-alikes if you want. Under those conditions, accuracy falls well below the headline figure. And proper real-world tests are very hard to carry out, because they require knowing the ground truth about content produced by adversaries (malicious actors generating deceptive content) - which the detection companies don't have. It is naive to believe that the lab score will also be achieved in real life.
Adversaries Use Detectors Too
Worse, adversaries use these detectors to find the pieces that slip past - and that remaining 3% may be exactly the content that manipulates people, sways elections, crashes stock markets, or scams grandma. The detectors become a testing tool for those looking to do harm.
A Failing Arms Race
Every time a new genAI model drops from a tech giant, the detectors are left behind. Yesterday’s 98% becomes today’s 95%…tomorrow’s 90%. Updating detection systems is a perpetual game of cat-and-mouse. And in the meantime, most of the content online isn’t even checked - it’s too costly and slow to scan everything.
Constant Doubt
Even experts are left guessing. The White House published a video in September 2025, with Donald Trump appearing in a clip that many suspected was AI-manipulated. It took days of expert debate to conclude it probably wasn’t. Probably. That’s where we’re at: not knowing, not trusting.
Undetectable AI
Researchers openly discuss the coming era of undetectable AI. Detectors work by finding tiny flaws ('artifacts') that AI accidentally leaves behind. But as generative models get better at leaving no trace at all, the detection industry collapses overnight. There’s simply nothing left to detect.
Certifying Reality
Imagine a “ringfence” on the internet. Inside the fence, everything is guaranteed authentic. Outside the fence? Same as today: maybe real, maybe fake, who knows. If your job, your vote, or your research depends on reality, you stick within the ringfence. Reality safeguarded. Simple.
Cryptography can give us exactly this ability. It has long been a cornerstone of security systems, and it can 'sign' content, creating unique proof of exactly where a piece of content comes from. A cryptographic signature does not guess - it guarantees. There is no "97% accurate". If content is inside the cryptographic ringfence, you can trust it without a doubt.
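To make that concrete, here is a minimal sketch of what 'signing' means, using an Ed25519 key pair from Python's cryptography package. The key names and the photo bytes are illustrative; the point is that verification either passes exactly or fails - there is no probability attached.

```python
# Minimal sketch: a cryptographic signature either verifies or it doesn't.
# Illustrative only - key handling on real devices is far more involved.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held by the creator or capture device
public_key = private_key.public_key()        # shared with anyone who wants to verify

photo = b"...raw image bytes..."
signature = private_key.sign(photo)          # proof tied to these exact bytes

try:
    public_key.verify(signature, photo)      # passes only for the untouched original
    print("Authentic: these are the bytes that were signed")
except InvalidSignature:
    print("Rejected: the content was altered or never signed by this key")
```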
The further good news is that the hardware industry is already moving in this direction. New technology shipping on the latest 2025 devices allows photos and videos to be cryptographically certified at the moment of capture - right on your device. That’s rock-solid security for authenticity at the source. But cryptographically signing at the source is only half the puzzle.
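As a rough illustration (not any specific vendor's API), capture-time certification could look something like this: hash the image bytes the moment they are captured and sign the digest with a key that lives in the device's secure hardware. Every name and field here is an assumption for the sake of the sketch.

```python
# Hypothetical sketch of capture-time certification. The device key, manifest
# fields and function name are illustrative, not a real camera or phone API.
import hashlib
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()    # in practice: kept inside secure hardware

def certify_capture(image_bytes: bytes, device_id: str) -> dict:
    """Produce a small signed manifest at the moment of capture."""
    digest = hashlib.sha256(image_bytes).digest()
    signature = device_key.sign(digest)      # sign the hash, not the full image
    return {
        "device_id": device_id,
        "captured_at": time.time(),
        "sha256": digest.hex(),
        "signature": signature.hex(),
    }
```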
A Network for Reality
So cryptographic signatures on content give us half of the puzzle. Once hardware starts signing content as real, we will have billions of devices signing billions of photos and videos every day. Now we need to tackle the second half: how do we check these signatures? We need a way for everyone to seamlessly, passively and efficiently verify these billions of reality-certified photos and videos. We need a network that does just this, acting as the ringfence and checking the cryptographic signatures, so we as users know what is real and what is not. And with roughly 5.3 billion photos taken globally every day, capacity and scaling are paramount: we need a network that handles proofs of reality at unprecedented scale.
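To sketch what the checking side could look like (an illustration of the idea, not InReality's actual protocol): a registry maps capture devices to their public keys, and any photo carrying a signed manifest like the one above can be verified against it.

```python
# Hypothetical verification sketch: look up the device's public key and check
# that the content hash and signature still match. Names are illustrative.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

key_registry: dict[str, Ed25519PublicKey] = {}   # device_id -> trusted public key

def is_inside_ringfence(image_bytes: bytes, manifest: dict) -> bool:
    """True only if a registered device signed exactly these bytes."""
    public_key = key_registry.get(manifest["device_id"])
    if public_key is None:
        return False                              # unknown device: outside the fence
    digest = hashlib.sha256(image_bytes).digest()
    if digest.hex() != manifest["sha256"]:
        return False                              # bytes changed since capture
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), digest)
        return True
    except InvalidSignature:
        return False
```

At that point the hard problem is no longer the cryptography itself but the plumbing: keeping lookups like this fast, passive and available for billions of verifications a day.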
At InReality we are building the infrastructure that constitutes the ringfence, i.e. the network; we build the wall around reality. A technology that lets anyone add certified real content and anyone verify it. A place with no doubt.
We build a safe zone for reality.
Stop focusing on what is fake >>> Focus on what is real