Learn more about our technology, product and what's happening at InReality.

InReality has been chosen from 4,800+ applications across 108 countries. Selected in the advanced computing and cybersecurity category, we are recognised as a key innovator in this space. We are now proud to be part of the world’s largest community of hard tech startups solving the world’s toughest challenges. 🌍✨
Founded in 2011, Hello Tomorrow is a respected global deep tech ecosystem focused on supporting up-and-coming deep tech startups. Being selected is a fantastic milestone for us at InReality.
Jeppe Nørregaard, CTO of InReality, shared his thoughts: "We are so looking forward to accelerating forward with the help of Hello Tomorrow. Being recognised as a promising start up in the deep tech space is really exciting. If you're at the Global Summit in June we'd love to speak with you!"

AI is very cool. It writes stories, paints portraits, fakes voices, and generates videos that feel indistinguishable from 'the real thing'. But as impressive as this is, we’re now looking at a dangerous blur: real content and AI content are increasingly impossible to separate. Humans can hardly tell the difference anymore.
That’s why industries lean on detection companies. They promise to flag fake content with claims like:
"We can catch 97% of AI-generated media."
Unfortunately, everything is wrong with that claim.
That 97% was achieved in a controlled test against a known deepfake engine - that is how these models are evaluated. The detector is trained and tested against specific, known generators and learns to look for precisely their fingerprints. Real-world content, however, comes from many engines - some unknown to the detection developers - plus photoshopped images and look-alikes, if you want. Against that mix, accuracy falls well short of the headline numbers. Such real-world tests are also very difficult to carry out, because they would require knowing the ground truth for content produced by adversaries (malicious actors generating content), which the developers don't have. It is naive to believe the lab score will carry over to real life.
Worse, adversaries use these detectors to find the pieces that slip past - and that 3% may be exactly the content that manipulates people, sways elections, crashes stock markets, or scams grandma. The detectors become a testing tool for those looking to do harm.
Every time a new genAI model drops from a tech giant, the detectors are left behind. Yesterday’s 98% becomes today’s 95%…tomorrow’s 90%. Updating detection systems is a perpetual game of cat-and-mouse. And in the meantime, most of the content online isn’t even checked - it’s too costly and slow to scan everything.
Even experts are left guessing. The White House published a video in September 2025, with Donald Trump appearing in a clip that many suspected was AI-manipulated. It took days of expert debate to conclude it probably wasn’t. Probably. That’s where we’re at: not knowing, not trusting.
Researchers openly discuss the coming era of undetectable AI. Detectors work by finding tiny flaws ('artifacts') that AI accidentally leaves behind. But if generative models stop leaving artifacts - getting ever better at leaving no trace - the detection industry collapses overnight. There’s simply nothing left to detect.
Stop focusing on what is fake >>> Focus on what is real
Imagine a “ringfence” on the internet. Inside the fence, everything is guaranteed authentic. Outside the fence? Same as today: maybe real, maybe fake, who knows. If your job, your vote, or your research depends on reality, you stick within the ringfence. Reality safeguarded. Simple.

Cryptography can give us exactly this ability. It has long been a cornerstone of security systems. Cryptography can 'sign' content, creating unique proof of exactly where a piece of content comes from. A cryptographic signature does not guess - it guarantees. There is no "97% accurate". If content is inside the cryptographic ringfence, you can trust it without a doubt.
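The sign-and-verify idea can be sketched in a few lines. This is a minimal stand-in using Python's standard-library HMAC; real capture devices would use asymmetric signatures (e.g. Ed25519) so anyone can verify without holding the secret, and `DEVICE_KEY` here is purely hypothetical:

```python
import hashlib
import hmac

# Hypothetical device secret. Real systems use asymmetric keys so that
# verifiers never need the secret; HMAC keeps this sketch dependency-free.
DEVICE_KEY = b"secret-key-burned-into-device"

def sign_content(content: bytes) -> str:
    """Sign a piece of content at the moment of capture."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(DEVICE_KEY, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that content matches its signature exactly - no probabilities."""
    return hmac.compare_digest(sign_content(content), signature)

photo = b"\x89PNG...raw pixel bytes..."
sig = sign_content(photo)

print(verify_content(photo, sig))                       # unaltered: True
print(verify_content(photo + b"one flipped bit", sig))  # tampered: False
```

Unlike a detector's "97%", the answer is binary: the content either matches its signature or it does not.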
Further good news: the hardware industry is already moving in this direction. New technology on the latest 2025 devices allows photos and videos to be cryptographically certified at the moment of capture - right on your device. That’s rock-solid security for authenticity at the source. But signing at the source is only half the puzzle.
Cryptographic signatures on content give us half the puzzle. Once hardware starts signing content as real, billions of devices will be signing billions of photos and videos every day. The second half of the puzzle is checking those signatures: everyone needs a way to verify these reality-certified photos and videos seamlessly, passively, and efficiently. We need a network that acts as the ringfence, checking cryptographic signatures so we as users know what is real and what is not. And with roughly 5.3 billion photos taken globally every day, capacity and scaling are paramount: we need a network that handles proofs of reality at unprecedented scale.
At InReality we are building the infrastructure that constitutes the ringfence, i.e. the network; we build the wall around reality. A technology that lets anyone add certified real content, and anyone verify it. A place with no doubt.
We build a safe zone for reality.


Proud to announce that InReality was selected as a finalist for The Bright Idea award from The Otto Mønsted Fund, presented at Digital Tech Summit in Copenhagen yesterday!
The Bright Idea Award is given by the Otto Mønsted Fund to reward, recognise, and celebrate ideas that can create value for Danish society and business. The objective of the Otto Mønsted Foundation is to contribute to the development of Danish trade and industry, in keeping with the original deed of the foundation from 1934.
After a thorough jury assessment of business vision, likelihood of implementation, value of the idea, and contribution towards the UN Sustainable Development Goals, InReality was selected as one of the top 3 finalists for the Award. Congratulations to Njord Aqua, who took home the grand prize. For us it is great to be part of such a brilliant group of startups.
The full announcement can be found on LinkedIn.
For other aspiring startups wondering whether the Bright Idea Award could be right for you, find out more on The Otto Mønsted website: https://omfonden.dk/

Generative AI allows us to generate content that looks increasingly real. Anything can be faked and anything can be altered. To counter this we train detection AI to help us separate real from fake.
But is detection of fake content a suitable approach?
Probably not.
Detectors analyse content (with AI) and hunt for "artifacts" - subtle flaws like odd pixels or text patterns. Based on these flaws, a detector will say something like "there is a 96% chance of this image being real". Let’s ignore for now what we do with the 4% of manipulations we let through, and focus on the bigger problem: can the detector guarantee this?
GenAI is getting increasingly good at avoiding artifacts. In fact, in a method called “adversarial training” the generator is trained by pitting it against a detector. The better the detector - the better the generator that can fool it!
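A toy 1-D version of this arms race shows the dynamic. "Real" content is drawn from a Gaussian around 0, the "generator" starts far away, and each round the detector re-fits its decision boundary while the generator moves its distribution closer to reality. All numbers are made up for illustration:

```python
import random

random.seed(0)
REAL_MEAN = 0.0  # "real" content: 1-D samples centred on 0

def detector_accuracy(fake_mean: float, boundary: float) -> float:
    """Fraction of samples a simple threshold detector classifies correctly."""
    reals = [random.gauss(REAL_MEAN, 1.0) for _ in range(2000)]
    fakes = [random.gauss(fake_mean, 1.0) for _ in range(2000)]
    correct = sum(x < boundary for x in reals) + sum(x >= boundary for x in fakes)
    return correct / 4000

fake_mean = 4.0          # generator starts out producing obvious fakes
accuracies = []
for round_no in range(5):
    boundary = (REAL_MEAN + fake_mean) / 2  # detector adapts to current fakes
    accuracies.append(detector_accuracy(fake_mean, boundary))
    fake_mean *= 0.5     # generator adapts: move closer to the real distribution
    print(f"round {round_no}: detector accuracy = {accuracies[-1]:.2f}")

# accuracy decays from near-perfect toward a 50/50 coin flip
```

As the fake distribution converges on the real one, even a detector that keeps re-fitting its boundary drifts toward 50% accuracy - no better than guessing.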
We have no theoretical reason to believe that artifacts will remain in AI-generated content. At some point, genAI may simply be so good at faking reality that there are no artifacts left - no difference between AI images and real images, and nothing to detect.

OpenAI’s ChatGPT detector, launched in 2023, was scrapped within months for poor accuracy. The 2023 paper 'Can AI-Generated Text Be Reliably Detected?' openly discussed whether detection of AI generated content is possible at all.
Text detection is already doomed, and images and video are next.
While detection algorithms usually showcase “good” accuracies in the high 90s, these are usually measured in a lab against a known generator. Accuracy drops sharply when tested against new generators. We can’t expect the internet’s adversaries to tell us which methods they use to produce misinformation and fraud - detection is always behind.
Generative AI is quickly closing the gap with reality, leaving no artifacts to catch. Everyone in tech dreams of an “oracle” to spot fakes, but this is unfortunately very unrealistic.
Instead of detection, we turn the problem upside down: protect and guarantee real content. Create a technology-enforced safe space for reality online, where no manipulation can exist. Everything outside the safe space may be real or may be fake - nobody knows.
We know the necessary technology for creating this safe space: cryptography. But that’s for another post!