
Can Blockchain Prove What’s Real Online Versus AI?



How often have you come across an image online and wondered, “Real or AI”? Have you ever felt trapped in a reality where AI-created and human-made content blur together? Do we still need to distinguish between them?

Artificial intelligence has unlocked a world of creative possibilities, but it has also brought new challenges, reshaping how we perceive content online. From AI-generated images, music and videos flooding social media to deepfakes and bots scamming users, AI now touches a vast part of the internet.

According to a study by Graphite, AI-generated content surpassed human-created content in volume in late 2024, a shift driven largely by ChatGPT’s launch in late 2022. Another study suggests that 74.2% of pages in its sample contained AI-generated content as of April 2025.

As AI-generated content becomes more sophisticated and nearly indistinguishable from human-made work, humanity faces a pressing question: How well can users truly identify what’s real as we enter 2026?

AI content fatigue kicks in: Demand for human-made content is rising

After a few years of excitement around AI’s “magic,” online users have been increasingly experiencing AI content fatigue, a collective exhaustion in response to the unrelenting pace of AI innovation.

In a spring 2025 Pew Research Center survey, a median of 34% of adults across the countries polled said they were more concerned than excited about the increased use of AI, while 42% said they were equally concerned and excited.

“AI content fatigue has been cited in multiple studies as the novelty of AI-generated content is slowly wearing off, and in its current form, often feels predictable and available in abundance,” Adrian Ott, chief AI officer at EY Switzerland, told Cointelegraph.

Source: Pew Research

“In some sense, AI content can be compared to processed food,” he said, drawing parallels between how both these phenomena have evolved.

“When it first became possible, it flooded the market. But over time, people started going back to local, quality food where they know the origin,” Ott said, adding:

“It might go in a similar direction with content. You can make the case that humans like to know who is behind the thoughts that they read, and a painting is not only judged by its quality but by the story behind the artist.”

Ott suggested that labels like “human-crafted” might emerge as trust signals in online content, similar to “organic” in food.

Managing AI content: Certifying real content emerges as a working approach

Although many may argue that most people can spot AI text or images at a glance, detecting AI-created content is more complicated in practice.

A September Pew Research study found that 76% of Americans say it’s important to be able to spot AI-generated content, yet only 47% are confident they can accurately detect it.

“While some people fall for fake photos, videos or news, others might refuse to believe anything at all or conveniently dismiss real footage as ‘AI-generated’ when it doesn’t fit their narrative,” EY’s Ott said, highlighting the issues of managing AI content online.

Source: Pew Research

According to Ott, global regulators seem to be going in the direction of labeling AI content, but “there will always be ways around that.” Instead, he suggested a reverse approach, where real content is certified the moment it is captured, so authenticity can be traced back to an actual event rather than trying to detect fakes after the fact.
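To make that reverse approach concrete, the sketch below (in Python, using the widely available cryptography library) shows one way capture-time certification could work: hash the raw bytes as they come off the sensor, sign the hash with a device-held key, and keep the signed record so the file can later be traced back to the moment of capture. The Ed25519 key handling and the record format are illustrative assumptions, not any vendor’s actual scheme.

```python
# Illustrative sketch only: hash media at capture, sign the hash with a
# device key, and keep the signed record as a certificate of origin.
# Ed25519 and the JSON record format are assumptions for this example.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # in practice, held in secure hardware

def certify_capture(raw_bytes: bytes) -> dict:
    """Produce a signed provenance record the moment content is captured."""
    digest = hashlib.sha256(raw_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "captured_at": time.time()}).encode()
    return {"payload": payload, "signature": device_key.sign(payload)}

record = certify_capture(b"...frame bytes straight from the sensor...")
```

Anyone holding the device’s public key could later verify the signature and confirm that a file’s hash still matches the one certified at capture, tracing authenticity back to an actual event rather than hunting for fakes after the fact.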

Blockchain’s role in establishing “proof of origin”

“With synthetic media becoming harder to distinguish from real footage, relying on authentication after the fact is no longer effective,” said Jason Crawforth, founder and CEO at Swear, a startup that develops video authentication software.

“Protection will come from systems that embed trust into content from the start,” Crawforth said, underscoring Swear’s core idea: using blockchain technology to establish that digital media is trustworthy from the moment it is created.

Swear’s video-authentication software was named one of Time magazine’s Best Inventions of 2025 in the Crypto and Blockchain category. Source: Time magazine

Swear’s authentication software employs a blockchain-based fingerprinting approach, where each piece of content is linked to a blockchain ledger to provide proof of origin — a verifiable “digital DNA” that cannot be altered without detection.

“Any modification, no matter how discreet, becomes identifiable by comparing the content to its blockchain-verified original in the Swear platform,” Crawforth said, adding: 

“Without built-in authenticity, all media, past and present, faces the risk of doubt […] Swear doesn’t ask, ‘Is this fake?’, it proves ‘This is real.’ That shift is what makes our solution both proactive and future-proof in the fight toward protecting the truth.”
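In simplified terms, the fingerprinting flow looks like the sketch below: a content hash (the “digital DNA”) is anchored at creation, and any later copy is checked by recomputing the hash and comparing it against the anchored original. The in-memory ledger dictionary stands in for a real blockchain here, and none of the names reflect Swear’s actual platform, which is not public.

```python
# Simplified illustration of blockchain-based fingerprinting. A real
# system would write to an actual ledger; a dict stands in for it here.
import hashlib

ledger: dict[str, str] = {}  # content_id -> anchored fingerprint

def anchor(content_id: str, media: bytes) -> str:
    """Record the content's fingerprint ("digital DNA") at creation."""
    fingerprint = hashlib.sha256(media).hexdigest()
    ledger[content_id] = fingerprint
    return fingerprint

def verify(content_id: str, media: bytes) -> bool:
    """True only if the bytes still match the anchored fingerprint."""
    anchored = ledger.get(content_id)
    return anchored is not None and anchored == hashlib.sha256(media).hexdigest()

anchor("clip-001", b"original footage")
assert verify("clip-001", b"original footage")
assert not verify("clip-001", b"original footage, subtly edited")  # any change is caught
```

Because even a one-byte edit changes the SHA-256 digest completely, any modification, however discreet, fails verification against the anchored original.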

So far, Swear’s technology has been adopted by digital creators and enterprise partners, focusing mostly on visual and audio media from video-capturing devices such as bodycams and drones.

“While social media integration is a long-term vision, our current focus is on the security and surveillance industry, where video integrity is mission-critical,” Crawforth said.

2026 outlook: Responsibility of platforms and inflection points

As we enter 2026, online users are increasingly concerned about the growing volume of AI-generated content and their ability to distinguish between synthetic and human-created media.

While AI experts emphasize the importance of clearly labeling “real” content versus AI-created media, it remains uncertain how quickly online platforms will recognize the need to prioritize trusted, human-made content as AI continues to flood the internet.

Dictionary publisher Merriam-Webster named “slop” its 2025 word of the year amid AI content concerns. Source: Merriam-Webster

“Ultimately, it’s the responsibility of platform providers to give users tools to filter out AI content and surface high-quality material. If they don’t, people will leave,” Ott said. “Right now, there’s not much individuals can do on their own to remove AI-generated content from their feeds — that control largely rests with the platforms.”

As the demand for tools that identify human-made media grows, it is important to recognize that the core issue is often not the AI content itself, but the intentions behind its creation. Deepfakes and misinformation are not entirely new phenomena, though AI has dramatically increased their scale and speed.

Related: Texas grid is heating up again, this time from AI, not Bitcoin miners

With only a handful of startups focused on identifying authentic content in 2025, the issue has not yet escalated to a point where platforms, governments or users are taking urgent, coordinated action.

According to Swear’s Crawforth, humanity has yet to reach the inflection point where manipulated media causes visible, undeniable harm:

“Whether in legal cases, investigations, corporate governance, journalism, or public safety. Waiting for that moment would be a mistake; the groundwork for authenticity should be laid now.”