Why trust services matter in the age of AI

As AI makes content creation effortless, proving authenticity becomes essential. Open standards for verification, credentials, and provenance are the infrastructure of digital trust.

In early 2024, a finance employee at a multinational company joined a video call with what appeared to be the company's CFO and several other executives. The CFO instructed the employee to transfer funds for a confidential acquisition. The employee complied. Twenty-five million dollars later, the company discovered that every person on that call, except the employee, had been a deepfake.

This isn't science fiction. It's a case study now cited across the security industry. And it represents a new category of problem: as AI makes content creation effortless, how do we know what's real?

The question matters beyond fraud prevention. It touches everything from journalism to legal contracts, from academic credentials to medical records. When anyone can generate convincing text, images, audio, and video, the ability to verify authenticity becomes infrastructure, as essential as the content itself.

The economics of eroding trust

The costs are already measurable. According to a 2024 survey by Regula, organizations that experienced deepfake-related fraud lost an average of $450,000 per incident. Financial services firms fared worse, averaging $603,000 when video or audio deepfakes were involved. Ten percent of organizations affected reported losses exceeding one million dollars.

These numbers are growing. U.S. generative AI fraud losses are projected to rise from $12.3 billion in 2023 to $40 billion by 2027, a compound annual growth rate of 32%. North America alone saw over $200 million in deepfake-enabled fraud in just the first quarter of 2025.

But the economic damage extends beyond direct fraud. Trust itself has value, and that value is eroding. A 2025 survey by Jumio found that 69% of adults now believe AI-powered fraud poses a greater threat than traditional identity theft. In a June 2025 study, 85% of Americans reported becoming less likely to trust news, photos, or videos online over the past year specifically because of deepfakes.

This erosion creates a paradox. AI is enabling remarkable advances in how we create documents, analyze information, and collaborate. But if people can't trust what they're seeing, those capabilities lose much of their value. The more powerful our creative tools become, the more essential our verification tools must be.

What trust services actually do

Trust services are a broad category, but the underlying mechanisms share common goals: proving identity, demonstrating integrity, and establishing when something happened. These capabilities, when implemented correctly, let both humans and machines answer fundamental questions about digital content.

Digital signatures form the foundation. When you sign a document cryptographically, you create a mathematical relationship between your identity (via a private key) and the document's contents. Anyone can verify that relationship using your public key. If even a single bit of the document changes after signing, the verification fails. This provides both authenticity (the signer is who they claim to be) and integrity (the content hasn't been altered).
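That sign-and-verify cycle can be sketched with textbook RSA in plain Python. This is a toy with deliberately tiny classroom parameters, not a secure implementation (real systems use vetted libraries and keys of 2048 bits or more), but it shows the asymmetry: only the private exponent can produce a signature, while anyone holding the public values can check it.

```python
import hashlib

# Textbook-RSA toy: tiny classroom parameters, NOT secure -- illustration only.
p, q = 61, 53
n = p * q          # public modulus (3233)
e = 17             # public exponent, shared with verifiers
d = 2753           # private exponent, kept secret (e*d = 1 mod lcm(p-1, q-1))

def digest(doc: bytes) -> int:
    # Reduce a SHA-256 hash into the toy modulus range.
    return int.from_bytes(hashlib.sha256(doc).digest(), "big") % n

def sign(doc: bytes) -> int:
    # Only the holder of the private exponent d can compute this value.
    return pow(digest(doc), d, n)

def verify(doc: bytes, sig: int) -> bool:
    # Anyone with the public pair (n, e) can check the signature.
    return pow(sig, e, n) == digest(doc)

doc = b"I agree to pay 100 EUR on delivery."
sig = sign(doc)
print(verify(doc, sig))            # True: authentic and unaltered
print(verify(doc, (sig + 1) % n))  # False: a forged signature never checks out
```

A tampered document fails the same check, because its digest no longer matches what was signed; real deployments sign the full 256-bit hash under a large modulus rather than folding it into a toy one.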

The legal weight of these signatures is well-established. In the United States, the ESIGN Act and the Uniform Electronic Transactions Act (adopted by 49 states) ensure that electronic signatures cannot be denied legal effect simply because they're electronic. In January 2026, a U.S. District Court judge issued the first judicially signed order using a PKI-based digital certificate, a milestone in official recognition of cryptographic authentication.

The European Union goes further. Under the eIDAS regulation, electronic signatures exist in three tiers: simple, advanced, and qualified. Qualified electronic signatures, created using certified devices and issued by qualified trust service providers, carry the same legal weight as handwritten signatures throughout the EU. A document signed with a qualified signature in Portugal is legally equivalent to a pen-and-ink signature in Finland.

Timestamps add another dimension: proving when something existed. RFC 3161, published in 2001, established a standard protocol where a trusted third party (a Time-Stamping Authority) issues cryptographic proof that a document existed at a specific moment. Under EU law, qualified electronic timestamps enjoy a legal presumption of accuracy for the date and time they assert. When a document must be shown to have existed before a certain date (contracts, intellectual property filings, regulatory submissions), this is essential.

Together, signatures and timestamps create audit trails. Who signed what, and when? What was the document's state at each stage? These questions, which used to require physical archives and notary records, can now be answered through cryptographic evidence that travels with the document itself.

The standards making this possible

Trust services only work at scale if different systems can understand each other. This is where open standards become essential, and where significant progress is happening.

In May 2025, the W3C published the Verifiable Credentials 2.0 family of specifications as formal web standards. This is a major milestone. Verifiable credentials provide a standardized way to make claims that can be cryptographically verified: a university can issue a diploma, an employer can confirm employment, a government can provide identity documents, all in formats that any compliant system can validate.

The specification defines three roles: issuers (who create credentials), holders (who receive and present them), and verifiers (who check them). Credentials can include selective disclosure mechanisms, allowing holders to prove specific claims without revealing everything. You might prove you're over 21 without revealing your exact birthdate, or confirm your employment status without disclosing your salary.

Adoption is accelerating globally. The European Union's Digital Identity Wallet regulation (eIDAS 2.0) requires member states to provide interoperable digital identity wallets by late 2026, built on standards including W3C verifiable credentials. Spain has already published technical specifications using verifiable credentials for online age verification, protecting minors while minimizing the personal data that must be shared. Japan began issuing digital identity credentials on smartphones in June 2025. Australia's Digital ID Act 2024 established an accreditation framework for digital identity providers.

For content authenticity specifically, the Coalition for Content Provenance and Authenticity (C2PA) has developed an open standard for attaching cryptographic metadata to digital media. Over 200 organizations now participate, including Adobe, Microsoft, Google, Meta, Amazon, and OpenAI. The C2PA specification, now under review for ISO standardization, provides a way to record how content was created, whether AI was involved, and what edits were made.

TikTok implemented C2PA-based detection to automatically flag AI-generated videos. Cloudflare added options to preserve content credentials for images hosted on their CDN. Sony launched cameras that embed authenticity data at the moment of capture, letting publishers verify that footage came from a physical camera rather than an AI generator. These aren't theoretical capabilities; they're shipping features.

How verification actually works

Understanding the technical mechanisms helps clarify what's possible and what isn't.

Public key infrastructure (PKI) underlies most trust services. It works through asymmetric cryptography: you have a private key that you keep secret and a public key that you share. Data signed with your private key can only be verified with your public key, and the public key can't be used to derive the private key. Certificate authorities vouch for the connection between public keys and identities, creating chains of trust.

When you sign a document, you're typically signing a cryptographic hash, a fixed-length fingerprint that represents the document's contents. Any change to the document produces a completely different hash. The signature proves both that you created it (because only your private key could produce that signature) and that the content is unchanged (because the hash would differ if anything were altered).
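The fingerprint behavior is easy to see with Python's standard library: transposing two characters in a short document yields a completely unrelated hash, while the same input always produces the same fingerprint.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 produces a fixed-length fingerprint (64 hex characters).
    return hashlib.sha256(data).hexdigest()

original = b"Payment due: 2025-01-31"
altered  = b"Payment due: 2025-01-13"  # two digits transposed

print(fingerprint(original))
print(fingerprint(altered))
# Deterministic, but any change -- however small -- gives a new fingerprint.
print(fingerprint(original) == fingerprint(altered))  # False
```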

Timestamps work similarly but involve a trusted third party. You send the hash of your document to a Time-Stamping Authority, which creates a signed assertion: "I, the TSA, certify that this hash existed at this specific moment." The TSA doesn't see your document, only its hash, but its signature provides evidence of when the document existed.
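The core exchange can be sketched in a few stdlib-only lines. Here an HMAC (a keyed hash) stands in for the TSA's signature to keep the example dependency-free; a real RFC 3161 TSA signs with an asymmetric key so that anyone, not just the key holder, can verify the token. The key and timestamps below are made-up demo values.

```python
import hashlib, hmac

# Toy Time-Stamping Authority. An HMAC stands in for the TSA's asymmetric
# signature in this sketch; TSA_KEY is a made-up demo secret.
TSA_KEY = b"demo-tsa-secret"

def issue(doc_hash: str, asserted_time: str) -> str:
    # The TSA never sees the document itself, only its hash.
    token = f"{doc_hash}|{asserted_time}"
    return hmac.new(TSA_KEY, token.encode(), hashlib.sha256).hexdigest()

def check(doc_hash: str, asserted_time: str, tag: str) -> bool:
    return hmac.compare_digest(issue(doc_hash, asserted_time), tag)

doc_hash = hashlib.sha256(b"provisional patent draft").hexdigest()
tag = issue(doc_hash, "2025-06-01T12:00:00Z")

print(check(doc_hash, "2025-06-01T12:00:00Z", tag))  # True: time attested
print(check(doc_hash, "2024-06-01T12:00:00Z", tag))  # False: backdating fails
```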

C2PA takes this further for media files. Content credentials can be "hard bound" through cryptographic hashes (any change breaks the verification) or "soft bound" through watermarks and fingerprints (allowing verification even if metadata is stripped). The specification supports assertions about creation tools, AI involvement, editing history, and more, all signed so tampering is detectable.

Verifiable credentials add claims and selective disclosure. A credential might contain your name, birthdate, address, and photo, all signed by a government authority. But when presenting the credential, you can choose to reveal only what's needed. Cryptographic techniques like zero-knowledge proofs allow you to prove statements ("I'm over 18") without revealing the underlying data (your exact birthdate).
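A minimal sketch of the selective-disclosure idea, using salted hash commitments (a simplified version of the mechanism behind formats like SD-JWT, not a zero-knowledge proof): the issuer commits to each field, the holder reveals only the fields it chooses, and the verifier checks revealed values against the issuer's commitment set. The issuer's actual signature is elided here; imagine `signed_digest` carrying it.

```python
import hashlib, json, secrets

def commit(value: str, salt: str) -> str:
    # A salted hash commits to a value without revealing it.
    return hashlib.sha256(f"{salt}:{value}".encode()).hexdigest()

# Issuer side: commit to every field, then fold the commitments into one
# digest (imagine this digest carrying the issuer's digital signature).
fields = {"name": "Ada Lovelace", "birth_year": "1815", "degree": "Mathematics"}
salts = {k: secrets.token_hex(16) for k in fields}
commitments = {k: commit(v, salts[k]) for k, v in fields.items()}
signed_digest = hashlib.sha256(
    json.dumps(commitments, sort_keys=True).encode()).hexdigest()

# Holder side: reveal one field plus its salt, withhold everything else.
disclosure = {"degree": (fields["degree"], salts["degree"])}

# Verifier side: check the credential digest, then each revealed value
# against its commitment -- learning nothing about undisclosed fields.
def verify(disclosure: dict, commitments: dict, signed_digest: str) -> bool:
    recomputed = hashlib.sha256(
        json.dumps(commitments, sort_keys=True).encode()).hexdigest()
    if recomputed != signed_digest:
        return False
    return all(commit(v, s) == commitments[k]
               for k, (v, s) in disclosure.items())

print(verify(disclosure, commitments, signed_digest))  # True
forged = {"degree": ("Art History", salts["degree"])}
print(verify(forged, commitments, signed_digest))      # False
```

The verifier learns the holder's degree and nothing else; proving range statements like "over 18" without revealing the value at all requires the zero-knowledge techniques mentioned above.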

The limits of verification

It's important to be clear about what trust services can and cannot do.

Verification proves authenticity, not truth. A signed document proves that a specific person signed specific content at a specific time. It doesn't prove that the content is accurate or that the person was telling the truth. A perfectly authenticated document can still contain lies.

Provenance tracking is voluntary. C2PA metadata only exists if creators choose to include it. A deepfake without content credentials is indistinguishable from legitimate content without credentials. The absence of provenance data isn't evidence of manipulation. This is why adoption matters: the more content carries verified credentials, the more suspicious unmarked content becomes.

Metadata can be stripped. While hard-bound cryptographic hashes detect tampering, someone can simply remove the metadata entirely and share the raw content. Soft binding through watermarks provides some resilience, but sophisticated actors can attempt to remove or corrupt these signals. No system is tamper-proof against determined adversaries.

Privacy and verification exist in tension. The more information we embed for verification (creator identity, location, timestamp, device information), the more we potentially expose. Journalists, activists, and vulnerable individuals may have legitimate reasons to create content without traceable provenance. Good standards accommodate both verification and privacy needs.

These limitations don't diminish the value of trust services. They define the appropriate use cases. Verification is powerful for contexts where authenticity matters and parties are willing to participate in verification systems: legal documents, official credentials, business transactions, archival records. It's less useful for adversarial contexts where bad actors deliberately evade detection.

Why this matters for documents

Everything we've discussed applies directly to documents, whiteboards, and drawings, the formats the Open Document Alliance cares about most. This is why open document standards are so important. They're the foundation that trust infrastructure builds upon.

Consider a contract. Today, most contracts live as PDFs: static snapshots that can be signed electronically but carry limited provenance information. The signature proves someone signed, but what about the rest of the document's history? When was each clause added? Who proposed which changes? Were any modifications made after signing? These questions matter for legal interpretation, and current formats make them hard to answer. We're working on a new open format for agreements that addresses exactly these challenges.

Now imagine a contract format with verification built in. Every edit is signed by its author with a timestamp. The complete revision history is cryptographically chained, making alterations detectable. The signing ceremony captures not just the final signature but proof of what each party saw and agreed to. Years later, anyone can verify the complete provenance of every element.
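The chaining idea can be sketched as a simple hash chain, where each revision record embeds the hash of its predecessor (the record fields here are illustrative, not a proposed schema, and per-revision signatures are omitted for brevity):

```python
import hashlib, json

GENESIS = "0" * 64  # sentinel hash preceding the first revision

def record_hash(rec: dict) -> str:
    body = {k: rec[k] for k in ("prev", "author", "content")}
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()

def seal(prev_hash: str, author: str, content: str) -> dict:
    # Each revision record embeds the hash of the one before it.
    rec = {"prev": prev_hash, "author": author, "content": content}
    rec["hash"] = record_hash(rec)
    return rec

def chain_ok(history: list) -> bool:
    prev = GENESIS
    for rec in history:
        if rec["prev"] != prev or rec["hash"] != record_hash(rec):
            return False
        prev = rec["hash"]
    return True

history = []
prev = GENESIS
for author, content in [("alice", "Clause 1 drafted"),
                        ("bob", "Clause 1 amended"),
                        ("alice", "Clause 2 added")]:
    rec = seal(prev, author, content)
    history.append(rec)
    prev = rec["hash"]

print(chain_ok(history))                    # True: intact history
history[1]["content"] = "Clause 1 deleted"  # silently rewrite a middle revision
print(chain_ok(history))                    # False: the chain exposes the edit
```

Rewriting any earlier revision changes its hash, which no longer matches either its own sealed record or the link stored in the next record, so the alteration surfaces at verification time.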

The same logic applies to other document types. Academic transcripts that can be verified instantly, without calling the registrar. Medical records with tamper-evident audit trails. Whiteboard diagrams where you can prove the original sketch predated a patent filing. Engineering drawings with certified revision histories for regulatory compliance.

AI makes these capabilities more important, not less. As AI assistants help us draft documents, analyze content, and suggest edits, knowing what was human-authored versus AI-generated becomes valuable context. A document format that captures this provenance (which sections came from AI suggestions, which from human authors) provides transparency without restricting how people work.

Making verification practical

Standards exist. Legal frameworks support them. Major companies are adopting them. So why isn't verification ubiquitous?

Partly, it's tooling. Implementing cryptographic signatures correctly is technically demanding. Managing certificate hierarchies, handling revocation, supporting long-term validation: these require expertise that most developers don't have. The distance between "this standard exists" and "I can use it in my application" remains substantial.

This is exactly where open-source infrastructure helps. When reference implementations exist in major programming languages, adoption becomes practical. When validation libraries are battle-tested and well-documented, developers can integrate verification without becoming cryptography experts. The ecosystem work matters as much as the specification work.

User experience is equally important. Verification should be visible but not burdensome. People should see that a document is verified, and be able to inspect what that means, without navigating complex certificate chains. The best implementations make trust signals as intuitive as the padlock icon in a browser's address bar.

Interoperability requires coordination. A credential issued by one system should be verifiable by another. A timestamp from one provider should be recognized by different platforms. This is why open standards matter: they create the shared language that lets different systems understand each other's trust signals.

What we're building toward

At the Open Document Alliance, we believe verification should be a first-class feature of document formats, not an afterthought bolted on through external services.

This means designing formats where signatures, timestamps, and provenance data have defined places in the schema. Where every element can optionally carry authorship and modification history. Where the format itself supports the verification workflows that modern use cases demand.

It means building tooling that makes verification accessible. Parsers that validate signatures as they read documents. Libraries that make it easy to sign content correctly. Reference implementations that demonstrate best practices. The goal is lowering the barrier until verification becomes the default rather than the exception.

And it means aligning with existing standards rather than inventing new ones. W3C Verifiable Credentials, C2PA content provenance, RFC 3161 timestamps, eIDAS-compliant signatures: these represent years of expert work on hard problems. Our job is to integrate these capabilities into document formats and make them practical for developers.

The trust infrastructure we need

Trust has always been essential for cooperation. Contracts, credentials, and records exist because we need to make commitments and verify claims. What's changing is the scale and sophistication of the threats to trust, and fortunately, the power of the tools available to preserve it.

AI isn't the enemy here. The same technology that enables deepfakes also powers better fraud detection. The same models that generate synthetic content can help identify manipulation. We're not in a war between humans and machines; we're in a transition where the tools for both creation and verification are becoming more powerful.

What matters is building the infrastructure that lets trust scale: open standards that work across platforms and jurisdictions, tooling that makes verification practical for everyday developers, formats that carry provenance as naturally as they carry content, and user experiences that make authenticity visible without creating friction.

This is essential work. Documents are how we record agreements, preserve knowledge, and coordinate action. If we can't trust documents, we can't trust the systems built on them. But if we build verification into the foundations, into the formats and tools and workflows, we create the conditions for trust to persist even as technology transforms how we create. For a practical look at how this applies, see our piece on rebuilding digital trust with open standards.

The standards are maturing. The adoption is accelerating. The legal frameworks are in place. Now it's about building: turning specifications into implementations, turning implementations into defaults, turning defaults into expectations. We're excited to be part of that work.