The Fake, the Forensic, and the Fight for Truth

This topic unsurprisingly fascinates me. I have made a career, and prior to that a young adulthood, tearing apart computers and data to understand who did what, when, and how it all works. My question for today is: can data really lie? Let’s walk through it a bit.

Falsified Evidence in Scope

This is no longer an occasional occurrence; it's a real trend. Our teams encounter alleged falsified evidence (largely documents) in proceedings of all types, in jurisdictions on four continents. These typically come in the form of provided evidence rather than collected or acquired evidence; i.e., a forensic collection and process have not taken place. Rather, someone has said, “Here are some key documents.” In these scenarios, circumstance is often key. Our team's favourite bullet list of warning signs:

  • Complicated backstory
  • Single source of highly relevant evidence
  • No original

Types of Falsified Evidence

I say it often, but it’s worth repeating that there are a few categories of falsified evidence to consider:

  • Modified originals: some semblance of the document existed at some juncture, but it was somehow manipulated in situ or by some external tools.
  • Completely fabricated by hand: an example would be the screenshot of a WhatsApp conversation that is not authentic and was aesthetically constructed using software.
  • AI-generated content: while still emerging, the lowered barrier to access means this type of content (including ‘slop’) is soon going to overwhelm us.

The first two are categories we encounter quite regularly as experts (and with increasing frequency), but we have yet to see AI-generated content enter disputes in a noticeable way (that does not mean it isn’t happening).

Cause for Concern

I remain very concerned about this third category, not only for legal disputes but for geopolitical reasons as well. A very convincing video of a world leader saying or doing something could be assumed legitimate, when ideally it would receive some scrutiny. The knee-jerk tendency for people to trust what they see, paired with the [anti]social media engines propagating said content, could make this hard to control. This type of scenario is already happening, at least lightly, all the time. The court of public opinion is very different from the courts in which we hash out our legal disputes. Huge swathes of society will have already formed an immovable opinion based on little more than a clip or soundbite.

The Real Courts

Let’s consider how this might be handled in the High Court, or an arbitration, say. Unlike the court of public opinion, these have norms and rules regarding evidence: the manner in which it’s provided, attestations that it has not been altered and has otherwise been preserved, and so forth (if you require more detail, please consult something less interesting). The world leader saying or doing something on a video is a great example, so let’s stick with it. From an evidentiary perspective, here are some of the questions I would ask to challenge or better understand it:

  1. Where is the device that captured the video?
  2. Are there other copies of the video?
  3. What is the alleged date and time of the captured event?
  4. Are there other videos captured in the area with the same date and time that can corroborate?
  5. What do the metadata tell us about the video’s creation, software used in its processing, editing and production?
  6. The party who provided or produced this video: have their computers been preserved and inspected? What about their mobile phones?
  7. Can we determine from their devices whether any LLM platforms were accessed or prompted to create this content or similar materials prior to its creation?
  8. Video forensic expert: is there something unnatural about the video in terms of colours, pixels and frames?

The list goes on; it is not even close to exhaustive, but it should start to paint a picture of how these inquiries could progress.
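Question 2 above (are there other copies?) lends itself to a simple first-pass check: if multiple copies of a video have been produced, cryptographic hashes quickly establish whether they are bit-for-bit identical or whether one has been altered. A minimal sketch in Python (the file paths and function names are hypothetical, and matching hashes prove only identity between copies, not authenticity):

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large videos never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def copies_match(paths: list[Path]) -> bool:
    """True only if every provided copy hashes to the same value (bit-for-bit identical)."""
    return len({sha256_of(p) for p in paths}) == 1
```

Even one differing byte (a re-encode, a trimmed frame, edited metadata) produces a completely different digest, which is exactly why hash comparison is an early step rather than a conclusion.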

Detection capabilities are always progressing, but a few things come to mind:

  • Metadata validation (as already discussed)
  • Model fingerprinting: these techniques already exist in the wild, and they rely on the platform creators assigning a signature at the time of generation. “Is this a Sora video because of x?”
  • A third-party signatory that can assist in vouching for content, such as the Coalition for Content Provenance and Authenticity (C2PA) framework. No joke: four out of five big players in AI are on the steering committee for C2PA.
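The core idea behind these provenance schemes can be illustrated with a deliberately simplified sketch: a seal computed over the content bytes at generation time, verifiable by anyone later. This is a hedged stand-in, not the real C2PA mechanism (which binds a structured manifest to the asset using X.509 certificates and standard signature formats); the key and function names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical secret held by the generating platform. Real provenance
# frameworks use asymmetric certificates, not a shared secret; HMAC is
# used here only to keep the sketch self-contained.
SIGNING_KEY = b"platform-secret-key"


def sign_content(content: bytes) -> str:
    """At generation time, seal the content: HMAC over its SHA-256 digest."""
    return hmac.new(SIGNING_KEY, hashlib.sha256(content).digest(), hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Later, confirm the content still matches the seal issued at creation."""
    return hmac.compare_digest(sign_content(content), tag)
```

The design point the sketch captures is that the seal must be applied at the moment of generation: once content circulates unsigned, nothing can retroactively attest to its origin.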

The Effectiveness of an Expert

An expert can evaluate facts and form an opinion based on his or her experience. This will always be important when the authenticity of data and documents is questioned. An expert ought to be asking some of the questions posed above in either their consultative efforts or in their reply to formal instruction. While the burden, at least in criminal proceedings, is for the State to prove a claim, it can often feel like one is on the back foot in civil litigation, having to prove something did not happen.

Data Do Not Lie (Unless They Weren’t Real to Begin With)

The same data forensic principles apply: preservation, corroborative data, circumstances, and human intelligence. Generative AI content can be concerning for a variety of reasons noted above, but in my view (at least in 2025), the approach to inquiry remains sound. Convincing content masquerading as fact can cause quite a stir, but with a heightened sense of diligence, we will surely adapt.


iDS provides consultative data solutions to corporations and law firms around the world, giving them a decisive advantage – both in and out of the courtroom. iDS’s subject matter experts and data strategists specialize in finding solutions to complex data problems, ensuring data can be leveraged as an asset, not a liability. To learn more, visit stg-idsinccom-stage.kinsta.cloud.