Counterfeiting Crisis: Artificial Intelligence and the Danger of Fake Videos and Voice

Barnier Geerling
3 min read · May 25, 2023



Artificial Intelligence is in the midst of an upheaval. As we explore these uncharted waters, we face an alarming reality: AI-generated fake videos and voices threaten our perception of reality and our trust in digital content. This threat cannot wait; its impacts are already being felt.

Geoffrey Hinton, a pioneer of deep learning, made a striking comparison upon leaving Google that has stuck with me ever since — likening the rise of fake videos and voices to counterfeiting money, an act punishable with severe penalties in most countries. Hinton’s words ring truer than ever as we witness AI’s incredible capacity to replicate, mimic and distort reality.

Governments worldwide treat counterfeiting as a serious crime because it erodes the trust between businesses and consumers that underpins our economic systems. If left unchecked, AI-powered fake videos and voices pose a similar risk, undermining the trust that serves as the cornerstone of our social and digital landscapes.

At our company, we take great pride in creating AI that respects and upholds human individuality. Our mission is to develop AI voices that do not impersonate, copy or clone any existing individual; rather than replacing genuine human voices with synthetic ones, the goal should be to extend human capabilities without overpowering or replicating them.

I care deeply about this issue. Our voices are an intimate part of who we are — carrying emotions, intentions, quirks and characteristics unique to each of us. The thought that someone could use this essence without our consent is deeply disconcerting, and it raises serious ethical concerns about voice cloning.

Misusing AI technology to clone voices and produce fake videos is more than an invasion of privacy; it’s an assault on our identities, with potentially disastrous repercussions such as damaging personal reputations or altering public opinion based on falsified information. Such counterfeiting attacks not just our wallets but our sense of truth, trust in each other, and ultimately, society as a whole.

There is an array of possible misuses for voice cloning technology, from spreading misinformation and shaping public opinion to outright fraud and more. With so much power at our fingertips comes great responsibility — misuses must not go unchecked!

An alarming incident in the Netherlands underscores this point. An elderly woman was nearly duped into paying funeral costs after receiving a call claiming her son had died in an accident. Criminals had used artificial intelligence to imitate her son’s voice during the call — only for her actual son to walk into the room while it was still in progress, revealing its deceitful nature. This harrowing experience highlights the grave danger of voice cloning: without proper regulation, it can exploit trust and inflict severe emotional trauma.

No one should turn a blind eye to this growing concern. Just as we have laws against counterfeit money, we need strong legislation against the misuse of artificial intelligence (AI) to create fake videos and voices for malicious purposes. This problem transcends technology; it demands a collective response from lawmakers, tech companies and citizens alike.

Counterfeit currency devalues the genuine article; likewise, voice cloning and fake videos devalue human uniqueness. We believe every voice deserves respect; accordingly, we oppose voice cloning and advocate for the responsible use of AI technology.

At its core, technology, including AI, is simply a tool — one that can help or harm us depending on how it’s used. As we navigate this brave new world of AI, it is critical that we make decisions grounded in shared values like trust, authenticity and respect for the individual.

I recognize that the task ahead is daunting, yet I remain optimistic. By taking a firm stand against AI misuse, we can harness its full potential while upholding shared human values.

Now is the time to stand against counterfeiting of our identities online; digital trust lies at stake and time is running out — let’s not wait until it is too late.

Stay tuned for more!

Barnier Geerling

Voicing the Future Now