Podcaster Sues Google Over AI Voice
Introduction
Podcaster David Greene has filed a lawsuit accusing Google of using his voice without permission to train an artificial intelligence feature in its NotebookLM tool. The complaint alleges that one of the AI-generated hosts in NotebookLM’s Audio Overviews closely mimics Greene’s distinctive vocal style and delivery. The case adds to growing legal and ethical debates surrounding the use of creative works and personal likenesses in AI training.
Allegations Against Google
Google introduced Audio Overviews in 2024, a feature that lets users generate short, AI-produced, podcast-style discussions based on uploaded notes and documents. These segments typically feature a male and a female virtual co-host.
Greene contends that the male AI voice was trained on hours of his broadcast work without his consent or compensation. According to the lawsuit, filed in Santa Clara County, California, the company allegedly sought to replicate his recognizable cadence, tone and persona in order to create synthetic audio content.
Background and Evidence Claims
Greene previously co-hosted NPR’s Morning Edition for nearly a decade and currently hosts KCRW’s Left, Right & Center. After the AI feature’s release, colleagues reportedly pointed out similarities between his voice and the virtual host. Greene then consulted an AI forensic firm.
The lawsuit states that the firm's analysis indicated a 53% to 60% confidence level that the AI voice was trained on Greene's recordings, with scores above 50% considered relatively high. The firm's CEO reportedly stated that, in the firm's opinion, the model had used Greene's voice in its training process.
Google’s Response
Google has denied the allegations. A company spokesperson stated that the male voice used in NotebookLM’s Audio Overviews was based on a paid professional actor hired by the company. Google has described NotebookLM as one of its notable AI successes and has highlighted the feature’s popularity for its natural-sounding output.
Broader AI Intellectual Property Debate
The lawsuit reflects broader concerns about how AI systems are trained. Large language and audio models require extensive datasets, raising questions about authorization and compensation when creative content is used in training.
Voice replication, in particular, presents additional challenges. Advances in generative AI make it possible to simulate tone, cadence and delivery with increasing realism, prompting disputes over identity rights and control over personal likeness.
High-profile controversies have previously emerged in the AI industry involving voice and likeness claims, underscoring the unsettled legal landscape surrounding generative technologies.
Conclusion
David Greene’s lawsuit against Google highlights ongoing tensions between technological innovation and intellectual property protections. As courts examine the boundaries of AI training practices, the outcome may influence how companies source data and how creators protect their voices and identities in an era of increasingly sophisticated synthetic media.