Grammarly, the popular AI writing assistant used by millions worldwide, is facing a major legal battle after releasing a controversial feature that used artificial intelligence to impersonate famous writers without their consent. Investigative journalist Julia Angwin has filed a class-action lawsuit against Superhuman Platform Inc., Grammarly's parent company, alleging the unauthorized use of her name and professional identity in the now-disabled 'Expert Review' tool.

The Expert Review Feature Controversy

Grammarly launched its 'Expert Review' feature last week, marketing it as a premium AI-powered tool that could provide writing feedback 'inspired by' the styles of renowned authors, journalists, and academics. The feature allowed users to receive editing suggestions that appeared to come from prominent figures including novelist Stephen King, the late scientist Carl Sagan, tech journalist Kara Swisher, and AI ethicist Timnit Gebru.

However, according to a report from TechCrunch, Grammarly never sought or obtained permission from the hundreds of experts whose personas were replicated by the AI system. In effect, the company created digital impersonations of real professionals without their knowledge or approval.

Casey Newton, founder of the tech newsletter Platformer and another writer whose persona was used, tested the feature by feeding one of his articles into the tool. He received feedback from Grammarly's approximation of Kara Swisher that asked: 'Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?'

The Class-Action Lawsuit Against Grammarly

The Grammarly lawsuit, filed Wednesday in the U.S. District Court for the Southern District, represents a significant legal challenge to how AI companies use public figures' identities and reputations. According to Bloomberg Law, the proposed class action alleges that Grammarly used the names and identities of several authors for its writing feedback feature without their awareness or consent.

Plaintiff Julia Angwin discovered Grammarly's use of her name through another journalist's reporting. In a public statement, Angwin expressed her distress: 'I have worked for decades honing my skills as a writer and editor, and I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.'

The lawsuit specifically challenges 'Grammarly's misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman,' according to legal documents.

Grammarly's Response and Feature Shutdown

Following intense backlash from writers, Grammarly has disabled the Expert Review feature. The BBC reported that Superhuman, the tech firm that runs Grammarly, took the function offline this week after the multi-million-dollar lawsuit was filed.

Before the lawsuit was filed, Superhuman's director of product management, Ailian Gan, defended the feature to Wired, stating that the company had 'built the agent to help users tap into the insights of thought leaders and experts and to give experts new ways to share their knowledge and reach new audiences.'

Critics counter that sharing knowledge requires consent. One of the attorneys representing Angwin told Wired that the case addresses a broader societal issue: 'Lots of professionals who spend years, or in Julia's case decades, honing a skill or a trade, then see that their name or their skills are being appropriated by others without their consent.'

Implications for AI and Creative Industries

This lawsuit highlights growing tensions between AI technology and creative professionals whose work and identities are being used to train or power AI systems. The case raises fundamental questions about personality rights, consent, and the boundaries of AI-generated content that mimics real individuals.

For Gen Z creators and aspiring writers entering the workforce, the Grammarly lawsuit serves as a cautionary tale about how AI companies may leverage professional reputations without permission. As AI tools grow more adept at replicating individual writing styles and voices, the legal frameworks protecting creative professionals are still catching up.

The outcome of this lawsuit could set important precedents for how AI companies must obtain consent before using public figures' identities, potentially reshaping the development of personality-based AI features across the tech industry.