Grammarly Sold Fake Experts. Did Your Campus Buy the Subscription?
- Jeff Dillon


In early March of 2026, Platformer's Casey Newton discovered something unsettling: Grammarly had made him an editor without his knowledge, his consent, or his paycheck. The company's "expert review" feature was generating AI writing advice and slapping the names of real journalists, authors, and researchers on it — including Newton, Kara Swisher, Shoshana Zuboff, and investigative journalist John Carreyrou — to make paying customers feel like they were getting the real thing. They weren't.
Within days, a class-action lawsuit had been filed. Just a day later, Grammarly pulled the feature entirely.
Good. But higher education should not treat this as someone else's problem.
What the Grammarly “Expert Review” Feature Actually Did
Let's be clear about the offense, because "AI hallucination" as a framing lets Grammarly off the hook. This wasn't an accident. Grammarly built a product feature that deliberately invoked real people's names and professional reputations to sell subscriptions at $144 a year. The support page promised "insights from leading professionals," then buried, deep in the fine print, the disclaimer that those professionals were never involved.
Newton, writing on Platformer, noted that the advice attributed to his AI clone bore no resemblance to how he actually edits. One tech writer found that Grammarly was drawing on his speaker bio as source material. Kara Swisher's "advice" — generated without her input, in her name, to paying customers — was described by Newton as nothing like her actual editing style. Her response to Grammarly was characteristically direct and unprintable.
Vanessa Heggie, an associate professor at the University of Birmingham, took to LinkedIn to condemn the software's inclusion of fellow academic David Abulafia, who died in January, calling it "obscene."
Investigative journalist Julia Angwin, the lead plaintiff in the lawsuit, is among those whose work appeared in the software. "I had thought of deepfakes as something that happens to celebrities, mostly around images," she told the BBC. "Editing is a skill … it's my livelihood, but it's not something I've ever thought about anyone trying to steal from me before. I didn't even think it was steal-able."
This is not a bug. It's a business decision. Grammarly chose to commoditize real people's professional identities because it knew those names would move product. The fact that it took a viral backlash, media pressure, and a federal lawsuit to stop it tells you everything about the company's judgment.
Why the Grammarly AI Controversy Matters for Higher Education
Grammarly claims 40 million daily users. A significant chunk of that audience is on college campuses — students, faculty, and administrators who rely on the tool for everything from application essays to grant proposals to research manuscripts. Many of those users are almost certainly on institutional licenses, which means universities may be writing checks to a company that was, until this week, running an identity theft operation against the very scholars and writers it invoked as credible.
Think about what "expert review" would look like applied to a research context. Imagine a graduate student getting AI-generated writing advice attributed to a prominent faculty member in their field — advice that misrepresents that scholar's actual methodology, perspective, or voice. Now imagine that faculty member is at your institution. Now imagine the student cites it.
Grammarly had Shoshana Zuboff — whose entire career is built around exposing the extractive nature of surveillance capitalism — as one of its "experts." The irony isn't funny. It's instructive. If a company is willing to conscript the woman who wrote the book on corporate data exploitation into a paid hallucination service without asking her, your institution's faculty are not safe either.
The Larger AI Training Problem Casey Newton Highlighted
Newton is careful to note that what Grammarly did is not categorically different from what every major LLM is doing — it's just more visible. Paste a draft into Claude or ChatGPT and ask it to edit the way a specific writer would, and the model will comply. It won't ask permission. It certainly won't pay a royalty. The difference, as Newton writes, is that Grammarly "took a latent capability and turned it into a product feature" — one that explicitly monetized real people's identities.
For higher education, that distinction matters less than institutions might like to believe. Universities sit on enormous concentrations of faculty expertise, research output, and professional identity — all of which is already inside these models in ways no one fully understands. The Grammarly story is alarming in part because it made the extraction visible. The invisible version of this story is happening every day.
As Newton put it in his follow-up piece after Grammarly pulled the feature: "Names, after all, are what made Angwin's lawsuit possible: Grammarly was brazen enough to leave a trail." Most AI systems are not that brazen. They just quietly benefit from the work.
What Campus Technology Leaders Should Do Right Now
This isn't a call to ban AI writing tools on campus. It's a call for institutional leadership to start treating AI vendor accountability as a procurement requirement, not an afterthought.
Audit your institutional licenses. If you have a Grammarly or Superhuman campus agreement, you need to understand what features your students and faculty are actually using and whether any of those features have undergone the kind of ethical review your institution would expect. "We didn't know" is not a defense when the terms of service were always right there.
Ask vendors harder questions. Before your next renewal, ask specifically how the product handles real people's names, likenesses, and published work. Ask who is represented in any AI-generated output. Ask what consent mechanisms exist. If the vendor can't answer, that's your answer.
Protect faculty intellectual identity. Your institution has policies around IP, copyright, and data governance. Most of them were not written with generative AI in mind. Update them. Faculty whose published work is being used to train or power commercial AI products deserve to know, and deserve a voice in whether that use is acceptable.
Treat AI literacy as institutional risk management. Grammarly's 40 million daily users included a lot of people who had no idea what "expert review" was doing behind the scenes. Helping your campus community understand how these tools actually work isn't just good pedagogy. It's self-defense.
The Feature Is Gone. The AI Accountability Problem Isn’t
Grammarly's CEO, Shishir Mehrotra, apologized on LinkedIn and pulled the feature. He also telegraphed that a rebuilt version is coming — one where experts "choose to participate" and "control their business model." Newton, generously, says the story is over.
For higher education, it isn't. The class-action lawsuit Julia Angwin filed on behalf of hundreds of writers whose identities were appropriated is still alive. The broader question of what it means for AI companies to quietly digest professional expertise, institutional knowledge, and academic output without consent or compensation is nowhere near resolved.
Universities like to think of themselves as knowledge institutions. If you can't account for where your knowledge is going and who is profiting from it, that identity is under pressure.
Grammarly got caught because it put names on things. Most companies won't make that mistake twice.
Sources
The Guardian: https://www.theguardian.com/books/2026/mar/13/grammarly-removes-ai-expert-review-feature-mimicking-writers-after-backlash
Casey Newton, Platformer: "Grammarly turned me into an AI editor against my will and I hate it" (March 9, 2026)
Casey Newton, Platformer: "I have been released from my responsibilities as an unwilling editor for Grammarly" (March 12, 2026)
