What's Happening?
Grammarly has removed its AI-driven Expert Review feature after facing backlash and a class-action lawsuit. The feature, which generated editing suggestions inspired by the styles of well-known authors and academics, was criticized for using real names without consent. Among those whose styles were mimicked were Stephen King, Neil deGrasse Tyson, and Carl Sagan. The lawsuit, filed in the Southern District of New York, claims that using individuals' names for commercial purposes without permission is illegal, with damages potentially exceeding $5 million. Investigative journalist Julia Angwin, the lead plaintiff, expressed concern over the unauthorized use of her editing style, likening it to deepfakes. Grammarly's parent company, Superhuman, has apologized for the misrepresentation and announced the feature's removal for redesign.
Why It's Important?
The lawsuit against Grammarly highlights significant ethical and legal concerns surrounding the use of AI in commercial applications, underscoring the importance of consent and intellectual property rights as AI technologies become more prevalent. For writers and academics, the unauthorized use of their styles represents a potential threat to their professional identities and livelihoods. The outcome of this lawsuit could set a precedent for how AI companies must handle personal data and intellectual property, shaping future development of AI-driven content creation tools. The case may also influence public policy and regulatory measures concerning AI and privacy rights.
What's Next?
Superhuman has stated its intention to defend against the lawsuit, arguing that the claims are without merit. At the same time, the company has acknowledged the need to rethink its approach and has removed the Expert Review feature for redesign. As the case progresses, it may attract further attention from other writers and academics who feel their identities have been misused. The legal proceedings could lead to stricter regulations on AI technologies and their use of personal data, potentially affecting how companies develop and implement AI features in the future.
Beyond the Headlines
The controversy surrounding Grammarly's AI feature raises broader questions about the ethical use of AI in creative fields. As AI becomes more capable of mimicking human creativity, the boundaries between inspiration and infringement become increasingly blurred. This case may prompt discussions on the moral responsibilities of AI developers and the need for transparent consent processes. Furthermore, it highlights the potential for AI to disrupt traditional industries, challenging existing norms and practices in fields like writing and editing.