British newspaper publisher The Guardian has accused Microsoft of damaging its reputation after an AI-generated poll on the company’s curated news aggregator platform, Microsoft Start, appeared next to an article about the death of a 21-year-old Australian woman.
On Tuesday, The Guardian reported that Lilie James, a water polo coach, had been found dead with serious head injuries at a high school in Sydney, Australia. Next to the article, Microsoft ran a poll labeled “Insights from AI” that asked readers to vote on what they thought was the cause of the woman’s death, offering three options to choose from – murder, accident, or suicide.
AI Poll on Guardian Article About a Woman’s Death Lands Microsoft in Hot Water
Stunned Microsoft Start readers slammed The Guardian and the article’s authors, Tamsin Rose and Nino Bucci, accusing them of downplaying the tragedy and mocking the victim. They mistakenly assumed the poll had been created by the news outlet, and many called for the writers to be sacked.
In a letter to Microsoft president Brad Smith, Guardian Media Group CEO Anna Bateson wrote that the incident was clearly “an inappropriate use” of generative AI by the software company on a “potentially distressing public interest story”.
She pointed out that it was “exactly the sort of instance” publishers had warned about in relation to the use of AI in news aggregation, and said it was a “key reason” she had previously asked Microsoft not to apply its experimental AI features to articles licensed from The Guardian. Microsoft holds a license to publish the outlet’s content on its news aggregator platform.
Microsoft Deactivates AI-Generated Polls in Light of the Incident
When the issue was brought to its attention, Microsoft deactivated AI-generated polls across all news articles. In a statement provided to Axios and The Verge, the company said the “poll should not have appeared alongside an article of this nature” and that it was taking steps to help prevent such errors from “reoccurring in the future”. Microsoft general manager Kit Thambiratnam said an investigation had been launched to determine the cause of the “inappropriate content”.
Bateson urged Microsoft to add a note next to the poll taking “full responsibility for it”. While she credited the tech giant with removing the offensive poll, she said the damage had already been done, harming the reputation of both The Guardian and its journalists.
News Outlets Demand Compensation from Tech Companies for Using Their Content to Train AI
She also asked Microsoft for assurances that it will no longer apply experimental AI technologies “on or alongside” Guardian-licensed journalism without the publisher’s approval. Bateson further accused the company of failing to respond to The Guardian’s request to discuss how it plans to compensate news publishers for using their intellectual property to train its generative AI systems and to deploy the technology within its “wider business ventures”.
The Guardian CEO also asked Microsoft to always show users a disclaimer whenever AI “is involved in creating additional units and features” applied to third-party journalism from reputable publishers such as The Guardian.
Lenore Taylor, editor of Guardian Australia, told Business Insider that the incident was yet another example of how unreliable AI can be and how it can “compound distress” for everyone affected by incidents such as Lilie James’s death.
Microsoft’s Fully Automated News Aggregator Prone to Making Offensive Mistakes
Earlier this year, Microsoft sacked its entire news division for Microsoft Start and replaced it with an AI-driven system for aggregating articles on the platform. The Guardian incident, however, was not the first time generative AI has landed the company in hot water.
In August, Microsoft Start published an offensive travel guide that recommended readers visit the Ottawa Food Bank in Ottawa, Canada, “on an empty stomach”. At the time, Microsoft senior director Jeff Jones denied reports that the story was generated by AI, attributing it instead to a mistake by a “human” content editor on the news team. The article has since been taken down.
The latest controversy shows why relying on automated algorithms, especially in sensitive contexts, poses significant risks, and it underscores the need for proper safety guidelines for AI-generated content.