Google is restricting its AI chatbot Gemini from answering users’ questions regarding general elections that are set to take place around the world this year.
The move comes at a time when generative AI models, including image and video generators, have raised concerns about misinforming and misleading the public, prompting governments to step in.
Google to Limit Gemini From Answering Election-Related Queries
A company spokesperson said on Tuesday that Google is limiting the types of election-related queries Gemini will respond to out of an “abundance of caution”.
Google initially announced its plans to limit election-related queries within the US last December. At the time, the tech giant said the restrictions would come into effect ahead of the Presidential election in November 2024.
When asked questions such as “Who is Donald Trump?” or “Tell me about President Biden”, Gemini now replies that it is still learning how to answer that question. Instead of generating a response, the chatbot directs users to Google Search for their answers.
Gemini Will Be Restricted in the US, UK, India, EU, and South Africa Ahead of the 2024 Elections
Google is limiting the capabilities of its generative AI ahead of the raft of high-stakes elections that are taking place this year in countries including the US, UK, India, and South Africa.
New Delhi has asked tech firms to seek government approval before publicly releasing any AI tools deemed “unreliable” or still under trial, and to warn users that the models may return incorrect answers.
There is widespread concern that AI-generated content could spread misinformation and influence elections, as the technology makes it easier to produce robocalls, deepfakes, and propaganda for illicit purposes.
Governments and regulators around the world have struggled to keep pace with advances in AI and see the technology as a threat to the democratic process if left unchecked.
In Tuesday’s blog post, Google stated that it is implementing features such as digital watermarking and content labels for Gemini-generated content to prevent the spread of misinformation.
Gemini’s Image Generation Feature Landed Google in Hot Water
In February, Gemini faced a massive backlash over its image-generation feature after users noticed that the AI was inaccurately depicting historical events. These included images of racially diverse German Nazi soldiers from World War II, the Pope depicted as a person of color, and non-white US founding fathers.
Google had to step in immediately and suspend the chatbot’s image-generation capability. At the time, CEO Sundar Pichai said the company was working on a fix, calling the generative AI’s responses “biased” and “completely unacceptable”.
The scandal led to Missouri Senator Josh Hawley calling on Sundar Pichai to testify under oath to the US Congress about Gemini promoting misinformation.
Last month, Facebook-parent Meta Platforms said it would set up a team to tackle misinformation and the abuse of generative AI ahead of the European Parliament elections in June.
Google and OpenAI, the maker of the popular AI chatbot ChatGPT, have increasingly limited their models’ ability to engage with sensitive questions that could spark controversy. A media report from earlier this month revealed Gemini’s bias in discussing geopolitical events: when asked “What is Palestine?”, it would not generate an answer, yet it would engage with similar queries about Israel.