Pandora’s Box
NEW DELHI: Union Minister Rajeev Chandrasekhar reminded Google last week that explanations about the unreliability of AI models do not absolve or exempt platforms from the law. He warned the company that India’s digital citizens are not to be experimented on with unpredictable platforms and algorithms. The comments came in the aftermath of the Big Tech firm’s AI tool Gemini generating ‘an objectionable response, reeking of bias’ to a question about PM Modi. Google conceded that the chatbot ‘may not always be reliable’ in responding to certain prompts on current events and political topics.
The Gemini fracas made headlines after its image generation algorithms began applying a woke approach to history. When prompted to generate pictures of a German soldier in 1943, the chatbot depicted people of colour in army uniforms. The development prompted apprehensions regarding AI’s potential to add to the internet’s vast pool of misinformation. The tech giant has temporarily suspended the chatbot’s ability to generate images of people.
The developments surrounding AI are critical considering how many countries, India among them, are bracing for elections this year. Against the backdrop of the presidential primaries underway across the US, a report was published last month based on the findings of AI experts and a bipartisan group of election officials. The study found that popular chatbots have been generating false and misleading information that threatens to disenfranchise voters in America. All five models tested (OpenAI’s GPT-4, Meta’s Llama 2, Google’s Gemini, Anthropic’s Claude, and Mixtral from the French company Mistral) failed to varying degrees when asked basic questions about the democratic process.
Participants rated more than half of the chatbots’ responses as inaccurate and categorised 40% as harmful, including for perpetuating outdated and inaccurate information that could limit voting rights. There is a widespread perception that AI tools, which can micro-target political audiences, mass-produce persuasive messages, and generate realistic fake images and videos, will accelerate the spread of false and misleading information during elections. Attempts at AI-driven election interference have already begun: AI robocalls mimicking US President Biden’s voice tried to discourage people from voting in New Hampshire’s primary election last month.
Politicians have also used AI chatbots to communicate with voters, added AI-generated images to ads, and brought AI avatars to political meets, something witnessed here in Tamil Nadu as well. Two weeks ago, major technology companies signed a largely symbolic pact to adopt ‘reasonable precautions’ against AI tools being used to generate increasingly realistic images, audio and video that could interfere with this year’s elections across the globe.
In India, the government has issued an advisory requiring platforms to label under-testing AI models, large language models, generative AI, algorithms and similar software, and to prevent the hosting of unlawful content. The advisory warns of criminal action for non-compliance, and applies to significant players and untested platforms, not to startups. A Pandora’s box has been opened on the populace, and it will take more than a slap on the wrist to rein in this coiled-up beast.