Editorial: More human than human
The 2013 film Her, directed by Hollywood wunderkind Spike Jonze, features a heartbreaking scene in which the protagonist Theodore discovers that his love interest, an AI-powered operating system named Samantha, has been cheating on him, virtually speaking. During the conversation, Theodore asks Samantha (voiced by the actor Scarlett Johansson) how many people and OSs she has been speaking to while engaging with him; Samantha replies, “8,316”. When probed further on whether she is in love with anyone else, the operating system musters the courage to tell Theodore: “641.”
We might not know whether the thought of providing every ChatGPT user with his or her own version of Samantha, christened Sky in the real world, crossed the mind of OpenAI CEO Sam Altman. But we are aware of the fallout of deploying a virtual assistant whose voice resembles Johansson’s. Last month, OpenAI said it planned to halt the use of one of its ChatGPT voices, which the Hollywood actor said sounded “eerily similar” to her own. The company said it was “working to pause” Sky, one of five voices ChatGPT users can opt to speak with.
Johansson had issued a statement saying Altman approached her last year, asking if she would be comfortable lending her voice to the system. Altman pitched the idea saying it would be “comforting to people” not at ease with the technology. The actor nevertheless declined the offer. A social media maelstrom descended upon the company after netizens speculated about how OpenAI had selected lifelike audio options for its flagship AI chatbot, particularly Sky. Some who previewed the model’s demos last month found several of the interactions striking a flirtatious tone. One video posted by OpenAI features a female-voiced ChatGPT complimenting a company worker for “rocking an OpenAI hoodie.” Another shows the chatbot saying, “Oh stop it, you’re making me blush” after being told that it is amazing.
The episode has reignited a discourse on the gendered manner in which tech enterprises have developed and deployed voice assistants for decades. In 2019, a report released by UNESCO, the UN’s culture and science organisation, raised concerns about what it described as the “hardwired subservience” built into default female-voiced assistants operated by Apple, Amazon, Google and Microsoft. The report was titled ‘I’d Blush If I Could’, a reference to how Apple’s Siri responded to sexist insults hurled by users.
The study warned that it becomes a problem when millions grow accustomed to commanding female-voiced assistants that are “servile, obedient and unfailingly polite,” even when confronted with harassment from humans. The agency recommended that tech companies refrain from making digital assistants female by default and programme them to discourage gender-based insults and abusive language. These developments have found echoes in India as well, albeit in a slightly different manner.
Last year, actor Anil Kapoor won a major victory in a New Delhi court over the unauthorised use of his likeness via AI programmes. Kapoor secured an interim order against 16 defendants, with the court directing that they be restrained from using the actor’s name, likeness, image, voice or any other aspect of his persona to create merchandise, for monetary gain or otherwise. The prescient warnings about identity theft perpetrated through AI are no longer a conspiracy theory or a doomsday prophecy. We are living in a scenario where Moore’s Law has just been strapped to a jetpack called AI.