    Editorial: The Meta factor in this poll


    Election Commission of India (IANS)

The Election Commission of India (ECI) is not the only entity playing a partisan role in this election. A significant advantage is accruing to the ruling party due to the failure of social media platforms to carry out their fair play functions regarding political content, particularly shadow and surrogate advertising. We notice a surge of such content in aid of the BJP, timed to peak on each polling day in this multi-phase election. Not only does such content escape being counted towards the party's election expenditure, this form of advertising also violates election rules, including the one stipulating campaign silence ahead of voting day.

    Several media monitors have reported on the proliferation of doctored content, hate speech, and incitement to violence on these platforms but little has been done to hold the ruling party to account. This not only vitiates the atmosphere of the election but also delegitimizes the verdict that will emerge from it, leaving the losing alliance feeling entitled to contest it, possibly violently or undemocratically.

In the latest of such watchdog reports, two diaspora rights groups have revealed that Facebook's fair play systems did not detect hate-speech ads containing AI-manipulated images and allowed such advertising to appear during the campaign silence period. To test how stringent Facebook's hate-speech detection systems were, researchers of India Civil Watch International (ICWI) and Eko created 22 ads mimicking real hate speech and disinformation prevalent in India and submitted them for publication. No fewer than 14 of the ads, containing slurs against Muslims, Hindu supremacist language and incitement to violence, evaded Facebook's radar and were approved within 24 hours. One of the approved ads called for the execution of an opposition leader depicted standing next to a Pakistani flag and issuing the exhortation to "erase Hindus from India". While one ad that featured misinformation about Prime Minister Narendra Modi was detected and rejected, all the ads that passed muster were those that targeted Muslims.

    Remarkably, the AI-generated images accompanying each of these test ads went unlabelled, making a mockery of Facebook’s public pledge to prevent manipulated content being spread on its platform during the Indian election. These images, showing the burning of an EVM and the torching of places of worship, were created by Eko researchers using easily available tools such as Midjourney and Dall-e.

As an earlier report by Eko, titled 'Slander, Lies and Incitement: India's Million Dollar Meme Network', found, Facebook makes quite a meal of the Indian election. In just the first three months of this year, advertisers in India spent Rs 40 crore on platforms owned by Facebook's parent company, Meta. The top 100 ad buyers accounting for 75 per cent of this expenditure included 22 shadow pages that featured advertising for the BJP, particularly Narendra Modi. The disclosure details provided by these shadow players were laughable to say the least and nonchalantly violated Facebook's policies on account integrity, hate speech and bullying.

Clearly, the situation calls for express action by both Meta and the ECI. The most immediate need is to apply the campaign silence rule to social media just as it is applied to old media. There is clearly a responsibility upon social media companies to screen for shadow advertising and vet where their advertising revenues are coming from. Those without kosher credentials should be banned from the platforms altogether. Most importantly, it is time social media algorithms were opened for audit by civil society and autonomous national authorities.
