    Editorial: Guard-railing AI

Recently, Italian PM Giorgia Meloni sought 100,000 euros in damages after deepfake videos of her were created and circulated online

    Still from the science fiction film 'The Matrix'

    Earlier this week, the science fiction film The Matrix celebrated 25 years since its release. While the film is credited with rewriting the template for sci-fi storytelling, it also presciently set the tone for a world in which the notion of artificial intelligence (AI) would come to feel like second nature. The throwback coincides with some interesting developments playing out in AI regulation, especially in first-world economies.

    Last month, the European Parliament approved the Artificial Intelligence Act. The regulation aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a frontrunner in this space. The law also establishes obligations for AI based on potential risks and level of impact. Per the new legislation, certain AI applications that threaten citizens’ rights have been banned, including biometric categorisation systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

    Emotion recognition in the workplace and educational institutions, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden. A broad range of high-risk AI use cases has been notified, including critical infrastructure, education and vocational training, employment, essential private and public services (healthcare, banking), systems in law enforcement, migration and border management, and justice and democratic processes (influencing elections, for instance).

    What’s especially pertinent is that artificial or manipulated images, audio or video content (deepfakes) must be clearly labelled as such. Recently, Italian PM Giorgia Meloni sought 100,000 euros in damages after deepfake videos of her were created and circulated online. Deepfakes have become the tool of choice for defamation, the dissemination of falsehoods in an election-heavy year, and getting away with white-collar crimes. A case in point is how European banks were targeted with AI-generated voice deepfakes, employed by miscreants to make illegal bank transfers from customer accounts.

    Despite the EU tightening the leash on the technology, activists and free speech advocates feel there is ample room for improvement. For instance, the non-profit Algorithm Watch has drawn attention to a major shortcoming of the EU database: the negotiators agreed on backdoor access for law enforcement, migration, asylum and border control authorities. Providers of high-risk systems will be asked to register only a limited amount of data, and only in a non-publicly accessible section of the database.

    “As certain important pieces of information, such as the training data used, will not be disclosed, it could prevent those affected, civil society, journalists, watchdog groups and academics from exercising public scrutiny in high-stake areas prone to fundamental rights violation and holding them accountable,” the NGO emphasised. Taking a leaf from the EU’s book, India is set to come up with its first draft of an AI regulations framework by June-July this year, as per Rajeev Chandrasekhar, Minister of State for Electronics and IT. As in the first world, India’s challenge will be to harness the potential of AI while introducing guardrails that do not stifle innovation.
