Artificial intelligence is better off with participatory approach: IIT-M study

Their study sought to establish the need and importance of a participatory approach to AI governance while grounding it in real-world use cases, through an interdisciplinary collaboration.

Author: DTNEXT Bureau
Update: 2024-11-07 09:46 GMT


CHENNAI: A study published by IIT-Madras researchers and the Vidhi Centre for Legal Policy, Delhi, has called for participatory approaches in the development and governance of artificial intelligence (AI) in India and abroad.

The researchers identified the primary reasons why a participatory approach to AI development can improve algorithmic outcomes as well as enhance the fairness of the process.

"As operations in multiple domains get increasingly automated through AI, the various choices and decisions that go into their setup and execution can get transformed, become opaque, and obfuscate accountability. This model highlights the importance of involving relevant stakeholders in shaping the design, implementation, and oversight of AI systems," the study noted.

Highlighting this need, Professor B Ravindran, Head of the Wadhwani School of Data Science and Artificial Intelligence (WSAI), IIT-M, said, "This study found that persons who will be impacted by the deployment of these systems have little to no say in how they are developed. Seeing this as a major gap, this research study advances the premise that a participatory approach is beneficial to building and using more responsible, safe, and human-centric AI systems.

"The recommendations from this study are crucial for addressing several pressing issues in AI development. By ensuring that diverse communities are included in AI development, we can create systems that better serve everyone, particularly those who have been historically underrepresented. Increasing transparency and accountability in AI systems fosters public trust, making it easier for these technologies to gain widespread acceptance. Further, by involving a wide range of stakeholders, we can reduce risks like bias, privacy violations, and lack of explainability, making AI systems safer and more reliable," he noted.

The study's recommendations for implementing participatory AI include adopting a participatory approach to AI governance, establishing clear mechanisms for stakeholder identification, developing effective methods for collating and translating stakeholder input, addressing ethical considerations throughout the AI lifecycle, and prioritising human oversight and control.

The findings of the study were published as pre-print papers on arXiv, an open-access archive.

The papers can be viewed at the following links: https://arxiv.org/abs/2407.13100 and https://arxiv.org/abs/2407.13103
