Women in AI: Urvashi Aneja examines the social impact of AI in India

To give AI-focused women academics and others their well-deserved (and overdue) time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who have contributed to the AI revolution. We will publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Urvashi Aneja is the founding director of the Digital Futures Lab, an interdisciplinary research effort that seeks to examine the interface between technology and society in the Global South. She is also an Associate Fellow of the Asia-Pacific Program at Chatham House, an independent policy institute based in London.

Aneja’s current research focuses on the societal impact of algorithmic decision-making systems in India, where she is based, and on platform governance. She recently authored a study on the current uses of AI in India, reviewing use cases across sectors including policing and agriculture.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

I began my career working in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. I quickly learned that there is a fine line between innovation and experimentation, particularly when dealing with vulnerable populations. The lessons from that experience left me deeply concerned about the tech narratives around the potential of digital technologies, especially artificial intelligence. At the same time, India launched its Digital India mission and its National Strategy for Artificial Intelligence. I was troubled by the dominant narrative that AI would be a panacea for India’s complex socio-economic problems, and by the complete absence of critical discourse around the issue.

What work are you most proud of (in the field of artificial intelligence)?

I’m proud that we’ve been able to draw attention to the political economy of AI production, as well as its broader implications for social justice, labor relations, and environmental sustainability. Narratives about AI too often focus on the gains of specific applications and, at best, on the benefits and risks of that application. But this misses the forest for the trees: a product-oriented lens obscures broader structural impacts, such as AI’s contribution to epistemic injustice, the deskilling of labor, and the perpetuation of unaccountable power in the majority world. I’m also proud that we’ve been able to translate these concerns into concrete policy and regulation, whether by designing procurement guidelines for the use of AI in the public sector or by providing evidence in legal proceedings against big tech companies in the Global South.

How do you overcome the challenges of the male-dominated technology industry and, by extension, the male-dominated AI industry?

By letting my work do the talking. And by constantly asking: why?

What advice would you give to women who want to enter the field of artificial intelligence?

Develop your knowledge and expertise. Make sure your technical understanding of the issues is sound, but don’t focus narrowly on AI alone. Instead, study broadly so you can draw connections across fields and disciplines. Too few people understand AI as a socio-technical system that is a product of history and culture.

What are some of the most pressing issues facing AI as it develops?

I think the most pressing issue is the concentration of power in the hands of a few technology companies. While not new, this problem is exacerbated by new developments in large language models and generative AI. Many of these companies are now stoking fears about the existential risks of AI. Not only does this distract from existing harms, it also positions these companies as indispensable to addressing AI-related harms. In many ways, we are losing some of the “techlash” momentum that emerged in the wake of the Cambridge Analytica scandal.

In places like India, I also worry that AI is being positioned as necessary for socio-economic development, presenting an opportunity to leapfrog persistent challenges. Not only does this overstate AI’s potential, it also ignores the fact that it is not possible to leapfrog the institutional development needed to build safeguards. Another issue that we are not taking seriously enough is the environmental impact of AI, as the current trajectory is likely unsustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to be the beneficiaries of AI innovation.

What are some issues that AI users should be aware of?

Users should realize that AI is not magic, nor anything close to human intelligence. It is a form of computational statistics that has many useful applications, but is ultimately just a probabilistic guess based on historical or past patterns. I’m sure there are many other issues users need to be aware of as well, but I want to caution against attempts to shift responsibility onto users. I am seeing this recently with the use of generative AI tools in low-resource contexts in much of the world: rather than urging caution about these experimental and unreliable technologies, the focus often shifts to how end users, such as farmers or frontline health workers, need to upskill.

What is the best way to build AI responsibly?

It has to start with assessing the need for AI in the first place: is there a problem that AI can uniquely solve, or are other means possible? If we are going to build AI, is a complex black-box model necessary, or would a simpler logic-based model do just as well? We also need to re-center domain knowledge in the building of AI. In the obsession with big data, we have sacrificed theory: we need to build a theory of change based on domain knowledge, and this should be the basis of the models we build, not big data alone. This is, of course, in addition to key issues such as participation, inclusive teams, workers’ rights, and so on.

How can investors better push for responsible AI?

Investors need to consider the entire lifecycle of AI production, not just the outputs or outcomes of AI applications. This requires looking at a range of issues, such as whether labor is fairly valued, the environmental impacts, the company’s business model (i.e., is it built on commercial surveillance?), and the company’s internal accountability measures. Investors also need to demand better and more rigorous evidence about the supposed benefits of AI.
