Two starkly different regulatory approaches with different outcomes, yet both clearing a way forward.
Shortly after the EU passed the AI Act, which bans emotion recognition applications built on hitherto extractive uses of Facial Recognition Technologies (FRT), OpenAI released GPT-4o, a multimodal virtual assistant that, among a host of seemingly intelligent things, also purports to infer your emotional state from your facial expressions. The legitimacy of FRT-based emotion recognition aside, what could this mean legally? Could GPT-4o itself be banned in the EU, or could its use cases for predictive analysis, such as consumer behaviour insights, sales optimisation, or even disease diagnosis, face prohibition? Or could the regulation, despite all its rigour, deliberation and intent to safeguard against invasive technologies, turn into an impediment to technological innovation in Europe? More importantly, what do these emerging global regulations mean for the Indian context?
The accelerated pace of AI development has been a stark reminder of the contrasting tardiness of institutions of civil welfare and governance in assessing risks and erecting appropriate guardrails in time. Europe, as of today, has the most advanced tech policies proposed to rein in unfettered tech expansion and exploitation and the negative externalities thereof. Developed European economies appear to agree on the need to factor in the externalities of narrow, unidimensional growth mechanisms. The US, on the other hand, liberally championed tech innovation and startups through the late 1990s and early 2000s, leading to the Silicon Valley boom. Over the last decade or so, however, the negative externalities and resulting social costs began taking a toll on its people and democracy. Since 2018, this has led to post facto calls holding tech leaders accountable for consequences, at least some of which could have been prevented through Responsible Innovation (RI) frameworks. Ex-post regulatory patches in the wake of prolonged media pressure and public outcry, like President Joe Biden's Executive Order on Safe, Secure, and Trustworthy AI in October 2023, or the more recent AI Policy Roadmap, are considered significant steps, yet don't do enough.
The Industrial Revolution transformed Europe from agrarian societies into industrial powerhouses. Yet in technological innovation today, Europe lags behind the US and Asia. Lessons from the Industrial Revolution in labour exploitation, discrimination, widening wealth inequality, fossil fuel dependency, and environmental damage may well be informing Europe's approach to the AI movement. Even as the US models the Industry 4.0 framework for the rest of the world to accelerate GDP growth, Europe is paving the way for Industry 5.0, which recalibrates industries' goals to the triple bottom line of people, planet and profit. By paying heed to some of the most pressing issues facing humanity today and ensuring its policies weigh in on them, Europe is taking a multistakeholder, circular, non-zero-sum approach that could either set global benchmarks or isolate it over time.
Notably, when Google rolled out a raw Bard in about 180 countries in May 2023, it excluded Europe, prompting speculation that Google wanted to steer clear of the regulatory barbed wire of the then-forming EU AI Act. One way to look at it is that EU policies are possibly the only vanguard against Big Tech's rapid releases of potent yet under-prepared AI technologies into the public sphere. The result could well be that the EU is left out of the frenzied AI arms race, and homegrown innovation finds itself restricted. Or, by the time the AI chips settle, we could see more responsible innovation coming out of Europe, much as regulation on fossil fuels spurred innovation in electric vehicles.
Both the US's and Europe's approaches pose unique opportunities and challenges. For a much larger and more complex context such as India, which is now rolling out new laws for data privacy, competition, and all things digital, there are clear advantages in seeing what has already been done and what could be avoided. The EU's risk-based categorisation and stringent requirements for high-risk applications underscore the importance of safeguarding fundamental rights and ensuring transparency and accountability. With better clarity now on the potential long-term social harms, newer regulation could benefit from the EU's method of mandating responsible development and deployment of potent technologies while documenting clear definitions of harm. The risk-based approach sets clear guidelines for responsible downstream adoption without necessitating rules for every use case. Risk assessments and AI governance mechanisms prevent privacy overreach, bias, discrimination and potential manipulation. This could be cost-intensive on the books to start with, but as with earlier regulation of automobiles, junk food or broadcast media, it is known to save business and society far greater costs later on.
India, gunning for industrial and economic growth, is more aligned with the US in that respect, so the window of flexibility that US policies offer innovation holds far more appeal. Rapid advancements yield significant economic returns in the immediate future that serve India's growth goals. With more than half of India's population still offline, the digital divide is stark. Even among those who use the internet, digital and information literacy and skills remain sparse, limiting access to, and equitable distribution of, economic opportunity, information, healthcare and education. On the other hand, by breaking new ground with UPI, Indian-language LLMs, and AI applications for Indian use cases, India is demonstrating new standards in innovation to the world. There is no question that India stands to benefit more from enabling the widespread adoption of existing and emerging technologies.
With potent technologies like AI, however, the socio-economic risks are equally higher in India. Recognising that increased capabilities uncover a new class of risks and responsibilities is pertinent while charting the way forward for new technologies in India. Balancing oversight and governance of AI with innovation and enterprise is therefore as non-negotiable as it is a tightrope walk for India. The key lesson from the US here is to act in time and avoid the scramble the US found itself in with social media regulation; and, more importantly, to closely examine the growth incentives that often lead to unintended outcomes. By avoiding the mistakes of being either too lax, as seen in the early stages of US tech policy, or too restrictive, as sometimes critiqued in EU regulation, India can strive for a middle path.
We are well past the information-problem stage of the Collingridge Dilemma: there is sufficient knowledge and understanding of the risks and harms now. Acting on robust AI governance before we outgrow the Dilemma's control stage is the task at hand.
Nidhi Sudhan, Co-founder of Citizen Digital Foundation, is listed among the ‘100 Brilliant Women in AI Ethics™ 2024’.