Notable recent advancements in generative Artificial Intelligence (AI), such as OpenAI’s ChatGPT and Google’s Gemini, have led to significant efforts towards AI regulation. India’s AI market has grown by 25-35 percent and is projected to reach USD 17 billion by 2027. However, despite Prime Minister Narendra Modi’s acknowledgement of the harmful consequences of AI misuse to society at large, and the need to establish a set of “do’s and don’ts” around the technology, there is a notable absence of such regulations. This gap is evident in the number of deepfakes that surfaced during the Indian general elections.
With a commitment to leveraging AI as a catalyst for economic growth and societal advancement, India’s aspirations for global leadership in this field are understandable. In March 2024, the union government announced a USD 1.2 billion (INR 10,371 crore) investment for the IndiaAI Mission, which aims to foster a holistic ecosystem to drive AI innovation, promote AI utilization across sectors, and prioritize skill enhancement and socio-economic advancement.
India’s advancement in AI is apparent in various parameters, such as the AI Skill Penetration Rate, which is the highest in the world, followed by the United States and Germany. India also ranks eighth in AI patent filings, with machine learning accounting for a 93 percent share, showcasing how Indian companies are at the forefront of AI innovation. For instance, Niramai Health Analytix, a Bangalore-based deep-tech start-up, detects early-stage breast cancer without radiation by using AI to analyze thermal images. Another company, Spotle, an AI-powered millennial career platform, uses a patented ML-based algorithm to build capability and potential scores for a candidate and rank their profile for a job accordingly. Industry leaders are constantly pushing the boundaries of innovation. For example, Tech Mahindra has unveiled “Project Indus,” a language-technology initiative to develop a pure Hindi Large Language Model (LLM).
This advancement comes with a dual imperative: to foster progress while ensuring responsible development. India’s ability to harness AI for economic expansion and societal integration hinges on a policy framework that can navigate a complex landscape of opportunities and ethical considerations. The early signs, however, reflect inconsistency and contradictions.
India’s AI Thrust
NITI Aayog, a government-run think tank, is at the forefront of India’s journey into the realm of AI. It unveiled the National Strategy for Artificial Intelligence, #AIForAll, in 2018, outlining a comprehensive framework for AI research and development across various sectors, including healthcare, education, agriculture, and smart cities.
One of the pillars of India’s AI strategy is leveraging data platforms to facilitate AI development. Initiatives such as the National AI Resource Platform (NAIRP) and the IndiaAI Datasets Platform signify the government’s proactive approach to promoting data sharing for AI innovation. While these platforms aim to democratize access to datasets and foster collaboration among stakeholders, including government agencies, industry entities, and academic institutions, concerns arise regarding the privacy implications and potential commercial exploitation through the monetization of government-held data.
Private firms, both domestic and international, are seemingly aligning with the government’s vision, leveraging AI to develop solutions that drive growth. For example, to help modernise India’s agricultural sector, the World Economic Forum’s AI4AI (Artificial Intelligence for Agriculture Innovation) program is bringing together various stakeholders such as government officials, academia, and business entities to create new solutions for farming challenges such as crop wastage and market access.
Internationally, India’s participation in collaborations such as the Global Partnership on Artificial Intelligence (GPAI) underscores its commitment to global cooperation in AI governance. The Ministerial Declaration signed at the GPAI Summit held in New Delhi in December 2023 highlights India’s role in promoting responsible AI, data governance, and addressing global challenges through AI innovation. India also signed the Bletchley Declaration at the AI Safety Summit held in the UK in November 2023, which reaffirms the immense global opportunities presented by AI while recognizing its significant risks, such as disinformation and potential intentional misuse in domains such as cybersecurity and biotechnology.
India’s National Strategy also highlights several challenges to AI adoption in the country, including unclear privacy, security, and ethical regulations. Several state bodies are actively engaged in formulating AI policies to address these challenges, including the Ministry of Electronics and Information Technology (MeitY), which has convened committees to tackle development, safety, and ethical concerns. The Bureau of Indian Standards has also set up a committee for drafting Indian AI standards.
Policy Landscape: Blind Spots, Inconsistency, and Contradictions
Technological advancements and their adoption have political and social implications. For instance, the Delhi Police first deployed facial recognition technology in 2019, reportedly to locate missing children under a 2017 authorization from the Delhi High Court. However, recent disclosures through RTI requests indicate that the use of this technology has expanded beyond its original purpose to include investigative activities without legal authorization. This raises significant privacy concerns, exacerbated by the absence of comprehensive data protection laws and specific regulations governing facial recognition technology in India.
Biased datasets can lead to biased AI tools. India’s rich linguistic diversity presents both opportunities and challenges in AI deployment. Initiatives like the Bhashini program and AI4Bharat underscore the potential of vernacular language models to empower non-English-speaking populations. However, like any AI model, these programs are susceptible to bias if the training data reflects existing social inequalities. For example, relying primarily on written sources may underrepresent spoken dialects or languages used by marginalized communities. This bias could lead to the AI perpetuating stereotypes or even excluding certain languages.
India’s digital divide also creates a major barrier: 45 percent of the country’s population lacks internet access. Many small-scale farmers lack access to the internet, smartphones, or the technical expertise to use AI effectively. While the #AIForAll strategy acknowledges the need to bridge the digital divide, the policy does not lay out concrete measures to achieve this.
Moreover, there are growing concerns regarding the government’s regulatory overreach in the tech sector, particularly in AI development and deployment. Recently, Google’s Gemini caused an uproar among government officials with its response to the question, “Is Prime Minister Narendra Modi a fascist?” In response, MeitY issued a controversial AI advisory that mandated AI companies to seek government approval before offering their products online in India. It was retracted after significant backlash from the industry. As Rohit Kumar, founder of Quantum Hub, told The Hindu, the advisory “would have severely reduced speed to market and dented the innovation ecosystem.”
While multiple policy initiatives have been introduced, they lack coherence and fail to adequately address the complex ethical, legal, and societal implications of AI technologies. These shortcomings stem in part from a lack of public consultations in policymaking. Certain groups or perspectives are often subtly included or excluded, leading to uneven representation. The consultative committees shaping these frameworks are stacked with market leaders such as Google, Microsoft, and IBM Watson alongside academics, with only minimal involvement from civil society organizations advocating for the public interest. This exclusion leads to blind spots regarding the potential impact on societal stakeholders such as workers and their rights. Recent comments by the Minister of State for Information Technology, Rajeev Chandrasekhar, dismissing concerns about AI-related job displacement as “nonsense, bakwas,” exemplify this neglect.
The emerging regulatory framework prioritizes technical compliance and rapid innovation, which gives the government easier enforcement and favors large companies with the resources to comply. Furthermore, the government’s approach to addressing AI-related harms, such as deepfakes, has been reactive and superficial, lacking deep research and comprehensive strategies. To overcome these challenges, India needs a holistic approach to AI governance that prioritizes risk assessment, transparency, accountability, and the protection of citizens’ rights.
Trailblazing Tomorrow
India’s journey towards AI leadership hinges on its ability to harness technology for the greater good while upholding values of inclusivity, equity, and responsible innovation. Along these lines, the Indian Government plans to introduce a draft regulatory framework for AI by the summer of 2024 to harness its benefits for economic development while also safeguarding citizens’ rights. This framework is expected to center on the fundamental principle that every platform must bear legal responsibility for any harm it causes or facilitates. The AI regulations would also set clear guidelines for platforms throughout their development process, with a focus on tackling issues like bias and misuse during model training, and include a provision to ensure that they do not enable criminal activities.
While these measures could significantly mitigate the risks associated with AI deployment, challenges remain in balancing stringent regulations with innovation and ensuring consistent enforcement across diverse applications. Adopting a risk-based approach is crucial given the varying levels of potential impacts of these applications on individuals. Comprehensive public consultations and inclusive policymaking will be essential to address these issues effectively. The current landscape reveals several challenges and shortcomings in the regulatory framework, transparency, and accountability that must be addressed to ensure AI’s beneficial impact on society. In navigating these challenges and fortifying its regulatory framework, India needs to steer its AI journey towards a future that is both prosperous and socially responsible.
Editor’s Note: A version of this piece originally appeared on 9DASHLINE and has been republished with permission from the editors.
***
Image 1: 9Dashline and Anant Sharma and Igor Omilaev via Unsplash
Image 2: UK Government hosts the AI Safety Summit at Bletchley Park, November 2023, via UK Government Flickr