Social Sector Hotline
October 21, 2021
Can States delegate responsibilities to Artificial Intelligence?
- No universally accepted definition of Artificial Intelligence ("AI") exists despite the advancements and growth in its ambit
- Obligations on States under International Human Rights Law must be respected while using AI technologies in State functions
- Deployment of AI in State functions may pose challenges to: (a) the Right to Life; (b) the Right to Privacy; and (c) the Right against Discrimination
- A regulatory framework is needed for the use of AI in law-and-order governance
Introduction
Recently, the White House’s Office of Science and Technology Policy expressed the need for a new “bill of rights” to guard against powerful and uncharted uses of artificial intelligence technology in day-to-day life. Concerns about the misuse of this technology, which can potentially infringe upon the basic rights of individuals, were raised by Mr. Eric Lander, chief science advisor to President Joe Biden.1
Artificial Intelligence (AI) is a machine’s capacity to duplicate or replicate intelligent human behavior. It is an umbrella term that encompasses multiple technologies, including machine learning, neural computing, deep learning, computer vision, natural language processing (NLP), machine reasoning, and strong AI.2 However, there is no universally accepted definition of AI, and with growing technology and advanced data science its ambit is ever increasing. The use of AI in day-to-day life is not a new phenomenon, but its active use by State Governments for maintaining law and order in society comes with its own challenges. Regulating the use of AI in a State’s internal governance functions becomes imperative, as the powerful technologies created by AI pose many ethical and legal challenges. Despite the widespread use of AI technology, there is no regulatory framework prescribing the method, extent, and conditions of its use in governing law-and-order situations.
Challenges posed by AI
Right to Life
There is a positive obligation placed upon States to protect the ‘right to life’ of their citizens. International Human Rights Law (IHRL), which governs the obligations of States towards citizens and other individuals within their jurisdiction, imposes positive duties on Governments to protect individuals from human rights violations and against infringement of the ‘right to life’.3
Deprivation of life is only permitted if it happens within a legal framework, keeping in mind the principles of necessity, proportionality, and legality.4 The right to life is the grundnorm of all evolved legal systems. It encompasses not mere animal existence but a right to a dignified life.5 If machines are given the power to take policing decisions on their own, they are most likely to do so on the basis of automated processes. For such decision-making, data would be collected, stored, analysed, and used through algorithms, and AI’s decisions would rely upon software that predicts the likelihood of a given scenario. AI cannot be expected to understand the complexities of societal structural problems, as it lacks the basic elements of empathy, pain, guilt, feeling, emotion, love, and care that are exclusive to human beings. It is therefore reasonable to doubt whether a machine would be able to assess the necessity, proportionality, and legality of any action. Moreover, the State cannot delegate its obligation to protect the life and dignity of individuals to a machine, no matter how advanced it may be.
Right to Privacy
The Right to Privacy is protected under IHRL. The International Covenant on Civil and Political Rights (ICCPR), a multilateral treaty adopted by United Nations General Assembly Resolution 2200A (XXI) on 16 December 1966,6 prohibits ‘arbitrary or unlawful interference with his privacy’ under Article 17, thereby obligating States parties to avoid unwanted interference with the privacy of an individual.7 Constant surveillance by the State would subject its citizens to continuous monitoring, carried out in effect by collecting data on every activity of an individual. This data would be analysed by a set of recognised algorithms (which may or may not be accurate for every situation) that would lay out a resolution plan, and this is likely to result in a ‘one-size-fits-all’ solution to certain complex societal problems. Such constant monitoring of individual activities would be a serious interference with the liberty of the individual, and there must therefore be safeguards against such harmful uses of AI. Recently, in India, the Right to Privacy was recognised as a fundamental right,8 meaning that this right is placed on an equal pedestal with the ‘right to life’ and the ‘right against discrimination’. The protection of privacy has thus become a duty that States must perform diligently.
Right against Discrimination
The Council of Europe recommendation defines profiling as ‘an automatic data processing technique that consists of applying a “profile” to an individual, particularly in order to take decisions concerning her or him or for analysing or predicting her or his personal preferences, behaviours and attitudes.’9 Constant surveillance and collection of data by the State with the use of AI can amount to profiling, and profiling of personal data can result in infringement of the right against discrimination. The possibility of algorithm-based decision-making being biased against a particular colour, caste, or gender cannot be entirely denied. States with legal systems that prohibit discrimination would then bear the responsibility of preventing any such discrimination, and that responsibility cannot be delegated to machinery run by AI. The possibility of tampering with AI also cannot be ignored: if such sophisticated systems, or the information gathered by them, end up being misused, the repercussions can be beyond imagination for the State as well as for individuals. According to a report by Reuters, Amazon used AI to automate the resume-review process for engineers and coders. The team that trained the AI was male-dominated, and as a result the system learned to disqualify anyone who attended a women’s college or listed women’s organisations on their resume.10 Such discrimination by one of the world’s largest corporations could have opened the floodgates to gender-discrimination lawsuits, but Amazon withdrew the software. Indirect discrimination arising from apparently neutral, standardised decision-making was recognised by the European Court of Human Rights in D.H. & Ors v. The Czech Republic. The court struck down an apparently neutral set of statutory rules implementing tests designed to evaluate the intellectual capability of children, which resulted in an excessively high proportion of minority Roma children scoring poorly and consequently being sent to special schools, probably because the tests could not account for cultural and linguistic differences.11
Conclusion
With the development of AI, there is a need to revisit the applicable rules so that they remain in tune with international human rights law. It is important to determine the extent and manner of AI’s use and to regulate the conduct of States and individuals. It is also imperative to analyse the extent of the due diligence a State must exercise before taking the aid of AI technology for governance functions. Both the right to life and the right to privacy demand that the use of AI in domestic law enforcement be regulated in a manner that meets the standards of the domestic law of the State. Governmental actions would have to be predictable and provide adequate and effective guarantees against abuse of the power that AI technology confers. Transparency in the use of AI in governance is of utmost importance. States taking the aid of AI cannot delegate responsibility for law-and-order control to AI entirely. The possibility of AI technology being misused for particular ends cannot be set aside, and an effective regulatory framework governing the lawful use, method of use, and rationale for the use of AI in governance therefore needs to be developed. Such a framework must also address issues of liability and responsibility for the use or misuse of AI, so as to enable the individuals of a State to take necessary action against its misuse.
You can direct your queries or comments to the authors
1 See AP News, White House proposes tech ‘bill of rights’ to limit AI harms, available at https://apnews.com/article/joe-biden-science-technology-business-biometrics-b9dbf5fee3bf0e407b988b31e21f5300
2 See PR Newswire, Artificial Intelligence Market Forecasts, available at http://www.prnewswire.com/news-releases/artificial-intelligence-market-forecasts-300359550.html.
3 See Human Rights Committee, General Comment No 31 The Nature of the General Legal Obligation Imposed on States Parties to the Covenant, UN Doc CCPR/C/21/Rev.1/Add. 13 (26 May 2004) para 8, available at https://undocs.org/CCPR/C/21/Rev.1/Add.13
4 See Spagnolo Andrea, ‘Human rights implications of autonomous weapon systems in domestic law enforcement: sci-fi reflections on a lo-fi reality’, available at http://www.qil-qdi.org/human-rights-implications-autonomous-weapon-systems-domestic-law-enforcement-sci-fi-reflections-lo-fi-reality/
5 Maneka Gandhi v. Union of India, AIR 1978 SC 597
6 See International Covenant on Civil and Political Rights, available at https://treaties.un.org/Pages/ViewDetails.aspx?chapter=4&clang=_en&mtdsg_no=IV-4&src=IND
7 See International Covenant on Civil and Political Rights, ‘Article 17 1. No one shall be subjected to arbitrary or unlawful interference with his privacy, family, home or correspondence, nor to unlawful attacks on his honour and reputation. 2. Everyone has the right to the protection of the law against such interference or attacks.’, available at https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx
8 K.S. Puttaswamy v. Union of India, (2017) 10 SCC 1
9 See Recommendation CM/Rec(2010)13 of the Committee of Ministers to member States on the protection of individuals with regard to automatic processing of personal data in the context of profiling, adopted by the Committee of Ministers on 23 November 2010 at the 1099th meeting of the Ministers’ Deputies, available at https://rm.coe.int/16807096c3
10 See Jeffrey Dastin, Amazon scraps secret AI recruiting tool that showed bias against women, available at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
11 D.H. and Ors. v. The Czech Republic, Application No. 57325/00, Judgment dated 13.11.2007, available in English at http://www.errc.org/uploads/upload_en/file/02/D1/m000002D1.pdf