Technology Law Analysis
June 19, 2018
Can artificial intelligence be given legal rights and duties?
This article was originally published in the 17th June 2018 edition of
Artificial Intelligence (AI) has ceased to be that fantastical big idea of the future. AI is now more science than fiction, with computers and robots increasingly replacing humans.
AI, simply put, is the capability of a machine to imitate intelligent human behaviour. With the advent of new technologies, the permeation of AI in our day-to-day lives has become more pronounced.
However, a question that remains unanswered is: how do we address the possibility of an AI causing harm or damage to human society? The more pertinent question is whom we hold responsible for such harm. Our inability to answer this question stems from a legal system that is outdated and ill-equipped to deal with AI.
LEGAL PERSONALITY OF AI
Legal personhood is inherently linked to individual autonomy but has not been granted exclusively to humans. No law currently in force recognises AI as a legal person. However, with Sophia, a humanoid robot, being granted citizenship by Saudi Arabia, and the recent accident caused by Uber’s self-driving car, it has become imperative to address the legal personhood of AI.
The question of whether legal personhood can be conferred on an AI boils down to whether it can be made the subject of legal rights and duties. The legal fiction created for corporates serves as a precedent for granting legal personhood to AI. However, there exists a distinction between corporates and AI. Corporates are fictitiously independent, yet accountable via their stakeholders, while an AI may be actually independent.
A possible middle ground may be granting AI a bundle of rights selected from those currently ascribed to legal persons. However, concrete steps in this regard are yet to be seen.
Another issue that arises is attributing liability to an AI. The general rule has been that since an AI cannot qualify as a legal person, it cannot be held liable in its own capacity. The biggest roadblock to reconsidering this rule is the conundrum of how to penalise an AI for its wrongdoing, a question that remains unresolved.
CONTRACTUAL RELATIONSHIPS
Another concern is the ability of an AI to execute and be bound by contracts. While international laws have recognised self-enforcing contracts, there is a need for comprehensive legislation on the subject.
Under Indian law, only a “legal person” is competent to enter into a valid contract. The general rule thus far has been that an AI may not qualify as a legal person. Hence, a contract entered into by an AI of its own volition may not be regarded as a valid contract in India.
Resultantly, steps need to be taken to ensure that technology standards are developed to adequately regulate contracts entered into by AI.
EMPLOYMENT AND AI
The driver behind the development of AI is the demand and need for automation. With the objective of increasing efficiency, companies across the world have subscribed to the practice of utilising AI as a replacement for the human workforce.
This wave of automation is creating a gap between the existing employment laws and the growing use of AI in the workplace.
For instance, can an AI claim benefits such as provident fund payments or gratuity under existing employment legislation, or sue a company for wrongful termination of employment? Such questions also hold relevance for the human workforce: in most instances, AI requires individuals to function, and the lack of clarity in employment laws on these issues may adversely affect those individuals as well.
The penetration of self-driving cars, robots and fully automated machines is only expected to surge with the passage of time. As a result, the dependency of society as a whole on AI systems is also expected to increase.
To safeguard the integration of AI, a balanced approach would need to be adopted, one that efficiently regulates the functioning of AI systems while also maximising its benefits.