Research Scholar at Department of Law, University of Rajasthan, Jaipur, India
Assistant Professor at Department of Law, University of Rajasthan, Jaipur, India
The rapid proliferation of Artificial Intelligence (AI), particularly self-driven and autonomous systems, has outpaced the existing legal frameworks governing liability and accountability. As AI systems gain the capacity to make decisions independently of human intervention, traditional criminal law—anchored in human intent, consciousness, and moral blameworthiness—faces profound challenges. This paper critically examines the possibility and practicality of imposing criminal liability on AI systems, with a specific focus on self-driven machines capable of causing harm or violating legal norms. The discussion begins with an exploration of the theoretical limitations of existing doctrines of criminal liability when applied to non-human agents. It then analyzes the potential for adapting legal frameworks, including the "Adaptive Regulatory Framework Theory," to bridge the accountability gap. This theory proposes a dynamic legal approach that evolves with the capabilities and integration of AI, enabling regulators to respond proportionately to emerging risks and responsibilities. Additionally, the paper evaluates the relevance of product liability under civil law and its intersection with criminal accountability. In the Indian context, the Consumer Protection Act, 2019 is examined as a legislative tool addressing harm caused by defective AI products, particularly in terms of consumer safety, service deficiencies, and unfair trade practices. However, the Act's reliance on civil remedies raises critical questions about the adequacy of penal consequences in cases involving gross negligence or autonomous misconduct by AI systems. The study concludes by proposing a hybrid liability model, under which human actors—manufacturers, programmers, or users—could face penal consequences in specific circumstances, while simultaneously exploring the need for new categories of liability uniquely tailored to AI.
Ultimately, the research argues for a forward-looking legal framework that upholds justice, ensures deterrence, and preserves accountability in an age of intelligent machines.
Research Paper
International Journal of Law Management and Humanities, Volume 8, Issue 2, Page 3125 - 3133
DOI: https://doij.org/10.10000/IJLMH.119387

This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license (https://creativecommons.org/licenses/by-nc/4.0/), which permits remixing, adapting, and building upon the work for non-commercial use, provided the original work is properly cited.
Copyright © IJLMH 2021