AI Liability and Accountability: A Complex Landscape in an Evolving World

  • Vismitha S

    Student at SASTRA Deemed University, India

  • Sri Vijai

    Student at SASTRA Deemed University, India



Artificial intelligence (AI) is rapidly transforming our world, with applications in virtually every industry and sector. As AI systems become more powerful and autonomous, it is essential to consider the potential for damages that they could cause. If an AI system breaches the law or causes harm, who is liable for the resulting damages? This is a complex question with no easy answer. Liability will depend on the specific facts and circumstances of each case, as well as the applicable legal framework. However, a number of parties could be considered, including the designer, developer, manufacturer, owner, operator, and user of the AI system, as well as any third party that contributed to its development or use.

In addition to the question of liability, there is also the question of accountability: who should be held accountable for AI damages? This is a broader question, encompassing not only legal responsibility but also moral and ethical responsibility. Accountability is important because it helps to ensure that those who are harmed by AI systems have access to justice, and that those involved in the development and use of AI systems are held responsible for their actions.

There are a number of challenges to developing a legal framework for AI liability and accountability. One challenge is the complexity of AI systems: it can be difficult to determine who is at fault when an AI system malfunctions, especially if the system is being used in a complex or unexpected way. Another challenge is the difficulty of anticipating all of the potential ways in which AI systems could cause harm. AI systems are constantly evolving, and new applications are being developed all the time, making it difficult for laws and regulations to keep up with the pace of change. Despite these challenges, it is important to develop a legal framework for AI liability and accountability.
Therefore, this research article aims to discuss the liability and accountability regimes currently in place and their flaws, in order to help protect the public from harm and to ensure that the benefits of AI are realized in a responsible and equitable manner.


Research Paper


International Journal of Law Management and Humanities, Volume 6, Issue 6, Page 2367 - 2374


Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license, which permits remixing, adapting, and building upon the work for non-commercial use, provided the original work is properly cited.


Copyright © IJLMH 2021