Student at Hidayatullah National Law University, Raipur, India
This paper explores the ethical challenges and responsibilities that arise with the growing use of Artificial Intelligence (AI), especially in the legal field. AI has the power to mimic human intelligence and is now being used to make legal decisions, but its use raises important concerns around fairness, accountability, and transparency. These concerns are grouped under the concept of FATE, which stands for ‘Fairness, Accountability, and Transparency in Ethics’. The paper discusses how faulty data can lead to biased results, how difficult it is to hold anyone accountable when AI systems make mistakes, and how many AI systems lack the transparency needed for people to understand how decisions are made. Various countries and global institutions, including India’s NITI Aayog, the European Union, UNESCO, the UAE, and China, have proposed ethical guidelines to manage AI use responsibly. However, most of these guidelines are non-binding and symbolic. Lastly, the paper suggests a shift toward Human-Centered AI (HCAI), which focuses on supporting human values, rights, and dignity to ensure the development of truly trustworthy AI.
Research Paper
International Journal of Law Management and Humanities, Volume 8, Issue 3, Pages 3852-3860
DOI: https://doij.org/10.10000/IJLMH.1110301
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/), which permits remixing, adapting, and building upon the work for non-commercial use, provided the original work is properly cited.
Copyright © IJLMH 2021