AI explainability (XAI) as a categorical imperative for legal certainty and ethical integrity
DOI:
https://doi.org/10.20983/anuariodcispp.2025.11
Keywords:
Due process, Indirect discrimination, Algorithmic explainability, Technological accountability
Abstract
The growing incorporation of opaque artificial intelligence systems into public and private decision-making poses a structural problem for human rights: the impossibility of knowing the reasons behind automated decisions erodes human dignity, violates due process, and weakens legal certainty. The article's objective is to demonstrate that explainability (XAI) constitutes a categorical imperative for guaranteeing the ethical and legal legitimacy of AI in high-impact contexts. Methodologically, it develops an interdisciplinary approach that combines philosophical-normative analysis (Kantian dignity and epistemic injustice), a comparative legal review of key case law (State v. Loomis, SyRI, Deliveroo), and a technical study of explainability tools (LIME, SHAP, counterfactuals) as instruments of audit and oversight. The results show that algorithmic opacity produces three systemic effects: reification of the individual, who is reduced to a vector of data; structural impairment of the right to a defense, since the logic of the decision cannot be contested; and indirect discrimination arising from invisible proxy variables. The comparative cases likewise show that, in the absence of explainability, trade secrecy prevails over procedural guarantees, whereas transparent models strengthen the balance between administrative effectiveness and fundamental rights. The study concludes that AI can be compatible with a democratic constitutional order only if it incorporates robust explainability mechanisms aimed not merely at describing technical operation but at normatively justifying decisions. XAI thus emerges as a structural condition of legitimacy and the indispensable foundation for accountability and public trust in algorithmic systems.
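As a concrete illustration of the tools named above, the following sketch shows how a feature-attribution library such as SHAP could serve as an audit instrument for an opaque classifier; the model, data, and setup are illustrative assumptions, not the systems examined in the article.

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical "black box" risk model trained on synthetic tabular data.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to its input features,
# turning an opaque score into a per-case, per-feature breakdown.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# One attribution per individual and per feature (and per class for classifiers):
# the raw material for checking whether a decision leaned on a suspect proxy variable.
print(np.shape(shap_values))

A counterfactual explanation, by contrast, searches for the smallest change to an individual's inputs that would flip the decision, which is what makes it suited to framing an appeal against a specific outcome.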
References
Akitra. (2024). Accountability and liability in agentic AI systems. https://akitra.com/accountability-and-liability-in-agentic-ai-systems/
Alami, H., Lehoux, P., Shaw, J., Fortin, J.-P., Fleet, R., Ag Ahmed, M. A., & Denis, J.-L. (2024). Epistemic injustice in generative AI. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. https://ojs.aaai.org/index.php/AIES/article/view/31671/33838
Aloisi, A., & De Stefano, V. (2021). Frankly, my rider, I don’t give a damn. La rivista il Mulino. https://www.rivistailmulino.it/a/frankly-my-rider-i-don-t-give-a-damn-1
Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104, 671-732.
Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., & Shadbolt, N. (2018). AI, algorithms, and awful humans. Fordham Law Review, 87(6), 2147-2161. https://ir.lawnet.fordham.edu/cgi/viewcontent.cgi?article=6079&context=flr
Brown, A., & Weiß, M. (2024). Bias in the loop: How humans evaluate AI-generated suggestions. arXiv. https://arxiv.org/html/2509.08514v1
Business & Human Rights Resource Centre. (2021). Court rules Deliveroo used “discriminatory” algorithm. https://www.business-humanrights.org/en/latest-news/court-rules-deliveroo-used-discriminatory-algorithm/
Citron, D. K. (2019). Artificial intelligence and procedural due process. University of Pennsylvania Journal of Constitutional Law, 22(1). https://scholarship.law.upenn.edu/cgi/viewcontent.cgi?article=1901&context=jcl
Clifford Chance. (2021). The Italian courts lead the way on explainable AI. Talking Tech. https://www.cliffordchance.com/insights/resources/blogs/talking-tech/en/articles/2021/06/the-italian-courts-lead-the-way-on-explainable-ai.html
Deep Science Research. (2024). Explainable artificial intelligence (XAI) as a foundation for trustworthy artificial intelligence. Deep Science Research. https://deepscienceresearch.com/dsr/catalog/book/10/chapter/74
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126.
Elish, M. C. (2016). Moral crumple zones: Cautionary tales in human-robot interaction. Data & Society Research Institute.
EU AI Act. (2024). Key issue 4: Human oversight. https://www.euaiact.com/key-issue/4
European Union. (2024). Article 14: Human oversight. EU Artificial Intelligence Act. https://artificialintelligenceact.eu/article/14/
Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205-211.
Hadfield, G. K., Bozdag, E., Law, A., & Neilson, S. (2019). Explanation and justification: AI decision-making, law, and the rights of citizens. Schwartz Reisman Institute. https://srinstitute.utoronto.ca/news/hadfield-justifiable-ai
Harvard Law Review. (2017). State v. Loomis. Harvard Law Review, 130, 1530-1539. https://harvardlawreview.org/print/vol-130/state-v-loomis/
Hatherley, J. J. (2024). Healthy mistrust: Medical black box algorithms, epistemic authority, and preemptionism. Cambridge Quarterly of Healthcare Ethics. https://www.cambridge.org/core/journals/cambridge-quarterly-of-healthcare-ethics/article/healthy-mistrust-medical-black-box-algorithms-epistemic-authority-and-preemptionism/38018A52AF77F8C120DC815A4EE6AD52
He, Y., Yang, Q., & Chen, S. (2024). Algorithm appreciation or aversion: The effects of accuracy disclosure on users’ reliance on algorithmic suggestions. Behaviour & Information Technology. https://www.tandfonline.com/doi/full/10.1080/0144929X.2025.2535732
Henriksen, A. (2024). High-risk AI transparency? On qualified transparency mandates for oversight bodies under the EU AI Act. Technology and Regulation. https://techreg.org/article/view/19876
IEEE Computer Society. (2024). AI’s role in ethical decision-making: Fostering fairness in critical systems with explainable AI (XAI). https://www.computer.org/publications/tech-news/community-voices/explainable-ai
IEEE Standards Association. (2019). Ethically aligned design. http://standards.ieee.org/wp-content/uploads/import/documents/other/ead_v2.pdf
Keeling, G. (2024). Why dignity is a troubling concept for AI ethics. AI & Society. https://pmc.ncbi.nlm.nih.gov/articles/PMC11963102/
Lauridsen, K. M., & Bjørnsen, H. N. (2024). Epistemic authority and medical AI: Epistemological differences and challenges in medical practice. Medicine, Health Care and Philosophy. https://www.researchgate.net/publication/397176188
Liu, M., Grunde-McLaughlin, M., Goel, A., & Brummette, M. (2023). Automation complacency: Navigating the ethical challenges of AI in healthcare. Columbia University School of Professional Studies. https://sps.columbia.edu/news/automation-complacency-navigating-ethical-challenges-ai-healthcare
Liu, S. (2024). Does explainable AI have moral value? arXiv. https://arxiv.org/html/2311.14687
London, A. J. (2019). Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Journal of Medical Ethics, 45(12), 820-826. https://pubmed.ncbi.nlm.nih.gov/33737318/
Mäki-Kuutti, I., Raisamo, R., & Vakkuri, V. (2021). Philosophical foundations for digital ethics and AI ethics: A dignitarian approach. Frontiers in Computer Science, 3. https://pmc.ncbi.nlm.nih.gov/articles/PMC7909376/
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors, 52(3), 381-410. https://pubmed.ncbi.nlm.nih.gov/21077562/
Rachovitsa, M. (2022). Human rights implications of the use of AI in the digital welfare state: Lessons learned from the Dutch SyRI case. Human Rights Law Review, 22(2). https://academic.oup.com/hrlr/article/22/2/ngac010/6568079
Raknes, S., & Bakken, T. H. (2023). Informed consent to AI-based decisions in healthcare: Must patients understand the AI’s output? Oslo Law Review, 11(1), 82-99. https://www.scup.com/doi/full/10.18261/olr.11.1.7
Siegel, M. D., & Klein, D. (2024). Deepfakes in the courtroom: Problems and solutions. Illinois State Bar Association. https://www.isba.org/sections/ai/newsletter/2025/03/deepfakesinthecourtroomproblemsandsolutions
Stanford Encyclopedia of Philosophy. (2015). Reasons for action: Justification vs. explanation. https://plato.stanford.edu/archives/sum2015/entries/reasons-just-vs-expl/
Supreme Court of Wisconsin. (2016). State v. Loomis, 881 N.W.2d 749. https://courts.ca.gov/sites/default/files/courts/default/2024-12/btb24-2l-3.pdf
United Nations Educational, Scientific and Cultural Organization (UNESCO). (2021). Recommendation on the ethics of artificial intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
United Nations Educational, Scientific and Cultural Organization (UNESCO). (2022). UNESCO’s input in reply to the OHCHR report on the Human Rights Council Resolution 47/23. https://www.ohchr.org/sites/default/files/2022-03/UNESCO.pdf
Van Bekkum, M., & Borgesius, F. Z. (2021). Digital welfare fraud detection and the Dutch SyRI judgment. Computer Law & Security Review, 42. https://www.iapp.org/news/a/digital-welfare-fraud-detection-and-the-dutch-syri-judgment
VerityAI. (2024). IEEE Ethically Aligned Design: Engineering ethics into AI systems. https://verityai.co/blog/ieee-ethically-aligned-design-guide
Viljoen, S., & Wenger, A. (2024). Algorithmic profiling as a source of hermeneutical injustice. Philosophy & Technology, 38(1). https://pmc.ncbi.nlm.nih.gov/articles/PMC11741985/
Washington, A. L. (2019). How to argue with an algorithm: Lessons from the COMPAS-ProPublica debate. Colorado Technology Law Journal, 17(1). http://ctlj.colorado.edu/wp-content/uploads/2021/02/17.1_4-Washington_3.18.19.pdf
