Evaluating the trustworthiness of AI systems is crucial for ensuring their responsible development and deployment. Prominent scholars and organizations have proposed several frameworks to guide this evaluation. Here are some of the most notable, including several developed recently:
1. Assessment List for Trustworthy AI (ALTAI) (Ala-Pietilä et al., 2020; Radclyffe et al., 2023)
Developed by the European Commission's High-Level Expert Group on Artificial Intelligence, ALTAI provides a checklist for organizations to self-assess the trustworthiness of their AI solutions. It covers various aspects of AI development and deployment, including data quality, algorithmic design, and societal impact. ALTAI emphasizes a holistic approach to trustworthiness, encompassing technical, human-centered, and legal considerations.
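A checklist-based self-assessment like ALTAI's can be pictured as a small data structure that scores each dimension by how many of its questions have been satisfied. The sketch below is purely illustrative: the dimension names are taken only from the aspects mentioned above (data quality, societal impact), and the questions are hypothetical placeholders, not items from the actual ALTAI checklist, which is far more detailed.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    question: str
    satisfied: bool = False

@dataclass
class Dimension:
    name: str
    items: list[ChecklistItem] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of items marked satisfied (0.0 if the dimension is empty)."""
        if not self.items:
            return 0.0
        return sum(i.satisfied for i in self.items) / len(self.items)

def assess(dimensions: list[Dimension]) -> dict[str, float]:
    """Return a per-dimension completion score for the self-assessment."""
    return {d.name: d.score() for d in dimensions}

# Illustrative usage with made-up questions
dims = [
    Dimension("data quality", [
        ChecklistItem("Is training data provenance documented?", True),
        ChecklistItem("Are known dataset biases recorded?", False),
    ]),
    Dimension("societal impact", [
        ChecklistItem("Has a stakeholder impact review been done?", True),
    ]),
]
print(assess(dims))  # {'data quality': 0.5, 'societal impact': 1.0}
```

A real assessment would, of course, draw its questions from the published ALTAI checklist rather than hard-coded strings; the point here is only the checklist-and-score shape of the exercise.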
2. Trustworthy AI Framework by the US Department of Veterans Affairs (Department of Veterans Affairs, 2023)
This framework offers detailed guidance to ensure that health-related AI is implemented ethically, effectively, and securely. It aligns with the "FAVES" principles (Fair, Appropriate, Valid, Effective, and Safe) outlined by the federal government. The framework emphasizes six major principles: Fair/Impartial, Robust/Reliable, Transparent/Explainable, Responsible/Accountable, Privacy, and Safe/Secure.
3. NIST AI Risk Management Framework (AI RMF) (National Institute of Standards and Technology, 2023)
The AI RMF, developed by the US National Institute of Standards and Technology (NIST), provides a voluntary, flexible framework for managing risks associated with AI systems. It is organized around four core functions: Govern, Map, Measure, and Manage. The AI RMF helps organizations address risks related to bias, explainability, and robustness, among others.
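The Map/Measure/Manage cycle can be sketched as a toy risk register. Only the four function names come from the framework itself; the risk fields, severity scale, and workflow below are illustrative assumptions, not part of the AI RMF.

```python
from dataclasses import dataclass
from enum import Enum

class Function(Enum):
    GOVERN = "govern"    # policies, roles, accountability
    MAP = "map"          # identify context and risks
    MEASURE = "measure"  # analyze and track identified risks
    MANAGE = "manage"    # prioritize and respond to risks

@dataclass
class Risk:
    description: str
    category: str    # e.g. "bias", "explainability", "robustness"
    severity: int    # illustrative 1-5 scale
    mitigated: bool = False

class RiskRegister:
    """Toy register that moves risks through Map -> Measure -> Manage."""

    def __init__(self) -> None:
        self.risks: list[Risk] = []

    def map_risk(self, description: str, category: str, severity: int) -> Risk:
        risk = Risk(description, category, severity)
        self.risks.append(risk)
        return risk

    def measure(self) -> list[Risk]:
        # Surface unmitigated risks, highest severity first.
        return sorted((r for r in self.risks if not r.mitigated),
                      key=lambda r: -r.severity)

    def manage(self, risk: Risk) -> None:
        risk.mitigated = True

register = RiskRegister()
register.map_risk("Disparate error rates across groups", "bias", 4)
register.map_risk("Opaque model decisions", "explainability", 3)
top = register.measure()[0]      # highest-severity open risk
register.manage(top)             # respond to it
print([r.description for r in register.measure()])
# ['Opaque model decisions']
```

The Govern function has no direct analogue in code here; in practice it supplies the policies and accountability structures under which a register like this would operate.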
4. OECD AI Principles (Organisation for Economic Co-operation and Development, 2019)
The OECD AI Principles provide a set of internationally agreed-upon guidelines for responsible AI development and deployment. These principles emphasize human-centered values, fairness, transparency, and accountability. They serve as a foundation for national AI policies and strategies.
5. Anekanta Responsible AI Governance Framework for Boards (Anekanta Consulting, 2024)
This framework provides guidance for boards of directors on governing AI responsibly, emphasizing board-level oversight of AI ethics, risk management, and societal impact.
6. Trustworthy AI STARS framework (Charter Global, 2025)
Developed by Charter Global, the STARS framework focuses on Security, Transparency, Accountability, Reliability, and Societal well-being. It emphasizes a comprehensive approach to trustworthy AI, considering both technical and ethical aspects.
7. IBM's AI Safety and Governance Framework (IBM, 2024)
This framework outlines IBM's approach to responsible AI development and use. It emphasizes safety, transparency, and accountability, and aligns with the Frontier AI Safety Commitments announced at the AI Seoul Summit.
8. Designing Trustworthy AI: A Human-Machine Teaming Framework (Salehi, Weller, & Olson, 2023)
This framework, proposed by Salehi, Weller, and Olson, focuses on human-machine teaming as a key aspect of trustworthy AI. It provides guidance on designing AI systems that effectively collaborate with humans, ensuring human oversight and control.
These frameworks, while differing in focus and approach, share common goals: promoting responsible AI development, mitigating risks, and fostering trust in AI systems. They provide valuable guidance for researchers, developers, and policymakers navigating the complex landscape of trustworthy AI.
Ala-Pietilä, P., Andreasson, S., Brandtzaeg, P. B., Brinkman, W.-P., Brundage, M., Čerka, P., ... & Zuiderwijk, A. (2020). Assessment list for trustworthy artificial intelligence (ALTAI). High-Level Expert Group on Artificial Intelligence.
Anekanta Consulting. (2024, March). Anekanta Responsible AI Governance Framework for Boards.
Charter Global. (2025). Trustworthy AI STARS framework.
Department of Veterans Affairs. (2023). Trustworthy AI framework.
IBM. (2024). Trustworthy AI at scale: IBM's AI Safety and Governance Framework.
National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0).
Organisation for Economic Co-operation and Development. (2019). OECD Principles on Artificial Intelligence.
Radclyffe, A., Janssens, O., & Lievens, E. (2023). Trustworthy artificial intelligence in government: Emerging international and national governance frameworks. Telecommunications Policy, 47(10), 102543.
Salehi, N., Weller, A., & Olson, J. (2023). Designing trustworthy AI: A human-machine teaming framework to guide development. arXiv preprint arXiv:2308.08481.