AI: Shaping the Future with Insight—Balancing Promise and Peril


Notable Frameworks for Evaluating Trustworthy AI (Expanded)

08 March 2025

Evaluating the trustworthiness of AI systems is crucial to ensuring their responsible development and deployment. Several frameworks have been proposed by prominent scholars and organizations to guide this evaluation. Here are some notable frameworks, including recently developed ones, with expanded descriptions:

1. Assessment List for Trustworthy AI (ALTAI) (Ala-Pietilä et al., 2020; Radclyffe et al., 2023)

Developed by the European Commission's High-Level Expert Group on Artificial Intelligence, ALTAI provides a checklist for organizations to self-assess the trustworthiness of their AI solutions. It covers various aspects of AI development and deployment, including data quality, algorithmic design, and societal impact. ALTAI emphasizes a holistic approach to trustworthiness, encompassing technical, human-centered, and legal considerations.

Key Aspects:

  • Seven key requirements: Human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, non-discrimination and fairness, societal and environmental well-being, and accountability.
  • Accessible and dynamic checklist: Translates AI principles into practical steps for self-assessment.
  • Focus on risk mitigation: Helps organizations understand and minimize potential risks associated with AI systems.
  • Promotes stakeholder involvement: Encourages participation from diverse stakeholders within and outside the organization.
  • Available as a web-based tool: Provides an interactive platform for self-assessment and guidance.
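The checklist character of ALTAI can be pictured as a simple data structure. The sketch below is purely illustrative: the seven requirement names come from ALTAI itself, but the yes/no scoring scheme, the `summarize_assessment` function, and the example answers are assumptions of this sketch, not features of the official web tool.

```python
# Minimal sketch of an ALTAI-style self-assessment checklist.
# The seven requirement names are ALTAI's; everything else here
# (scoring scheme, function names) is an illustrative assumption.

ALTAI_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

def summarize_assessment(answers: dict[str, bool]) -> dict:
    """Report which requirements are addressed and which remain open."""
    open_items = [r for r in ALTAI_REQUIREMENTS if not answers.get(r, False)]
    return {
        "addressed": len(ALTAI_REQUIREMENTS) - len(open_items),
        "total": len(ALTAI_REQUIREMENTS),
        "open_items": open_items,
    }

# Example: an organization that has covered five of the seven areas.
answers = {r: True for r in ALTAI_REQUIREMENTS}
answers["Transparency"] = False
answers["Accountability"] = False
print(summarize_assessment(answers))
```

In practice the real tool adds guidance questions under each requirement; the point here is only that ALTAI turns abstract principles into a concrete, checkable list.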

2. Trustworthy AI Framework by the US Department of Veterans Affairs (Department of Veterans Affairs, 2023)

This framework offers detailed guidance to ensure that health-related AI is implemented ethically, effectively, and securely. It aligns with the "FAVES" principles (Fair, Appropriate, Valid, Effective, and Safe) outlined by the federal government. The framework emphasizes six major principles: Fair/Impartial, Robust/Reliable, Transparent/Explainable, Responsible/Accountable, Privacy, and Safe/Secure.

Key Aspects:

  • Focus on healthcare AI: Provides specific guidance for ethical AI implementation in healthcare settings.
  • Alignment with federal standards: Adheres to the "FAVES" principles for trustworthy AI in healthcare.
  • Emphasis on patient safety and privacy: Prioritizes the well-being and data protection of patients.
  • Comprehensive risk management: Includes strategies for identifying and mitigating potential risks associated with AI in healthcare.
  • Promotes transparency and explainability: Ensures that AI systems used in healthcare are understandable and that their decisions can be explained.

3. NIST AI Risk Management Framework (AI RMF) (National Institute of Standards and Technology, 2023)

The AI RMF, developed by the US National Institute of Standards and Technology (NIST), provides a voluntary and flexible framework for managing risks associated with AI systems. It emphasizes four main functions: Govern, Map, Measure, and Manage. The AI RMF helps organizations address risks related to bias, explainability, and robustness, among others.

Key Aspects:

  • Voluntary and flexible: Allows organizations to adapt the framework to their specific needs and context.
  • Focus on risk management: Provides a structured approach to identify, assess, and mitigate AI risks.
  • Four core functions: Govern, Map, Measure, and Manage, covering the full AI risk-management lifecycle.
  • Emphasis on trustworthiness characteristics: Outlines traits of trustworthy AI, including validity, reliability, safety, security, accountability, transparency, explainability, privacy, and fairness.
  • Supports responsible AI development: Encourages organizations to consider ethical and societal implications of AI systems.
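One way to picture the Map, Measure, and Manage functions is as a simple risk register: Map identifies risks, Measure scores them, and Manage prioritizes responses. The sketch below is an assumption-laden illustration, not anything prescribed by the AI RMF; the risk names, the likelihood-times-impact scoring, and the threshold are all made up for this example.

```python
# Illustrative sketch of a Map -> Measure -> Manage flow as a risk
# register. The AI RMF prescribes functions, not code; all names,
# scores, and thresholds here are assumptions for illustration.

from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str              # identified in the Map function
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    mitigations: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Measure: a simple likelihood x impact score
        return self.likelihood * self.impact

def manage(register: list[Risk], threshold: int = 9) -> list[Risk]:
    """Manage: return risks at or above the threshold, highest score first."""
    flagged = [r for r in register if r.score >= threshold]
    return sorted(flagged, key=lambda r: r.score, reverse=True)

register = [
    Risk("Training-data bias", likelihood=4, impact=4),
    Risk("Model drift in production", likelihood=3, impact=3),
    Risk("Prompt-injection attacks", likelihood=2, impact=5),
]
for risk in manage(register):
    print(f"{risk.name}: {risk.score}")
```

The Govern function would sit above a register like this, setting who owns each risk and how often the register is reviewed.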

4. OECD AI Principles (Organisation for Economic Co-operation and Development, 2019)

The OECD AI Principles provide a set of internationally agreed-upon guidelines for responsible AI development and deployment. These principles emphasize human-centered values, fairness, transparency, and accountability. They serve as a foundation for national AI policies and strategies.

Key Aspects:

  • International standard: Provides a common framework for responsible AI development across different countries.
  • Human-centered values: Emphasizes the importance of human well-being, rights, and values in AI development.
  • Focus on trustworthiness: Promotes AI systems that are reliable, safe, and aligned with human values.
  • Five value-based principles: Inclusive growth, sustainable development and well-being; human-centered values and fairness; transparency and explainability; robustness, security and safety; and accountability.
  • Recommendations for policymakers: Offers guidance on AI research and development, ecosystem development, governance, and international cooperation.

5. Anekanta Responsible AI Governance Framework for Boards (Anekanta Consulting, 2024)

This framework, developed in 2024, provides guidance for boards of directors on governing AI responsibly. It emphasizes the importance of board-level oversight of AI ethics, risk management, and societal impact.

Key Aspects:

  • Focus on board governance: Provides a framework for boards to oversee responsible AI implementation within their organizations.
  • Emphasis on ethical considerations: Addresses issues of bias, fairness, and accountability in AI systems.
  • Independent review and audit: Encourages regular independent reviews of AI systems to ensure responsible development and use.
  • Training and education: Emphasizes the importance of training employees on responsible AI practices.
  • Privacy and compliance: Ensures that AI systems comply with relevant privacy and data protection regulations.

6. Trustworthy AI STARS framework (Charter Global, 2025)

Developed by Charter Global, the STARS framework focuses on Security, Transparency, Accountability, Reliability, and Societal well-being. It emphasizes a comprehensive approach to trustworthy AI, considering both technical and ethical aspects.

Key Aspects:

  • Comprehensive approach: Addresses both technical and ethical aspects of trustworthy AI.
  • Focus on security and privacy: Emphasizes the importance of protecting data used in AI systems.
  • Transparency and explainability: Promotes AI systems that are understandable and whose decisions can be explained.
  • Accountability and governance: Ensures clear lines of responsibility for AI systems and their outcomes.
  • Societal well-being: Considers the broader impact of AI on society and promotes fairness and equity.

7. IBM's AI Safety and Governance Framework (IBM, 2024)

This framework outlines IBM's approach to responsible AI development and use. It emphasizes safety, transparency, and accountability, and aligns with the AI Seoul Summit's commitments for AI Frontier Safety.

Key Aspects:

  • Focus on safety and ethics: Prioritizes the safe and ethical development and deployment of AI systems.
  • Transparency and explainability: Promotes AI systems that are understandable and whose decisions can be explained.
  • Accountability and governance: Establishes clear lines of responsibility for AI systems and their outcomes.
  • Open source and collaboration: Encourages open source contributions and collaboration to promote responsible AI development.
  • Multidisciplinary approach: Involves stakeholders from different disciplines to address the complex challenges of AI governance.

8. Designing Trustworthy AI: A Human-Machine Teaming Framework (Salehi, Weller, & Olson, 2023)

This framework, proposed by Salehi, Weller, and Olson, focuses on human-machine teaming as a key aspect of trustworthy AI. It provides guidance on designing AI systems that effectively collaborate with humans, ensuring human oversight and control.

Key Aspects:

  • Human-machine teaming: Emphasizes the importance of collaboration between humans and AI systems.
  • Accountability and responsibility: Ensures clear lines of responsibility for AI systems and their outcomes.
  • Respectful and secure: Promotes AI systems that respect human values and are secure against misuse.
  • Honest and usable: Ensures that AI systems provide accurate information and are easy to use.
  • Diversity and inclusion: Considers the needs of diverse users and avoids bias in AI systems.

These frameworks, while differing in their specific focus and approach, share common goals: to promote responsible AI development, mitigate risks, and foster trust in AI systems. They provide valuable guidance for researchers, developers, and policymakers in navigating the complex landscape of trustworthy AI.

References

Ala-Pietilä, P., Andreasson, S., Brandtzaeg, P. B., Brinkman, W.-P., Brundage, M., Čerka, P., ... & Zuiderwijk, A. (2020). Assessment list for trustworthy artificial intelligence (ALTAI). High-Level Expert Group on Artificial Intelligence.

Anekanta Consulting. (2024, March). Anekanta Responsible AI Governance Framework for Boards.

Charter Global. (2025). Trustworthy AI STARS framework.

Department of Veterans Affairs. (2023). Trustworthy AI framework.

IBM. (2024). Trustworthy AI at scale: IBM's AI Safety and Governance Framework.

National Institute of Standards and Technology. (2023). Artificial intelligence risk management framework (AI RMF 1.0).

Organisation for Economic Co-operation and Development. (2019). OECD Principles on Artificial Intelligence.

Radclyffe, A., Janssens, O., & Lievens, E. (2023). Trustworthy artificial intelligence in government: Emerging international and national governance frameworks. Telecommunications Policy, 47(10), 102543.

Salehi, N., Weller, A., & Olson, J. (2023). Designing trustworthy AI: A human-machine teaming framework to guide development. arXiv preprint arXiv:2308.08481.

Dewel Insights, founded in 2023, empowers individuals and businesses with the latest AI knowledge, industry trends, and expert analyses through our blog, podcast, and specialized automation consulting services. Join us in exploring AI's transformative potential.
