
NIST Proposes a Risk Management Approach for Artificial Intelligence

The National Institute of Standards and Technology (NIST) is seeking input on a draft risk management approach to artificial intelligence (AI).

The framework (currently in second draft form) is “intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”

Per the framework, "AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. AI systems may exhibit emergent properties or lead to unintended consequences for individuals and communities. A useful mathematical representation of the data interactions that drive the AI system’s behavior is not fully known, which makes current methods for measuring risks and navigating the risk-benefits tradeoff inadequate. AI risks may arise from the data used to train the AI system, the AI system itself, the use of the AI system, or interaction of people with the AI system. While views about what makes an AI technology trustworthy differ, there are certain key characteristics of trustworthy systems. Trustworthy AI is valid and reliable, safe, fair and bias is managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy-enhanced."

NIST defines an “AI system” as “an engineered or machine-based system that can, for a given set of human-defined objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.”

NIST also proposed an AI risk management playbook that “includes suggested actions, references, and documentation guidance for stakeholders to achieve the outcomes for ‘Map’ and ‘Govern’ – two of the four proposed functions in the AI RMF. Draft material for the other two functions, Measure and Manage, will be released at a later date.”

NIST seeks public comment on the framework and playbook by September 29, 2022. Comments will be made publicly available, so personal or sensitive information should not be included.

IA members with suggestions or concerns about the framework’s potential impact on the insights industry should connect with IA staff.

UPDATE: NIST finalized the AI risk framework in January 2023.
