
Risky Analysis: Assessing and Improving AI Governance Tools

AI systems should not be deployed without simultaneously evaluating their potential adverse impacts and mitigating their risks. Most of the world agrees about the need to take precautions against the threats posed by AI systems. Tools and techniques exist to evaluate and measure AI systems for inclusiveness, fairness, explainability, privacy, safety, and other trustworthiness issues. These tools and techniques – called here collectively AI governance tools – can help improve such issues. While some AI governance tools provide reassurance to the public and to regulators, the tools too often lack meaningful oversight and quality assessments. Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems. This report addresses the need for improved AI governance tools.

It is the goal of this research to help gather evidence that will assist in building a more reliable body of AI governance tools. This report analyzes, investigates, and appraises AI governance tools, including practical guidance, self-assessment questionnaires, process frameworks, technical frameworks, technical code, and software disseminated in Africa, Asia, North America, Europe, South America, Australia, and New Zealand. The report also analyzes existing frameworks, such as data governance and privacy, and how they integrate into the AI ecosystem. In addition to an extensive survey of AI governance tools, the research presents use cases discussing the contours of specific risks. The research and analysis for this report connect many layers of the AI ecosystem, including policy, standards, scholarly and technical literature, government regulations, and best practices.

Our work found that AI governance tools used in most regions of the world to measure and reduce the risks and negative impacts of AI could introduce novel, unintended problems or create a false sense of confidence unless they are accompanied by evaluation and measurement of the tools' own effectiveness and accuracy.

In this report we suggest pathways for creating a healthy AI governance tools environment, and offer suggestions for governments, multilateral organizations, and others creating or publishing AI governance tools. These suggestions include best practices drawn from existing AI and other quality assessment standards and practices already in widespread use. Appropriate procedural and administrative controls include: 1) providing AI governance tool documentation and contextualization, review, audit, and other quality assurance procedures to prevent integration of inappropriate or ineffective methods into policy guidance; 2) identifying and preventing conflicts of interest; and 3) ensuring that the capabilities and functionality of AI governance tools align with policy goals. If governments, multilateral institutions, and others working with or creating AI governance tools can incorporate lessons learned from other mature fields such as data governance and quality assessment, the result will be a healthier body of AI governance tools and, over time, healthier and more trustworthy AI ecosystems.
