
EY Identifies Three Types of AI Assessments Amid Growing Regulatory Frameworks

WHAT'S THE STORY?

What's Happening?

The rapid advancement of artificial intelligence (AI) has led to a proliferation of regulatory frameworks aimed at assessing AI systems. According to a report by EY, there are over 1,000 AI policy initiatives globally, encompassing legislation, regulations, and voluntary agreements across 70 countries. These frameworks often differ in their objectives, methodologies, and terminologies. EY has categorized AI assessments into three main types: governance, conformity, and performance. Governance assessments evaluate internal corporate policies and processes related to AI systems. Conformity assessments check compliance with laws and standards, while performance assessments measure the quality of AI systems' core functions. The report highlights the challenges of ensuring consistent quality and accountability due to varying requirements across jurisdictions. For instance, different U.S. states have distinct policies for assessing bias in AI systems used in hiring. The complexity of AI systems and the rapid pace of technological advancement further complicate the assessment process.
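
To make the idea of a bias-focused performance check concrete, here is a minimal sketch, assuming a hiring model whose yes/no decisions are recorded alongside an applicant group label. The group names, sample data, and the 0.8 cutoff (the common "four-fifths" rule of thumb for adverse impact) are illustrative assumptions, not requirements drawn from the EY report or from any particular state policy.

```python
# Illustrative sketch only: a simplified bias check of the kind a hiring-related
# performance assessment might include. Group labels, sample data, and the 0.8
# threshold (the "four-fifths" rule of thumb) are assumptions for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs, where hired is True/False."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Compare each group's selection rate to the highest-rate group."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(sample)          # e.g. A: 0.67, B: 0.33
    ratios = adverse_impact_ratios(rates)    # e.g. A: 1.0, B: 0.5
    flagged = {g: r for g, r in ratios.items() if r < 0.8}  # four-fifths rule
    print(rates, ratios, flagged)
```

A real conformity or performance assessment would go well beyond a single ratio, but the sketch shows why jurisdictions that define the metric, the groups, or the threshold differently can reach different conclusions about the same system.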

Why It's Important?

The growing focus on AI assessments reflects the critical need for safe and effective AI adoption. As AI systems become integral to various sectors, ensuring their reliability and compliance with regulations is essential. These assessments help build confidence in AI technologies, which is crucial for unlocking their potential as drivers of growth and prosperity. Businesses, investors, and regulators stand to benefit from rigorous assessments that identify and manage AI-related risks. However, the lack of uniformity in assessment frameworks poses challenges for companies operating across multiple jurisdictions. The demand for trust services in AI is increasing, offering opportunities for accounting professionals to provide independent assurance. This trend underscores the importance of developing clear and consistent standards for AI assessments to support innovation while safeguarding public interest.

What's Next?

As AI technology continues to evolve, the development of technical standards for evaluating AI performance will be crucial. Businesses may choose to conduct voluntary AI assessments to build confidence in their systems, even in the absence of regulatory requirements. The market dynamics and investor demand for trustworthy AI systems are likely to drive further adoption of assessment practices. Additionally, the role of third-party assessors may become more prominent, as their evaluations are often viewed as more credible. The ongoing dialogue among stakeholders, including regulators, businesses, and professional bodies, will shape the future landscape of AI assessments.

Beyond the Headlines

The ethical and legal dimensions of AI assessments are significant, as they address issues of fairness, transparency, and accountability. The ambiguity in terms like 'trustworthiness' and 'transparency' highlights the need for precise definitions to ensure meaningful assessments. The integration of AI systems into complex environments with multiple stakeholders adds another layer of complexity. As AI systems evolve, the concept of 'model drift'—where a model's performance changes over time—poses challenges for maintaining accurate assessments. The rapid pace of AI innovation may outstrip the development of assessment standards, necessitating ongoing adaptation and collaboration among stakeholders.
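
As an illustration of the model-drift problem noted above, the following is a minimal sketch, assuming a model that exposes a simple predict() method: it re-scores the model on successive batches of labeled data and flags any batch whose accuracy falls more than a fixed margin below the original baseline. The interface, toy data, and five-point margin are assumptions for illustration, not part of any assessment standard.

```python
# Illustrative sketch only: flag drift when accuracy on new data falls a fixed
# margin below a baseline. The model interface and threshold are assumptions.
def accuracy(model, batch):
    """batch: list of (features, label) pairs; model.predict returns a label."""
    correct = sum(1 for x, y in batch if model.predict(x) == y)
    return correct / len(batch)

def detect_drift(model, baseline_batch, new_batches, margin=0.05):
    """Yield (batch_index, accuracy, drifted) for each incoming batch."""
    baseline = accuracy(model, baseline_batch)
    for i, batch in enumerate(new_batches):
        acc = accuracy(model, batch)
        yield i, acc, (baseline - acc) > margin

class ConstantModel:
    """Toy stand-in that always predicts 1, just to make the sketch runnable."""
    def predict(self, x):
        return 1

baseline = [(None, 1)] * 8 + [(None, 0)] * 2       # 80% accuracy at deployment
later = [[(None, 1)] * 6 + [(None, 0)] * 4]        # 60% accuracy later -> drift
for i, acc, drifted in detect_drift(ConstantModel(), baseline, later):
    print(i, acc, drifted)
```

Keeping an assessment's conclusions current therefore requires this kind of ongoing monitoring rather than a one-time evaluation, which is part of why the pace of change strains point-in-time assessment standards.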

