• AI Governance Shapes the Future of Occupational Safety and Health Professionals

  • Oct 8 2024
  • Length: 3 mins
  • Podcast

  • Summary

  • The European Union Artificial Intelligence Act, which came into effect in August 2024, represents a significant milestone in the global regulation of artificial intelligence technology. This legislation is the first of its kind aimed at creating a comprehensive regulatory framework for AI across all 27 member states of the European Union.

    One of the pivotal aspects of the EU Artificial Intelligence Act is its risk-based approach. The act categorizes AI systems according to four levels of risk: minimal, limited, high, and unacceptable. This risk classification underpins the regulatory requirements imposed on AI systems, with higher-risk categories facing stricter scrutiny and tighter compliance requirements.

    AI applications deemed to pose an "unacceptable risk" are banned outright under the act. These include AI systems that manipulate human behavior to circumvent users' free will (except in specific cases such as for law enforcement with court approval) and systems that use "social scoring" by governments in ways that lead to discrimination.

    High-risk AI systems, which include those integral to critical infrastructure, employment, and essential private and public services, must meet stringent transparency, data quality, and security stipulations before being deployed. This encompasses AI used in medical devices, hiring processes, and transportation safety. Companies employing high-risk AI technologies must conduct thorough risk assessments, implement robust data governance and management practices, and ensure that there's a high level of explainability and transparency in AI decision-making processes.

    For AI categorized under limited or minimal risk, the regulations are correspondingly lighter, although basic requirements around transparency and data handling still apply. Most AI systems fall into these categories, including AI-enabled video games and spam filters.
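The tiered structure described above can be illustrated with a short Python sketch. Everything here is a hypothetical model for exposition: the enum, the example use-case mapping, and the obligation lists paraphrase the examples given in this summary, not the legal text of the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Illustrative mapping of the example use cases mentioned in this
# summary to the Act's tiers; assignments are assumptions for
# demonstration, not legal classifications.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
    "medical devices": RiskTier.HIGH,
    "hiring processes": RiskTier.HIGH,
    "transportation safety": RiskTier.HIGH,
    "ai-enabled video games": RiskTier.MINIMAL,
    "spam filters": RiskTier.MINIMAL,
}

def compliance_obligations(tier: RiskTier) -> list[str]:
    """Rough sketch of obligations per tier, following the summary:
    stricter tiers carry stricter requirements."""
    if tier is RiskTier.UNACCEPTABLE:
        return ["prohibited from deployment"]
    if tier is RiskTier.HIGH:
        return [
            "risk assessment",
            "data governance and management",
            "transparency and explainability",
        ]
    # Limited and minimal risk: lighter-touch requirements.
    return ["basic transparency and data-handling requirements"]
```

A caller would look up a use case's tier and then fetch its obligations, e.g. `compliance_obligations(USE_CASE_TIERS["hiring processes"])`. The point of the sketch is only to show how the risk classification underpins the compliance requirements.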

    In addition, the AI Act establishes specific obligations for AI providers, including the need for high levels of accuracy and oversight throughout an AI system's lifecycle. It also requires that high-risk AI systems be registered in a European database, enhancing oversight and public accountability.

    The EU Artificial Intelligence Act also sets out significant penalties for non-compliance, which can amount to up to 7% of a company's annual global turnover, echoing the stringent penalty structure of the General Data Protection Regulation (GDPR).

    The introduction of the EU Artificial Intelligence Act has spurred a global conversation on AI governance, with several countries looking towards the European model to guide their own AI regulatory frameworks. The act’s emphasis on transparency, accountability, and human oversight aims to ensure that AI technology enhances societal welfare while mitigating potential harms.

    This landmark regulation underscores the European Union's commitment to setting high standards in the era of digital transformation and could well serve as a blueprint for global AI governance. As companies and organizations adapt to these new rules, the integration of AI into various sectors will likely become safer, more ethical, and more transparent, aligning with the broader goals of human rights and technical robustness.