What New Int'l Treaty Means For Global AI Regulation

(November 5, 2024, 3:30 PM GMT) --
Kate Deniston
Louise Lanzkron
On Sept. 5, the U.S., European Union and U.K. signed a new legally binding international treaty, the "Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law."[1]

In this article, we examine the AI convention and consider its main provisions. We also look at the AI convention's compliance structure and the remedies for breach.

Finally, we consider what the AI convention means for AI regulation globally and provide some practical takeaways for businesses operating in this sphere to become compliance-ready.

Existing Regulation

Over the past year or so, governments across the world have been considering whether, and how, to regulate AI within their jurisdictions: U.S. President Joe Biden issued an executive order on AI in October 2023, the U.K. government pledged to introduce AI legislation on the most powerful AI models in July, and the EU's comprehensive AI Act came into force on Aug. 1.[2]

China has passed its own AI legislation too. In addition, several U.S. states, such as California, are seeking to pass their own AI safety laws at the state level.

There have been several international developments on AI regulation, such as the Bletchley Declaration in November 2023 and the U.S.-U.K. safety institute collaboration announced in April, known as the Partnership on Science of AI Safety,[3] but these have not had legally binding weight. The AI convention, by contrast, is the first legally binding international agreement on AI regulation.

Background

The AI convention text was adopted by the Council of Europe on May 17, after two years of drafting by the council's 46 member states, of which the U.K. is one, the EU, and 11 nonmember states, such as Australia, Japan and the U.S.

In addition, various observers from the private sector and academia contributed to its content. Each signatory state is expected to implement measures to give effect to the treaty's requirements.

In addition to the U.S., EU and U.K., the AI convention was also signed on Sept. 5 by Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino.[4]

It aims to ensure that AI systems are used in ways that are consistent with human rights, democracy and the rule of law.

The signing of the AI convention is seen as a significant step by human rights advocates and supporters of global AI governance.

The treaty will enter into force on the first day of the month after a three-month period, once five signatories, including a minimum of three Council of Europe member states, have ratified or approved it.

Broad Principles

The AI convention sets out international legal standards and obligations for states to follow, including measures to prevent AI from undermining democratic institutions and ensuring AI use aligns with human rights laws.

It is a collection of broad principles rather than specific requirements, allowing signatory states to interpret the treaty according to their own legal, political and social traditions.[5]

While this flexibility is beneficial for accommodating diverse legal systems, it will likely result in significant variations in national regulations that implement the AI convention.

Signatory states agree to ensure that activities within the life cycle of AI systems comply with various fundamental principles, including human dignity and individual autonomy, equality and nondiscrimination, respect for privacy and personal data protection, transparency and oversight, accountability and responsibility, and reliability and safe innovation.

To ensure its provisions remain relevant for as long as possible, the AI convention is technology-neutral and does not seek to regulate any specific type of AI system. The definition of "artificial intelligence system" is broad and echoes the definition in the EU AI Act.

However, there are several issues that stakeholders in the AI life cycle should be aware of.

Scope

The AI convention's principles and obligations apply to public authorities — and private actors acting on their behalf — as well as to other private actors. However, Article 3 of the AI convention allows signatory states to choose how these principles apply to private actors, either directly or through "other appropriate measures."

This flexibility is intended to accommodate different legal systems across the world, but it could lead to inconsistencies in how the AI convention is applied globally, potentially causing confusion for companies operating on a multinational basis.

Additionally, the term "public authority" is not defined, which could cause complications in the practical application of the treaty.

Signatories of the AI convention are not obligated to implement the treaty's provisions for activities concerning the protection of their national security interests, but they must ensure these activities uphold international law and democratic institutions and processes.

The AI convention additionally excludes national defense matters. Research and development activities are also excluded, except in cases where AI system-testing might affect human rights, democracy, or the rule of law.

Compliance Structure

The AI convention establishes a follow-up mechanism, the Conference of the Parties, consisting of official representatives from the parties to the AI convention.[6]

This body is tasked with assessing how well the AI convention's provisions are being implemented. Its conclusions and suggestions aim to ensure that states adhere to the AI convention and maintain its effectiveness over time. Additionally, the conference will facilitate cooperation with relevant stakeholders, including the holding of public hearings on key aspects of the AI convention's implementation.

However, the compliance mechanism is not stringent. Although compliance reporting is required by Article 23, there are no strict enforcement criteria.

This lack of a robust enforcement mechanism could limit the effectiveness and impact of the AI convention.

Mitigation of Risk

Each signatory state must implement measures to identify, assess, prevent and mitigate risks from AI systems by considering their current and potential effects on human rights, democracy and the rule of law.

This will include the monitoring and documentation of actual and potential risks and, where appropriate, the testing of AI systems before making them available for first use and when they have been significantly altered or updated.

Further, signatories must establish sufficient prevention and mitigation measures based on the outcome of these assessments. Where the use of certain AI systems is considered incompatible with respect for human rights, the functioning of democracy or the rule of law, signatories must also assess the need for a ban or moratorium on those applications.

Again, signatory states are free to interpret what these provisions mean in the context of their own legal systems, which will inevitably result in a patchwork of laws that businesses in the AI sphere will have to navigate in order to ensure compliance.

Remedies

The AI convention mandates that signatories provide remedies for breaches of human rights related to the treaty's obligations and principles and ensure a body is in place for lodging complaints.

However, the treaty does not specify remedies such as fines, leaving it to national legislation to determine appropriate remedies. This could result in wide variations in enforcement and remedies across different jurisdictions.

While the AI convention represents a significant step toward the global governance of AI, its broad principles, flexible scope, vague compliance structure and unspecified remedies may pose challenges in achieving consistent and effective implementation across different countries.

The EU AI Act

In addition to signing the AI convention, the EU is preparing for the phased implementation of its own AI Act, which came into force on Aug. 1.[7] The EU AI Act is considered to be the world's most comprehensive legislative framework for AI developers, deployers, importers and distributors.

The aim of the EU AI Act is to create a harmonized internal market, ensuring that AI systems placed within it respect fundamental rights, are safe and uphold the EU's ethical principles. Its provisions become applicable in phases, with most applying 24 months after entry into force.

Unlike the AI convention, the EU AI Act has a robust two-tier compliance and enforcement mechanism, operating at both the EU level and the member state level. The EU AI Act gives the European Commission's new AI Office, established in February 2024, strong enforcement powers over general-purpose AI models.

The AI Office has the authority to assess general-purpose AI models, request information from model providers and impose sanctions, while the commission is responsible for issuing fines. Each member state is additionally required to designate a national supervisory authority to represent it on the European Artificial Intelligence Board.

Individual member states may impose fines, which they lay down in national law, on those infringing the provisions of the EU AI Act; these are set in different tiers depending on the provision infringed.

For example, fines for infringing the provisions relating to prohibited systems can be up to €35 million ($38.2 million) or 7% of the infringer's total worldwide annual turnover.

Steady Increase in AI System Regulation

Governments worldwide are crafting new treaties, regulations and laws to manage the risks posed by rapid AI advancements. Legislators are employing diverse strategies, which means those working in the AI arena will need to navigate a complex landscape of diverging legislation across multiple jurisdictions to ensure compliance.

The AI convention is part of a wider global effort to regulate AI. It relies on broad, uncontentious principles, such as upholding human rights and the rule of law, together with a broad definition of AI, to create an inclusive system of regulation designed to accommodate as many legal systems as possible and ensure maximum take-up by parties to it.

On a more granular level, the EU AI Act takes a risk-based approach, seeking to apply proportionate answers to different risk profiles of AI systems. China has introduced its own AI measures and is also a signatory of the Bletchley Declaration.

While the U.K. does not have any specific AI legislation in place, the new Labour government has announced plans for narrow legislation to regulate developers of the most powerful AI models, and has now signed the AI convention.

Lord Clement-Jones' private member's bill, recently introduced in the House of Lords, seeks to regulate the use of AI as a decision-making tool in the public sector and sets out mechanisms for accountability and redress. Although the bill is unlikely to become law, its aims complement the AI convention's key focus on public authorities.[8]

Practical Takeaways

Regulation of AI systems has begun; businesses will need to consider a number of questions and take practical steps:

  • Which pieces of legislation or regulation are within scope and territorial reach?

  • When does that legislation or regulation come into force?

  • Are impact assessments required, who is responsible for these assessments and is training required?

  • Compliance tool kits should be assembled.

  • Regulatory compliance clauses should be added to contracts.

  • Any secondary legislation and guidance should be mapped.

Overall, the pace of regulation either by way of international treaties such as the AI convention, or by national legislation, is increasing.

This can be expected to continue as governments worldwide, and their appointed regulators, seek to manage the autonomous nature of these systems.

The signing of the AI convention indicates that most wish to do so in a way that continues to protect human rights and uphold democracy and the rule of law.



Kate Deniston is a professional support lawyer, and Louise Lanzkron is a dispute resolution knowledge and development lawyer, at Bird & Bird.

The opinions expressed are those of the author(s) and do not necessarily reflect the views of their employer, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence.

[2] https://www.twobirds.com/en/insights/2024/uk/labours-plans-for-ai-regulation--in-the-kings-speech.

[3] https://www.commerce.gov/news/press-releases/2024/04/us-and-uk-announce-partnership-science-ai-safety.

[4] https://www.coe.int/en/web/portal/-/council-of-europe-opens-first-ever-global-treaty-on-ai-for-signature.

[5] AI convention, Articles 7-13.

[6] https://aiconference.london/.

[7] https://artificialintelligenceact.eu/ai-act-implementation-next-steps.

[8] Public Authority Algorithmic and Automated Decision-Making Systems Bill [HL]: https://bills.parliament.uk/bills/3760.

For a reprint of this article, please contact reprints@law360.com.
