Laytons Artificial Intelligence (AI) Series: The UK’s approach to regulating AI
Artificial Intelligence (AI) is increasingly becoming part of our everyday lives. From speech-to-text recognition and translation software to chatbots and automated stock trading, AI is helping us with decision-making. With expenditure on AI expected to increase by £35.6 billion by 2025, businesses will need to navigate the legal and regulatory obligations associated with the use of AI systems in order to promote public confidence in AI.
Existing Legislation
Some AI systems may fit within existing legislation, but others may require further review. Contracting parties should bear in mind that they may be subject to indirect regulation where third parties who are themselves subject to AI regulation flow their obligations down contractually.
The interaction between AI and intellectual property was recently considered in the Thaler case (Thaler v Comptroller-General of Patents, Designs and Trade Marks [2021] EWCA Civ 1374, 21 September 2021), in which the Court of Appeal ruled that an AI cannot be named as the inventor on a patent. However, the case is due to be heard by the Supreme Court, and it is hoped that further clarification will follow.
UK’s approach to regulating AI
In its policy paper, the UK government adopted a pro-innovation approach to AI regulation, seeking to balance support for innovation with protection of the public. The government proposes to regulate the use of AI rather than the technology itself. In line with this, no universal definition of AI has been set out, as businesses and the public may not share the same view as to what should and should not be subject to regulation.
The government’s initial approach is to implement six cross-sectoral principles on a non-statutory basis, which allows it to keep the framework under review and, if necessary, update its approach.
These six principles are:
Ensure that AI is used safely
Ensure that AI is technically secure and functions as designed
Ensure that AI is appropriately transparent and explainable
Embed consideration of fairness into AI
Define legal persons’ responsibility for AI governance
Clarify routes to redress or contestability
Regulators such as the Office of Communications (Ofcom), the Information Commissioner's Office (ICO) and the Financial Conduct Authority (FCA) are to implement these principles and develop sector-specific AI regulation. To ensure coherence, the framework proposes to look for ways to support collaboration between regulators.
Key takeaways
There are no proposals for legislation at this stage, but the government is not ruling it out. A white paper and public consultation expected in late 2022 will further address issues surrounding the implementation of this approach. An AI Standards Hub has been set up to help shape global technical standards for AI. The Hub is significant because it will shape future AI policy and give an indication of how AI might be regulated.
Businesses and individuals involved in AI need to consider whether their use of AI fits within existing regulation and, if not, whether any indirect regulation may apply. In the meantime, we will continue to monitor legislation, case law, and guidance from regulators on AI.
The UK’s approach to AI regulation stands in stark contrast to the EU’s. The EU has published proposals for an AI Act and an AI Liability Directive, which we will discuss in more detail in upcoming articles in this Artificial Intelligence series.
If you have any queries surrounding the regulation of Artificial Intelligence, please reach out to Paddy Kelly, Partner or Carmen Yong, Solicitor in our Corporate and Commercial Department.