UK Proposes AI Rulebook to Foster Innovation and Public Trust

Ruby McKenzie

On July 18, 2022, the UK government released a cross-sector proposal for regulating artificial intelligence. The plan, described as a “pro-innovation” regulatory framework, sets out six main principles to address the key risks associated with AI. The initial implementation will be non-statutory and will apply to all sectors in the United Kingdom. UK regulators, including the Information Commissioner’s Office (ICO), Ofcom, the Competition and Markets Authority (CMA), the Medicines and Healthcare products Regulatory Agency (MHRA), and the Equality and Human Rights Commission (EHRC), will then produce regulation and voluntary standards to flesh out ‘situation-specific’ regulatory guidance. The EU’s proposed AI Act, unveiled last year, aims to apply a more stringent and uniform approach to AI legislation across all sectors; the UK, in contrast, appears to be heading toward a risk-based, light-touch strategy focused on proportionality. The sector and setting in which an AI system is deployed will determine the practical requirements that organizations must meet.

AI technology has already unlocked benefits across the UK, from tumour surveillance in Glasgow to better animal care on Belfast dairy farms to faster property sales in England. According to a study published this year, over 1.3 million UK businesses will be using artificial intelligence by 2040. Yet organizations, particularly smaller enterprises, can find it difficult to work out how far existing rules apply to AI. Inconsistencies and gaps in regulators’ current approaches may also erode the confidence of organizations and the public in the rules governing the use of AI.

If the UK’s rules on AI do not keep pace with rapid technological change, regulators’ ability to protect the public could be undermined. Unlike the EU’s AI Act, which takes a centralized approach to AI governance, the government’s plans would allow individual regulators to tailor their approach to how AI is used in different settings. This better reflects the growing use of AI across a wide range of industries. The strategy is intended to deliver proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK, boosting productivity and growth.

Scope of Application

Two of the most contentious aspects of the European Union’s AI Act are the definition of an AI system that falls within its scope and the manner in which it is enforced across the regulation’s many stakeholders. The UK proposal takes a more flexible approach to these two essential aspects, defining AI by reference to two core characteristics: the ‘adaptiveness’ of the technology to new environments and scenarios, and its ‘autonomy’, meaning its capacity to reach judgments and conclusions once deployed.

In contrast to the EU AI Act, the UK government is considering a more context-specific strategy, under which regulatory requirements would apply to the actor in the AI life cycle that creates the relevant risks. Further clarification from the government or regulators will probably be needed to give businesses adequate legal certainty in this area.

The Six Principles of the AI Rulebook

The rulebook’s six main principles are intended to address a number of critical issues identified by the government, including uncertainty over how existing UK legislation applies to AI and the inability of those laws to deal adequately with AI-specific risks. The principles are:

· Safety: Risks that artificial intelligence systems pose to human life, for example in critical infrastructure and healthcare, require an appropriate response.

· Technical Security and Reliability: To give consumers and the public confidence that AI systems operate correctly, the systems must function reliably under normal conditions and withstand security threats. These properties must be tested and verified, and the data used for training and deployment must be appropriate, high-quality, and relevant to the context.

· Transparency and Explainability: AI systems must be sufficiently transparent and explainable for their outputs to be understood. This may include information about the system’s purpose, the data used to train it, and the logic and methods employed in its development.

· Fairness: Where the outputs of AI systems may significantly affect individuals, those outputs should be justifiable and not arbitrary. Regulators will define fairness within the context of their own sector or domain.

· Legal Responsibility: A legal person responsible for the outcomes of an AI system should always be identifiable.

· Redress and Contestability Rights: Where an individual’s rights have been adversely affected, organizations should ensure that AI decisions can be contested in a way that is proportionate and appropriate to the context.

For each of these six principles, regulators will need to consider how it applies to the sector or domain in which they operate and issue appropriate guidance and standards so that organizations can apply and operationalize it in practice.

What’s Next?

Following the publication of the proposals, organizations have a clearer picture of the areas they should be working on, particularly in building governance mechanisms to manage the risks associated with the AI technologies they develop and deploy. Further detail on the six principles and the wider framework for AI regulation will be set out in a UK government white paper due at the end of the year. In the meantime, the Office for Artificial Intelligence is accepting public comments on the current proposals until September 26, 2022.
