The US of A(I)? – A look at the White House’s “Blueprint for an AI Bill of Rights”
25/10/2022

Earlier this month, the US White House Office of Science and Technology Policy (“OSTP”) released its Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (the “Blueprint”). The document provides a framework for the roles of the US government, tech companies and citizens in ensuring accountability in the field of AI. In this blog, we’ll take a look at the content of the Blueprint and where it sits in the context of other recent approaches to AI regulation.
What is in the Blueprint and when does it apply?
The Blueprint contains non-binding principles. These are not given the status of legislation or regulation and are not enforceable. The principles are supported with guidance for their responsible implementation and are intended to “help guide the design, use, and deployment of automated systems to protect the American public”.
The principles are:
- Safe and Effective Systems: Individuals should be protected from unsafe and ineffective systems.
- Algorithmic Discrimination Protections: Individuals should be protected from discrimination through algorithms, and systems should be designed in a way that is equitable.
- Data Privacy: Individuals should be protected from abusive data practices through protections built into systems, and should have control over how their data is used.
- Notice and Explanation: Individuals should be given fair notice and an explanation of when an AI system is being used and how it contributes to outcomes that affect them.
- Alternative Options: Individuals should, where appropriate, be able to opt for a human alternative to the AI system and have access to a fallback process to resolve problems.
The Blueprint contains a number of examples of potentially problematic AI systems and uses, including lending, surveillance and HR, among other higher-risk applications. However, the Blueprint is only intended to apply to (i) automated systems that (ii) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.
How does this approach compare to what we’ve seen in the UK and Europe?
There is certainly a degree of overlap between the principles and ethical frameworks identified elsewhere. For example, the EU’s 2019 Ethics Guidelines for Trustworthy AI covered many of the same points as the Blueprint’s principles (and several more).
In addition, some of the particularly problematic systems identified in the Blueprint (and the focus on the “meaningful impacts” mentioned above) mirror the risk-based approach seen in the EU’s Draft AI Regulation, the world’s first all-encompassing AI regulation, which prohibits certain particularly risky systems outright and imposes strict compliance requirements on others designated “high-risk”. However, unlike the Draft Regulation, the Blueprint has no statutory footing and no regulatory enforcement behind it. Nor does it strictly ban any of the more problematic systems it identifies.
Like the UK Government’s recent policy paper on AI regulation (summarised here), the Blueprint focuses on high-level principles, leaving a significant degree of interpretation to those using the technology to assess the best approach. Both highlight the need for proportionality in any measures taken to regulate AI, although under the UK plan these determinations will apparently be made by sector regulators (something the Blueprint does not focus on). The UK policy paper, though preceded by a significant number of reports and consultations from various bodies, does not contain the same level of explanatory guidance as the Blueprint; this may come as a regulatory landscape develops around the sector regulators. Further information on the UK approach, which may build on the policy paper, is due in the coming months.
Reaction and Analysis
Reaction to the Blueprint has been mixed. Much of the criticism has focussed on the fact that the principles (even with the explanatory guidance) are vague and, crucially, lack the force of law and the bans and enforcement mechanisms that would accompany it. Many outlets have contrasted the Blueprint with the Draft AI Regulation, lamenting the former’s lack of “teeth”.
Others have argued that the approach taken in the Blueprint is the correct one, as it allows greater freedom to innovate without fear of enforcement or bans. There has also been praise for its focus on protections against racial profiling.
There is general acceptance that the Blueprint is not the perfect solution to regulating AI, but it may be a useful starting point, bearing in mind that it is the first nationwide approach to AI seen in the US.
Conclusion
In recent times, many jurisdictions have been considering how to adapt their laws and policies to the ever-changing landscape of AI. The recurring debate centres on balancing innovation with the safety and confidence of citizens.
Against this backdrop, we have seen a number of principles-based approaches, with the Draft AI Regulation something of an outlier. Approaches to regulation will likely need to be kept under review as time goes on, and whether an approach is based on principles or legislation, there will often be a need for regulatory discretion in enforcement.
We are in the early stages of AI regulation. It will develop over time, and it is unsurprising that there have been diverging approaches in different jurisdictions. It remains to be seen whether these will be harmonised to allow more seamless trade, and how governments will balance consumer demand for innovation against expectations of trust and accountability.
First published by the AI Alliance
By Burness Paull LLP, Scotland, a Transatlantic Law International Affiliated Firm.
For further information or for any assistance please contact ukscotland@transatlanticlaw.com
Disclaimer: Transatlantic Law International Limited is a UK registered limited liability company providing international business and legal solutions through its own resources and the expertise of over 105 affiliated independent law firms in over 95 countries worldwide. This article is for background information only and provided in the context of the applicable law when published and does not constitute legal advice and cannot be relied on as such for any matter. Legal advice may be provided subject to the retention of Transatlantic Law International Limited’s services and its governing terms and conditions of service. Transatlantic Law International Limited, based at 42 Brook Street, London W1K 5DB, United Kingdom, is registered with Companies House, Reg Nr. 361484, with its registered address at 83 Cambridge Street, London SW1V 4PS, United Kingdom.