Over the last decade we have seen growing adoption of AI systems and technologies as companies and governments alike look to create efficiencies and innovations in how things are done. AI systems are now routinely used to diagnose diseases, run data analyses and support customers with their requests [1]; they are present in every industry and sector. This growth in use has been accompanied by mounting criticism of the developers and users of AI technology, driven largely by the growing recognition in society that AI carries both positive and negative human rights impacts.
Before going further, it is worth considering what we mean by the term ‘AI’. There is currently no universally accepted definition of AI, so instead let’s highlight the approach adopted by the OECD (Organisation for Economic Co-operation and Development):
“AI is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[2]
AI creates both positive and negative human rights impacts. On the positive side it provides society with opportunities, from enhancing access to education and health information to tackling human trafficking and helping to diagnose cancer [3], while on the negative side there are many human rights implications [4]. Commonly cited examples include:

- intrusions on privacy through large-scale data collection and surveillance;
- discriminatory outcomes when systems are trained on biased or unrepresentative data;
- chilling effects on freedom of expression and assembly;
- opaque automated decision-making that undermines due process.
These examples show just how wide-ranging the issues involved are. For ease, AI risks can be categorised into two types: structural risks, which stem from the nature and design of AI itself, and functional risks, which result from AI’s transformative effect on our daily lives through its use. This distinction drives how the risks should be managed: structural risks are addressed primarily through reviewing AI governance, while functional risks need to be approached differently depending on the context in which they arise [5].
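To make the distinction concrete, here is a minimal Python sketch of how the two categories might be routed to different management approaches. The category names come from the discussion above; the AIRisk structure, the manage() function and the routing logic are hypothetical illustrations, not part of any framework cited here.

```python
from dataclasses import dataclass
from enum import Enum

class RiskType(Enum):
    STRUCTURAL = "structural"  # stems from the nature and design of AI itself
    FUNCTIONAL = "functional"  # stems from how AI is used in a given context

@dataclass
class AIRisk:
    description: str
    risk_type: RiskType
    context: str = "general"

def manage(risk: AIRisk) -> str:
    """Route a risk to the management approach suggested above."""
    if risk.risk_type is RiskType.STRUCTURAL:
        # Structural risks: reviewed through AI governance processes.
        return f"AI governance review: {risk.description}"
    # Functional risks: assessed case by case, in the context of use.
    return f"Context-specific assessment ({risk.context}): {risk.description}"

print(manage(AIRisk("opaque model decision-making", RiskType.STRUCTURAL)))
print(manage(AIRisk("biased screening outcomes", RiskType.FUNCTIONAL, "recruitment")))
```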
Currently many local, national and international governments worldwide are considering how best to legislate for the growing use of AI technology in view of its human rights impacts. The EU is ahead of the game: its AI Act, approved in May 2024, creates the first-ever comprehensive legal framework for AI. Its focus is to make sure that “AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”[6]
The Act acknowledges that, although most AI systems pose limited or no risk, certain AI systems create unacceptable or very high levels of risk. These include:

- ‘unacceptable risk’ practices that are prohibited outright, such as cognitive behavioural manipulation, social scoring, and real-time remote biometric identification in publicly accessible spaces (subject to narrow exceptions); and
- ‘high-risk’ systems used in sensitive areas such as critical infrastructure, education, employment, access to essential services, law enforcement, migration and border control, and the administration of justice.
The new rules establish obligations for providers and users depending on the level of risk posed. For high-risk AI systems, companies will be required to create:

- a risk management system covering the system’s full lifecycle;
- data governance and quality controls for training, validation and testing data;
- technical documentation and automatic record-keeping;
- transparency measures and information for users;
- human oversight arrangements; and
- appropriate levels of accuracy, robustness and cybersecurity.
The focus of the EU Act is a human-centric approach: ensuring AI applications comply with fundamental human rights legislation, imposing accountability and transparency requirements on the use of high-risk AI systems, and strengthening enforcement.
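To illustrate the tiered logic described above, the following Python sketch maps the Act’s risk tiers to simplified compliance consequences. The tier names reflect the Act’s risk-based approach, but the RiskTier enum, the obligations list and the obligations_for() helper are illustrative simplifications, not a restatement of the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers from the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # permitted, subject to strict obligations
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no new obligations

# Abbreviated, non-exhaustive obligations for providers of high-risk systems.
HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "data governance and quality controls",
    "technical documentation and record-keeping",
    "transparency and information for users",
    "human oversight measures",
    "accuracy, robustness and cybersecurity",
]

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the (simplified) compliance obligations for a given tier."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Systems in this tier are prohibited in the EU.")
    if tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if tier is RiskTier.LIMITED:
        return ["disclose that the user is interacting with an AI system"]
    return []  # minimal risk: no additional obligations

if __name__ == "__main__":
    for item in obligations_for(RiskTier.HIGH):
        print("-", item)
```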
Other international organisations have also been responding in growing numbers. UNESCO launched the “first-ever” global standard on AI ethics, the Recommendation on the Ethics of Artificial Intelligence, in November 2021; the framework was adopted by all 193 Member States [7]. The OECD has developed its AI Principles, which promote the use of AI that is “innovative and trustworthy and that respects human rights and democratic values” [8]. The United Nations has launched the Generative AI project to demonstrate how the UN Guiding Principles on Business and Human Rights (UNGPs) can guide more effective understanding, mitigation and governance of the risks of AI [9]. Joining the chorus, in 2023 G7 leaders agreed a set of International Guiding Principles on Artificial Intelligence; a voluntary Code of Conduct for AI developers under the Hiroshima AI Process was accompanied by a call on AI developers to sign up and implement it [10].
Progress at national government level has been much slower, with governments approaching the topic cautiously, sometimes due to a lack of understanding of its complexity and sometimes for fear of harming their country’s innovation and competitiveness in the AI race. In February 2024 the UK published its response to the 2023 White Paper consultation on regulating AI, favouring a principles-based, non-statutory framework [11]. In the US, the lack of action by Congress led President Biden in 2023 to issue an executive order on AI, placing new safety obligations on developers and “calling on a slew of federal agencies to mitigate the technology’s risks while evaluating their own use of the tools” [12]. The order requires companies building the most advanced AI systems to perform safety tests and notify the government of the results before rolling out their products. It is not clear, though, how deeply the order will affect the private sector, given its focus on federal agencies.
Prior to the increased legislative focus in the EU, companies had already been responding to growing stakeholder concerns about AI’s impact on human rights by developing what are widely known as AI principles frameworks. We see this approach across all sectors and industries, with technology companies, quite rightly, often identified as early adopters. Examples include Google’s AI Principles [13], Microsoft’s Responsible AI Standard [14] and the Sony Group AI Ethics Guidelines [15].
These frameworks outline sets of values, principles and practices that companies are adopting to prevent, mitigate and remediate the impacts of their AI technologies on human rights. Because the frameworks are voluntary, their quality and robustness vary, and it is not always evident what processes, systems and governance sit behind them.
Principles typically outlined in these frameworks include:

- fairness and non-discrimination;
- transparency and explainability;
- privacy and security;
- safety and reliability;
- accountability and human oversight.
One big gap that persists is the question of remedy. The UN Basic Principles and Guidelines set out the right to a remedy and reparation for victims of violations of international human rights law. To meet these standards, complaint mechanisms need to be easily and directly accessible to those who are affected, and remedy needs to be timely and effective [17]. However, this is not an area where much action is being taken, which should be a particular concern for marginalised groups who may lack the voice and influence to create change.
Here we have set out some of the current approaches being taken by governments and companies to manage the human rights impacts of the increased use of AI technology. We have shown how companies have responded in the absence of government legislation by developing their own voluntary AI principles frameworks, although the lack of standardisation and robustness across these frameworks creates varied standards in the marketplace. More worrying is that, while these frameworks largely aim to prevent human rights abuses caused by the development and use of AI, they do not address another fundamental element of the UN Principles and Guidelines: the right to remedy and reparation for victims.
SLR is ready to support your business across the full scope of environmental, social, and governance solutions.
Get in touch.

This article was written by Petra Parizkova.
-------------------------------------------------
References
[1] https://www.ibm.com/blog/breaking-down-the-advantages-and-disadvantages-of-artificial-intelligence/
[2] https://oecd.ai/en/wonk/ai-system-definition-update
[4] https://www.sciencedirect.com/science/article/pii/S2666659620300056#sec0004
[5] https://www.eui.eu/news-hub?id=research-beyond-the-hype-the-interplay-between-human-rights-and-ai
[7] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics?hub=32618
[8] https://oecd.ai/en/ai-principles
[9] https://www.ohchr.org/en/business-and-human-rights/b-tech-project
[12] https://www.washingtonpost.com/technology/2023/10/30/biden-artificial-intelligence-executive-order/
[13] https://ai.google/responsibility/principles/
[14] https://www.microsoft.com/en-us/ai/principles-and-approach/
[15] https://www.sony.com/en/SonyInfo/sony_ai/responsible_ai.html
[17] https://cdt.org/insights/access-to-justice-and-effective-remedy-in-the-eu-ai-act-the-state-of-play/