Understanding the human rights issues associated with artificial intelligence

19 July 2024

Over the last decade, adoption of AI systems and technologies has grown rapidly as companies and governments alike look to create efficiencies and innovation in how things are done. AI systems are now routinely used to diagnose diseases, run data analyses and support customers with their requests [1]; they are present across all industries and sectors. This increase in use has been accompanied by growing criticism of the developers and users of AI technology, driven in particular by the widening acceptance in society that AI carries both positive and negative human rights impacts.

Defining what we mean by ‘AI’

Before going further, it is worth considering what we mean by the term ‘AI’. There is currently no universally accepted definition of AI, so instead let’s highlight the approach adopted by the OECD (Organisation for Economic Co-operation and Development):

“AI is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”[2]

What are the human rights impacts associated with AI?

AI creates both positive and negative human rights impacts. On the positive side, it offers society opportunities ranging from enhancing access to education and health information to tackling human trafficking and helping to diagnose cancer [3]. On the negative side, the human rights implications are numerous and include [4]:

  • Lack of algorithmic transparency in AI-driven decision-making, and with it a lack of accountability and fairness (e.g., when people are denied jobs or refused loans on the basis of an AI algorithm).
  • Unfairness, bias and discrimination that can emerge from the use of algorithms and automated decision-making (e.g., the design of the system itself can build biased parameters into its decisions). This can result in discrimination against protected groups such as women, persons with disabilities, or people of particular ethnic or racial backgrounds; a simple fairness check of the kind used to detect such outcomes is sketched after this list.
  • Lack of contestability and accountability of AI system ‘owners’. While the EU GDPR provides that data subjects have the right to ‘obtain human intervention on the part of the controller, to express their point of view and to contest the decision’, no equivalent option is available to those affected by AI systems more broadly. This undermines the victim’s right to an effective remedy.
  • Adverse effects on workers through changing requirements for future employees, falling demand for labour leading to dismissals, and changes to the structure of unions and the autonomy of workers. A number of rights are affected, including the right to work, the right to equal pay for equal work and the right to just and favourable conditions of work.
  • Privacy and data protection issues, such as infringements of the right to informed consent and of data protection rights, and harms to individuals that go unaddressed because of an accountability gap.
  • Lack of liability for damage to persons and property (e.g., damage caused by a drone or a driverless car), because an AI system involves many parties – the data provider, designer, manufacturer, developer, user and the AI system itself. This affects the right to life and the right to an effective remedy.
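
To make the bias and discrimination point concrete, here is a minimal sketch in Python of one widely used fairness check: the disparate impact ratio, often assessed against the "four-fifths rule" threshold drawn from US employment practice. The data, group labels and threshold below are purely illustrative assumptions, not taken from any framework cited in this article.

```python
from collections import Counter

def disparate_impact_ratio(decisions, group_of, favourable="approved"):
    """Ratio of the lowest group's favourable-outcome rate to the highest.

    Under the "four-fifths rule", a ratio below roughly 0.8 is a common
    red flag for indirect discrimination. `decisions` maps applicant id
    to outcome; `group_of` maps applicant id to a protected-group label.
    Both shapes are hypothetical, chosen only for this illustration.
    """
    totals, favoured = Counter(), Counter()
    for applicant, outcome in decisions.items():
        group = group_of[applicant]
        totals[group] += 1
        if outcome == favourable:
            favoured[group] += 1
    rates = {g: favoured[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Illustrative loan decisions for two hypothetical applicant groups.
decisions = {"a1": "approved", "a2": "denied", "a3": "approved",
             "b1": "denied", "b2": "denied", "b3": "approved"}
group_of = {"a1": "group_a", "a2": "group_a", "a3": "group_a",
            "b1": "group_b", "b2": "group_b", "b3": "group_b"}

ratio, rates = disparate_impact_ratio(decisions, group_of)
print({g: round(r, 2) for g, r in rates.items()})  # {'group_a': 0.67, 'group_b': 0.33}
print(round(ratio, 2))  # 0.5 -- well below 0.8, so worth investigating
```

A check like this only surfaces unequal outcomes; deciding whether they amount to unlawful discrimination, and what to change, still requires the human and legal judgement discussed throughout this article.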

These examples show just how wide-ranging the issues are. For ease, AI risks can be categorised into two types: structural risks, which stem from the nature and design of AI itself, and functional risks, which result from AI’s transformative effect on our daily lives through its use. This distinction drives how the risks should be managed: structural risks can largely be addressed by reviewing AI governance, while functional risks need to be approached differently depending on the context in which they arise [5].

What are governments currently doing to manage the impact on human rights?

Many governments and international bodies worldwide are currently considering how best to legislate for the rapid emergence of AI technology in view of its human rights impacts. The EU is ahead of the game: its AI Act, approved in May 2024, creates the first-ever legal framework for AI. Its focus is to make sure that “AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”[6]

The Act acknowledges that, although most AI systems pose limited or no risk, certain systems create unacceptable or very high levels of risk. These include:

  • Biometric categorisation systems based on sensitive characteristics, and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • Emotion recognition in the workplace and in schools
  • Social scoring
  • Predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities

The new rules establish obligations for providers and users depending on the level of risk posed. For high-risk AI systems, companies will be required to put in place:

  • Adequate risk assessment and mitigation systems
  • High quality of the datasets feeding the system to minimise risks and discriminatory outcomes
  • Logging of activity to ensure traceability of results (a minimal illustration of what such a decision log could look like follows this list)
  • Detailed documentation providing all the information about the system and its purpose that authorities need to assess its compliance
  • Clear and adequate information to the deployer
  • Appropriate human oversight measures to minimise risk
  • High level of robustness, security and accuracy
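
The Act does not prescribe how the logging obligation should be implemented, but as a rough illustration of what “logging of activity to ensure traceability of results” could look like in practice, here is a minimal Python sketch of an append-only decision log. The record fields, the model name and the log_decision helper are assumptions made purely for illustration.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path, *, model_version, inputs, output, operator):
    """Append one traceable record per automated decision (illustrative).

    Each record carries a timestamp, the model version, a hash of the
    inputs (so a case can later be matched without storing personal
    data in the log itself), the output, and the responsible human
    operator -- enough for an auditor to reconstruct who decided what,
    with which system, and on what basis.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "operator": operator,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # append-only JSON lines

# Hypothetical usage for a fictional credit-scoring system.
log_decision("decisions.log",
             model_version="credit-model-2.1",
             inputs={"income": 41000, "term_months": 36},
             output="denied",
             operator="analyst_042")
```

Hashing rather than storing the raw inputs is one design choice among several; a real deployment would also need retention rules and access controls consistent with data protection law.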

The focus of the EU Act is a human-centric approach: ensuring that AI applications comply with fundamental human rights legislation and that accountability and transparency requirements apply to the use of high-risk AI systems, combined with improved enforcement.

Other international organisations have also been responding in growing numbers. UNESCO launched the “first-ever” global standard on AI ethics, the ‘Recommendation on the Ethics of Artificial Intelligence’, in November 2021; the framework was adopted by all 193 Member States [7]. The OECD has developed its AI Principles, promoting the use of AI that is “innovative and trustworthy and that respects human rights and democratic values” [8]. The United Nations has launched the Generative AI project to demonstrate the ways in which the UN Guiding Principles on Business and Human Rights (UNGPs) can guide more effective understanding, mitigation and governance of the risks of AI [9]. Joining the chorus, in 2023 G7 leaders agreed a set of International Guiding Principles on Artificial Intelligence; a voluntary Code of Conduct for AI developers under the Hiroshima AI process was accompanied by a call to action for AI developers to sign up and implement it [10].

Progress at national government level has been much slower, with governments approaching the topic very cautiously, sometimes due to a lack of understanding of its complexity and sometimes for fear of harming their country’s innovation and competitiveness in the AI race. In February 2024 the UK published its response to its 2023 White Paper consultation on regulating AI, focusing on a principles-based, non-statutory framework [11]. In the US, the lack of action by Congress led President Biden to issue an executive order on AI in 2023, placing new safety obligations on developers and “calling on a slew of federal agencies to mitigate the technology’s risks while evaluating their own use of the tools” [12]. The order requires companies building the most advanced AI systems to perform safety tests and notify the government of the results before rolling out their products. It is not clear, though, how deeply the order will affect the private sector, given its focus on federal agencies.

What are companies currently doing to manage the impact on human rights?

Even before the EU’s increased legislative focus, companies had been responding to growing stakeholder concern about AI’s impact on human rights by developing what are widely known as AI principles frameworks. We see this approach across all sectors and industries, with technology companies often, quite rightly, identified as early adopters. Examples include Google with its AI Principles [13], Microsoft with its Responsible AI Standard [14] and the Sony Group AI Ethics Guidelines [15].

These frameworks outline sets of values, principles and practices that companies are adopting to prevent, mitigate and remediate the impacts of their AI technologies on human rights. Because the frameworks are voluntary, their quality and robustness vary, and it is not always evident what processes, systems and governance sit behind them.

Here are some of the principles typically outlined in these frameworks:

  • Impact assessments to understand risks - this includes stakeholder and context mapping.
  • Taking actions to prevent the creation or reinforcement of unfair bias - this includes implementation procedures, automated detection tools and extensive review.
  • Being transparent about how AI systems work - this includes having an appropriate level of transparency for the right audience and explaining its intended use.
  • Taking accountability for AI systems - this includes human accountability for the development, use and outcomes of AI systems.
  • Establishing data governance systems and controls - this includes ensuring data protection and security policies and procedures.
What are some of the gaps in company approaches to managing human rights impacts, and what further actions could be taken?

A recent research paper on AI governance and human rights, produced by the International Law Programme at Chatham House, outlines some further actions that companies could take to address human rights impacts [16]. These include:

  • Establishing senior oversight of, and accountability for, AI systems and processes.
  • Recruiting human rights experts to join AI ethics teams to encourage multi-disciplinary thinking.
  • Developing a human rights-based approach to AI ethics and impact assessment.
  • Creating a decision-making structure that allows human rights issues to be monitored and raised on an ongoing basis.
  • Developing internal and external communications on AI that provide explanation and transparency on its use, so that affected people can understand AI-assisted decision-making.

Creating a process for complaints and remedies

One big gap that continues to exist is the question of remedy. The UN Basic Principles and Guidelines lay out the right to a remedy and reparation for victims of violations of international human rights law. To meet these standards, complaint mechanisms need to be easily and directly accessible to those who are affected, and remedy needs to be timely and effective [17]. However, little action is being taken in this area, which should be a particular cause for concern for marginalised groups who may lack the voice and influence to create change.
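
As a purely illustrative sketch of what a minimally adequate mechanism could involve, the Python below routes every contested automated decision to a named human reviewer with a hard response deadline, echoing the GDPR-style right to obtain human intervention. The class, its field names and the 14-day deadline are hypothetical assumptions, not requirements drawn from the UN Basic Principles and Guidelines or any other framework cited here.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class Complaint:
    """One contested automated decision (illustrative fields only)."""
    complainant: str
    decision_ref: str   # links back to an entry in the decision log
    grounds: str        # the complainant's own account, in free text
    received: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    reviewer: str | None = None
    due: datetime | None = None

def assign_for_human_review(complaint, reviewer, sla_days=14):
    """Route a complaint to a named human with a concrete deadline.

    A person, not the system, re-examines the decision, and 'timely'
    is made measurable as a service-level deadline. The 14-day figure
    is an assumption for illustration only.
    """
    complaint.reviewer = reviewer
    complaint.due = complaint.received + timedelta(days=sla_days)
    return complaint

# Hypothetical usage.
complaint = assign_for_human_review(
    Complaint("applicant_17",
              decision_ref="2024-07-19/credit/0042",
              grounds="The income figure the system used was outdated."),
    reviewer="case_officer_3")
print(complaint.reviewer, complaint.due.date())
```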

Conclusion

Here we have set out some of the current approaches being taken by governments and companies to manage the human rights impacts caused by the increased use of AI technology. We have shown how companies have responded in the absence of government legislation by developing their own voluntary AI principles frameworks, although the lack of standardisation and robustness of these frameworks creates varied standards in the marketplace. More worrying is the fact that, while these frameworks largely seek to prevent human rights abuses caused by the development and use of AI, they do not address another fundamental element of the UN Basic Principles and Guidelines: the right to remedy and reparation for victims.

SLR is ready to support your business across the full scope of environmental, social, and governance solutions.

This article was written by Petra Parizkova.

-------------------------------------------------

References

[1] https://www.ibm.com/blog/breaking-down-the-advantages-and-disadvantages-of-artificial-intelligence/

[2] https://oecd.ai/en/wonk/ai-system-definition-update

[3] https://www.weforum.org/agenda/2021/07/ai-artificial-intelligence-doing-good-in-world/

[4] https://www.sciencedirect.com/science/article/pii/S2666659620300056#sec0004

[5] https://www.eui.eu/news-hub?id=research-beyond-the-hype-the-interplay-between-human-rights-and-ai

[6] https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law

[7] https://www.unesco.org/en/artificial-intelligence/recommendation-ethics?hub=32618

[8] https://oecd.ai/en/ai-principles

[9] https://www.ohchr.org/en/business-and-human-rights/b-tech-project

[10] https://digital-strategy.ec.europa.eu/en/news/commission-welcomes-g7-leaders-agreement-guiding-principles-and-code-conduct-artificial

[11] https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response

[12] https://www.washingtonpost.com/technology/2023/10/30/biden-artificial-intelligence-executive-order/

[13] https://ai.google/responsibility/principles/

[14] https://www.microsoft.com/en-us/ai/principles-and-approach/

[15] https://www.sony.com/en/SonyInfo/sony_ai/responsible_ai.html

[16] https://www.chathamhouse.org/sites/default/files/2023-01/2023-01-10-AI-governance-human-rights-jones.pdf

[17] https://cdt.org/insights/access-to-justice-and-effective-remedy-in-the-eu-ai-act-the-state-of-play/
