Event Summary

Managing Human Rights Risks in a World of Widespread Use of AI

On September 30, 2025, the BHRLA held a one-hour, members-only discussion on the emerging human rights risks associated with the widespread adoption of AI in today’s workplace; the applicability of the UN Guiding Principles, the OECD Guidelines, and the OECD AI Principles; and emerging regulation at the national, regional, and international levels. The key points raised in this discussion, held under the Chatham House Rule, are summarized below, along with the speaker roster.

September 30, 2025

Hosted by BHRLA

Discussants:

Rashad Abelson, Technology Sector Lead, OECD Centre for Responsible Business Conduct

Emily Holland, Counsel, Simpson Thacher

Robert Spano, Partner, Gibson Dunn

Alexandria Walden, Global Head of Human Rights, Google

Nicholas Westbrook, Counsel, Simpson Thacher

Virtual
Frameworks: Convergence at the top, divergence in the details
  • High-level, risk-based due diligence concepts are broadly aligned across regimes, including a shared focus on prioritization, stakeholder engagement, and transparency.
  • Friction arises in definitions and granularity: similar ideas are expressed through different terms and requirements, complicating implementation.
  • Overlapping obligations risk duplication, and unclear boundaries between horizontal due-diligence laws and sector/product rules can slow compliance.
International due diligence standards provide a common foundation
  • The UNGPs and the OECD RBC Guidelines provide a common foundation across jurisdictions, covering board oversight, executive steering, saliency/human rights impact assessments, and cross-functional execution, among other elements.
  • A single enterprise methodology addressing scope, scale, and remediability can anchor assessments that later map to AI-specific and platform obligations.
Interplay among CSDDD, EU AI Act, and DSA
  • Treat obligations as complementary:
    • EU AI Act (e.g., Art. 9): risk management for high-risk AI systems;
    • DSA (e.g., Art. 34): systemic risk assessments for very large platforms/search engines;
    • CSDDD (Arts. 10–11): human-rights due diligence duties across the value chain.
  • Aim for a single internal “assessment spine” mapped to each instrument rather than parallel, isolated projects.
  • Expect variation case by case: completing one assessment may cover much, but not all, of another, depending on system risk, stakeholder engagement, and documentation depth.
Due diligence responsibility and OECD tools
  • Responsible AI due diligence spans the full value chain, including inputs (data, infrastructure, finance), developers, and deployers/users; leverage and responsibilities differ by role.
  • The OECD is developing AI-specific due diligence guidance that looks across the entire value chain and aligns risk-management steps with multiple regulatory frameworks.
Commercial risks overlap with human rights risks
  • The same issues drive both commercial exposure and human-rights impacts:
    • Data leakage → confidentiality and privacy.
    • Reliability/accuracy → liability exposure and non-discrimination risks.
    • Training data quality/provenance → performance and bias.
    • Explainability/transparency → defensibility of decisions and individual rights to understand/challenge outcomes.
  • Both can be mitigated by effective controls, including approval requirements for new use cases, input restrictions, human oversight of consequential decisions, testing and monitoring, record-keeping, and staff training.
  • Even if a corporation is not directly within the scope of a regulation, supply-chain pressure is rising: customers increasingly request information on governance, due diligence, and risk-management processes so they can meet compliance obligations in their own jurisdictions.
Investing, defense, and dual-use technologies
  • Definitional ambiguity (for example, what counts as “defense,” and how proximate a technology must be to a weapon system) complicates screening and engagement strategies.
  • Transparency constraints, including national security and complex development chains, limit assurance and disclosure.
  • In the absence of codified rules, practitioners must apply existing humanitarian law principles to new AI defense technologies on a product-by-product basis.
  • Policy signals may broaden the investable universe, but Do No Significant Harm (DNSH) and Principal Adverse Impact (PAI) expectations point to rigorous, product- and use-specific diligence.
Regulatory coordination and clarity
  • Regulators and standard setters are actively working on interoperability, yet consensus across regimes remains elusive.
  • Over-regulation without clear interfaces can hinder rights protection by creating duplicative processes, uncertainty, and misaligned enforcement priorities.
  • The practical path forward is clarity and consolidation inside organizations: a single governance and assessment structure, consistently mapped to evolving external requirements.

