Building Global Capacity for Rights-Based Technology
The Challenge
AI systems are rapidly reshaping decision-making across healthcare, criminal justice, and social services, yet most developers lack the frameworks to center human rights in their work. Policymakers struggle to regulate technologies they don’t fully understand. The communities most affected by AI systems remain excluded from conversations about their design and deployment.
Our Approach
The AI & Equality initiative doesn’t just teach principles: we’ve built the methodology, evaluation tools, and implementation frameworks that make rights-based AI actionable.
- Human Rights-Based AI Lifecycle Framework: Our methodology integrates Human Rights Impact Assessments (HRIAs) across the entire AI development cycle, from initial design through deployment and monitoring. Validated at EPFL, the Sorbonne Center for AI, Cambridge, the Technical University of Munich, and the EU’s AI Doctoral Academy, this framework provides the analytical tools and checkpoints developers and policymakers need to identify rights implications before systems scale.
- HumRights-Bench: We developed the first benchmark grounded in international human rights law to evaluate how AI systems understand and respect fundamental rights, providing measurable standards for algorithmic accountability.
- Contextualized Toolkits: We translate frameworks into practice through region-specific implementation guides. The African AI & Equality Toolbox, developed in collaboration with the African Centre for Technology Studies (ACTS), provides detailed methodologies across five strategic sectors: agriculture, health, climate, digital safety, and language inclusion. Our Spanish-language course, developed with Chile’s National Centre for Artificial Intelligence (CENIA) and hosted on the University of Chile’s UBierta platform, expands reach across Latin America with localized case studies.
- Public Procurement Framework: Research-backed guidance helps governments safeguard the public interest when purchasing AI systems, embedding rights considerations into procurement decisions before deployment.
- Global Community of Practice: Through workshops that break down silos between disciplines, we’ve built a network of 700+ researchers and practitioners across 57 countries. Data scientists learn to recognize rights implications in their code. Policymakers gain technical literacy to craft effective regulation. Business leaders discover how human rights frameworks create more robust technology.
The Impact
From EPFL to the Sorbonne Center for AI, from Cambridge to Chile’s CENIA, from Ghana to Uganda to Egypt, we’ve built the infrastructure for rights-based AI development: validated methodologies, measurable benchmarks, implementation tools, and a global network of practitioners equipped to build technology that serves democracy and equality.
This isn’t just education. It’s the foundation for ensuring AI systems are designed, evaluated, and governed with human rights at the center.