Women at the Table

Integrating Human Rights Considerations Along the AI Lifecycle: A Framework for AI Development

AI & Equality White Paper 

Authored by Emma Kallina, Sofia Kypraiou, & Caitlin Kraft-Buchman
From the Human Rights Toolbox, December 2024

Current research highlights the potential for AI systems to adversely affect individual and collective Human Rights if developed without careful consideration. By incorporating critical analysis and reflection points regarding Human Rights impacts during AI development, such harms can be mitigated or prevented entirely. This white paper outlines our <AI & Equality> framework, which enables such an approach and goes further by promoting AI development driven by the wish to advance human dignity.

The framework consists of the essential questions and reflection points relevant at each of the six stages of the AI lifecycle, ensuring that Human Rights impacts are considered as they become relevant, rather than after the system is already complete. Integrating the Alan Turing Institute's Human Rights Impact Assessment with our practical Human Rights-based approach to the AI Lifecycle and AI development, this methodology facilitates compliance with upcoming policy requirements such as the fundamental rights impact assessment required by the EU AI Act.

However, our goal is to move beyond mere compliance towards a paradigm of AI development that proactively promotes the realization of Human Rights, rather than mitigating risks as an add-on or after harms have already occurred. By involving affected communities from the outset and granting them substantial decision-making agency, we promote and enable the development of systems that place Human Rights, equality, and inclusion at the core of code, capable of creating new opportunities and innovatively correcting inequities. We hope to bring social programs in line with 21st-century research and values, united in finding ways to make AI more effective, not merely more ‘accurate’ and ‘efficient’.

Our Advocacy:

• AI shouldn’t just be about efficiency—it should instead be effective and improve quality of life.

• Our AI and Equality framework moves AI from damage control to a Human Rights-first approach.

• Forget compliance checklists—this is about making AI work for people, not against them.

• AI decisions affect all of us. If the system isn’t built with everyone in mind, it’s built for failure.

• Our framework demands that communities—not just companies—shape AI from the start.

• No more black-box decisions. No more systems that serve the powerful at the expense of the rest.

• AI isn’t neutral. Every system carries consequences—who benefits, who’s harmed, and who gets left behind.

• Our framework makes Human Rights Impact Assessments (HRIA) standard practice, not an afterthought.

• AI creators and policymakers must learn to ask hard questions before launching systems into the world.

• Governments must set the terms for AI—before it sets the terms for us.

• Our White Paper aligns with the EU AI Act, giving policymakers the tools to scrutinize, question, and demand accountability.

• AI is already shaping laws, rights, and freedoms. The public sector has to take control—now.

Last modified: February 7, 2025