Women at the Table were thrilled to take part in the consultations, and then to have our guidance and concrete recommendations (below) included
in the official report of the Office of the United Nations High Commissioner for Human Rights on the practical application of the “Guiding Principles on Business and Human Rights to the activities of technology companies”, here.
The UN Guiding Principles on Business and Human Rights call for States to enact enforceable legal frameworks that require companies to respect human rights, and encourage businesses to remain accountable by communicating how they address their impacts on human rights.
Our input, submitted in February 2022 for the OHCHR report on the application of the UNGPs in the tech sector, was as follows:
The velocity and scale of digitalisation are transforming every part of society, in every part of the globe. It remains uncertain whether emerging, data-driven Artificial Intelligence (AI) technologies will move us toward, or away from, socio-economic equality, gender and climate justice, stability and peace.
Systemic gender, racial and intersectional bias sits at the core of current AI and Algorithmic Decision-Making (ADM) processes, wiring historic bias, inequity and discrimination into our newly digitised economic, governance and social systems. Algorithms built on incomplete or biased data, and the models trained on them, incorporate assumptions about gender, race and class. If we do not act now, this process will harden into inequalities that are ever more difficult to dismantle.
We call on Governments, the Private Sector, and Civil Society to adopt human rights-based guidelines that establish accountability and transparency for AI and ADM in the public and private sectors, so that technology does not embed already biased systems into our future. Combatting and correcting this wired-in bias and discrimination is urgent if the pro-social capabilities of AI and ADM are to be activated. Technology must be harnessed to deliver equality outcomes: designed with inclusion at the core, it can create new opportunities and innovative corrections of inequities.
Digital bias and discrimination exacerbate inequality at scale. AI and ADM systems are increasingly adopted by the public sector for healthcare, education, social protection allocation, law enforcement, criminal justice and border management (to name a few areas) without human rights assessments or any algorithmic due diligence. Combatting and correcting this wired-in bias and discrimination is therefore urgent: public and private sector deployment without impact assessment and due diligence has the clear and present potential to evolve into digital colonial and patriarchal social structures and governance that become ever more difficult to dismantle. States must act now.
Therefore we call on States, as well as the Private Sector and Civil Society, to:
- Establish strong legal frameworks to promote and embed accountability and transparency for the technology sector and its production of algorithmic decision-making (ADM) systems, in both the public and private sectors.
- This includes potentially expanding the powers of sector-specific agencies, or creating new terms of reference, to oversee, audit and monitor ADM systems, providing regulatory oversight and legal liability in the private and public sectors.
- Institutionalize public and private sector uptake of Algorithmic Impact Assessments (AIAs) and rigorous testing across the lifecycle of AI systems.
- A major flaw of certain forms of AI is their lack of transparency and, consequently, of explainability. Companies developing AI are under no obligation to explain how their tools work or what guiding principles were followed during design and development. Moreover, public sector entities and private companies using AI have no obligation to explain why they are using it, or to determine its impact.
- Companies should therefore be required to disclose the purpose of their AI systems and the approach they took in designing them. This disclosure could be included in AI impact assessments, in which companies answer simple yet essential questions such as “what is the purpose of this AI?” and “what steps have been taken, and will be taken, to limit bias?”
- Public institutions need to develop a self-assessment framework, for both the public and private sectors, designed to respect the public’s right to know about the AI systems that affect their lives, grounded in principles of accountability and fairness.
- Testing and reporting should account for the origins and use of training data, test data, models, Application Programming Interfaces (APIs), and other components over the product lifecycle. Testing should cover pre-release trials, independent auditing, certification, and ongoing monitoring for bias and other harms. ADM should improve the quality of the human experience, not control it.
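As an illustration only (not part of the submission itself), the ongoing bias monitoring called for above can start from something as simple as comparing favourable-decision rates across groups in an ADM system’s audit log. The sketch below uses entirely hypothetical data and the common “four-fifths rule” threshold as a red flag for disparate impact:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the favourable-decision rate per group.

    decisions: iterable of (group, decision) pairs, where decision is
    1 (favourable outcome) or 0 (unfavourable outcome).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in decisions:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratios(rates, reference_group):
    """Ratio of each group's selection rate to a reference group's.

    Ratios below 0.8 fail the 'four-fifths rule', a widely used
    screening threshold for disparate impact.
    """
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

# Hypothetical audit log of an ADM system's decisions
log = [("women", 1), ("women", 0), ("women", 0), ("women", 0),
       ("men", 1), ("men", 1), ("men", 0), ("men", 0)]

rates = selection_rates(log)                     # women: 0.25, men: 0.5
ratios = disparate_impact_ratios(rates, "men")
print(ratios["women"])                           # 0.5 -- fails the four-fifths rule
```

Such a check is only a first screen: a full audit would also cover error rates, intersectional subgroups, and drift over the product lifecycle.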
- Create and promote Gender-responsive procurement guidelines
- Organizations and all levels of government should develop ADM gender equality procurement guidelines with hard targets, and outline the roles and responsibilities of the organizations required to apply them.
- Improve datasets for equality in open gender disaggregated data, data collection, and inclusive quality datasets
- Actively produce open gender-disaggregated datasets; these better enable an understanding of the sources of bias in AI and ultimately improve the performance of machine learning systems.
- Invest in controls to oversee data collection processes and human-in-the-loop verification, so that data is not collected at the expense of women and other traditionally excluded groups.
- Engage in more inclusive data collection processes that focus not only on quantity but on quality of datasets.
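As a hypothetical illustration of why the disaggregation recommended above matters, an aggregate evaluation metric can look acceptable while concealing a sharp performance gap for one group. The sketch below, using invented predictions and labels, computes accuracy both overall and per gender group:

```python
def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

def disaggregated_accuracy(records):
    """Accuracy overall and broken down by group.

    records: iterable of (group, prediction, label) triples.
    """
    overall = [(p, y) for _, p, y in records]
    by_group = {}
    for g, p, y in records:
        by_group.setdefault(g, []).append((p, y))
    return accuracy(overall), {g: accuracy(v) for g, v in by_group.items()}

# Hypothetical evaluation set: the aggregate number looks tolerable,
# but the disaggregated view reveals the system fails women far more often.
records = [("women", 0, 1), ("women", 1, 1), ("women", 0, 1), ("women", 1, 1),
           ("men", 1, 1), ("men", 0, 0), ("men", 1, 1), ("men", 0, 0)]

overall, per_group = disaggregated_accuracy(records)
print(overall)      # 0.75
print(per_group)    # {'women': 0.5, 'men': 1.0}
```

This is why quality (representative, disaggregated) matters as much as quantity: without the group breakdown, the 0.75 aggregate would hide the disparity entirely.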
- Take clear, proactive steps to include an intersectional variety, and equal numbers, of women and girls in the creation, design, and coding of ADM.
- New technologies offer new opportunities, including the creation of genuinely new structures that demand new ideas and new teams. Yet gender roles that are being dismantled in the real world are being wired into new ADM systems, carrying old, stereotypical conceptions and associations of gender, race, and class. Innovative and inclusive thinking is necessary, and the imagination and skill can be provided by the largest untapped intellectual resource on the planet: women and girls.
- Set targets for gender balance in AI decision making
- Gender balance in decision making should be put on the official agenda of all those involved with the funding, design, adoption, and evaluation of ADM.
- Set targets for gender balance in design teams
- Employing a robust range of intersectional feminists in the design of ADM systems will spur greater innovation and creativity, and can help detect and mitigate bias and harmful effects on women, girls, and those traditionally excluded.
- Require companies to proactively disclose and report on gender balance in research and design teams, including upstream when applying for grants and procurement. Incentivize teams that are balanced and multi-disciplinary.
- Create a research fund to explore the impacts of technology and AI on human rights, and solutions to them
- A research fund should be established to support public consultation and public and private sector research assessing the interplay of gender and AI, machine learning, bias and fairness, with a multi-disciplinary approach that goes beyond the computer science and economic lens. This approach should address new ways of embedding digital literacy, and study the economic, political, and social effects of ADM on the lives of women and of those traditionally excluded from rule making and decision taking.
- Establish and enforce human rights targets, reporting, and legal accountability for the tech sector
- Correcting skewed data at mass scale will require multilateral and international cooperation to ensure that existing inequalities and infringements on human rights are not embedded into technology businesses and systems.
- This requires adopting data and technical standards for ADM and machine learning that are grounded in human rights. Practical application involves a UN agencies-wide review of how existing international human rights laws and standards apply to ADM, machine learning, and gender. Such a review can guide and provoke creative thinking toward a human rights-grounded approach fit for purpose in the fast-changing digital age.
- Develop a set of metrics for digital inclusiveness, to be urgently agreed, measured worldwide, and detailed with sex-disaggregated data in the annual reports of institutions such as the UN, the International Monetary Fund, the International Telecommunication Union, the World Bank and other multilateral development banks, and the OECD.
The world is at a critical turning point. To innovate and thrive in a rapidly changing global environment, we need new norms, and practical methodologies for implementing standards, that uphold human rights and hold the technology sector accountable, so that AI and machine learning do not embed already biased and discriminatory systems into the collective future of humanity.
Last modified: October 6, 2022