Women at the Table

UNECE Conference: A gender perspective on standards for artificial intelligence

This conference will outline the main types of bias that can exist in standards for artificial intelligence and propose ways forward to address them, eventually leading to a brief guidance document on the topic.

08 April 2025, 14:00 – 17:00 | Palais des Nations, Rooms H207-H208-H209, Geneva, Switzerland

Background: As artificial intelligence (AI) continues to expand rapidly, it is crucial to ensure that the most vulnerable populations are not neglected or rendered further invisible. AI systems, which are increasingly used in decision-making across various sectors such as healthcare, finance, recruitment, and public services, often inherit and amplify historical gender biases present in training data, model design, and algorithmic assumptions. These biases perpetuate systemic disadvantages for women and marginalized gender groups, reinforcing structural inequalities, limiting economic and professional opportunities, and restricting access to essential services.

The Consequences of Gender Bias in AI Systems

  1. Distorted Medical Diagnoses and Health Risks: Gender bias in AI-driven healthcare leads to diagnostic errors, misclassification, and suboptimal treatment due to the underrepresentation of women in clinical datasets. For instance, AI diagnostic tools trained primarily on male patient data often fail to accurately identify conditions like heart disease in women, resulting in delayed or incorrect diagnoses. Similarly, AI models in dermatology and radiology show lower accuracy for individuals with darker skin, disproportionately affecting women of color.
  2. Reinforcing Discriminatory Hiring and Workplace Barriers: AI-driven hiring tools, widely used to screen resumes and predict candidate suitability, often encode biases from male-dominated industries. For example, an AI recruitment tool that penalized resumes mentioning “women’s colleges” or women-dominated professions replicated existing gender biases in hiring. Even when explicit gender indicators are removed, AI models can infer gender from proxies such as career gaps, which are often linked to maternity leave, disadvantaging female applicants.
  3. Economic Exclusion Through AI-Driven Financial Services: AI systems play a key role in financial services, particularly in creditworthiness assessments and loan approvals. However, models based on historical data often disadvantage women with nontraditional credit histories. Algorithms prioritizing long-term financial records restrict loan access for women who have taken career breaks for caregiving. In developing countries, AI-driven microfinance systems frequently disadvantage women and marginalized communities due to biased risk assessments that ignore alternative indicators of financial stability.
  4. Algorithmic Exclusion in Public Services and Safety Systems: AI systems in public services, such as welfare distribution, identity verification, and law enforcement, risk excluding women and marginalized groups. For example, facial recognition systems used in border control and policing misidentify darker-skinned women at higher rates than lighter-skinned men, leading to wrongful arrests, travel restrictions, and exclusion from essential services.
  5. Perpetuating Gender Stereotypes in Digital Environments: AI recommendation systems reinforce digital stereotypes through biased job ads and search results. Studies have found that AI-driven job ads for STEM roles are displayed to men 20% more often than to women, reinforcing occupational segregation. Similarly, language models trained on historical text often associate women with domestic and caregiving roles, embedding stereotypes into AI-generated content.

Addressing Algorithmic Bias: To effectively address algorithmic bias, it is essential to move beyond surface-level fixes and tackle its root causes. This requires interdisciplinary collaboration, combining technical expertise with insights from affected communities, as well as standardized evaluations of data sources, transparent model design, and inclusive AI standards development practices. Understanding the origins of bias—whether preexisting, technical, or emergent—is critical to designing AI systems that mitigate, rather than perpetuate, systemic inequalities.
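To make the idea of a standardized evaluation concrete, the short Python sketch below computes selection rates by gender and the resulting disparate-impact ratio for a hypothetical screening model. This is a minimal illustration only: the audit data, the column names ("gender", "selected"), and the 80% ("four-fifths") screening threshold are assumptions for the example, not part of any UNECE guidance.

```python
# Minimal sketch of a standardized bias evaluation for a binary screening
# decision (1 = selected). Data, field names, and the 80% threshold are
# illustrative assumptions, not a prescribed methodology.
from collections import defaultdict

def selection_rates(records, group_key="gender", outcome_key="selected"):
    """Return the selection rate for each group found in `records`."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        positives[r[group_key]] += r[outcome_key]
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 fail the common 'four-fifths' screening rule."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample for a resume-screening model
records = [
    {"gender": "F", "selected": 1}, {"gender": "F", "selected": 0},
    {"gender": "F", "selected": 0}, {"gender": "F", "selected": 0},
    {"gender": "M", "selected": 1}, {"gender": "M", "selected": 1},
    {"gender": "M", "selected": 0}, {"gender": "M", "selected": 1},
]

rates = selection_rates(records)
print(rates)                    # {'F': 0.25, 'M': 0.75}
print(disparate_impact(rates))  # 0.33 -> flags a disparity for review
```

A single ratio like this is of course only a starting point; a full evaluation would examine error rates, proxy variables, and the provenance of the training data, in line with the interdisciplinary approach described above.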

Draft Agenda

Welcome

  • Lance Thompson, Head of Regulatory Cooperation Unit, ECTD, ECE
  • Michelle Parkouda, Chair, Team of Specialists on Gender-Responsive Standards (ToS-GRS)

Keynote: TBC

Impact of Gender Bias in AI

  • Sergio Scandizzo, Head of Internal Modelling, European Investment Bank (financial services perspective)
  • Shirin Heidari, President, GENDRO (healthcare perspective)
  • Costanza Bonadonna, Dean of the Faculty of Science, University of Geneva (hiring process perspective)

How to address gender bias in standards for AI

  • Caitlin Kraft-Buchman, Women at the Table
  • Cristina Finch, Head, Gender and Security Division, Geneva Centre for Security Sector Governance (DCAF)
  • Youth perspective

Concluding remarks and wrap-up

  • Michelle Parkouda, Chair ToS-GRS
Last modified: March 17, 2025