new conference best paper: “External Evaluation of Discrimination Mitigation Efforts in Meta’s Ad Delivery”

Our new paper “External Evaluation of Discrimination Mitigation Efforts in Meta’s Ad Delivery” (PDF) will appear at the eighth annual ACM FAccT Conference (FAccT 2025), to be held June 23-26, 2025 in Athens, Greece.

We are happy to note that this paper received Best Paper, one of three best paper awards at FAccT 2025!

Comparison of total reach and cost per 1000 reach with and without VRS enabled (Figure 5a)

From the abstract:

The 2022 settlement between Meta and the U.S. Department of Justice to resolve allegations of discriminatory advertising resulted in a first-of-its-kind change to Meta’s ad delivery system, aimed at addressing algorithmic discrimination in its housing ad delivery. In this work, we explore direct and indirect effects of both the settlement’s choice of terms and the Variance Reduction System (VRS) implemented by Meta on the actual reduction in discrimination. We first show that the settlement terms allow for an implementation that does not meaningfully improve access to opportunities for individuals. The settlement measures the impact of ad delivery in terms of impressions, instead of unique individuals reached by an ad; it allows the platform to level down access, reducing disparities by decreasing overall access to opportunities; and it allows the platform to selectively apply VRS to only small advertisers. We then conduct experiments to evaluate VRS with real-world ads, and show that while VRS does reduce variance, it also raises advertiser costs (measured per individual reached), therefore decreasing user exposure to opportunity ads for a given ad budget. VRS thus passes the cost of decreasing variance on to advertisers. Finally, we explore an alternative approach to achieve the settlement’s goals that is significantly more intuitive and transparent than VRS. We show our approach outperforms VRS by both increasing ad exposure for users from all groups and reducing cost to advertisers, thus demonstrating that the increase in cost to advertisers when implementing the settlement is not inevitable. Our methodologies use a black-box approach that relies on capabilities available to any regular advertiser, rather than on privileged access to data, allowing others to reproduce or extend our work.

All data in this paper is publicly available to researchers at our datasets webpage.

This paper is a joint work of Basileal Imana, Zeyu Shen, and Aleksandra Korolova from Princeton University, and John Heidemann from USC/ISI. This work was supported in part by NSF grants CNS-1956435, CNS-2344925, and CNS-2319409.

new conference paper: Auditing for Racial Discrimination in the Delivery of Education Ads

Our new paper “Auditing for Racial Discrimination in the Delivery of Education Ads” will appear at the ACM FAccT Conference in Rio de Janeiro in June 2024.

From the abstract:

Experiments showing educational ads for for-profit schools are disproportionately shown to Blacks at statistically significant levels. (from [Imana24a], figure 4).

Digital ads on social-media platforms play an important role in shaping access to economic opportunities. Our work proposes and implements a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities. Third-party auditing is important because it allows external parties to demonstrate the presence or absence of bias in social-media algorithms. Education is a domain with legal protections against discrimination and concerns of racial targeting, but bias induced by ad delivery algorithms has not been previously explored in this domain. Prior audits demonstrated discrimination in platforms’ delivery of housing and employment ads to users. These audit findings supported legal action that prompted Meta to change its ad-delivery algorithms to reduce bias, but only in the domains of housing, employment, and credit. In this work, we propose a new methodology that allows us to measure racial discrimination in a platform’s ad delivery algorithms for education ads. We apply our method to Meta using ads for real schools and observe the results of delivery. We find evidence of racial discrimination in Meta’s algorithmic delivery of ads for education opportunities, posing legal and ethical concerns. Our results extend evidence of algorithmic discrimination to the education domain, showing that current bias mitigation mechanisms are narrow in scope, and suggesting a broader role for third-party auditing of social media in areas where ensuring non-discrimination is important.

This work was reported on in an article by Sam Biddle in The Intercept, by Thomas Claburn at The Register, and in ACM Tech News.

This paper is a joint work of Basileal Imana and Aleksandra Korolova from Princeton University, and John Heidemann from USC/ISI. We thank the NSF for supporting this work (CNS-1956435, CNS-1916153, CNS-2333448, CNS-1943584, CNS-2344925, CNS-2319409, and CNS-1925737).

Data from this paper is available from our website.