
new technical report: “Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes”

We have released a new technical report: “Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes”, available at https://arxiv.org/abs/2410.23394.

From the abstract:

[Imana23c, figure 3]: Detecting racial skew with BISG-based inference is less sensitive (shown by the lower test statistic Z) than either knowing the true race or using our improved version that reflects potential inference error. More samples and larger underlying skew make the range of confusion smaller, but do not eliminate it.

Auditing social-media algorithms has become a focus of public-interest research and policymaking to ensure their fairness across demographic groups such as race, age, and gender in consequential domains such as the presentation of employment opportunities. However, such demographic attributes are often unavailable to auditors and platforms. When demographic data are unavailable, auditors commonly infer them from other available information. In this work, we study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads. We show that inference error, if not accounted for, causes auditing to falsely miss skew that exists. We then propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms. Our method works by adjusting for expected error due to demographic inference, and it makes skew detection more sensitive when attributes must be inferred. Because inference is increasingly used for auditing, our results provide an important addition to the auditing toolbox to promote correct audits of ad delivery algorithms for bias. While the impact of attribute inference on accuracy has been studied in other domains, our work is the first to consider it for black-box evaluation of ad delivery bias, when only aggregate data is available to the auditor.
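
The full methodology is in the technical report, but the statistical idea in the abstract (a skew test that loses sensitivity when race is inferred rather than known, unless the test accounts for inference error) can be illustrated with a standard two-proportion Z-test combined with a classical misclassification correction. The sketch below is not the report’s method: the delivery counts, sensitivity, and specificity are hypothetical, and a complete test would also propagate the extra variance introduced by the correction.

```python
from math import sqrt

def two_proportion_z(k1, n1, k2, n2):
    """Standard two-proportion Z statistic: does ad 1 reach a larger
    fraction of the group of interest than ad 2?"""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def correct_for_inference_error(p_obs, sensitivity, specificity):
    """Rogan-Gladen-style correction: recover a true group fraction from one
    measured with an imperfect classifier (e.g., BISG race inference).
    sensitivity = P(inferred in group | truly in group),
    specificity = P(inferred not in group | truly not in group)."""
    return (p_obs + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical delivery counts for one paired-ad experiment (illustration only).
k1, n1 = 620, 1000   # ad 1: recipients inferred to be in the group, total recipients
k2, n2 = 540, 1000   # ad 2

print("Z on inferred labels (attenuated):", round(two_proportion_z(k1, n1, k2, n2), 2))

# With assumed bounds on the inference error, de-attenuate the proportions first.
sens, spec = 0.85, 0.90
p1 = correct_for_inference_error(k1 / n1, sens, spec)
p2 = correct_for_inference_error(k2 / n2, sens, spec)
print("corrected group fractions:", round(p1, 3), round(p2, 3))
```

With these made-up numbers the corrected gap (about 0.11) is larger than the gap measured on inferred labels (0.08), which is the attenuation effect the figure above describes.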

This technical report is joint work of Basileal Imana and Aleksandra Korolova (both of Princeton) and John Heidemann (USC/ISI). This work was supported by the NSF via CNS-1956435, CNS-2344925, and CNS-2319409 (the InternetMap project).


new conference paper: Auditing for Racial Discrimination in the Delivery of Education Ads

Our new paper “Auditing for Racial Discrimination in the Delivery of Education Ads” will appear at the ACM FAccT Conference in Rio de Janeiro in June 2024.

From the abstract:

Experiments showing that educational ads for for-profit schools are disproportionately shown to Blacks at statistically significant levels. (from [Imana24a], figure 4).

Digital ads on social-media platforms play an important role in shaping access to economic opportunities. Our work proposes and implements a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities. Third-party auditing is important because it allows external parties to demonstrate presence or absence of bias in social-media algorithms. Education is a domain with legal protections against discrimination and concerns of racial targeting, but bias induced by ad delivery algorithms has not been previously explored in this domain. Prior audits demonstrated discrimination in platforms’ delivery of housing and employment ads to users. These audit findings supported legal action that prompted Meta to change their ad-delivery algorithms to reduce bias, but only in the domains of housing, employment, and credit. In this work, we propose a new methodology that allows us to measure racial discrimination in a platform’s ad delivery algorithms for education ads. We apply our method to Meta using ads for real schools and observe the results of delivery. We find evidence of racial discrimination in Meta’s algorithmic delivery of ads for education opportunities, posing legal and ethical concerns. Our results extend evidence of algorithmic discrimination to the education domain, showing that current bias mitigation mechanisms are narrow in scope, and suggesting a broader role for third-party auditing of social media in areas where ensuring non-discrimination is important.
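
The paired-ad audit design and the statistics behind the figure above are described in the paper itself; as a rough illustration of the kind of significance question it answers, one can test whether a for-profit-school ad reached a larger share of Black users than a public-school ad run in parallel to the same audience. The counts below are hypothetical, and the test used here (Fisher’s exact test on a 2x2 table) is a generic choice, not necessarily the paper’s.

```python
from scipy.stats import fisher_exact

# Hypothetical delivery counts for one paired run (illustration only):
# rows = ad (for-profit school, public school); columns = (Black, non-Black) recipients.
table = [[480, 520],   # for-profit school ad
         [400, 600]]   # public school ad

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided p = {p_value:.5f}")
# A small p-value indicates the for-profit ad reached a larger share of Black
# users than its paired public-school ad, beyond what chance would explain.
```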

This work was reported on by Sam Biddle in The Intercept, by Thomas Claburn at The Register, and in ACM Tech News.

This paper is a joint work of Basileal Imana and Aleksandra Korolova from Princeton University, and John Heidemann from USC/ISI. We thank the NSF for supporting this work (CNS-1956435, CNS-1916153, CNS-2333448, CNS-1943584, CNS-2344925, CNS-2319409, and CNS-1925737).

Data from this paper is available from our website.


New conference paper: Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest

Our new paper “Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest” will appear at the 26th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2023).

From the abstract:

Overview of our proposed platform-supported framework for auditing relevance estimators while protecting the privacy of audit participants and the business interests of platforms.

Concerns of potential harmful outcomes have prompted proposal of legislation in both the U.S. and the E.U. to mandate a new form of auditing where vetted external researchers get privileged access to social media platforms. Unfortunately, to date there have been no concrete technical proposals to provide such auditing, because auditing at scale risks disclosure of users’ private data and platforms’ proprietary algorithms. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation. The first contribution of our work is to enumerate the challenges and the limitations of existing auditing methods to implement these policies at scale. Second, we suggest that limited, privileged access to relevance estimators is the key to enabling generalizable platform-supported auditing of social media platforms by external researchers. Third, we show platform-supported auditing need not risk user privacy nor disclosure of platforms’ business interests by proposing an auditing framework that protects against these risks. For a particular fairness metric, we show that ensuring privacy imposes only a small constant factor increase (6.34x as an upper bound, and 4x for typical parameters) in the number of samples required for accurate auditing. Our technical contributions, combined with ongoing legal and policy efforts, can enable public oversight into how social media platforms affect individuals and society by moving past the privacy-vs-transparency hurdle.
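
The abstract’s headline numbers (a 6.34x worst-case and roughly 4x typical increase in samples needed to audit privately) can be put in perspective with a back-of-envelope sample-size calculation. The sketch below uses a standard normal-approximation bound for estimating a proportion; the paper’s fairness metric and analysis differ, so this only illustrates what a small constant-factor overhead means in practice, with the margin and confidence level chosen arbitrarily.

```python
from math import ceil
from statistics import NormalDist

def samples_for_proportion(margin, p=0.5, confidence=0.95):
    """Samples needed to estimate a proportion to within +/- margin at the
    given confidence, using the usual normal approximation (worst case p=0.5)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

baseline = samples_for_proportion(margin=0.02)      # non-private audit of one estimate
print("baseline samples:                 ", baseline)
print("private audit, typical (~4x):     ", 4 * baseline)
print("private audit, worst case (6.34x):", ceil(6.34 * baseline))
```

Under these assumed parameters the baseline is about 2,400 samples, so even the worst-case privacy overhead keeps the audit in the low tens of thousands of samples.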

A 2-minute video overview of the work can be found here.

This paper is a joint work of Basileal Imana from USC, Aleksandra Korolova from Princeton University, and John Heidemann from USC/ISI.