
Algorithmic Fairness in Targeting Social Welfare Programs

There is widespread concern that automated decision-making systems, as they become increasingly ubiquitous, can widen social inequalities and systematize discrimination. As a result, interest in defining, measuring, and optimizing algorithmic fairness has grown in recent years.
Potential Funding: $156B
the problem
Nature and Context

Absent explicit measures aimed at limiting algorithmic bias, targeting rules can systematically disadvantage population subgroups: for example, exclusion errors 2.3 times higher for poor urban households than for their rural counterparts, or 2.2 times higher for poor elderly households than for poor traditional nuclear families.

Targeting algorithms for social welfare programs can introduce substantial and systematic disparities in the exclusion errors of population subgroups (e.g., urban vs. rural households, or different family structures). These disparities can be attenuated by introducing fairness constraints, but often at a significant cost in overall prediction accuracy across the population.
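The trade-off above can be illustrated with a toy simulation. Everything here is a hypothetical of my own (the score distributions, the 30% inclusion budget, and the group labels are illustrative assumptions, not the underlying paper's model): a single global eligibility threshold produces unequal exclusion errors across subgroups, while per-group thresholds narrow that gap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a proxy-means score per household, a true poverty label,
# and an urban/rural indicator. All distributions are illustrative.
n = 10_000
urban = rng.integers(0, 2, n).astype(bool)
score = rng.normal(loc=0.5 * urban, scale=1.0, size=n)   # urban scores skew higher
poor = rng.random(n) < 1.0 / (1.0 + np.exp(score))       # poverty likelier at low scores

def exclusion_error(included, mask):
    """Share of truly poor households in the subgroup that the rule excludes."""
    poor_in_group = poor & mask
    return float((~included & poor_in_group).sum() / poor_in_group.sum())

# Rule 1: one global threshold -- include the lowest-scoring 30% overall.
included_global = score <= np.quantile(score, 0.3)
err_global = {"rural": exclusion_error(included_global, ~urban),
              "urban": exclusion_error(included_global, urban)}

# Rule 2 (fairness constraint, sketch): include the lowest-scoring 30%
# *within each subgroup*, which tends to equalize exclusion errors at
# some cost in how precisely the overall budget targets the poorest.
included_fair = np.zeros(n, dtype=bool)
for mask in (urban, ~urban):
    included_fair[mask] = score[mask] <= np.quantile(score[mask], 0.3)
err_fair = {"rural": exclusion_error(included_fair, ~urban),
            "urban": exclusion_error(included_fair, urban)}

print(err_global, err_fair)
```

In this toy setup the global rule excludes poor urban households at a markedly higher rate than poor rural ones, and the per-group rule shrinks that gap.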

Symptoms and Causes

Automated decision-making systems are largely designed by private companies, so their details are proprietary. This makes it difficult to assess who is responsible for the outcomes these systems produce.

Self-fulfilling predictions

As a program makes more decisions, it reproduces any existing bias at a larger scale.

Historical data, rather than the mathematical function that is the algorithm, can be laden with bias. For instance, trained on historical datasets, an algorithm might learn 'that a particular institution prefers to accept men over women.'

Algorithms can also distribute accuracy unequally across groups within a population. When a system is consistently more accurate for one group than another, and that gap is reproduced over time, relying on automated decision-making becomes clearly problematic.

the impact
Negative Effects

As machine learning comes to predominate among the techniques behind automated decision-making systems, there is a significant risk that they introduce bias into the decision-making process. These systems can produce discriminatory outcomes and decisions that are artifacts of the models themselves. Historical training data may underrepresent members of protected classes or be contaminated by past discriminatory practices.

Legal liability

Developers who fail to address discriminatory harms produced and reproduced by their decision-making systems are exposed to legal action in the form of discrimination suits. Civil liberties organizations have already brought, and continue to pursue, such suits over potential bias in automated systems.

Discrimination can also be completely unconscious when a neutral procedure produces decisions that disproportionately and systematically harm protected classes (Brookings). The legal risk in using these systems arises less from the possibility of intentional discrimination and more from exposure to claims of disparate impact.
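Disparate-impact claims are often screened with the EEOC's "four-fifths" rule, under which a selection rate for a protected group below 80% of the most-favored group's rate is treated as evidence of adverse impact. That rule is a regulatory heuristic, not something the sources above prescribe; the sketch below, with made-up numbers, just shows the arithmetic.

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Selection rate of group A divided by selection rate of group B."""
    return (selected_a / total_a) / (selected_b / total_b)

# Hypothetical numbers: 30 of 100 protected-class applicants approved,
# versus 50 of 100 others.
ratio = disparate_impact_ratio(30, 100, 50, 100)   # 0.30 / 0.50 = 0.6
flagged = ratio < 0.8  # four-fifths rule: ratios below 0.8 warrant scrutiny
```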

who benefits from solving this problem
Organization Types
  • Algorithmic fairness organizations

  • Civil liberties advocacy groups

  • Low-income services providers

  • Government agencies

  • Social programs

  • Benefit recipients

  • Automated decision-making system manufacturers

Ideas Description

Overcoming algorithmic bias

A BU-MIT research team created a method for identifying the subset of the population that a system fails to judge fairly and deferring those cases to a different system that is less likely to be biased. This separation guarantees that the method errs in more balanced ways for the individuals on whom it does make a decision.

A question then arises: to what extent are the two systems compatible with the desired notion of fairness?
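The deferral idea can be sketched as a two-stage pipeline. The confidence scores, threshold, and secondary reviewer below are placeholders of my own for illustration, not the BU-MIT team's actual method: low-confidence cases are simply routed away from the primary model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical primary-model outputs: a decision and a confidence per case.
n = 1_000
primary_decision = rng.random(n) < 0.5
confidence = rng.random(n)

DEFER_BELOW = 0.4  # illustrative confidence threshold

def secondary_review(n_cases):
    """Placeholder for the second system (e.g., human review or a simpler,
    better-audited model); here it approves every deferred case."""
    return np.ones(n_cases, dtype=bool)

# Route low-confidence cases to the secondary reviewer; keep the rest.
deferred = confidence < DEFER_BELOW
final_decision = primary_decision.copy()
final_decision[deferred] = secondary_review(int(deferred.sum()))
```

The design choice is that the primary model's error profile only applies to the cases it keeps, so its fairness properties need hold only on that retained subset.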

Legislative constraints

Some states have moved to require accountability reports for controversial decision-making systems and technologies (such as facial recognition algorithms), particularly when utilized by state or local government agencies (WA State Legislature).

Along with self-reported accountability measures, Washington state has required users of automated facial recognition systems to provide a meaningful human review of the decisions that produce legal or similarly significant effects concerning individuals.

Service providers in Washington state are now required to make available an application programming interface (API) to 'enable independent testing for accuracy and unfair performance differences across distinct subpopulations' (WA State Legislature).
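An independent tester consuming such an API could tally per-subgroup accuracy from labeled probe cases. The record format and subgroup names below are assumptions for illustration; the statute does not specify an interface.

```python
from collections import defaultdict

def accuracy_by_subgroup(records):
    """records: iterable of (subgroup, true_label, predicted_label) tuples,
    e.g. predictions collected from a vendor's testing API."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for subgroup, truth, pred in records:
        total[subgroup] += 1
        correct[subgroup] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit sample.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1),
]
rates = accuracy_by_subgroup(records)  # unequal rates flag a performance gap
```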

Tech Solutions on X4I that could help combat this issue:
OpenOversight is a Lucy Parsons Labs project that aims to improve police and law enforcement visibility and transparency using public and crowdsourced data. It maintains oversight databases, digital galleries, and profiles of individual officers from departments across the United States, consolidating information including names, birthdates, mentions in news articles, salaries, and photographs.

At MaiVERIC, we enhance the capabilities of public safety agencies to further protect the communities they serve. We do so responsibly, under the law, respecting people’s privacy rights.

Data Sources

Algorithmic Fairness and Efficiency in Targeting Social Welfare Programs at Scale

The Brink, 'Are Computer-Aided Decisions Actually Fair?'

Brookings Institution, 'Fairness in Algorithmic Decision-Making'

Washington State Legislature, ESSB 6280

Contributors to this Page

Giving Tech Labs Team
