UN SDG #9 Industry, Innovation and Infrastructure
UN SDG #10 Reduced Inequality

challenge

Algorithmic Fairness in Targeting Social Welfare Programs

As automated decision-making systems become increasingly ubiquitous, there is widespread concern that they can widen social inequalities and systematize discrimination. As a result, interest in defining, measuring, and optimizing algorithmic fairness has grown in recent years.

52.2M people impacted
$87B potential funding
the problem
Nature and Context

Absent explicit measures to limit algorithmic bias, targeting rules can systematically disadvantage population subgroups: for example, exclusion errors can be 2.3 times higher for poor urban households than for their rural counterparts, and 2.2 times higher for poor elderly households than for poor traditional nuclear families.

Targeting algorithms for social welfare programs can introduce relevant and systematic disparities in the exclusion errors of population subgroups (e.g., urban versus rural households and different family structures). The cited research also shows that these disparities can be attenuated by introducing fairness constraints, but doing so may come at a significant cost to overall prediction efficiency across the population.
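As a rough illustration of that trade-off, the sketch below builds synthetic household data, measures exclusion error separately for a rural and an urban subgroup, and then relaxes the urban eligibility threshold until the two error rates roughly match. The data, noise levels, and threshold rule are assumptions for illustration; they are not the model or fairness constraint used in the cited study.

```python
# Illustrative sketch with synthetic data and a simple group-specific threshold
# rule; it is not the model or fairness constraint from the cited study.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, score_noise):
    """Synthetic households: true poverty status and a noisy predicted score."""
    poor = rng.random(n) < 0.3                        # assume 30% are poor
    score = poor.astype(float) + rng.normal(0, score_noise, n)
    return poor, score

# Assume the model's scores are noisier for urban households than for rural ones.
poor_rural, score_rural = make_group(5000, 0.6)
poor_urban, score_urban = make_group(5000, 1.0)

def exclusion_error(poor, score, threshold):
    """Share of truly poor households that the targeting rule excludes."""
    targeted = score >= threshold
    return 1.0 - targeted[poor].mean()

# A single global threshold leaves the noisier (urban) group with a higher
# exclusion error.
t = 0.5
print("rural:", round(exclusion_error(poor_rural, score_rural, t), 3),
      "urban:", round(exclusion_error(poor_urban, score_urban, t), 3))

# Crude fairness constraint: lower the urban threshold until its exclusion
# error roughly matches the rural one.
target = exclusion_error(poor_rural, score_rural, t)
t_urban = t
while exclusion_error(poor_urban, score_urban, t_urban) > target and t_urban > -2:
    t_urban -= 0.01
print("adjusted urban threshold:", round(t_urban, 2), "urban exclusion error:",
      round(exclusion_error(poor_urban, score_urban, t_urban), 3))
```

Lowering the urban threshold also admits more non-poor urban households, which is the overall efficiency cost the study describes.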

Symptoms and Causes

Automated decision-making systems are largely designed by private companies, so their details are proprietary. This makes it difficult to assess who is responsible for the outcomes these systems produce.

Self-fulfilling predictions

As a program makes more decisions, it reproduces any existing bias at a larger scale; when its outputs feed back into future decisions, that bias can compound.
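A minimal simulation of that feedback loop, with entirely hypothetical numbers: review resources follow past recorded outcomes, new records scale with the resources assigned, and a small initial gap in the records grows round after round even though both groups' true underlying rates are identical.

```python
# Hypothetical feedback-loop simulation (numbers are illustrative, not from the
# source): resources follow past records, records follow resources, and a small
# initial gap compounds even though both groups have the same true rate.
TRUE_RATE = 0.10                                   # identical underlying rate
recorded = {"group_a": 105.0, "group_b": 100.0}    # slightly uneven starting records
review_budget = 200

for round_no in range(6):
    # Superlinear allocation: the group with more recorded cases gets a
    # disproportionate share of reviews (winner-take-most).
    weights = {g: r ** 2 for g, r in recorded.items()}
    total_weight = sum(weights.values())
    reviews = {g: review_budget * w / total_weight for g, w in weights.items()}
    # Detections scale with review effort, so the records drift further apart.
    recorded = {g: reviews[g] * TRUE_RATE * 10 for g in recorded}
    print(round_no, {g: round(v, 1) for g, v in recorded.items()})
```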

It is often the historical data, rather than the algorithm itself (a mathematical function), that is laden with bias. For instance, trained on historical datasets, an algorithm might learn 'that a particular institution prefers to accept men over women.'

Algorithms can also distribute accuracy unequally across groups within a population. When a system is consistently more accurate for one group than for another, and that gap is reproduced over time, heavy reliance on automated decision-making becomes a clear problem.
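One way to surface that kind of gap is a simple per-group accuracy audit. The sketch below uses synthetic labels and predictions in which the model is assumed to err twice as often on one group; the group names, sizes, and error rates are illustrative assumptions.

```python
# Minimal per-group accuracy audit on synthetic data; the group names, sizes,
# and error rates are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(2)

def audit(y_true, y_pred, groups):
    """Report accuracy for each subgroup and the gap to the best-served group."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[g] = float((y_true[mask] == y_pred[mask]).mean())
    best = max(rates.values())
    return {g: {"accuracy": round(r, 3), "gap": round(best - r, 3)}
            for g, r in rates.items()}

# Synthetic labels and predictions: the model is assumed to err twice as often
# on group "B" as on group "A".
n = 20_000
groups = rng.choice(["A", "B"], size=n)
y_true = rng.integers(0, 2, size=n)
error_rate = np.where(groups == "A", 0.05, 0.10)
flip = rng.random(n) < error_rate
y_pred = np.where(flip, 1 - y_true, y_true)

print(audit(y_true, y_pred, groups))
```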

the impact
Negative Effects

As machine learning becomes the dominant technique behind automated decision-making systems, there is a significant risk that it introduces bias into the decision-making process. Features of these systems can produce discriminatory outcomes and decisions that are artifacts of the models themselves, and training on historical data can underrepresent members of protected classes or carry forward past discriminatory practices.

Legal liability

Developers who do not address the discriminatory harms produced and reproduced by their decision-making systems are susceptible to legal action in the form of a discrimination suit. Civil liberties organizations have already brought, and continue to pursue, such suits over potential bias in automated systems.

Discrimination can also be entirely unintentional, arising when a facially neutral procedure produces decisions that disproportionately and systematically harm protected classes (Brookings). The legal risk in using these systems therefore arises less from the possibility of intentional discrimination and more from exposure to claims of disparate impact.
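Disparate impact is commonly screened for by comparing selection rates across groups; one widely used heuristic is the "four-fifths rule" from US employment-selection guidance. The sketch below applies that heuristic to hypothetical benefit-approval counts; the numbers and the 0.8 threshold are illustrative, not a legal test.

```python
# Illustrative disparate-impact screen using the "four-fifths rule" heuristic;
# the approval counts below are hypothetical.
def selection_rates(decisions):
    """decisions: {group: (number_selected, number_of_applicants)}"""
    return {g: sel / total for g, (sel, total) in decisions.items()}

def disparate_impact_ratios(decisions, threshold=0.8):
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio_to_best": round(r / best, 3),
                "flag": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical benefit-approval counts by subgroup.
decisions = {"urban": (300, 1000), "rural": (480, 1000)}
print(disparate_impact_ratios(decisions))
# urban ratio: 0.30 / 0.48 = 0.625 < 0.8 -> flagged for potential disparate impact
```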

Economic Impact
Success Metrics
who benefits from solving this problem
Organization Types
  • Algorithmic fairness organizations

  • Civil liberties advocacy groups

  • Low-income services providers

Stakeholders
  • Government agencies

  • Social programs

  • Benefit recipients

  • Automated decision-making system manufacturers

financial insights
Current Funding
Potential Solution Funding
ideas
Ideas Description

Overcoming algorithmic bias

A BU-MIT research team created a method for identifying the subset of the population that the system fails to judge fairly and deferring those cases to a different system that is less likely to be biased. That separation guarantees that the method errs in more balanced ways for the individuals on whom it does make a decision.

A question then arises: 'to what extent are the two systems compatible with the desired notion of fairness?'
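The sketch below shows a generic two-stage "defer" pattern of this kind: a primary model decides only when its score is confident, and otherwise routes the case to a secondary reviewer with different error characteristics. The confidence band, scoring functions, and field names are assumptions for illustration; this is not the BU-MIT team's actual method.

```python
# Generic two-stage "defer" sketch: the primary model abstains on cases it
# cannot score confidently and routes them to a secondary reviewer. The
# confidence band and the stand-in scoring functions are assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Case:
    case_id: str
    features: dict

def route(case: Case,
          primary_score: Callable[[Case], float],
          secondary_review: Callable[[Case], bool],
          confidence_band: tuple = (0.35, 0.65)) -> dict:
    """Decide with the primary model unless its score falls in the uncertain band."""
    score = primary_score(case)
    low, high = confidence_band
    if low <= score <= high:
        # Not confident enough: defer to a reviewer with different error patterns
        # (a simpler model, a differently trained model, or a human panel).
        return {"case": case.case_id, "decision": secondary_review(case),
                "decided_by": "secondary"}
    return {"case": case.case_id, "decision": score > 0.5, "decided_by": "primary"}

# Toy usage with stand-in scoring functions.
primary = lambda c: c.features.get("predicted_need", 0.0)
secondary = lambda c: c.features.get("documented_income", float("inf")) < 20_000

print(route(Case("A-1", {"predicted_need": 0.9}), primary, secondary))
print(route(Case("A-2", {"predicted_need": 0.5, "documented_income": 15_000}),
            primary, secondary))
```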

Legislative constraints

Some states have moved to require accountability reports for controversial decision-making systems and technologies (such as facial recognition algorithms), particularly when utilized by state or local government agencies (WA State Legislature).

Along with self-reported accountability measures, Washington state has required users of automated facial recognition systems to provide a meaningful human review of the decisions that produce legal or similarly significant effects concerning individuals.

Service providers in Washington state are now required to make available an application programming interface (API) to 'enable independent testing for accuracy and unfair performance differences across distinct subpopulations' (WA State Legislature).
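In practice, independent testing against such an API might look like the harness sketched below: submit a labeled test set, collect predictions, and compare accuracy across subpopulations. The endpoint URL, request format, and response schema are hypothetical placeholders, since the statute requires providers to expose an API but does not specify its shape.

```python
# Sketch of an independent test harness for a provider-exposed recognition API.
# The endpoint URL, request format, and field names are hypothetical; the point
# is measuring accuracy separately for each subpopulation in a labeled test set.
import json
from collections import defaultdict
from urllib import request

API_URL = "https://example-provider.test/v1/recognize"  # placeholder endpoint

def call_api(image_bytes: bytes) -> str:
    """POST an image and return the predicted identity label (assumed schema)."""
    req = request.Request(API_URL, data=image_bytes,
                          headers={"Content-Type": "application/octet-stream"})
    with request.urlopen(req) as resp:
        return json.load(resp)["predicted_label"]

def accuracy_by_subgroup(test_set):
    """test_set: iterable of (image_bytes, true_label, subgroup) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for image, true_label, subgroup in test_set:
        totals[subgroup] += 1
        if call_api(image) == true_label:
            hits[subgroup] += 1
    rates = {g: hits[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: {"accuracy": round(r, 3), "gap_to_best": round(best - r, 3)}
            for g, r in rates.items()}
```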

Ideas Value Proposition
Ideas Sustainability
attributions
Data Sources

Algorithmic Fairness and Efficiency in Targeting Social Welfare Programs at Scale - https://data.bloomberglp.com/company/sites/2/2018/09/algorithm-fairness-efficiency_.pdf

The Brink, 'Are Computer-Aided Decisions Actually Fair?' - http://www.bu.edu/articles/2018/algorithmic-fairness/

Brookings Institution, 'Fairness in Algorithmic Decision-Making' - https://www.brookings.edu/research/fairness-in-algorithmic-decision-making/

Washington State Legislature, ESSB 6280 - https://app.leg.wa.gov/billsummary?BillNumber=6280&Year=2019&Initiative=false

Contributors to this Page

Giving Tech Labs Team - giving.tech