December 27, 2019

Victories and Disappointments in AI and Algorithmic Decision-Making: 2019 Year in Review

AI and algorithmic decision-making raised important civil liberties issues in 2019, with developments good, bad, and in-between.

Starting off in the “disappointing” category, the U.S. Department of Housing and Urban Development (HUD) announced in August that it was considering new rules that would insulate landlords, banks, and insurance companies that use algorithmic models from lawsuits over discriminatory impacts. In response, we told HUD that its proposal was seriously flawed and sought to educate the agency about the ways that machine learning and other algorithmic tools work in practice. Far from being intrinsically neutral, these tools must be approached very carefully to avoid entrenching and concealing historical discrimination. HUD’s proposed affirmative defenses are dangerous, inconsistent with how machine learning actually works, and would upend enforcement of the Fair Housing Act going forward.
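To see why these tools are not intrinsically neutral, consider the minimal sketch below, which uses synthetic data of our own invention: a model trained on historically biased decisions, with the protected attribute withheld, can still reproduce the disparity through a correlated proxy such as a neighborhood indicator.

```python
# A minimal sketch with synthetic data (not EFF's or HUD's analysis): a model
# trained on historically biased decisions reproduces the disparity through a
# correlated proxy feature, even though the protected attribute is withheld.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute -- never shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature, e.g. a neighborhood indicator that tracks group membership
# because of historical segregation (matches the group 80% of the time).
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# A facially legitimate feature, distributed the same way for both groups.
income = rng.normal(50, 10, size=n)

# Historical approvals encode past discrimination against group 1.
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

# Train only on income and the proxy; the protected attribute is excluded.
X = np.column_stack([income, neighborhood])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The gap between the two rates shows the model has absorbed the historical
# bias via the proxy, despite being "blind" to the protected attribute.
```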

In the “mixed-bag” category, OpenAI revealed in February that it had used machine learning to create an impressive language model called GPT-2. GPT-2 can automatically generate multiple paragraphs of believable text from only a short prompt. However, OpenAI initially declined to fully release its research due to concerns about malicious applications of the technology, calling its decision an “experiment in responsible disclosure.” As we pointed out at the time, “responsible disclosure” is a term with a specific meaning in the computer security community—one that doesn’t map well onto the risks associated with advances in AI. Fortunately, after further research and analysis showing that the benefits outweighed the risks, OpenAI released the entire model and backed away from using the term “responsible disclosure”—both positive steps that we applaud OpenAI for taking.
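For readers curious to see what this looks like in practice now that the model is public, here is a minimal sketch of prompt-based generation with the released GPT-2 weights. It uses the Hugging Face transformers library, which is our choice for illustration rather than anything the original release specified.

```python
# A minimal sketch of prompt-based generation with the released GPT-2 weights.
# The Hugging Face transformers library is an assumption on our part -- the
# original post doesn't prescribe any particular tooling.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sampled continuation reproducible

prompt = "In a surprising turn of events, researchers announced that"
outputs = generator(
    prompt,
    max_length=120,          # total length in tokens, prompt included
    num_return_sequences=1,  # ask for a single continuation
    do_sample=True,          # sample rather than greedy-decode
)
print(outputs[0]["generated_text"])
```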

There was also a notable development in the “victories” category: California passed S.B. 36, taking a major step forward for oversight and transparency with regard to pretrial risk assessments (PRAs). PRAs are tools used to recommend whether a defendant should be held in jail or released before trial, and they range from simple questionnaires to complex statistical models. S.B. 36 requires agencies to ensure that PRAs have been validated not just for accuracy, but also for disparate impact based on gender, race, and ethnicity. The bill also requires the California Judicial Council to publish statistics about how PRAs are used in practice, including data on risk levels aggregated by gender, race, and ethnicity; data on predictive accuracy by gender, race, and ethnicity; and the proportion of cases in which the presiding judicial officer departs from the PRA’s recommendation. With this sort of data, the public will be better positioned to make informed decisions about PRAs and whether they’re accomplishing their goals of reducing incarceration and decreasing disparate impact on protected classes.
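To make the reporting requirement concrete, the sketch below shows, with made-up records, the kind of per-group statistics the law contemplates: release-recommendation rates, a simple measure of predictive accuracy, and how often judges depart from the tool. The column names and the accuracy definition are our own illustrative assumptions, not the Judicial Council’s methodology.

```python
# A purely illustrative sketch (not the Judicial Council's methodology) of the
# kind of per-group statistics S.B. 36 calls for, using pandas and made-up
# records. Column names and the accuracy definition are assumptions.
import pandas as pd

# Hypothetical records: the PRA's recommendation (1 = recommend release),
# whether the pretrial outcome was a success (1 = appeared, no rearrest),
# and whether the judge departed from the tool's recommendation.
df = pd.DataFrame({
    "group":           ["A", "A", "A", "A", "B", "B", "B", "B"],
    "pra_release":     [1,   0,   1,   1,   0,   0,   1,   0],
    "outcome_success": [1,   0,   1,   1,   1,   0,   1,   0],
    "judge_override":  [0,   0,   1,   0,   0,   1,   0,   0],
})

# A deliberately simple notion of accuracy: the recommendation matched the
# observed outcome. Real validation would be considerably more careful.
df["correct"] = (df["pra_release"] == df["outcome_success"]).astype(float)

stats = df.groupby("group").agg(
    release_rate=("pra_release", "mean"),
    accuracy=("correct", "mean"),
    override_rate=("judge_override", "mean"),
)
print(stats)
```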

Looking forward to 2020, we expect things to continue heating up at the intersection of AI, algorithmic decision-making, and civil liberties. EFF will be at the forefront, fighting for your rights—both for coders’ rights to develop AI without hindrance, and for everyone’s right to be free from misuse of AI or algorithmic decision-making, particularly when it is used to make decisions about our rights and freedoms. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2019.
