A Proposed Trump Administration Rule Could Let Lenders Discriminate Through A.I.

The Fair Housing Act could be crippled by a new interpretation that would allow tech companies to sell biased algorithms — and get away with it

A new interpretation of a more than 50-year-old housing law by the Trump administration could encourage the use of biased algorithms in the housing industry while shielding banks and real estate firms from the lawsuits that might result.

According to documents published last week by the investigative reporting outfit Reveal, the Department of Housing and Urban Development (HUD) is considering a new rule that would alter its interpretation of the Fair Housing Act, a 1968 law, ushered through Congress in the midst of the civil rights movement, that shields protected classes from discrimination in housing. The proposal, which has not been made available to the public, reportedly includes language that would protect companies that use third-party algorithms to process housing or loan applications by providing a specific framework for how they can “defeat” bias claims.

A spokesperson for HUD says that the proposal was submitted to Congress for a required review prior to publication.

“Upon the completion of this review period (very soon), we’ll be publishing that rule in the Federal Register,” the spokesperson says. “Until then, we are limited in what we can say publicly as Congress continues its review.”

Meanwhile, experts who spoke to OneZero say the proposal misunderstands how artificial intelligence might discriminate against people in the housing market, and that the move would remove incentives for companies to audit their algorithms for bias.

Currently, the Fair Housing Act protects against “disparate impact” on what are known as protected classes — meaning you can’t be denied a loan or a housing application because of your race, sex, religion, or another protected status. The proposed update would make it more difficult for people to prove disparate impact resulting from automated programs, as opposed to human judgment. Plaintiffs alleging algorithmic bias would have to prove five separate elements to make a successful case against a company that uses algorithmic tools.

For example, one element plaintiffs would need to prove is that the algorithm’s discriminatory effect isn’t justified by a legitimate purpose. This means that if an algorithm were used to predict long-term creditworthiness, but it also happened to be highly biased against people of a particular race, a suit against it might not meet the criteria for success. This could be interpreted to mean that HUD will tolerate some level of discrimination as long as the algorithm functions well financially.

Plaintiffs would also be required to prove that protected classes are specifically being targeted — they would have to show that an algorithm is systematically denying loans to women on the basis of their sex, for example, rather than denying women loans for other, financially acceptable reasons.

The proposal also provides a framework for how companies can dodge liability for the algorithms they employ. One of those ways is by getting a “qualified expert” to say that their system is not flawed. Another defense listed in the proposal by HUD is “identifying the inputs used in the model and showing that these inputs are not substitutes for a protected characteristic.” As an example of why that matters, a 2017 ProPublica investigation found that drivers in certain ZIP codes with large minority populations were being charged more for car insurance than drivers in white neighborhoods with the same levels of risk. In this case, ZIP codes were essentially used as a substitute for race: Though the program wasn’t explicitly designed to gouge minorities, it might as well have been.

But simply taking an input like ZIP code out of the equation isn’t enough to stop discrimination by newer algorithmic technology. Algorithms powered by deep learning come to make decisions by discovering patterns in the choices humans have made in the past. The algorithm creates its own set of rules to apply to future cases, stored as a complex web of long strings of numbers that is uninterpretable to humans — even those who created the algorithm.

The danger is that while learning to determine who is worthy of renting an apartment or receiving a home loan through analyzing existing decisions, these deep neural networks will find hidden indicators of race, class, sex, age, or religion, and then begin to effectively discriminate along those lines.
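
To make that concrete, here is a minimal sketch with synthetic data. Every number and feature in it is invented for illustration (it is not any lender’s actual model), but it shows how a simple model that never sees race can still reproduce a racial gap in approvals when a correlated feature, such as a ZIP-code group, stands in for it.

```python
# Illustrative sketch only: synthetic data, invented features and thresholds.
# Shows how a proxy feature (a ZIP-code group correlated with race) lets a
# model reproduce historical bias even though race is never an input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical population: race correlates strongly with ZIP-code group.
race = rng.integers(0, 2, n)                                # 0 = group A, 1 = group B
zip_group = np.where(rng.random(n) < 0.8, race, 1 - race)   # 80% overlap with race
income = rng.normal(55, 15, n)                              # same income distribution for both

# Historical approvals were biased: group B was approved less often,
# even at the same income level.
approved = (income + rng.normal(0, 5, n) - 10 * race) > 50

# Train on income and ZIP group only -- race is never an input.
X = np.column_stack([income, zip_group])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name}: predicted approval rate = {pred[race == g].mean():.2f}")
# The gap persists because the ZIP-code group acts as a stand-in for race.
```

None of this is visible from the list of inputs alone.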

“The whole power of many deep learning systems is that they find correlations that are invisible to the naked eye,” says R. David Edelman, who was a senior technology and economic policy adviser to President Barack Obama and is now at MIT, directing its Project on Technology, Economy, and National Security. “They find connections that we wouldn’t see as people, and then they act upon them in concert with one another.”

“The reason it creates a potentially impossible evidentiary standard is that simply looking at the inputs is not enough,” Edelman adds. “Instead, the only way to be confident is to actually interrogate the model itself: its training data, its operation, and ultimately its outputs.”
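
As a rough sketch of what that kind of outcome-level audit can look like, the snippet below compares a model’s decisions across groups instead of inspecting its inputs. The 80 percent (“four-fifths”) threshold is borrowed from employment-discrimination guidance and is used here only as an illustrative benchmark, not a standard that the Fair Housing Act or the HUD proposal prescribes.

```python
# Illustrative outcome audit: compare decision rates across groups.
# The 0.8 threshold is a heuristic borrowed from employment-law guidance,
# used here only as an example benchmark, not a Fair Housing Act standard.
import numpy as np

def disparate_impact_ratio(decisions: np.ndarray, groups: np.ndarray) -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return min(rates) / max(rates)

# Hypothetical audit data: 1 = approved, 0 = denied, one entry per applicant.
decisions = np.array([1, 0, 1, 1, 1, 1, 0, 0, 1, 0])
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate-impact ratio: {ratio:.2f}")   # 0.50 in this toy example
if ratio < 0.8:
    print("Flag for review: approval rates differ sharply across groups.")
```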

The HUD proposal lists another defense for banks, insurers, or other real estate companies that would use algorithms: shifting the responsibility to the third party that built the algorithm. If a bank bought a risk-calculating algorithm from a tech company, then the bank wouldn’t be liable for the algorithm’s decisions, since the bank didn’t make it or maintain it. But the proposal doesn’t say that the tech company is liable — just that the bank is not.

The proposal also mentions that the bank in this scenario wouldn’t be liable if the algorithm were an “industry standard.” It goes on to say that “in these situations, the defendant may not have access to the reasons these factors are used or may not even have access to the factors themselves, and therefore may not be able to defend the model itself.”

The problem is that no such standards currently exist, according to Rashida Richardson, director of policy research at A.I. research outfit AI Now.

“There aren’t wholesale standards for model development, or for commercially-sold algorithmic models,” Richardson tells OneZero. “And since this is written in a way that it’s pretty much giving the insurance industry broad immunity, there’s no reason for them to try to scramble to make concrete standards.” In other words, corporations benefit from squishy standards, while loan applicants and customers lose.

It’s important to note that this proposal hasn’t yet been released for public comment, the next step in enacting the rule interpretation. It’s unclear whether the proposal has gone to the Office of Information and Regulatory Affairs for revisions, or how close it is to that public comment period.

But if this rule went into effect, it would have sweeping consequences for the housing industry. It represents near-total shelter from liability under the Fair Housing Act, because the burden of proof would be so high for those who may have been discriminated against, and the strategies for banks and other companies to defend against charges of disparate impact would be so clear.

The overarching thrust of the rule is that it benefits those who employ algorithms by limiting their liability for the actions of those algorithms, and it makes it harder for people who are potentially being discriminated against to challenge those decisions.

This line of rulemaking further complicates a decision much of the private sector now faces: how to modernize and adopt artificial intelligence. In many cases, artificial intelligence can only learn from the past decisions of humans, and that human judgment is pockmarked with bias and the cultural artifacts of discrimination.

It’s a rule that absolves algorithm operators of liability for the machine’s decisions — an echo of the Silicon Valley ethos of “move fast and break things.” And if companies are trying to use this artificial judgment as any kind of replacement for human judgment, it is entirely possible that they will.
