As housing-related decisions are increasingly made by algorithms rather than by individuals, it is critical that the technologies making those decisions do not replicate, or even worsen, existing patterns of discrimination and segregation. While it may be convenient to believe that bias can be eliminated by shifting decision-making authority from people to machines, studies have shown that algorithmic and machine-learning systems often absorb and reproduce the biases embedded in the data and assumptions on which they are built.
Provisions of the Fair Housing Act (“FHA”) and its accompanying regulations that protect individuals from discriminatory algorithms are under attack from the Department of Housing and Urban Development (“HUD”), the very agency responsible for enforcing the FHA. In particular, HUD recently issued a proposed rule that, if finalized, would undermine disparate impact jurisprudence and specifically exempt from liability many housing providers who rely on algorithms developed by third parties. With the FHA under attack from the agency charged with its enforcement, it is particularly important to study how technological advancements might be used to either improve or undermine the law’s effectiveness.
This article describes the advent of big data, algorithmic decision-making, and machine learning, as well as HUD’s recent proposal to specifically immunize housing providers who rely on algorithms from disparate impact liability. It then discusses how the use of big data and algorithmic decision-making has touched all parts of the rental housing market, from advertising to tenant selection processes. Finally, it offers policy prescriptions that could help mitigate the discriminatory impacts of algorithmic decision-making in ways that are aligned with the FHA or, in some cases, that reach further than the protections currently offered under the FHA.