Document Type
Student Article
Abstract
Biased lending is a longstanding problem that has prompted significant legislation over the past several decades. When discrimination in the housing market took center stage in the 1960s, Congress passed multiple acts to combat what became known as “redlining,” the systematic denial of credit to minority groups. Statutes such as the Fair Housing Act and the Equal Credit Opportunity Act worked to eliminate this discrimination, but that does not mean bias no longer exists; rather, because of these acts, lending companies can no longer act on their biases. With the introduction of algorithmic lending through artificial intelligence systems, however, the decades of work to eliminate lending discrimination are threatened. Artificial intelligence systems are trained by humans but often lack human oversight when making actual lending decisions. This human training embeds implicit biases in the algorithms themselves, reviving the problem of discriminatory lending practices in a new form known as “digital redlining.” To combat this discrimination, regulation is needed governing how algorithms are trained and requiring transparency in their decision-making. Without such regulation, protected classes of people will once again face housing discrimination much like that of the 1960s and before.
DOI
10.37419/JPL.V11.I2.4
First Page
331
Last Page
354
Recommended Citation
Sadie Cavazos, The Impact of Artificial Intelligence on Lending: A New Form of Redlining?, 11 Tex. A&M J. Prop. L. 331 (2025). Available at: https://doi.org/10.37419/JPL.V11.I2.4
Included in
Housing Law Commons, Intellectual Property Law Commons, Property Law and Real Estate Commons