Rohit Chopra has seized on nearly every public opportunity as director of the Consumer Financial Protection Bureau to admonish companies about the potential misuse of artificial intelligence in lending decisions.

Chopra has said that algorithms can never “be free of bias” and may result in credit determinations that are unfair to consumers. He claims machine learning can be anti-competitive and could lead to “digital redlining” and “robo discrimination.”

The message for banks and fast-moving fintechs is loud and clear: Enforcement actions related to the use of AI are coming, as is potential guidance tied to what makes alternative data such as utility and rent payments risky when used in marketing, pricing and underwriting products, experts say.

“The focus on artificial intelligence and machine learning is explicit,” said Stephen Hayes, a partner at Relman Colfax PLLC and a former CFPB senior counsel. “Because of the perceived effectiveness of AI, at times businesses have used this technology in ways that have gotten out ahead of legal and compliance folks.”

Earlier this month Chopra tweeted: “The @CFPB will be taking a deeper look at how lenders use artificial intelligence or algorithmic decision tools.” Chopra was referring to a situation in which a lender violated fair-lending rules by improperly asking questions of churches applying for small-business loans; the fear is something like religious bias could be programmed into AI tools.

He also made a point of mentioning AI at a joint press conference last year with the Justice Department about the federal government’s redlining settlement with $17 billion-asset Trustmark National Bank, in Jackson, Miss.

“When consumers and regulators do not know how decisions are made by the algorithms, consumers are unable to participate in a fair and competitive market free from bias,” Chopra said.

Other CFPB officials have weighed in about AI, too.

This month, Lorelei Salas, the bureau’s assistant director for supervision policy, warned in a blog post that the CFPB plans “to more closely monitor the use of algorithmic decision tools, given that they are often ‘black boxes’ with little transparency.”

And Eric Halperin, the CFPB’s new enforcement chief, gave a speech in December to the Consumer Federation of America suggesting some uses of AI may constitute “unfair, deceptive and abusive acts and practices,” known as UDAAP violations.

Still, the messaging by Chopra — a Biden nominee who was sworn in in October — and his lieutenants conflicts with the CFPB’s past efforts. Under the Trump administration, the CFPB sought to encourage innovation and address regulatory uncertainty in enforcing the Equal Credit Opportunity Act, which prohibits discrimination in credit and lending decisions.

In a 2020 blog post, CFPB officials encouraged lenders to “think creatively about how to take advantage of AI’s potential benefits.” The bureau also gave examples of how the CFPB would provide what it called “flexibility” in how AI models may be used by lenders.

While there is some evidence to suggest that "alternative data" — information not traditionally found in credit files and not commonly provided by consumers on credit applications — can help lenders make accurate underwriting and loan-pricing decisions, there is also evidence that it can introduce bias.

Even as banks and fintechs plow forward with AI and machine-learning projects that they think will expand credit to more consumers, many expect a clampdown.

“The CFPB will be digging hard on documentation of AI models,” said Christopher Willis, a partner at Ballard Spahr. “There is an effort to warn the industry — and particularly the fintech side of the industry — not to get too creative with the data they ingest and use in making credit decisions.”

Critics see risks in the accuracy of alternative data and in how AI may be used to weed out certain potential applicants or to steer applicants toward certain products.

Proponents of AI and machine learning think widespread adoption is inevitable as computers become able to replace human judgment and learn from the programs and data they are fed. Fintechs and other backers of AI think computers will ultimately make better, more accurate credit underwriting decisions because they can draw on more data.

“The bureau is right to focus on the risks from AI and machine learning, but they should also focus on the potential benefits,” said Eric Mogilnicki, a partner at the law firm Covington & Burling LLP.

Chopra’s interest in AI coincides with a broader effort by federal regulators to get a handle on whether the technology is being used in a safe and sound manner, and whether financial institutions are complying with consumer protection laws. A request for information on the subject was a joint effort of the Federal Reserve, Federal Deposit Insurance Corp., Office of the Comptroller of the Currency, National Credit Union Administration and the CFPB.

Many consumer advocates laud the expansion of credit to unbanked or underbanked consumers. But there are concerns that borrowers may be subjected to targeted marketing campaigns that use AI models.

Many expect the CFPB will be asking companies to provide a fair-lending analysis of their AI models to ensure the models are transparent and easily explained.

“Any use of newer algorithm technologies like machine learning needs to be thoroughly documented in terms of its business purpose, accuracy and transparency, and it needs to be subjected to a rigorous fair-lending analysis,” said Willis at Ballard Spahr.
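For readers unfamiliar with what a fair-lending analysis involves, one common starting point — not the CFPB's prescribed method, and simplified here with invented data and group labels — is comparing a model's approval rates across demographic groups, sometimes called the adverse impact ratio:

```python
# Hypothetical illustration of an "adverse impact ratio" check, one basic
# component of a disparate-impact review of a lending model's decisions.
# The group names and decision data below are invented for this sketch.

def approval_rate(decisions):
    """Fraction of applicants approved (decisions are True/False)."""
    return sum(decisions) / len(decisions)

def adverse_impact_ratio(protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. A value below 0.8 (the 'four-fifths' rule of thumb used in
    employment-discrimination analysis) is often treated as a flag for
    closer disparate-impact review."""
    return approval_rate(protected) / approval_rate(reference)

# Invented example: model approval decisions for two applicant groups.
group_a = [True, True, False, True, False, True, True, False]    # reference
group_b = [True, False, False, True, False, False, True, False]  # protected

ratio = adverse_impact_ratio(group_b, group_a)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.375 / 0.625 = 0.60
if ratio < 0.8:
    print("Flag: below the four-fifths threshold; warrants further review")
```

A ratio alone proves nothing about intent or legality; in practice, firms layer on regression-based tests, searches for less-discriminatory alternative models and documentation of each variable's business justification.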

Some experts suggest that financial firms need to scrutinize how traditional data is being used as well.

“Part of [Chopra’s] point is that even more traditional data can be used and manipulated in ways that perpetuate existing discrimination,” Hayes said. “This is a shot across the bow that the CFPB wants institutions to take these issues seriously, provide some transparency about how models are being used, and guard against discrimination risks.”