Would using location data in AI-based credit models improve fairness?

Two experts at the Federal Reserve Bank of Philadelphia have tested an idea for improving credit access for people with low incomes: letting lenders use location data in AI-based credit models.

For lenders, this would unwind decades of practice: the Equal Credit Opportunity Act of 1974 has kept them from considering location in lending decisions. The thinking in the seventies was that knowing where a person lives could trigger what’s called taste-based discrimination, or bias caused by unfavorable attitudes toward certain neighborhoods or ethnic groups.

But in their research, Larry Santucci, senior research fellow, Vitaly Meursault, machine learning economist, and Daniel Moulton, data science and engineering manager at the Philadelphia Fed, found that machine learning-based credit models fed location data, combined with “fairness constraints” — essentially lower credit thresholds for people who live in low-income areas — could be used to extend credit to more lower-income people and people of color.

“Machine learning adoption in underwriting is happening,” Meursault said in an interview. “And we believe that machine learning in underwriting will become ubiquitous, just like statistical credit scores are now. The big question that motivated us is, whom will it benefit the most? As things are looking now, lenders will definitely benefit.”

Existing research shows that newer AI-based credit models, such as the XGBoost algorithm, can predict default better than the older logistic regression models many banks use. This improvement naturally translates into higher profits, he said. These higher profits can help finance the fairness constraints the authors propose.
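
Under the hood, that kind of head-to-head comparison can be sketched in a few lines of Python. Everything below (the synthetic data, the feature count, the model settings) is an illustrative assumption rather than the setup used in the Philadelphia Fed paper.

```python
# Illustrative sketch: compare a gradient-boosted model (XGBoost) with a
# logistic regression baseline on a synthetic default-prediction task.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for borrower features; y = 1 means the loan defaulted.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")  # higher AUC = better ranking of likely defaulters
```

In spirit, the extra accuracy a boosted model shows on real lending data is the gain the authors suggest could help pay for the fairness constraints.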

The combined effects of better models fed with location data and lower credit score requirements for people in low-income areas “suggest a way forward that blends the best of both worlds,” the authors state.

In their research, Santucci and Meursault aimed to get the “true positive rate,” or the share of borrowers who would repay their loans and are approved for credit, as high as possible, and the “false positive rate,” or the share of eventual defaulters who receive credit access, as low as possible.

“The goal of our approach to managing machine learning innovation in underwriting is to ensure that lower income areas benefit from machine learning introduction by shrinking the gaps in true positive rate relative to higher income areas,” Meursault said. “This is done by adding fairness constraints to how lending decisions are made.” 
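
In standard terms, those quantities can be computed directly from lending outcomes. The sketch below is a minimal Python illustration; the array names and the grouping by low-income area are assumptions, not code from the paper.

```python
import numpy as np

def true_positive_rate(approved, repaid):
    """Share of borrowers who would repay that were approved for credit."""
    approved, repaid = np.asarray(approved, bool), np.asarray(repaid, bool)
    return (approved & repaid).sum() / repaid.sum()

def false_positive_rate(approved, repaid):
    """Share of eventual defaulters that were approved for credit."""
    approved, repaid = np.asarray(approved, bool), np.asarray(repaid, bool)
    defaulted = ~repaid
    return (approved & defaulted).sum() / defaulted.sum()

def tpr_gap(approved, repaid, low_income_area):
    """The gap the fairness constraint tries to shrink: true positive rate
    in higher-income areas minus true positive rate in lower-income areas."""
    approved = np.asarray(approved, bool)
    repaid = np.asarray(repaid, bool)
    low = np.asarray(low_income_area, bool)
    return (true_positive_rate(approved[~low], repaid[~low])
            - true_positive_rate(approved[low], repaid[low]))
```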

In an example of a fairness constraint, a lender that normally requires a 650 credit score for a credit card might approve people in low-income neighborhoods with a 620 score.
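
A decision rule along those lines might look like the sketch below, which also bakes in the constraint discussed later in the article that location can only lower the bar, never raise it. The function name and cutoffs are illustrative; the 650 and 620 figures come from the example above.

```python
BASE_CUTOFF = 650   # the lender's usual minimum score (from the example above)
LMI_CUTOFF = 620    # relaxed minimum for low- and moderate-income neighborhoods

def approve(credit_score: int, in_lmi_area: bool) -> bool:
    """Apply the fairness constraint: living in a low-income area can only
    lower the required score, never raise it."""
    cutoff = min(BASE_CUTOFF, LMI_CUTOFF) if in_lmi_area else BASE_CUTOFF
    return credit_score >= cutoff

print(approve(630, in_lmi_area=True))    # True: clears the relaxed 620 cutoff
print(approve(630, in_lmi_area=False))   # False: still short of the usual 650
```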


Industry reactions

Bankers greeted these ideas with cautious optimism.

“I think overall any data should be used as long as it is used to enhance the odds of approving people for credit,” said Marc Butterfield, senior vice president at First National Bank of Omaha. 

But there can be unintended consequences to letting models ingest location data, he said. For instance, there is a potential for machine learning models to pick up unintentional bias.

“I think we’re still in the early innings of using machine learning models,” he said. Lenders need to get better and more disciplined at using them. 

“We’re getting there, but I think we should only feed location data to models for the purposes of allowing them to make a better decision, getting to a yes. I’m still not sure how location data on a borrower is going to get somebody to make an inclusionary decision without being biased. I’m skeptical of using location data for that.”

The largest unintended consequence would be redlining, he said.  

Another objection: Lower credit thresholds for people in low-income neighborhoods could be hard to implement, Butterfield said. Banks tend to tighten credit standards when they think a recession is coming, for instance. 

“If you have different criteria for different geographic regions, that becomes very difficult to manage when economic conditions change, as they always do,” he said. 

The paper is important because it is a “concession by very senior level federal [researchers] that conventional credit scores are overfit to the majority population and differentially predictive for these subpopulations who live in these low to moderate income neighborhoods,” said Kareem Saleh, CEO of FairPlay, a maker of software that tests loan decisions for fairness and disparate impact, in an interview. “That’s a very big deal.” 

It also puts forth a different approach to fairness, he noted.

“We’ve tried to achieve fairness in financial services through blindness,” Saleh said. “This idea that we’re just going to look at these factors which are ‘neutral and objective.’ And what these guys are saying is no, if you use fairness through awareness, consciousness that somebody is in an LMI neighborhood, you can better risk-rank that population. That should send a shockwave through the industry.”

Location data is just one input that has what Saleh calls a “disparity driving effect.” Consistency of employment is another, because it negatively affects women who left the workplace for a time to raise children. Bank account data is another, because bank accounts are harder for people in some minority groups to obtain.


“The truth is, there will never be a list of variables long enough that make sense to prohibit,” he said. “The inputs are biased for all kinds of reasons. And so the right answer is, use it all but de-bias it, rather than attempting to make judgments about, well, this variable is permissible and this variable isn’t.” Race, gender and age can’t be used explicitly in lending decisions under the law, he acknowledged. 

One way to de-bias data is to optimize the relative weights on the variables in ways that preserve their predictive power, but lessen their disparity driving effect.

“The example that we give a lot is, if you’re relying very heavily on consistency of employment and that has a disparity driving effect for women, maybe what you ought to do is tune down the influence of consistency of employment and tune up the influence of other variables which are predictive, but have less of a disparity driving effect,” Saleh said. 
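
Mechanically, that retuning can be pictured with a toy scorecard in which one disparity-driving feature is dialed down and its weight redistributed to the others. The feature names and numbers below are invented for illustration; real fairness-aware tools search for reweightings that preserve predictive power while shrinking disparities, which this sketch does not attempt.

```python
# Toy scorecard: weights sum to 1; names and values are illustrative assumptions.
weights = {
    "payment_history": 0.45,
    "credit_utilization": 0.30,
    "employment_consistency": 0.25,   # predictive, but disparity-driving
}

def retune(weights, feature, keep_fraction):
    """Dial down one feature's weight and redistribute the freed weight
    across the remaining features in proportion to their current weights."""
    new = dict(weights)
    freed = new[feature] * (1 - keep_fraction)
    new[feature] *= keep_fraction
    others = [k for k in new if k != feature]
    total = sum(new[k] for k in others)
    for k in others:
        new[k] += freed * new[k] / total
    return new

print(retune(weights, "employment_consistency", keep_fraction=0.5))
# employment_consistency drops to 0.125; the other weights absorb the difference.
```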

Lowering the credit score threshold for people in low-income neighborhoods is another way to de-bias the data, Saleh said, given that credit scores tend to overstate the riskiness of such people. 

“Lenders have been conditioned their whole lives never to think about race, gender, age, sex [in lending decisions], because the Equal Credit Opportunity Act prohibits the express consideration of those factors when you’re making a credit decision,” Saleh said. “But the Equal Credit Opportunity Act doesn’t say you can’t use some consciousness of race, gender, or age to avoid being racist, sexist, or ageist.”

By writing this paper, the authors are “nudging the industry out of this blaze of fear to say, hey, there might actually be legitimate uses of this information,” Saleh said. “Maybe we’ve overread the prohibition on using this information, and maybe that prohibition has outlived its usefulness because here’s a model that does better within your risk tolerance of serving these communities.”

Changing the rules on using location data in lending

When Representative Bella Abzug (D-NY) introduced the Equal Credit Opportunity Act in 1973, she wanted to let women get credit cards in their own names. But the law is more sweeping than that: It prohibits discrimination on the basis of race, color, religion, national origin, sex, marital status, age, receipt of public assistance or good faith exercise of any rights under the Consumer Credit Protection Act. Under the law, creditors can only consider relevant financial factors such as credit score, income, credit history and debt load.


“My understanding is that these laws were passed primarily to address taste-based discrimination,” Meursault said. “And of course that is a very legitimate concern. But since the seventies, we have learned a lot about how these models operate. And there is research that shows that credit scoring models, no matter whether it’s a statistical credit scoring model or a more sophisticated machine learning model, have higher predictive power in higher income communities than lower income communities, in racial majority communities versus racial minority communities.”

This is “because wealthy people are well represented in the data and folks who’ve been historically either excluded from the financial system or preyed upon by the financial system are less well represented in the data, or the data that is available about them tends to overstate their riskiness,” said John Merrill, chief technology officer at FairPlay, in an interview. 

This problem can’t be addressed by removing sensitive attributes from the data; it has to be actively corrected, Meursault said.

A simple way to equalize true positive rates for wealthy and poor neighborhoods is to reduce lending thresholds for people living in low-income neighborhoods like New York City’s South Bronx, Santucci and Meursault say. 

“We are very cognizant that that means that on average more eventual defaulters from South Bronx will also be given loans,” Meursault said. “This is why it is crucial to combine the introduction of fairness constraints with better machine learning models that will allow lenders to predict default better and compensate the costs of introduction of fairness constraints, while at the same time reducing credit access gaps to creditworthy consumers.” 

One of the authors’ fairness constraints is that location data can only be used to lower credit thresholds, not to raise them. 

“People in the South Bronx can only get better credit as a result of our policy,” Meursault said. “And people on the Upper East Side are not affected at all.” 

The geographic location information is only to be used at the moment of the lending decision, not to train algorithms, he said. Lenders would be required to monitor outcomes and see whether they are consistent with fairness constraints imposed by regulators. 
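
Read as a hedged sketch rather than the authors' specification, that separation could look like the code below: the model is trained without any location feature, and the neighborhood flag only enters when a score is turned into a decision. The feature layout, probability cutoffs, and choice of model are assumptions.

```python
from sklearn.linear_model import LogisticRegression

def train_model(X_train, y_train):
    """Fit on borrower features only; X_train contains no location columns.
    y_train is 1 if the loan was repaid, 0 if it defaulted."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

def decide(model, borrower_features, in_lmi_area: bool,
           base_cutoff: float = 0.80, lmi_cutoff: float = 0.75) -> bool:
    """Approve if the predicted repayment probability clears the cutoff.
    The low-income-area flag is used only here, at decision time, and it
    can only lower the cutoff, never raise it."""
    p_repay = model.predict_proba([borrower_features])[0, 1]
    cutoff = min(base_cutoff, lmi_cutoff) if in_lmi_area else base_cutoff
    return p_repay >= cutoff
```

Monitoring, as Meursault describes it, would then amount to periodically recomputing something like the true positive rate gap from the earlier sketch on realized lending outcomes.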

“If we allow them to innovate, but also impose these fairness constraints, then they can potentially make these gains from machine learning innovation, reducing the risk of their portfolios at the same time, and they can afford to pay for this fairness,” Santucci said.