White House push for consumer rights for AI hits hiring and lending 

When the White House issued its blueprint for an AI Bill of Rights in October, it set the tone for the regulation companies can expect around their use of artificial intelligence. Though it’s still early days, the effects are already being felt in how government agencies look at credit scoring, fair lending and hiring, according to speakers at an event hosted by the Brookings Institution on Monday.

The AI Bill of Rights is the White House’s attempt to protect American consumers as organizations continue to adopt machine learning and other forms of AI that could perpetuate discrimination, whether by relying on data drawn from past biased decisions or on datasets in which some groups are underrepresented.

“It seems like every day we read another study or hear from another person whose rights have been violated by these technologies,” Sorelle Friedler, assistant director for data and democracy in the White House Office of Science and Technology Policy, said at the event. “More and more we’re seeing these technologies drive real harms, harms that run counter to our core democratic values, including the fundamental right to privacy, freedom from discrimination, and our basic dignity.”

The blueprint’s core principles

The blueprint for an AI Bill of Rights lays out five core protections from the potential harms of AI. The first is protection from unsafe or ineffective systems. The second is protection against algorithmic discrimination.

“You should not face discrimination by algorithms and systems should be used and designed in an equitable way,” Friedler said. 

Data privacy is the third principle. The fourth is notice and explanation: knowing that an automated system is being used and understanding how it contributes to outcomes that affect you. The fifth is the need for human alternatives, consideration and fallback. 

“You should be able to opt out where appropriate and have access to a person who can quickly consider and remedy problems,” Friedler said. 

Leaders across the U.S. federal government are already taking action on these principles, she said, “by protecting workers’ rights, making the financial system more accountable, and ensuring healthcare algorithms are non-discriminatory.”

How banks could be affected

One area of financial services where the speakers at the event see the AI Bill of Rights already starting to take effect is the use of AI in credit assessment. 

“The Consumer Financial Protection Bureau is taking steps around transparency in how you get credit scores,” said Alex Engler, a fellow in Governance Studies at the Brookings Institution.

Another example of the government starting to execute on the Bill of Rights in banking is that “HUD made a new commitment to promise to release guidance around tenant screening tools and how that intersects with the Fair Housing Act,” said Harlan Yu, executive director of the nonprofit organization Upturn.

A crackdown on the use of AI in hiring could also affect lenders. 

Friedler, a former software engineer, said “proactive equity assessments” should be baked into the software design process for AI-based recruiting and hiring software. This is needed because there have been problems with hiring tools that “learn the characteristics of existing employee pools and reproduce discriminatory hiring practices,” she said. 
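
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of the failure mode Friedler describes, along with the kind of selection-rate check a proactive equity assessment might include. All of the data, names and numbers below are synthetic assumptions for illustration, not drawn from any actual hiring tool.

```python
# Hypothetical sketch: a screening model trained on past hiring decisions
# that favored one group reproduces that bias; a simple selection-rate
# comparison surfaces it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)        # identically distributed in both groups

# Historical labels: past screeners favored group A regardless of skill.
hired = (skill + 0.8 * (group == 0) + rng.normal(0.0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# "Proactive equity assessment": compare predicted selection rates by group
# on fresh applicants drawn from the same skill distribution.
new_group = rng.integers(0, 2, n)
new_skill = rng.normal(0.0, 1.0, n)
pred = model.predict(np.column_stack([new_skill, new_group]))
rate_a = pred[new_group == 0].mean()
rate_b = pred[new_group == 1].mean()
print(f"selection rate A={rate_a:.2f}, B={rate_b:.2f}, ratio={rate_b / rate_a:.2f}")
```

A ratio well under one is the kind of warning sign such an assessment is meant to surface before a tool is deployed; in long-standing employment-selection guidance, the "four-fifths rule" treats a ratio below 0.8 as evidence of possible adverse impact.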

The Equal Employment Opportunity Commission and the Department of Labor have been working on various aspects of how new hiring technologies are being used in the private sector, Yu said.

“There’s just so much more potential for the White House to coordinate and to encourage and to get federal agencies to really move proactively on these issues in ways that I feel like they haven’t before,” Yu said. 

In a letter to several banking regulators last year, Upturn, the ACLU, the Leadership Conference on Civil and Human Rights, the National Consumer Law Center, the National Fair Housing Alliance and a coalition of other organizations spelled out how they would like the White House to bring racial equity into its AI and technology priorities.

The groups asked the agencies to set updated standards for fair-lending assessments, including discrimination testing and evaluation in the conception, design, implementation and use of models. 
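
The letter does not prescribe a particular test, but a minimal sketch can suggest what outcome-level discrimination testing might look like at the "use" stage of a lending model. The function names, decisions and 0.8 threshold below are illustrative assumptions, not anything the groups or the regulators specify.

```python
# Hypothetical sketch of an outcome-level fair-lending check: compare
# approval rates across demographic groups for a candidate underwriting
# model and flag large disparities. Threshold and names are illustrative.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparity_flags(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold`
    times the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Example with made-up model decisions:
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45
rates = approval_rates(decisions)
print(rates)                    # {'A': 0.8, 'B': 0.55}
print(disparity_flags(rates))   # {'B': 0.6875}
```

In the lifecycle framing the letter uses, a check like this at the use stage would sit alongside earlier reviews of the data and features chosen at the conception and design stages.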

When banks think about AI model risk, they should consider the risk of discriminatory or inequitable outcomes for consumers, rather than just the risk of financial loss to the institution, the letter stated. 

The letter urged government agencies to encourage the use of alternative data for underwriting, as long as it is voluntarily provided by consumers and has a clear relationship to their ability to repay a loan. The groups pointed out that traditional credit history scores reflect racial disparities due to extensive historical and ongoing discrimination. Black and Latinx consumers are less likely to have credit scores in the first place, limiting their access to financial services. 

The groups also cautioned that not all kinds of data will lead to more equitable outcomes, and that some can introduce new harms of their own.

“Fringe alternative data such as online searches, social media history, and colleges attended can easily become proxies for protected characteristics, may be prone to inaccuracies that are difficult or impossible for impacted people to fix, and may reflect long standing inequities,” the letter stated. “On the other hand, recent research indicates that more traditional alternative data such as cash flow data holds promise for helping borrowers who might otherwise face constraints on their ability to access credit.”

The groups also called on the CFPB to issue modernized guidance for financial services advertising.

“For years, creditors have known that new digital advertising technologies, including a vast array of targeting techniques, might result in illegal discrimination,” the letter said. “Moreover, recent empirical research has shown that advertising platforms themselves can introduce significant skews on the basis of race, gender, or other protected group status through the algorithms they use to determine delivery of advertisements — even when advertisers target their advertisements broadly.” 

The speakers at the event said the AI Bill of Rights is just a start in a push to bring equity and democracy to AI.

“This document represents mile one of a long marathon, and it’s really clear that the hard work is still in front of the federal agencies and in front of all of us,” Yu said.