Natasha Lamb made big banks report gender pay gaps. Now she's after AI.

You don’t want to upset Natasha Lamb.

Five years ago, Lamb, managing director at Arjuna Capital and an activist investor, took on big U.S. tech companies and banks and got them to publish their gender pay gap numbers. She didn’t stop there — she pushed them to drop the adjusted pay gap nonsense some were trying to pass off as real numbers at the time. 
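For context, an "adjusted" pay gap compares pay only after controlling for factors such as role, level and location, while the unadjusted, or median, gap Lamb pushed for compares all women's pay to all men's, so it also reflects who holds the higher-paying jobs. The short sketch below, using entirely hypothetical salaries, illustrates how an adjusted figure can come out at zero while the median gap stays wide.

```python
# Illustrative only: hypothetical salaries showing how an "adjusted" pay gap
# can look like zero while the unadjusted (median) gap remains large.
from statistics import median

# (role, salary) pairs: men and women are paid identically within each role,
# but women are under-represented in the higher-paying senior role.
men = [("senior", 200_000), ("senior", 200_000), ("junior", 100_000)]
women = [("senior", 200_000), ("junior", 100_000), ("junior", 100_000)]

def unadjusted_gap(men, women):
    """Median pay gap across all employees, ignoring role."""
    return 1 - median(s for _, s in women) / median(s for _, s in men)

def adjusted_gap(men, women):
    """Average within-role gap, i.e. the gap after controlling for role."""
    roles = {r for r, _ in men} & {r for r, _ in women}
    gaps = [
        1 - median(s for r, s in women if r == role)
            / median(s for r, s in men if r == role)
        for role in roles
    ]
    return sum(gaps) / len(gaps)

print(f"adjusted gap:   {adjusted_gap(men, women):.0%}")    # 0%
print(f"unadjusted gap: {unadjusted_gap(men, women):.0%}")  # 50%
```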

Arjuna still puts pressure on large companies in this area. In its latest racial and gender pay scorecard, which is mainly based on racial and gender pay gap numbers, BNY Mellon and Citi both got As; Wells Fargo and Bank of America got Bs; JPMorgan Chase, Cigna, Progressive Insurance and KeyBank got Cs; Goldman Sachs and The Hartford received Fs.

Lamb is now directing a shareholder activist campaign against companies' irresponsible use of AI, starting with Microsoft and ChatGPT, the controversial generative AI chatbot built by its partner OpenAI. She has led a group of Microsoft investors in filing a formal shareholder proposal calling on Microsoft's board to issue a report on the material risks of its generative artificial intelligence technology, its potential to spread misinformation and disinformation, and the effectiveness of Microsoft's efforts to remediate those harms. Investors will have the opportunity to vote on the proposal at Microsoft's annual meeting, which will likely be held in December.

American Banker spoke with Lamb about the gender pay gap campaign, the new ethical AI campaign, and what companies should be doing to handle advanced AI responsibly.

I give you a lot of credit for getting banks to report the true gap in pay between female and male employees, rather than “adjusted” pay gap numbers. On reflection, why do you think your campaign worked? Was it something specific you did? Was it good timing?

NATASHA LAMB: That’s a great question. Yes and yes. When we started that campaign, there were zero companies in the United States that were reporting their pay gaps. We started approaching first the technology companies and then the Wall Street banks because there was such a lack of representation of women and people of color in those organizations, but also in higher paying roles, across middle-tier jobs and upper level management. I think posing the question, making the business case for why it was important, was essential. But society was also ripe for greater change and accountability. And some of the timing coincided with the #MeToo movement, which was shining a light on the discrimination that women were facing within corporate America. 

I always think that things change when there’s a confluence of a public conversation, of a dollars-and-cents reason to change and political changes afoot, so you have this confluence between society and politics and individual corporate actors. Also, you saw some companies step out and take a leadership position and others followed, and that’s essential. 

I remember Citi was the first bank to report real numbers. 

It’s remarkable. And honestly, that bank has made such a turnaround culturally since that time, naming the first female CEO on Wall Street and being vocal about the importance of diversity within, not just Citigroup, but in other organizations as well. The company really became an advocate. 

In some ways, we’re at a similar point with AI, especially ChatGPT, where there’s a lot of public conversation about it and there are political interests. There have been hearings in Congress about it. The FTC recently started investigating OpenAI. What are your biggest concerns about ChatGPT? What do you think is the worst that could happen? 

We’re currently in an AI arms race between the tech companies, and there’s potentially a lot of money to be made and potentially a lot of money to be lost if it’s done irresponsibly. The big risk here is that companies are moving too fast. They’re not putting the right safeguards in place. They’re not putting the right ethical guidelines in place. They don’t know what they don’t know. Because there is such a competitive landscape, we’re almost at that Mark Zuckerberg idea of “move fast and break things.” 

And the risk of breaking things is higher because misinformation and the dissemination of disinformation have become a real risk to our society, to our democracy. And we've seen some of that, certainly with social media, which has been used for election interference by the Russians. But when ChatGPT enters stage left, it's a whole other ballgame. The large-scale dissemination of misinformation is a huge risk. 

So there are big societal risks, and then there are risks for the companies themselves that are putting out this technology and are, in fact, responsible for the content they're disseminating. A lot of these social media companies and tech companies have hidden behind Section 230 of the Communications Decency Act to say, we're not liable for what our users put on our platforms. [Section 230, part of Title 47 of the United States Code enacted as part of the Communications Decency Act of 1996, generally provides immunity for online computer services with respect to third-party content generated by users.] Well, ChatGPT's results are not put forward by users. They're put forward by the technology. And we already know that it generates falsehoods, so there's a big potential for abuse and risk to the companies. 

What would you like to see Microsoft and its partner OpenAI [which develops ChatGPT] do about this? Would you like to see ChatGPT simply walled off, no longer available to the public? 

I think the horse is already out of the barn. So now it's about: What are the ethical guidelines you're putting in place? What are the structures? Are you fully taking into account the risk and what can be done about it? What are the guardrails that need to be put in place? We saw with Microsoft this past year that they disassembled their ethics and society team, and they'll say, "Well, we've integrated that into everything we do." But that gets into the realm of "trust us," which is never a good place to be. And we heard that with pay equity as well. And then you've had folks internal to Microsoft, ethicists and employees, who have raised concerns about the possibility of disinformation that would erode the factual foundation of modern society. 

So there have been calls for action and concern from within Microsoft itself. You also saw that at Google. Right now there’s this call for a pause on the deployment of AI. I’m not saying that that’s exactly what should happen, but there needs to be the right ethical, structural and regulatory frameworks in place in order to govern the technology because it could go sideways very quickly. And next year is going to be a litmus test for that with these elections on the horizon. 

What would convince you that Microsoft has the right guidelines in place and is actually following them? I think every company says, we’re very focused on responsible AI and ethical AI. What’s the proof that they actually are doing something? 

We've asked them to lay out what the risks are to public welfare from the technology, from facilitating disinformation that's disseminated or generated by artificial intelligence; then what steps they plan to take to remediate those harms; then to put those steps in place and report annually on the effectiveness of such efforts. So we're looking for an accountability mechanism more than anything else, for them to actually do the analysis, be honest about the risks, plan accordingly to remediate those risks and then report out on whether they're able to or not. The big risk for Microsoft and other AI companies is that things go terribly wrong and they end up liable, with investors holding the bag. We could talk for an hour about what could go wrong with AI. At this point, we know that the potential harms are enormous and that the technology is just moving fast without any guardrails in place. 

Then how are users harnessing the technology to do good or to do ill? That's an open question. The promise of AI is that we can make big advances in healthcare and work to fight climate change, and there are all kinds of reasons to be optimistic. But then there's this unknown downside. I think our number one concern going into 2024 is how this is used for disseminating disinformation. We're already facing a crisis, as a nation and as a globe, of disinformation and differing views of reality. I think that's only going to get worse with this technology. 

In banking and in insurance, AI is used in a lot of places, for instance, to detect fraud and cybersecurity breaches. But there are more controversial uses: banks are increasingly using AI to make credit decisions, and carriers are increasingly using it to decide whom to cover. In all these cases, the companies say they're going to be more inclusive by using AI instead of people, because people are biased, not models. Using AI models, companies can take in more data, be less reliant on FICO scores or other traditional measures and give access to more people. But then, of course, people worry about the potential downside of some of that. The classic bank example is the worry that an AI system will see that people who belong to a certain country club are a good credit risk, and therefore favor members of that country club. Do you worry about things like that? What would it take to trigger your interest in what a bank or insurance company is doing along these lines? 

We know that algorithms are biased. We know that ChatGPT is reading things on the internet, and it doesn't have ethical guidelines in place. There has to be oversight there. It may be a cheaper solution, but if that technology is just trusted to basically make judgment calls on who has access to financing and who doesn't, there's a big risk there. I can already see into the future where there would be a report on how AI was used for this and all these people were denied financing. 

There are places where AI could have a really positive impact and help companies reduce costs. On the other hand, it's probably going to lead to fewer folks having jobs. But the big thing, I think, is bias in the technology and false answers, and how you are ensuring that that's not in the fabric of what you're doing as an organization.
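
To make the "country club" worry raised above concrete, here is a deliberately simplified, hypothetical sketch: the protected group label is never fed to the decision rule, and repayment ability is identical across groups, yet a feature that merely correlates with group membership still produces very different approval rates.

```python
# Illustrative only: a toy, entirely hypothetical example of "proxy bias."
# The protected group label is never given to the decision rule, but a
# correlated feature (membership in a hypothetical club) acts as a proxy for it.
import random

random.seed(0)

def make_applicant():
    group = random.choice(["A", "B"])
    # In this made-up world, club membership is far more common in group A.
    club_member = random.random() < (0.8 if group == "A" else 0.1)
    # True repayment ability is identical across groups.
    repays = random.random() < 0.7
    return group, club_member, repays

applicants = [make_applicant() for _ in range(10_000)]

# A naive decision rule that approves anyone in the club, because club
# membership looked predictive in historical data.
def approve(club_member):
    return club_member

for g in ("A", "B"):
    group_apps = [a for a in applicants if a[0] == g]
    rate = sum(approve(club) for _, club, _ in group_apps) / len(group_apps)
    print(f"group {g}: approval rate {rate:.0%}")
# Roughly 80% of group A is approved versus roughly 10% of group B,
# even though repayment ability was drawn from the same distribution.
```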