Regulators take aim at AI to protect consumers and workers
As concerns grow over increasingly powerful artificial intelligence systems such as ChatGPT, the nation’s financial watchdog says it’s working to ensure that companies follow the law when they’re using AI.
Already, automated systems and algorithms help determine credit ratings, loan terms, bank account fees and other aspects of our financial lives. AI also affects hiring, housing and working conditions.
Ben Winters, senior counsel for the Electronic Privacy Information Center, said a joint statement on enforcement released by federal agencies last month was a positive first step.
“There’s this narrative that AI is entirely unregulated, which is not really true,” he said. “They’re saying, ‘Just because you use AI to make a decision, that doesn’t mean you’re exempt from responsibility regarding the impacts of that decision.’ ‘This is our opinion on this. We’re watching.’”
In the last year, the Consumer Financial Protection Bureau said it has fined banks over mismanaged automated systems that resulted in wrongful home foreclosures, car repossessions and lost benefit payments, after the institutions relied on faulty new technology and algorithms.
There will be no “AI exemptions” to consumer protection, regulators say, pointing to these enforcement actions as examples.
Consumer Financial Protection Bureau Director Rohit Chopra said the agency has “already started some work to continue to muscle up internally when it comes to bringing on board data scientists, technologists and others to make sure we can confront these challenges” and that the agency is continuing to identify potentially illegal activity.
Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice, as well as the CFPB, all say they’re directing resources and staff to take aim at new tech and identify negative ways it could affect consumers’ lives.
“One of the things we’re trying to make crystal clear is that if companies don’t even understand how their AI is making decisions, they can’t really use it,” Chopra said. “In other cases, we’re looking at how our fair lending laws are being adhered to when it comes to the use of all of this data.”
Under the Fair Credit Reporting Act and Equal Credit Opportunity Act, for example, financial providers have a legal obligation to explain any adverse credit decision. Those regulations likewise apply to decisions made about housing and employment. Where AI makes decisions in ways that are too opaque to explain, regulators say the algorithms shouldn’t be used.
“I think there was a sense that, ‘Oh, let’s just give it to the robots and there will be no more discrimination,’” Chopra said. “I think the learning is that that actually isn’t true at all. In some ways the bias is built into the data.”
EEOC Chair Charlotte Burrows said there will be enforcement against AI hiring technology that screens out job applicants with disabilities, for example, as well as “bossware†that illegally surveils workers.
Burrows also described ways that algorithms might dictate how and when employees can work in ways that would violate existing law.
“If you need a break because you have a disability, or perhaps you’re pregnant, you need a break,” she said. “The algorithm doesn’t necessarily take into account that accommodation. Those are things that we are looking closely at. ... I want to be clear that while we recognize that the technology is evolving, the underlying message here is the laws still apply and we do have tools to enforce.”
OpenAI’s top lawyer, at a conference this month, suggested an industry-led approach to regulation.
“I think it first starts with trying to get to some kind of standards,” Jason Kwon, OpenAI’s general counsel, told a tech summit in Washington, D.C., hosted by software industry group BSA. “Those could start with industry standards and some sort of coalescing around that. And decisions about whether or not to make those compulsory, and also then what’s the process for updating them, those things are probably fertile ground for more conversation.”
Sam Altman, the head of OpenAI, which makes ChatGPT, said government intervention “will be critical to mitigate the risks of increasingly powerful” AI systems, suggesting the formation of a U.S. or global agency to license and regulate the technology.
While there’s no immediate sign that Congress will craft sweeping new AI rules, as European lawmakers are doing, societal concerns brought Altman and other tech CEOs to the White House this month to answer hard questions about the implications of these tools.
Winters, of the Electronic Privacy Information Center, said the agencies could do more to study and publish information on the relevant AI markets, how the industry is working, who the biggest players are, and how the information collected is being used — the way regulators have done in the past with new consumer finance products and technologies.
“The CFPB did a pretty good job on this with the ‘Buy Now, Pay Later’ companies,” he said. “There are so many parts of the AI ecosystem that are still so unknown. Publishing that information would go a long way.”