As concerns grow over increasingly powerful artificial intelligence systems such as ChatGPT, the nation’s consumer financial watchdog says it is working to ensure that companies comply with the law when employing AI.
Automated systems and algorithms already help determine credit scores, loan terms, bank account fees, and other aspects of our financial lives. AI also affects employment, housing, and working conditions.
Ben Winters, senior counsel for the Electronic Privacy Information Center, characterized a joint statement on enforcement issued by federal agencies last month as a positive first step.

“There is a myth that AI is completely unregulated, which is not true,” he explained. “They are saying, ‘Just because you use AI to make a decision does not absolve you of culpability for that decision’s consequences. This is our position on the matter. We are observing.’”

Lawmakers, too, are targeting AI on safety grounds, for both customers and employees.
In the past year, the Consumer Financial Protection Bureau has fined banks whose mismanaged automated systems and flawed algorithms led to unlawful home foreclosures, vehicle repossessions, and lost benefit payments.
Citing these enforcement actions as examples, regulators say there will be no “AI exemptions” to consumer protection law.
Rohit Chopra, director of the Consumer Financial Protection Bureau, stated that the agency has “already begun some work to continue to beef up internally by bringing on board data scientists, technologists, and others so we can face these challenges” and is continuing to identify potentially illegal activity.
Representatives from the Federal Trade Commission, the Equal Employment Opportunity Commission, and the Department of Justice, as well as the Consumer Financial Protection Bureau, have all stated that they are allocating resources and personnel to target new technologies and identify negative consumer impacts.
Chopra stated, “One of the things we’re trying to make crystal clear is that if businesses don’t understand how their AI makes decisions, they can’t use it.” In other instances, he added, the agency is examining how the use of all of this data complies with fair lending laws.
Under the Fair Credit Reporting Act and the Equal Credit Opportunity Act, for instance, financial institutions are legally required to explain any adverse credit decision; similar rules apply to housing and employment decisions. Where artificial intelligence makes decisions too opaque to explain, regulators argue, the algorithms should not be used.
Chopra stated, “I believe there was a sentiment of ‘Oh, let’s just give it to the robots and there will be no more discrimination.'” “I believe the lesson is that this is not true at all. In certain respects, the bias is inherent in the data.”
Charlotte Burrows, chairwoman of the EEOC, stated that AI hiring technology that excludes job applicants with disabilities and so-called “bossware” that unlawfully monitors employees will be subject to enforcement.
Burrows also described instances in which algorithms could dictate how and when employees can work in a manner that violates existing law.
“You need a break if you’re pregnant or if you have a disability,” she said. “The algorithm does not necessarily take that accommodation into account. These are the matters we are attentively examining… While we recognize that technology is evolving, the underlying message is that the laws still apply and we have the means to enforce them.”
Jason Kwon, general counsel for OpenAI, speaking at a tech summit hosted by the software industry group BSA in Washington, D.C., said, “I believe it begins with attempting to establish standards. They could begin with industry standards and then coalesce around them. And decisions about whether or not to make them mandatory, as well as the process for amending them, are likely fertile ground for further discussion.”
Sam Altman, the CEO of OpenAI, the company that produces ChatGPT, stated that government intervention “will be essential” to mitigate the dangers posed by increasingly potent AI systems and proposed the establishment of a U.S. or international agency to license and regulate the technology.
There is no imminent sign that Congress will enact sweeping new AI rules, as European legislators are doing. But societal concerns brought Altman and other tech CEOs to the White House this month to answer tough questions about the ramifications of these tools.
The Electronic Privacy Information Center’s Winters stated that the agencies could do more to study and publish information on the relevant AI markets, how the industry operates, who the major players are, and how the information collected is being used — similar to what regulators have done in the past with respect to new consumer finance products and technologies.
He stated that the CFPB did an excellent job with the “Buy Now, Pay Later” companies. “There are so many undiscovered components of the AI ecosystem,” he said. “Publishing this information would be extremely beneficial.”