
As fintech industry experts closely monitor the ripple effects of the White House’s executive order on artificial intelligence, executives remain confident in their companies’ ability to adapt to changes in an already heavily regulated environment.
The scope of President Biden’s order is wide. It requires developers of “dual-use foundation” AI models, defined as models trained on broad data, using self-supervision and exhibiting high levels of performance at tasks “that pose a serious risk to security, national economic security (or) national public health or safety,” to provide the government with records of training activity and the results of cybersecurity testing of their systems.
That level of oversight extends to acquirers of large-scale computing clusters, who are required to report the acquisition as well as “the existence and location of these clusters and the amount of total computing power available in each cluster.” Users must also adhere to new standards on cybersecurity, consumer data privacy, and bias and discrimination.
Lawmakers remain divided on how to effectively govern the evolving landscape of generative AI and its new applications, pushing leaders of cutting-edge institutions to test individual use cases and create their own governance frameworks.
But in addition to the banks and credit unions integrating AI-based tools into their operations, the fintech vendors developing those products are no strangers to the growing number of compliance hurdles that hinder adoption across the industry.
Yolanda McGill, vice president of policy and government affairs at Zest AI in Burbank, Calif., said conversations with lawmakers are often driven by fear that AI models will go off the rails or begin to replace human employees.
“In our (industry) there is concern about having a very good understanding of what an algorithm actually does. … I was concerned that (these fears) would mean we wouldn’t be able to have conversations about the practical use cases that are happening now and impacting people’s lives every day, for better or worse,” McGill said.
Regulators at the Consumer Financial Protection Bureau have increasingly criticized the lack of transparency in how “black box” AI algorithms are constructed and how they reach conclusions, including those used in underwriting models.
The CFPB’s emphasis on rooting out potential bias during the development stages is shared by Zest leaders, who use adversarial debiasing alongside the company’s race-prediction tooling to vet systems before they go live. In doing so, McGill said, Zest can comply with existing guidelines while waiting for the new mandates to take effect.
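Zest has not published its implementation, but adversarial debiasing is a documented fair-lending technique: a second “adversary” model tries to recover a protected attribute from the credit model’s output, and the credit model is penalized whenever the adversary succeeds. The sketch below is a minimal, hypothetical illustration on synthetic data, not Zest’s actual method; all names and values are invented.

```python
# Minimal adversarial-debiasing sketch on synthetic data (hypothetical;
# not Zest AI's implementation). The scorer learns to predict repayment
# while being penalized when an adversary can recover a protected
# attribute from its output.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 1000, 8
X = torch.randn(n, d)                    # toy applicant features
y = torch.randint(0, 2, (n, 1)).float()  # toy repayment labels
z = torch.randint(0, 2, (n, 1)).float()  # toy protected-attribute proxy

scorer = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt_s = torch.optim.Adam(scorer.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0  # weight of the fairness penalty

for step in range(2000):
    # 1) Train the adversary to predict z from the (detached) score.
    opt_a.zero_grad()
    bce(adversary(scorer(X).detach()), z).backward()
    opt_a.step()

    # 2) Train the scorer to predict y while fooling the adversary:
    #    subtracting the adversary's loss rewards scores that carry
    #    no recoverable signal about the protected attribute.
    opt_s.zero_grad()
    logits = scorer(X)
    (bce(logits, y) - lam * bce(adversary(logits), z)).backward()
    opt_s.step()
```

A pre-launch vetting step would then check fairness metrics, such as approval-rate parity across groups, before the model ships.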
“The financial services industry has been grappling with these questions for a long time,” McGill said. “Algorithms are not new in financial services, AI is not new in financial services, and companies have been innovating for some time within the guardrails of mandatory consumer disclosures, explainability requirements, prohibitions on discrimination and other requirements.”
Beyond the government’s reporting requirements for training activity and cybersecurity standards, some fintech experts are unsure whether federal agencies are capable of distinguishing productive models from those that could threaten national security.
The order “immediately raised the question of how to identify these models without some kind of self-regulation,” said Amitay Kalmar, co-founder and CEO of AI-based auto lending fintech Lendbuzz in Boston.
“From the outside, it’s going to be difficult for the U.S. government to identify, and I think ultimately there will be some self-regulation within companies that are working on very powerful foundation models,” Kalmar said. “I think it gives regulators direction to focus on this topic, but the specific areas need to be well defined.”

Lendbuzz’s software-as-a-service platform deploys machine learning algorithms to analyze consumer financial data, such as bank transaction history, and establish a credit score for eligible borrowers. The company then underwrites the loan at the point of sale, backed by funding from its partner banks.
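Lendbuzz’s actual features and models are proprietary; as a rough illustration of the kind of cash-flow signals such a platform might derive from transaction history before scoring, consider the hypothetical sketch below (all column names and values invented):

```python
# Hypothetical cash-flow feature extraction from bank transactions;
# illustrative only, not Lendbuzz's proprietary pipeline.
import pandas as pd

txns = pd.DataFrame({
    "date": pd.to_datetime(
        ["2023-01-02", "2023-01-15", "2023-02-01", "2023-02-14"]),
    "amount": [2500.0, -800.0, 2500.0, -950.0],  # + deposits, - payments
})

monthly = txns.set_index("date")["amount"].resample("M")
features = {
    # Average money coming in per month.
    "avg_monthly_inflow": monthly.apply(lambda m: m[m > 0].sum()).mean(),
    # Average money going out per month.
    "avg_monthly_outflow": monthly.apply(lambda m: -m[m < 0].sum()).mean(),
    # Net position over the whole window.
    "net_cash_flow": txns["amount"].sum(),
}
print(features)  # inputs a downstream scoring model might consume
```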
Kalmar strives to keep Lendbuzz ahead of the executive order’s guidelines by strengthening cybersecurity and improving the transparency of model building for stakeholders and regulators.
“The price that a company or financial institution could pay for breaches and failures in this area is very significant, and I believe that AI will present new risks in this area that we have not been exposed to in the past,” he said.
The popularity of generative AI-based products has continued to grow in recent months, but sticking points have caused many organizations to deprioritize their use.
A survey of 179 experts in the financial services and insurance industries conducted this year by Arizent, American Banker’s parent company, found that 40% of financial institutions surveyed cited a lack of resources as the main barrier to innovation. Other factors included legacy systems, regulatory burden and competing priorities.
Much of the order’s direction for regulators to increase their oversight emphasizes data collection during the development phases. Agencies in Europe and the United Kingdom have approached the industry from a different angle, focusing instead on individual use cases.
Ed Maslaveckas, co-founder and CEO of London-based open banking and data intelligence company Bud Financial, said U.K. agencies are best placed to create guidelines for the use of AI in financial services by examining real-world applications.
“I’m glad people are taking this very seriously, and we’ve seen all the activity happening in the U.S., U.K. and Europe. The main concern was letting the world run wild for years, then when something bad happened, we would write a regulation. … I think we’re moving in a positive direction, but the results (produced by the models in question) are the No. 1 thing,” Maslaveckas said.
Bud Financial, which expanded into the U.S. earlier this year, participated in the Financial Conduct Authority’s first-ever regulatory sandbox in 2015, shortly after the company was founded. By working with the FCA, which oversees the U.K.’s financial services sector, the fintech gained a better understanding of regulatory expectations.
“I like the way the U.K. sees oversight as being driven by use cases, and because it’s driven by use cases, it’s regulated by subject matter experts,” who have the technological understanding to lead campaigns for change, Maslaveckas said.
Fintech executives are closely watching changemakers’ next steps as they formulate plans for future innovation.
“The CFPB and the Federal Trade Commission have done a solid job of highlighting the broad goals and principles around lending itself. … I think the main goal now is to implement that framework,” said Vinay Bhaskar, director of operations and head of AI and compliance initiatives at Scienaptic AI.