How Three Financial Services Data Leaders Drive AI-Readiness with Trusted Data
Speed might be the moat for AI startups, but for financial services institutions, compliance and precision are the real differentiators. When the choice is between “move fast and break things” and “maintain regulatory compliance,” compliance wins every time in the world of FinServ.
While many verticals are racing to deploy AI as quickly as possible, strategic data leaders in financial services are quietly figuring out how to deploy AI safely. They’re finding avenues to harness AI capabilities while maintaining the security and regulatory standards their industry requires.
At Snowflake Summit 2025, Barr Moses, CEO & Co-founder of Monte Carlo, hosted a refreshingly honest panel with three data leaders who’ve been in the trenches of financial services data transformation and are currently navigating this exact challenge: Cara Dailey, Chief Data Officer at T. Rowe Price, Durgesh Das, Vice President, Data, Analytics & Governance at Intercontinental Exchange / New York Stock Exchange, and Lee Davidson, Former Chief Data and Analytics Officer at Morningstar.

Read on for their key takeaways and practical advice.
Table of Contents
- Takeaway 1: Financial services face unique AI implementation challenges
- Takeaway 2: Business accountability drives every AI decision
- Takeaway 3: The shift from build-first to strategic buys
- Takeaway 4: Observability as a foundation, not an afterthought
- The strategic advantage of moving deliberately
Takeaway 1: Financial services face unique AI implementation challenges
Across the board, the panelists were quick to acknowledge that financial services institutions operate under constraints that most other industries don’t face, which creates unique hurdles for AI implementation. For example, most financial services orgs at least partially rely on on-prem data storage — when Cara asked the audience who was ‘fully, fully, fully cloud,’ only one brave person raised their hand.
Lee emphasized that the bar for accuracy is higher in financial services than anywhere else. A single data error that might be a minor inconvenience in other industries can trigger regulatory investigations, customer lawsuits, and significant financial losses in banking or investment management.
“The demands on the quality and accuracy of single data points is just way higher than other industries,” Lee says. “When we get a return or an exchange rate wrong, the stakes are a lot higher and it can cause big problems.”
Takeaway 2: Business accountability drives every AI decision
Another imperative every leader agreed on? It’s essential to connect AI initiatives to concrete ROI. Adopting AI for the sake of adopting AI doesn’t mean anything — an AI initiative has to be tied to a measurable business outcome to be successful.
Durgesh shared a recent implementation at Intercontinental Exchange (ICE), one of the top mortgage technology providers in the US. His team faced a massive bottleneck in loan origination. Their system had to process over 2,500 different types of documents for each loan application, creating a 60-90 day origination cycle that frustrated both lenders and borrowers. Durgesh and his team implemented an AI-powered solution that automatically classifies these documents (identifying bank statements, W2s, and other required paperwork) and reads income statements for underwriters, with the goal of cutting the entire process down to 30 days.
“Our end goal is not that we need to include AI in our solution,” says Durgesh. “The goal is to reduce the number of days the loan stays in the origination process…that can only be done through automation, and that automation might come from AI or might come from something else.”
This business-first approach is embedded in how ICE structures their AI initiatives. Durgesh described their AI Center of Excellence, which operates on a hub-and-spoke model where a central “hub” team builds models, platforms, and best practices, while product and technology teams implement these solutions as spokes.
“You can have hundreds of ideas, but every idea has to go through our AI Center of Excellence,” Durgesh explained. “There is a process for ideas to be initiated and go through the process. When our Center of Excellence says that there is a definite ROI and solves a customer’s problem, then that idea goes through to the next level.”
Cara has developed a similarly disciplined approach at T. Rowe Price. She described building her data strategies around OKRs that tie directly to business outcomes, knowing that every AI initiative will eventually face the ultimate accountability test:
“At the end of the day, your CFO is going to ask you, ‘When am I going to get my money back?’” Cara says.
For that reason, while data strategies can take months or even years to fully implement, she advises data leaders to focus on delivering iterative value. When AI initiatives are tied to OKRs and tangible improvements that leadership can track and measure quarterly, it becomes much easier to win support for continued investment.
Lee also shared an example of how AI can help financial services firms go beyond cost-cutting and efficiency to access new business opportunities. Last year, AIG received 750,000 underwriting applications but could only review half with their existing team. The constraint wasn’t market demand — it was processing capacity. That left half of their business potential sitting on the table, untapped. Rather than using AI to eliminate underwriters, AIG implemented a human-in-the-loop system that enabled their existing team to process the full 750,000 applications, capturing revenue that was previously out of reach.
“You can’t cut your way to profitability, otherwise, it’d be like the shiniest ship on the bottom of the ocean,” Lee says. He advises data leaders to ask: “What’s the data strategy for AI to enable growth in the organization?”
Takeaway 3: The shift from build-first to strategic buys
For decades, the complexity of compliance has driven a build-first culture for most financial services organizations — if you want something done right, build it yourself.
The panelists, however, acknowledged that AI is driving a shift in that mindset. AI development is moving so fast and getting so complex that even the most self-reliant organizations are starting to realize they can’t do everything in-house.
Cara described how T. Rowe Price is rethinking their approach. Instead of automatically defaulting to “we’ll build it ourselves,” they’re getting strategic about what actually makes them unique.
“We were a build-culture organization, but we’re starting to shift towards ‘let’s buy the commodity and build the custom’,” Cara explained. “Our secret sauce should be built — and we should be leveraging things that we can buy out in the marketplace.”
However, the vendor evaluation process for most financial services teams remains incredibly rigorous — and time-consuming. As Durgesh describes, there’s legal team involvement, AI-specific questionnaires, and thorough vetting to make sure any vendor their team partners with can handle the unique demands of financial services.
When an audience member expressed feeling overwhelmed by the rapid pace of AI technology evolution and vendor selection, Lee offered practical advice for cutting through the noise: rely on candid conversations with peers. They can talk through questions like: What were the bumps in the road? What were the unexpected outcomes?
“Everybody can have a great PowerPoint deck or a great demo, but what happens once you actually get in and install it?” says Lee. “Word-of-mouth is very valuable.”
Takeaway 4: Observability as a foundation, not an afterthought
Throughout the discussion, one final theme kept surfacing: none of the strategic approaches these leaders described — from ROI-driven AI strategies to rigorous vendor evaluation — can work without proper visibility into your data and AI systems. Data + AI observability provides that visibility, giving teams the ability to monitor, understand, and troubleshoot their data and AI pipelines in real time. In regulated industries like financial services, this visibility is essential for meeting compliance requirements and demonstrating that AI systems are operating as intended.
For Durgesh’s team at ICE, observability isn’t something you add after your AI is working; it’s a prerequisite for even considering deployment in a regulated environment.
“Our core principle is that we need to have observability in place before we can go to market,” Durgesh says.
Lee connected observability back to the executive accountability theme that ran throughout the panel. When your CEO calls asking how the data practice is performing, you need to be able to give a confident, data-driven answer — and that requires comprehensive visibility into your entire data and AI stack.
“That was a very hard question to be able to answer without tools like data + AI observability,” Lee says.
The strategic advantage of moving deliberately
While other sectors are racing to deploy AI as fast as possible, these financial services leaders are taking a different approach — and it’s working. They’re building more sustainable, reliable AI systems by staying laser-focused on business outcomes, doing their homework on vendors, and making sure they have full visibility into what their AI is actually doing.
The common thread? They’re treating AI as a strategic capability, not a shiny new toy for experimentation.
If you’re looking to build that same level of confidence and control into your AI strategy, Monte Carlo’s data + AI observability platform is designed specifically for organizations that can’t afford to get AI wrong. Get in touch to learn how we help financial services teams deploy AI while meeting the highest standards for compliance and reliability.
Our promise: we will show you the product.