
AI in FinTech, RegTech & SupTech: Reflections from the NY Innovation Community 

Author: Elia Resch, Director of Partnerships, Digital Transformation Solutions

In April and May 2026, Digital Transformation Solutions and Columbia University’s Center for Digital Finance and Technologies co-hosted a two-part closed-door roundtable series at Duane Morris LLP in New York. Held under the Chatham House Rule, the sessions brought together approximately forty senior professionals from fintech companies, banks, regtech and suptech providers, researchers, and regulators to examine the latest developments, opportunities, and potential mismatches at the intersection of AI and financial services. The first session, on April 10, focused on fintech and market innovation. The second, on May 1, shifted to compliance, regtech, and supervisory technology. What follows is a synthesis of the key reflections that emerged across both days. 

Stay tuned for announcements of upcoming sessions in this series and request an invitation: https://digitaltransformationsolutions.io/events/innovation-community-roundtables-ai-in-fintech-regtech-and-suptech/  


_________________________

The industry has moved beyond the pilot phase

The most consistent signal across the roundtable discussions was that AI deployment in financial services has crossed a threshold: the conversation has moved on from pilots and proofs of concept. Institutions are running AI in production, from credit decisioning and fraud detection to regulatory reporting, compliance monitoring, and customer operations. However, the scale and pace of that deployment are accelerating in ways that outrun governance frameworks and widen the gap between innovation and operational reality on both sides of the regulatory divide.  

What struck participants as particularly significant was the shift in the nature of deployment itself. In 2024, AI was largely a support layer: augmenting human analysts, flagging anomalies for review, and accelerating workflows. By 2026, the framing has changed. Agentic AI, meaning systems that take autonomous action across multiple platforms with limited human review, is becoming the dominant model, and it demands a different quality of governance attention. 

The private sector is solving real problems – and creating new ones 

Practitioners across payments, B2B infrastructure, consumer lending, and risk management described concrete applications: automating cross-jurisdictional compliance checks, embedding regulatory requirements directly into product design, using alternative data to extend credit access, and running agent-to-agent workflows that handle regulatory change management with minimal human intervention. AI systems are parsing regulatory updates and auto-generating implementation tickets, combining fraud detection, AML, and BSA compliance into unified risk stacks, and shifting from human-in-the-loop to human-on-the-loop models, where human review is triggered by exceptions instead of being built into every decision. 
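The human-on-the-loop pattern described above can be sketched as a minimal routing function. The thresholds, field names, and actions here are illustrative assumptions, not any participant's production logic; real institutions calibrate exception conditions to their own risk appetite.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str          # e.g. "approve" or "decline" (hypothetical labels)
    confidence: float    # model's self-reported confidence, 0.0-1.0
    amount: float        # transaction value in USD

# Hypothetical exception thresholds, calibrated per institution.
CONFIDENCE_FLOOR = 0.90
AMOUNT_CEILING = 10_000.00

def route(decision: Decision) -> str:
    """Human-on-the-loop: execute automatically unless an exception
    condition escalates the decision to a human reviewer."""
    if decision.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"   # low confidence is an exception
    if decision.amount > AMOUNT_CEILING:
        return "escalate_to_human"   # high-value actions always reviewed
    return "execute"                 # routine case proceeds autonomously
```

The design choice is that review is the exception path rather than a mandatory step in every decision, which is precisely what distinguishes human-on-the-loop from human-in-the-loop.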

Several themes ran through these accounts. Institutions making the most progress had moved beyond off-the-shelf solutions to custom-built tooling, calibrated to specific regulatory environments and risk appetites. Integration across functions – from compliance and engineering to operations risk and model risk – was framed as a prerequisite. And explainability and audit trails were emphasized as non-negotiable in practice, with institutions designing for documentation from the start because they knew they would need to show their work to regulators. 
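Designing for documentation from the start often amounts to capturing inputs and outputs at the decision boundary rather than reconstructing them later. A minimal sketch, assuming a simple in-memory log and a toy scoring rule (both hypothetical stand-ins for production systems):

```python
import json
import time
from functools import wraps

AUDIT_LOG: list[dict] = []   # stand-in for an append-only audit store

def audited(fn):
    """Record inputs, output, and timestamp for every decision the
    wrapped function makes, so the institution can show its work later."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        AUDIT_LOG.append({
            "function": fn.__name__,
            "inputs": json.dumps({"args": args, "kwargs": kwargs}, default=str),
            "output": json.dumps(result, default=str),
            "timestamp": time.time(),
        })
        return result
    return wrapper

@audited
def score_applicant(income: float, debt: float) -> dict:
    # Toy debt-to-income rule standing in for a real credit model.
    ratio = debt / income if income else float("inf")
    return {"approved": ratio < 0.4, "dti": round(ratio, 2)}
```

Because the trail is produced as a side effect of every call, documentation exists by construction rather than as an after-the-fact reporting exercise.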

The candour about limits was equally notable. Even the most advanced deployments encountered hard constraints such as data quality gaps, regulatory ambiguities, and the genuine difficulty of governing probabilistic models. Participants observed that compliance is still ultimately a risk-based human judgment, and current AI systems have not yet replicated the contextual nuance experienced practitioners bring to hard calls. 

Regulatory frameworks are showing their age

Former and current officials spoke frankly about the challenges regulators face. The talent asymmetry was a recurring concern, as skilled AI and data engineers are moving from public-sector institutions to the private sector, leaving supervisory teams with diminishing capacity to understand — let alone assess — the systems they are asked to oversee. The risk is not just that regulators are behind, but that the gap is structural and existing frameworks are not designed to close it. 

The governance deficit runs deeper than talent: model risk management frameworks and supervisory guidance were designed for a different era. Recent regulatory guidance has specifically carved out generative and agentic AI from existing model risk management frameworks, leaving a meaningful gap that has not yet been filled. What supervisors actually need — in terms of data lineage, model documentation, and audit trail — to rely on AI-generated output in a supervisory decision remains largely unanswered in practice. 

Participants were emphatic that the fundamentals have not changed: consumer protection law, anti-discrimination requirements, and safety and soundness expectations all apply regardless of the technology involved. The difficulty is applying them to systems that are probabilistic, adaptive, and operating at speeds that make traditional examination approaches inadequate. Sandboxes and structured public-private engagement were seen as the most promising mechanisms for closing the knowledge gap faster than formal rulemaking can. 

Regtech and suptech are evolving on separate tracks

The private sector and the public sector are building parallel AI systems that are not yet talking to each other. Financial institutions are investing in AI-driven compliance infrastructure, including systems that ingest regulatory text, map requirements to internal processes, and generate evidence for audits. Supervisors are investing in AI-driven oversight infrastructure, such as surveillance tools, anomaly detection, and data aggregation platforms. Both tracks are advancing, but they are not interoperable: data formats differ and reporting structures are misaligned. There is no shared infrastructure that would allow supervisory intelligence to flow naturally from what firms are building on the compliance side. 
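The compliance-side pipeline described above, from regulatory text to mapped controls to audit evidence, can be sketched as a simple coverage report. The citations, identifiers, and fields below are hypothetical illustrations, not a standard or any vendor's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    citation: str                    # regulatory citation (illustrative)
    summary: str                     # plain-language description
    mapped_controls: list[str] = field(default_factory=list)

@dataclass
class ControlEvidence:
    control_id: str                  # internal control identifier
    artifact: str                    # link or path to the audit artifact

def coverage_gaps(reqs: list[Requirement],
                  evidence: list[ControlEvidence]) -> list[str]:
    """Return citations with no evidenced control: the kind of gap
    report an AI-driven compliance stack might generate for auditors."""
    evidenced = {e.control_id for e in evidence}
    return [r.citation for r in reqs
            if not any(c in evidenced for c in r.mapped_controls)]
```

A shared schema along these lines, co-designed with supervisors, is the sort of interoperability layer the discussion suggested is currently missing.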

This reflects a deeper disconnect in how regulated institutions and their supervisors conceptualise their roles in an AI-driven environment. Addressing it will require deliberate co-design of data standards, reporting frameworks, and governance arrangements — work that cannot be done unilaterally by either side. 

Unresolved issues are coming into focus

Several questions emerged as deserving more rigorous attention. Concentration risk was highlighted as underexplored: the dominance of a small number of cloud and AI infrastructure providers creates systemic dependencies that current supervisory frameworks are not equipped to monitor. 

The implications of agentic AI for consumer protection generated significant discussion. When an AI agent makes a purchase, enters a contract, or takes a financial action on a consumer’s behalf, the existing frameworks for authorization, liability, and redress were not designed for that scenario. These questions are arriving faster than the legal frameworks can answer them. 

Human rights and ethics dimensions were consistently described as present in the room but underweighted in most industry conversations, crowded out by technical and commercial priorities. And the institutional capacity of supervisory bodies to understand and act on AI-driven intelligence was identified as perhaps the most urgent practical constraint. However sophisticated the tools become, their value depends on the humans on the other side having the knowledge and authority to use them well. 

Underlying many of these open questions is the “Lake Wobegon effect”: every institution believes its AI safety measures are above average, yet there is no standardised way to compare them side by side. Each financial institution and regulator is doing things differently, with no shared benchmarks and limited legibility across systems. Until institutions and their supervisors can actually hold AI safety practices up against a common standard, the confidence that everyone is doing the right thing may be more comforting than warranted.

_____________________________

Reach out to events@govspace.io to join future conversations and let us know what topics would be top of mind for you to discuss with your innovation community!  
