What Mid-Sized Financial Institutions Get Wrong About AI Adoption
Most mid-sized financial institutions aren't behind on AI because they're ignoring it. They're behind because they're approaching it the wrong way. Here are the five most common mistakes we've seen at The GCC.
Nelson Lee
Software engineer at Shopify who has built AI systems, workflows, and automations for millions of merchants. Previously an 8VC Fellow in San Francisco. Holds a degree in Computer Engineering from the University of Toronto with a minor in Artificial Intelligence.
I’ve worked with executive teams across credit unions, community banks, and regional lending organizations as they navigate their first serious AI implementations. The pattern of mistakes is remarkably consistent, not because these are unsophisticated organizations, but because the way AI gets framed as a technology decision almost guarantees the wrong starting point.
Here’s what I see go wrong, and what getting AI right actually requires.
Mistake One: Starting With Tools Before Strategy
The most common entry point is a vendor demo. Someone sees what a tool can do, gets excited, and the organization starts evaluating AI products before it has answered the foundational question: what problem are we actually trying to solve?
Tools are easy to acquire. The hard work is defining which workflows are genuinely AI-ready, which staff groups will be most affected, and what success looks like six months from now. Without that clarity, organizations end up with a scattered set of experiments that don't compound into anything: each team using something different, no shared standards, and no visibility into what's working.
Strategy first means starting with the business problem and working backward to the AI tool. Not the other way around.
Mistake Two: Leaving Compliance Out of the Early Conversation
Compliance teams are often brought in after an AI tool has already been selected, asked to sign off rather than to shape the decision. That's backwards. In a regulated financial institution, compliance isn't a gate at the end of the process. It's a voice that needs to be at the table when you're deciding what data the tool can touch, which outputs require human review, and what your audit trail looks like.
Bringing compliance in late creates two problems. First, you lose their institutional knowledge of where the real regulatory risk lives: which workflows are sensitive, which vendor relationships require scrutiny, and which outputs could create fair lending or privacy exposure. Second, you create a credibility problem internally. When compliance has to push back after the fact, it reads as obstruction rather than governance. Building them in early means their input shapes the AI design rather than constrains it.
Mistake Three: Underestimating Change Management
AI implementation is a people project that happens to involve technology. The majority of implementations that stall do so not because the technology failed, but because staff adoption never happened.
Front-line employees are often the most skeptical, not because they’re resistant to new tools, but because they’ve seen technology rollouts before. New system, mandatory training, two months of disruption, and then the tool gets quietly abandoned. The reasonable response is to wait and see whether this one is different.
What changes that dynamic isn't more training sessions. It's visible leadership engagement, early wins that staff can see and feel, and an honest conversation about what AI is there to do: not to replace people, but to take the repetitive, low-judgment work off their plates so they can focus on the work that genuinely requires them.
Credit unions in particular need to be deliberate about this. The member relationship is the competitive advantage. If staff perceive AI as a threat to their role, they won’t use it well, and the member experience will suffer for it.
Mistake Four: Choosing Vendors on Demos Rather Than Contracts
AI vendors have excellent demos. The demo environment is clean, the data is structured, the use case is purpose-built to show the product at its best. The question isn’t whether the demo works. It’s whether the tool works on your data, in your environment, with your staff, at your scale.
Before signing anything, get three things in writing: how the vendor handles your data (are they training their models on it?), what the implementation timeline realistically looks like, and what the support model is after go-live. A vendor who is vague on any of these in the contract stage will be vaguer once you’re locked in.
Also, ask for a reference from a peer institution that has been live for at least six weeks. Not a logo on a slide deck. An actual conversation with someone who can tell you what the first thirty days with the vendor looked like.
Mistake Five: No Policy Before Deployment
This one compounds all the others. When staff start using AI tools before a policy exists, you end up with an inconsistent, ungoverned patchwork. Some people are pasting customer data into public tools. Others are over-restricting and refusing to engage at all. Neither is a strategy.
A policy doesn’t need to be fifty pages. It needs to establish three things: how the organization governs data in AI contexts, how vendors get approved, and who is accountable for AI-assisted outputs. Get that foundation in place before you deploy anything broadly.
I put together a 14-page AI policy template that any organization can use as a starting point. It covers governance, data handling, procurement, acceptable use, and more.
Download the free AI policy template from The GCC →
What Getting It Right Actually Looks Like
The organizations I’ve seen navigate this well start with a clearly scoped pilot — one workflow, one team, a defined success metric — rather than an organization-wide rollout. They bring compliance and operations into the design conversation early. They invest in staff communication as seriously as they invest in technical implementation. And they build the AI policy infrastructure before the tools, not after.
AI is not a technology project. It’s an organizational change that happens to involve technology. The institutions treating it that way are the ones building something durable and compounding.
If any of this resonates, we work with institutions across North America on exactly this. Book a free 30-minute call to talk through where to start.
About The General Consulting Company
The General Consulting Company helps business owners and C-suite executives understand and implement AI. We offer practical training, policy frameworks, and custom tooling so your organization can move on AI with confidence.
Not sure where to start? Book a free consultation with The General Consulting Company and we'll walk through what makes sense for your business.
BOOK A CALL