
Transforming Your Acquisition

Episode 6



// Atul Verma

Chief Information Officer, Bank of Montreal

About the episode

Every digital transformation journey is unique. The goals may be similar, but the requirements, features, and choices made along the way lead to new systems unlike any other. That’s all well and good—until you need to merge two systems into one, on a short timetable.

Atul Verma of the Bank of Montreal recounts how he and his team found themselves handling the technical side of one of the largest acquisitions in Canadian banking history—with only a few months to get it done.

About the guests

Atul Verma

Chief Information Officer
Bank of Montreal


00:02 — Jamie Parker
Imagine you're the Chief Information Officer of a large company and everything is going swimmingly. You've recently modernized your systems. Your teams are making the most of the tools at their disposal. They're evaluating opportunities for future infrastructure updates. They've got seven months to plan for and execute one of the biggest bank mergers in history. Everything is gravy.

00:25 — Jamie Parker
Wait, what? Only seven months to pull off a massive merger? Digital transformation is great when it rolls out smoothly and everyone is familiar with the systems they work with. Hopefully the pace of change doesn't affect that familiarity. But when you have a merger or an acquisition, all of a sudden your teams are going to find themselves with another set of systems they'll need to integrate with their own, and that's not easy.

00:52 — Jamie Parker
Add a strict timeline and that difficult integration turns into a real doozy of a challenge. Atul Verma, Chief Information Officer at Bank of Montreal, shares how merging two gigantic systems on a deadline requires both meticulous planning and quick reaction times. The success of the monumental plan relied on both, as did the numerous customers who trust these banks with their money.

01:26 — Jamie Parker
In February of 2023, the Bank of Montreal announced it had completed its acquisition of the Bank of the West. Legally, the two entities could now become one. Practically, they had until September to get it done. As digital transformation projects go, this one was tough.

01:45 — Atul Verma
Just to give you a sense of the scale: this was the largest bank acquisition in Canadian history. And both banks are 200-year-old banks, actually, right? So this was a really big acquisition. From that perspective, it was really unique and had its own challenges. But we have done some large programs and transformations before this as well.

02:04 — Jamie Parker
The teams at Bank of Montreal completed other digital transformation projects before this one. That experience served them well, but that doesn't mean it would be easy. As complex as some of these projects are, getting them done on a tight timeline increases the potential for something to go wrong. And when you're dealing with millions of financial accounts, you really need to get it right. And for a bank acquisition of that size, Atul says there are two main things that need to be achieved.

02:35 — Atul Verma
The first one is to make sure you migrate all the customer data so they have access to the same information when they come over on the other side. And the second thing is you make sure there's no errors in that process. Nobody's money goes missing, right?
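That second requirement, making sure no money goes missing, is essentially a reconciliation check between the source and target systems. Below is a toy sketch of the idea, not the banks' actual tooling; the account records and the `reconcile` helper are invented for illustration:

```python
from decimal import Decimal

def reconcile(source_accounts, target_accounts):
    """Compare per-account balances before and after a migration.

    Returns a sorted list of account IDs whose balances differ, or that
    are missing on either side. An empty list means every dollar is
    accounted for.
    """
    mismatches = []
    for acct_id in set(source_accounts) | set(target_accounts):
        if source_accounts.get(acct_id) != target_accounts.get(acct_id):
            mismatches.append(acct_id)
    return sorted(mismatches)

# Toy data: one account failed to carry its balance over.
source = {"A-100": Decimal("250.00"), "A-101": Decimal("13.37")}
target = {"A-100": Decimal("250.00"), "A-101": Decimal("0.00")}

print(reconcile(source, target))  # ['A-101']
```

In practice a check like this would run per product and per system after each mock conversion, so an empty mismatch list becomes the go/no-go signal.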

02:46 — Jamie Parker
Merge the systems, move all the data to one system, and make absolutely sure that all the money is accounted for. Pretty straightforward, right? How much data are we talking about, anyway?

02:59 — Atul Verma
So the amount of data that was involved across all different products and services was petabytes of data. Actually, it was a very large data set that we had to map over and move.

03:10 — Jamie Parker
One petabyte is 1,000 terabytes, and there were a few of those to move without mistakes. That's a lot of ones and zeros to keep track of. With enough planning and practice, the teams at Bank of Montreal and Bank of the West could get it done. They'd completed difficult technological transitions before. The acquisition had first been announced in December of 2021, but the two banks could not get started until final regulatory approvals came through.

03:39 — Atul Verma
What made it a little bit more complex, other than just the data complexity, was the fact that it took us about 13 to 14 months to get regulatory approval for this acquisition. And that meant that... Yeah, that meant that we did not have full access to the data before that approval came through. We were still operating as two separate companies, so we could not really go in and look at the data. So we had to work with some mock data to complete our mapping, both product and account mapping. And we had about seven months after the approval came to actually execute on that strategy that we put together. So it was a pretty compressed timeline.

04:18 — Jamie Parker
They had a little over a year to plan, but without looking at the actual data, they could only do so much. Finally, in February of 2023, they got that approval and could start the work in earnest. The next step was to identify the best time to carry out the migration.

04:35 — Atul Verma
Given the size and the complexity of this, we were looking for a three-day weekend to complete the conversion.

04:41 — Jamie Parker
When put into action, this kind of project needs the bank's systems to go offline: no changes to accounts while the migration is in progress. The banks had to stay open for business before the migration and open for business as usual after the migration. People need to be able to access their funds, so regular weekdays were out. And it's a massive migration, so going offline for a two-day weekend wouldn't be enough time to get it all done. So they looked at all the three-day weekends available and picked one.

05:12 — Atul Verma
And you only have so many three-day weekends in the year, right? So we landed on Labor Day of last year as the window in which we would complete this conversion. So that itself is pretty complex, right? Because if we are moving the data across multiple bank systems, it's not just one set of data. So if you think about the way the data is structured on the acquired bank to the acquiring bank, that mapping is pretty complex. So that itself posed a lot of challenge, not just the size of the data, how you move that data in time, but also the sequencing of that data move to complete it in that three-day weekend.

05:49 — Jamie Parker
They weren't just lifting data from server A and dropping it into server B. They also had to account for how the data were recorded, how to reformat them, and how to work out the movements, the pivots, and the order. All of that needed to be accomplished. Atul shared a few examples of the kinds of data and systems they had to integrate.

06:07 — Atul Verma
Simple example I'll give is your online banking, mobile ID and the password, right? You had a set of passwords on the other side. When that came over, we wanted to make sure that they can use the same ID and password on the BMO side as well. And that itself posed a lot of challenge around how you encrypt that and how you make it available.
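The episode doesn't say how BMO solved the credential migration, but one common pattern for carrying logins across systems is to import the legacy password hashes and transparently rehash under the new scheme on each customer's first successful login. A hypothetical sketch, with both hashing schemes invented for illustration:

```python
import binascii
import hashlib
import os

def old_hash(password, salt):
    # Stand-in for the acquired bank's legacy scheme (invented here).
    return hashlib.sha256(salt + password.encode()).hexdigest()

def new_hash(password, salt):
    # Stand-in for the acquiring bank's scheme: PBKDF2 with a higher work factor.
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return binascii.hexlify(dk).decode()

def login_and_migrate(store, user, password):
    """Verify against whichever scheme the stored record uses; if it is
    still the legacy scheme, rehash under the new one on success."""
    rec = store[user]
    if rec["scheme"] == "legacy":
        if old_hash(password, rec["salt"]) != rec["hash"]:
            return False
        rec.update(scheme="pbkdf2", hash=new_hash(password, rec["salt"]))
        return True
    return new_hash(password, rec["salt"]) == rec["hash"]

salt = os.urandom(16)
store = {"jane": {"scheme": "legacy", "salt": salt, "hash": old_hash("hunter2", salt)}}
print(login_and_migrate(store, "jane", "hunter2"))  # True
print(store["jane"]["scheme"])  # pbkdf2
```

The appeal of this approach is that plaintext passwords never need to move between the banks; only hashes do, and customers keep the same ID and password.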

06:27 — Jamie Parker
We're not just talking about financial statements and account histories, which are complex enough on their own. This was a top-down integration of two complex and different IT systems that millions of customers used and relied on. Atul and the bank's teams had their work laid out before them. All that was left to do was to get it done. They started by doing as much planning as they could before having access to the data itself. They had to answer lots of questions. How are they going to prepare? How many dry runs would they have? How much time between runs? And how would they distribute all the work they were waiting to get started on?

07:09 — Atul Verma
So before our legal approval, we were able to do some of what I would consider a high-level mapping and planning exercise, right? We were not able to look at the data itself, but a lot of planning went into that phase. Once we had access to the data, that's when it became real. And data of this magnitude has lots of interfaces into it that you have to manage as you port over. We went through four mock conversions in those seven months, and these are conversions which we do to replicate what will happen on the actual conversion day.

07:44 — Jamie Parker
They decided on four mock conversions in the span of seven months. That only gave them a few weeks between each test run. Now that they had access to the data, they had to put the pedal to the metal to make their deadlines. First up was to determine exactly what they were working with. Although both companies are banks, there were significant differences in how they set up their systems. They had different priorities, different features and products, and even defined customers in different ways.

08:12 — Atul Verma
All that translates into differences in the data sets. Even if some of our systems were the same, the underlying data structures and the schemas were very different. So we went through a very extensive exercise to map that, to say, okay, database A, or even at the field level, field A on this side means field B on that side. So that mapping took a long time, and the validation of that mapping through the mocks is where we initially found errors and had to go back and fix them, right?

08:42 — Jamie Parker
They had to comb through all the databases and all the applications and figure out what was in common and what was different. And then they had to make notes about it all so they'd know how to translate the information from one format to the other. That's the mapping Atul is talking about. And given the scale of the merger, getting such a complicated process 100% correct on the first try would've been miraculous. Getting all those petabytes of data mapped and correctly translated was their top priority.
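The mapping Atul describes can be pictured as a translation table plus a conversion pass over each record. This is a minimal, hypothetical sketch; the field names and transforms are invented, not the banks' real schemas:

```python
# Maps source field -> (target field, transform). The field names and
# transforms are hypothetical examples, not either bank's real schema.
FIELD_MAP = {
    "cust_nm":   ("customer_name", str.strip),
    "acct_no":   ("account_id",    lambda v: f"BMO-{v}"),
    "bal_cents": ("balance",       lambda v: v / 100),
}

def convert_record(source_record):
    """Translate one source-system record into the target schema."""
    target = {}
    for src_field, (dst_field, transform) in FIELD_MAP.items():
        target[dst_field] = transform(source_record[src_field])
    return target

record = {"cust_nm": "  Jane Doe ", "acct_no": "10042", "bal_cents": 525075}
print(convert_record(record))
# {'customer_name': 'Jane Doe', 'account_id': 'BMO-10042', 'balance': 5250.75}
```

At petabyte scale the real exercise involves thousands of fields across many databases, which is why validating the map through repeated mock conversions mattered so much.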

09:12 — Atul Verma
Our first mock was a technical mock only, so we did not do what we call end-to-end. We just wanted to make sure that the data could come over in time and could then be converted, so other conversion routines could run on time.

09:26 — Jamie Parker
They got through their first test run and immediately tuned their processes to address the issues they ran into. But that first test run was limited and they only had three left. They had to increase the scope of the next run to stay on schedule.

09:40 — Atul Verma
And from there on we added all the other intricacies about the business processes and end-to-end aspects of the data, right? So if you start from, let's say your checking account, you want to make sure that data flows all the way to your financials and GL and your regulatory reporting, right? So there's a whole spectrum of the same data flowing through the process. That was very incremental. We started with some standalone conversions in mock one and then moved on to mock four where it was truly a replica of what the conversion weekend was supposed to be.

10:11 — Jamie Parker
Each test run revealed the errors they needed to fix before the next one, but also increased the number of things they needed to test, until they conducted a full rehearsal of the entire process. And there were issues to deal with in a short period of time. They couldn't do it all quickly by themselves.

10:31 — Atul Verma
We had about 40 different vendor partners that were also working with us to facilitate this data migration. A lot of this was in-house, but a lot of this was also with the vendor partners. All that orchestration was not perfect from day one or mock one. It took some time for us to really work our conversion day playbook and make sure it's absolutely correct because you only have 83 hours.

10:56 — Jamie Parker
83 hours to carry out the migration, and they had very little time to practice the process. We know modern IT systems are complex. There are a lot of components that interact with each other to make up an application and the infrastructure that it runs on. Hiring the talent to build and manage all those components is difficult, as is coordinating all those teams to work in concert. Add in outside teams to the mix, and that coordination can get really tricky, especially when there are 40 vendors to work with. But they made all the difference because they had the expertise to keep the project moving. Atul and his teams couldn't afford to get stuck. They only had a few weeks between each trial run to fix all the issues they encountered.

11:45 — Atul Verma
That is not long at all, especially if we talk about those mock events. Each one of them was a conversion. So each one of them was about six to eight weeks apart. Initially we had a little bit longer, I think eight weeks at the beginning, and then they got more compressed as we got closer. So that was not an easy feat, but I think the team was a very well-oiled machine here, with collaboration between the business and the technology teams, and across all the different parts of the technology team.

12:11 — Jamie Parker
With their previous experience modernizing systems, and with the help of outside vendors, the banks were able to plan for and carry out a rigorous testing schedule to map and transfer massive amounts of data. But that data transfer was only part of the work they needed to do. When we come back, we'll hear about how they prepared for the massive influx of new users. It's one thing to transfer petabytes of data in the span of 83 hours. It's another to make sure that everyone can actually access their accounts and check that their information is right. Bank of Montreal had some work to do to make sure their infrastructure could meet the increased demands.

12:57 — Atul Verma
We need to make sure that the infrastructure is scaled up to handle the new volumes. We actually scaled our infrastructure to six times the volume. It was not necessary, but we kept some buffer in there to make sure what we call the day one experience for our new customers and new clients coming into this acquisition was absolutely seamless.

13:19 — Jamie Parker
They increased their capacity to accommodate six times the number of people who use their systems. That's a lot. Bank of Montreal was already a large financial institution used to serving a great many customers every day. They were acquiring an admittedly also large bank, but was the new combined customer base six times larger than what they were used to?

13:42 — Atul Verma
Rounding up, we can say we doubled our volumes, right? So we doubled our number of customers. We doubled our number of accounts, number of transactions, so that was the baseline. So we had to make sure that two times works, right? But if you look at some of these customer-facing applications, in banking for example, this was a big conversion. Customers are anxious. So normally you will have, let's say, X amount of logins in a day. After conversion, we thought everybody will log in, just to make sure their money is safe on the other side. So that caused us to create not just 2X (which would've been normal) but a 6X buffer to handle those peak volumes on day one or week one.
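The arithmetic behind that buffer can be sketched in a few lines. The numbers below are made up; only the shape of the calculation, a combined baseline times a surge factor, comes from Atul's description:

```python
def plan_capacity(peak_logins_bank_a, peak_logins_bank_b, surge_factor=3.0):
    """Estimate day-one login capacity for a merged system.

    The combined historical peak gives the 2x baseline Atul mentions;
    the surge factor models anxious customers all logging in at once.
    """
    baseline = peak_logins_bank_a + peak_logins_bank_b
    return baseline * surge_factor

# Made-up peaks: the combined baseline is roughly 2x either bank alone,
# and the surge factor pushes the target to ~6x one bank's normal load.
print(plan_capacity(100_000, 100_000, surge_factor=3.0))  # 600000.0
```

The safety margin matters because the cost of over-provisioning for a few weeks is small compared to the cost of new customers failing to log in on day one.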

14:23 — Jamie Parker
The banks don't typically have all of their customers logging in to check their accounts at the same time. That's pretty typical for any company with an application. But this situation was different. As with many big launches, people wanted to try out the new system, to see if their funds had transferred correctly and to figure out how the new system worked. That spike of concurrent users would eventually taper off, much like with any hip new app. They looked at the historical user data from both banks and peak user counts to estimate how many people might log on at once after the rollout. And then they added extra buffers just to be safe. Increasing the capacity by six times wasn't an easy task either. They added extra hardware, but that wasn't the bottleneck.

15:11 — Atul Verma
The hardware side of that was fairly straightforward. What was complex was to test the performance of the scaled-up infrastructure to that level. And what I mean by that is actually to be in a testing mode and still be able to verify that it could actually run at 6X.

15:27 — Jamie Parker
Throwing more servers at the problem is necessary, but it's not enough to bring the system's capacity to six times the scale. With that kind of increase, there are other performance issues to iron out and they had to make sure it could sustain that level of load for an extended period of time.

15:43 — Atul Verma
That performance testing, and then being able to achieve that peak in our test environments and then also to sustain that peak for a period of time because we had no idea how long that peak will last actually, right? It could last an hour. It could last a week actually, right? So there's huge variability in how that could go.

16:02 — Jamie Parker
Maybe people will try logging into the system for a day to try things out and then move on. Maybe they'll log in multiple times during the week because they want to triple or quadruple check that everything was fine. Atul and his teams couldn't know ahead of time how customers would behave, so they had to plan for the biggest challenge. Luckily, the infrastructure side of this project wasn't restricted by regulatory approval.

16:26 — Atul Verma
So a lot of the infrastructure scaling happened before. You really can't do that in that 83-hour weekend, right? So a lot of that infrastructure work happened over a period of time. We actually started that even before our legal approval, because that was on our side. We could just do it independently, from the hardware perspective.

16:41 — Jamie Parker
Breaking up the different requirements of the project into what they could do, and couldn't do yet, helped them get as much done as they could before facing the time crunch. Checking off those parts of the project allowed them to focus on fewer problems at once.

16:57 — Atul Verma
Because of the complexity and the nature of the ecosystem with all our vendor partners, we had hiccups. We had hiccups all the way leading up to the conversion week, actually. One good thing we did in the process was we staggered some of our conversions. So we had certain products that we converted a couple of weeks before the actual big conversion day. And that was done because of some dependencies with the vendor partners, but also to decouple or de-risk the conversion over the long weekend as much as we could. But there were issues. There were issues around the accuracy of the data.

17:28 — Jamie Parker
The mock runs helped the team find those errors and correct them before doing the actual transfer. They also helped them get a handle on the timing of the whole process and make sure they could actually complete it within the 83-hour window.

17:41 — Atul Verma
That was a big deal, and we had to perfect that process over time. It was not perfect from day one. So we had issues like that. And during the mock events themselves, sometimes something runs longer than you think it should run. A conversion process you think should run in four hours, and you stack it like that in your 83-hour plan, takes 10 hours, right? So then everything else backs up. And really, you can't stretch the 83 hours. That's really hard, because the bank has to open.
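An 83-hour playbook like the one Atul describes can be sanity-checked by summing step estimates against the fixed window. A toy sketch, with invented step names and durations:

```python
CONVERSION_WINDOW_HOURS = 83

def check_schedule(steps, buffer_hours=8):
    """Sum estimated step durations and verify they fit in the window,
    leaving buffer for steps that run long (as they did in the mocks).

    Returns (fits, total_hours).
    """
    total = sum(hours for _, hours in steps)
    return total + buffer_hours <= CONVERSION_WINDOW_HOURS, total

# Hypothetical playbook steps and estimates (not BMO's real plan).
playbook = [
    ("extract source data", 12),
    ("transform and map", 24),
    ("load into target", 20),
    ("run validation and reconciliation", 14),
]

fits, total = check_schedule(playbook)
print(fits, total)  # True 70
```

The mock conversions served exactly this purpose: replacing the estimated durations with measured ones, so a four-hour step that actually takes ten hours is discovered in rehearsal, not on the live weekend.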

18:11 — Jamie Parker
After the first few trials, it looked like they'd be able to get it done in time if everything went according to plan. But we all know things don't always go according to plan. So Bank of Montreal chose to include some chaos engineering in the test runs.

18:26 — Atul Verma
So we simulated certain scenarios during our mocks and in our infrastructure scaling to see how robust things are, right? Because we had to repeat that again, and it had to be very robust, both from a process perspective and also from a technology infrastructure perspective.
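Simulating failure scenarios like the ones Atul mentions can be as simple as injecting faults into a rehearsal step and confirming the playbook recovers. A toy sketch; the retry policy and failure model are invented for illustration:

```python
import random

def run_step_with_retries(step, max_retries=3, failure_rate=0.5, rng=None):
    """Run one conversion step, retrying on injected failures.

    `step` is a callable; `failure_rate` is the probability that the
    injected fault fires on a given attempt. Returns the number of
    attempts used, or raises if every attempt fails.
    """
    rng = rng or random.Random()
    for attempt in range(1, max_retries + 1):
        if rng.random() >= failure_rate:  # injected fault did not fire
            step()
            return attempt
    raise RuntimeError("step failed after retries; escalate to the on-site team")

# Deterministic demo: a seeded RNG makes the injected failures repeatable,
# which is what lets a chaos scenario be rerun across mock conversions.
rng = random.Random(42)
attempts = run_step_with_retries(lambda: None, failure_rate=0.5, rng=rng)
print(attempts)
```

The point of the exercise is less the retry loop itself than what it flushes out: which steps lack a recovery path, and how much of the 83-hour buffer a recovery consumes.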

18:43 — Jamie Parker
They developed their process to execute a transfer in a very short window of time. Then they threw wrenches into the works and developed contingencies to beef up that process. Because once they got started, they had to see it through regardless of what the world would throw at them. They felt prepared, but they also felt some nervousness, too.

19:03 — Atul Verma
You are always nervous for something like this. We were prepared for that. We had buffer... Like I mentioned, we had buffers in those 83 hours. If something takes longer, we can still catch up. We had everybody on site. So we were very quick in triaging issues and very, very quick at actually fixing issues as they happened during that 83-hour weekend. We had all our vendor partners on site. I think that was really helpful. Issues did happen, but the team's ability to gather around the issue and then triage it and then fix it was pretty amazing, actually. So that made sure that we were able to... We actually finished it before the 83 hours were up.

19:42 — Jamie Parker
It took just under two years from announcing the acquisition in December of 2021 to launching after Labor Day weekend in September 2023. Most of the time was spent on infrastructure upgrades and planning with only about seven months of hands-on practice with the data. They practiced all they could, but it was still a massive project on a tight turnaround. And in the end...

20:06 — Atul Verma
We had almost no post-conversion customer issues or errors of that nature. And I think that happened because of all the planning that went into this, before our regulatory approval and into our mocks and dress rehearsal. That was absolutely essential and critical, and why the conversion was so smooth.

20:28 — Jamie Parker
The Bank of Montreal had undergone several digital transformation projects, and when the acquisition of the Bank of the West was announced, they had to come up with a plan to execute the largest transformation project they'd ever attempted. They designed their plan for over a year and expanded their infrastructure. And when they finally got the thumbs up from regulatory agencies, they put their plan into action over the course of seven months. That included testing, debugging, and testing again with four mock trials, all to be ready to do the real thing over 2023's Labor Day weekend. Thanks to their previous experience, meticulous planning, and help from their vendors, they were able to pull off the largest data migration in Canadian banking history, with time to spare for business as usual to resume that Tuesday morning.

21:24 — Jamie Parker
That's it for Season 3 of Code Comments. We hope you've enjoyed our journey into the weeds of digital transformation. Season 4 is coming soon. Stay tuned for more of our guests' riveting stories delivered with effortless eloquence. You can learn more and find our guides to digital transformation on our website.

21:47 — Jamie Parker
Many thanks to Atul Verma for being our guest. Thank you for joining us.

21:53 — Jamie Parker
This episode was produced by Johan Philippine, Kim Huang, Caroline Creaghead, and Brent Simoneaux. Our audio engineer is Christian Prohom. The audio team includes Leigh Day, Stephanie Wonderlick, Mike Esser, Nick Burns, Aaron Williamson, Karen King, Jared Oates, Rachel Ertel, Carrie De Silva, Mira Cyril, Ocean Matthews, Paige Stroud, Alex Traboulsi, Boo Boo Howse, and Victoria Lawton.

22:25 — Jamie Parker
I'm Jamie Parker, and this has been Code Comments, an original podcast from Red Hat.

Chart your journey

Digital transformation is a big undertaking. Everyone’s path is different—but a lot of the obstacles are the same. Find out how to avoid the pitfalls and overcome the barriers that may otherwise slow you down.

quotation mark

It's not just one set of data. So if you think about the way the data is structured on the acquired bank to the acquiring bank, that mapping is pretty complex. So that itself posed a lot of challenge, not just the size of the data, how you move that data in time, but also the sequencing of that data movement to completing that over a three-day weekend.

Atul Verma

More like this

Code Comments

You Can’t Automate The Difficult Decisions

The tensions between security and operations and developer teams are legendary. DevSecOps is trying to change that, and automation is a big part of making it work.

Code Comments

Scaling For Complexity With Container Adoption

Spinning up a Kubernetes cluster is just the beginning. How do companies get value from container adoption?

Code Comments

Challenges In Solutions Engineering

Tech changes constantly. What does that mean for companies adopting new technology?