Some years ago I was working as chief architect for a large organization. We were looking at implementing an enterprise-scale package as part of a business transformation. I knew what capabilities I needed, when I needed them, and what flexibility I had over timescales. I had a business roadmap covering business changes and benefits realized, against which I had mapped the required technology capabilities. We had chosen the package to act as our “technology backbone”.
We had a number of technology capabilities that could be implemented as customisations to the package, provided as add-on modules, or built bespoke. My aims were to keep the IT and commercial landscape as simple as possible, keep license and delivery costs down, and minimise timelines. This drove me to a default decision of using the add-on modules.
We would be outsourcing the implementation to a tier 1 consultancy. As part of our implementation partner selection process, we asked them to provide feedback on the viability of this approach and to highlight any risks with their mitigations. There were several modules that our five potential implementation partners had told us were “high risk”. It was interesting that different modules were highlighted by different potential partners. I have summarized their responses:
| Module | Claimed risk | Recommended mitigation |
|---|---|---|
| Reference data management | It’s new; it doesn’t work | Buy an alternative product |
| High volume transaction data consolidation and load | It’s new | Bespoke it |
| Demand forecasting | It’s new; it won’t perform | Buy an alternative product |
| Reporting | It won’t perform | Buy an alternative product |
| Integration | It won’t perform | Buy an alternative product |
I was put under pressure by the business to accept these recommendations because these vendors were the “experts”.
I wasn’t very happy with these responses. I wanted to understand exactly what didn’t work, what was unproven in the market, whether there were technical or business workarounds, what the impact on the IT and business solution would be if we implemented these modules and things went wrong, and how we should manage the delivery risks. We asked each vendor to give more detail to substantiate their claims. We also talked to the package supplier, to existing users of the package and to some analysts.
I was “allowed” to delve further into these recommendations when I explained the potential cost impact of these “mitigations”, which would have increased the overall business program costs by over 10%. In addition, the extra build and integration work would have lengthened the timeline and increased delivery risks. I could quantify the impact of using their solutions, but they had challenged my baseline approach without quantifying the impact of the risks they had identified.
Now, let us look at each of these product “risks” in turn…
Reference data management – it turned out that some implementation vendors were basing their assessment on an old version of the module that had acknowledged scalability issues. The new version was a well established product that had been acquired by the package vendor. The real issue was that the implementation vendors did not have any implementation experience. The new mitigation was to require them to have training and support from the package supplier, to limit the implementation to core functionality (i.e. not to try to do anything “clever”), and to get their design and implementation signed off by the package supplier.
High volume transaction data consolidation and load – this module was not new, it had an established track record of success. The implementation vendors had either not implemented it or had very few staff available to implement it. Again we told the implementation vendors that they would be required to get training and support from the package supplier and to get their design and implementation signed off by the package supplier.
Demand forecasting – this module was another recent acquisition for the package supplier so again there was little experience from the implementation vendors. However, this module did have performance issues. We worked with the package supplier and our infrastructure suppliers to develop an architecture that would deliver our non-functional requirements. We also put specific tasks into our implementation plan to test performance early enough in the roll-out schedule to enable us to change course if necessary. As a fallback, we could retain our legacy solution longer than originally planned.
Reporting – we were referred to a number of “failed” implementations of this module. I spoke to the managers at these sites to determine the root cause of the “failures”. In all cases it was poor logical design. The solutions would have failed using any technology. It was simply poor ETL specification, poor data warehouse design, poor data mart design, and poor implementation practice. There were no issues that could be attributed to the package modules. Perhaps the implementation vendors didn’t want to admit that their design work had failed. Again we told the implementation vendors that they would be required to get their design and implementation signed off by the package supplier.
Integration – the package had its own integration solution that linked all its modules together. When used to link in other applications, however, it was relatively immature compared with specialized products. We judged this a critical threat to the program, so we decided on a two-pronged approach.
- We would plan to use the package vendor’s integration solution, but we would put additional tasks into our implementation plan to test performance early enough in the roll-out schedule to enable us to change course if necessary.
- We would also work on an alternative solution from another vendor. In this case, we agreed commercials early to prevent us being held to ransom later in the program.
We went ahead with my original IT landscape, at much lower cost than our vendors had recommended, with a set of contingencies and risk-management actions that secured successful delivery of the business benefits.
Implementation staff from the implementation vendors who have just rolled off a two year enterprise scale implementation program have product knowledge that is two years or more old. And they are not very quick to talk about their failed implementations.
Product vendors want to get their solutions out into the market. They also recognise that successful implementations are critical to their success. This gives you leverage over them: they will often provide enhanced support if you are going to implement any of their new features.
A lesson of my previous post is that when you work with analysts you need to understand their methods, to judge whether they really have a solid grasp of the subject you have engaged them to advise on. Their information about the success or failure of solutions will be based on projects started two or more years ago, using previous product versions.
So who were the experts? We were! It was our business, our change program and our IT landscape. We had to invest in learning about the product releases that we would use. We had to be incisive in our questioning of the “experts” and to attempt to understand their agendas and biases. We had to be brave enough to make our own decisions rather than take the easy way out and simply accept the consultants’ and analysts’ advice.