
Enterprise Architecture Blog

Discussion focusing on how to manage an enterprise architecture function successfully.

Friday
Sep 4, 2009

Enterprise Architecture Pitfalls

Brenda Michelson has put together a good summary of the Tweets that followed Gartner's recent listing of EA pitfalls.

The EA community seems to have come to the conclusion that Gartner has stated the obvious.  The Tweets have provided “real” EA pitfalls based on the experience of practitioners.

If you read the blogs of the community of practitioners then you will find not only the pitfalls but the solutions to them.  These solutions have been discovered through the sweat and tears of those who have been there, made the mistakes, and got the scars.

The Twitterverse and Blogosphere have enfranchised the "doers" who previously didn't have a voice.  But more importantly, the current generation of decision makers and key influencers understand the value of the information that is out there and freely available.

If you read the blogs, you won't get a neatly packaged "answer" to your problems.  But you will, after some time and effort, get insight.  You will have to work out for yourself what is relevant and what is not. You will have to work a little harder to get to a plan to take your organization further.  But you will learn, and you may just have a plan that can deliver meaningful results.

The criticism of Gartner's identification of EA pitfalls follows very close behind similar criticism, from the same community of Tweeters, of Gartner's "Emergent Architecture".

The analysts need to demonstrate that they are talking to the right people, that they are gathering the right information and finding genuine insights based on the experience of practitioners.  If they can deliver better results more quickly and cheaply than doing it yourself, then they will have a valuable offering.  The recent Twitter discussions suggest that they are behind the curve.

The analysts must engage and embrace those who do and know.  If they don't, then I suspect any deficiencies in their offerings will continue to be highlighted.

Sunday
Aug 16, 2009

Enterprise Solution Architecture Decision Making

Some years ago I was working as chief architect for a large organization.  We were looking at implementing an enterprise-scale package as part of a business transformation.  I knew what capabilities I needed, I knew when I needed them and I knew what flexibility I had over timescales.  I had a business roadmap covering business changes and benefits realization, and I had mapped the required technology capabilities against it.  We had chosen the package to act as our "technology backbone".

We had a number of technology capabilities that could be implemented as customisations to the package, as add-on modules, or as bespoke builds.  My aims were to keep the IT and commercial landscape as simple as possible, keep license and delivery costs down, and minimise timelines.  This would drive me to a default decision of using the add-on modules.

We would be outsourcing the implementation to a tier 1 consultancy. As part of our implementation partner selection process, we asked them to provide feedback on the viability of this approach and to highlight any risks along with their mitigations.  There were several modules that our five potential implementation partners told us were "high risk".  Interestingly, different partners highlighted different modules.  I have summarized their responses:

  • Reference data management – Risk: it's new; it doesn't work. Mitigation: buy an alternative product.
  • High volume transaction data consolidation and load – Risk: it's new. Mitigation: bespoke it.
  • Demand forecasting – Risk: it's new; it won't perform. Mitigation: buy an alternative product.
  • Reporting – Risk: it won't perform. Mitigation: buy an alternative product.
  • Integration – Risk: it won't perform. Mitigation: buy an alternative product.

I was put under pressure by the business to accept these recommendations because these vendors were the “experts”.

I wasn't very happy with these responses.  I wanted to understand exactly what didn't work, what was unproven in the market, whether there were technical or business workarounds, what the impact on the IT and business solution would be if we implemented these modules and things went wrong, and how we should manage the delivery risks.  We asked each vendor to give more detail to substantiate their claims.  We also talked to the package supplier, to existing users of the package and to some analysts.

I was "allowed" to delve into these recommendations further when I explained the potential cost impact of these "mitigations", which would have increased the overall business program costs by over 10%.  In addition, the extra build and integration work would have lengthened the timeline and increased delivery risk.  I could quantify the impact of using their solutions, but they had challenged my baseline approach without quantifying the impact of the risks they had identified.
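
To make that point concrete, here is a minimal sketch of the comparison I was asking the vendors to make: put a number on both options, not just on the baseline.  All of the probabilities and costs below are hypothetical, invented purely for illustration; the only figure taken from the story above is the "over 10% of programme cost" for the proposed mitigations.

```python
# Minimal sketch of a risk-adjusted cost comparison.
# All figures are hypothetical and for illustration only.

def risk_adjusted_cost(base_cost, risk_probability, risk_impact):
    """Expected cost of an option: its base cost plus the
    probability-weighted cost of the risk materialising."""
    return base_cost + risk_probability * risk_impact

programme_cost = 50_000_000  # assumed overall programme cost

# Option A: implement the add-on module and accept the delivery risk.
# Assume a 30% chance of rework costing 2m if the module disappoints.
option_a = risk_adjusted_cost(base_cost=0,
                              risk_probability=0.30,
                              risk_impact=2_000_000)

# Option B: the vendors' mitigation - buy and integrate alternative
# products, costed at over 10% of the programme (the figure quoted
# above), with a smaller residual risk.
option_b = risk_adjusted_cost(base_cost=0.10 * programme_cost,
                              risk_probability=0.05,
                              risk_impact=2_000_000)

print(f"Add-on module (risk-adjusted):        {option_a:,.0f}")
print(f"Alternative products (risk-adjusted): {option_b:,.0f}")
```

The arithmetic itself is trivial; the point is that once both sides of the argument carry a number, the programme board can make an informed trade-off rather than simply deferring to the "experts".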

Now, let us look at each of these product “risks” in turn…

Reference data management – it turned out that some implementation vendors were basing their assessment on an old version of the module that had acknowledged scalability issues.  The new version was a well established product that had been acquired by the package vendor.  The real issue was that the implementation vendors did not have any implementation experience. The new mitigation was to require them to have training and support from the package supplier, to limit the implementation to core functionality (i.e. not to try to do anything “clever”), and to get their design and implementation signed off by the package supplier.

High volume transaction data consolidation and load – this module was not new, it had an established track record of success.  The implementation vendors had either not implemented it or had very few staff available to implement it.  Again we told the implementation vendors that they would be required to get training and support from the package supplier and to get their design and implementation signed off by the package supplier.

Demand forecasting – this module was another recent acquisition for the package supplier, so again there was little experience among the implementation vendors.  However, this module did have performance issues.  We worked with the package supplier and our infrastructure suppliers to develop an architecture that would deliver our non-functional requirements.  We also put some specific tasks into our implementation plan to test performance early enough in the roll out schedule to enable us to change course if necessary.  As a fallback, we could retain our legacy solution longer than originally planned.

Reporting – we were referred to a number of “failed” implementations of this module.  I spoke to the managers at these sites to determine the root cause of the “failures”.  In all cases it was poor logical design.  The solutions would have failed using any technology.  It was simply poor ETL specification, poor data warehouse design, poor data mart design, and poor implementation practice.  There were no issues that could be attributed to the package modules.  Perhaps the implementation vendors didn’t want to admit that their design work had failed.   Again we told the implementation vendors that they would be required to get their design and implementation signed off by the package supplier.

Integration – the package had its own integration solution that linked all its modules together.  Compared with specialised integration products, it was relatively immature when used to link in other applications.  We judged this to be a critical threat to the program.  We decided that our approach would be two-pronged.

  1. We would plan to use the package vendor’s integration solution but we would put some additional tasks into our implementation plan to test performance early enough in the roll out schedule to enable us to change course if necessary.
  2. We would also work on an alternative solution from another vendor. In this case, we agreed commercials early to prevent us being held to ransom later in the program.

We went ahead with my original IT landscape, at a much lower cost than our vendors had recommended, and with a set of contingencies and risk management actions that secured successful delivery of the business benefits.

Staff from implementation vendors who have just rolled off a two-year, enterprise-scale implementation program have product knowledge that is two or more years old.  And they are not very quick to talk about their failed implementations.

Product vendors want to get their solutions out into the market.  They also recognise that successful implementations are critical to their success.  If you are going to implement any new features, you have leverage to negotiate enhanced support from them.

A lesson from my previous post is that when you work with analysts you need to understand their methods, so you can judge whether they really have a solid grasp of the subject you have engaged them to advise on.  The information they gather about the success or failure of solutions will be based on projects started two or more years ago, using previous product versions.

So who were the experts?  We were! It was our business, our change program and our IT landscape.  We had to invest in learning about the product releases that we would use.  We had to be incisive in our questioning of the "experts" and try to understand their agendas and biases.  We had to be brave enough to make our own decisions rather than take the easy way out and simply accept the consultants' and analysts' advice.

Sunday
Aug 16, 2009

“Nothing new”

Friday
Aug 7, 2009

SOA - food for thought...

I think a couple of articles are worth a read if you are interested in making SOA a success. It is much harder to do it this way, but it might just create something of value...

The first is an article entitled The 7 dirty words of SOA by Jeff Schneider, in which he talks about the difference between "Business SOA" and "IT SOA".

The second by Boris Lublinsky states that Only 1 in 5 SOA Projects Actually Succeed.

 

Tuesday
Jun 9, 2009

Think globally, act locally…

My blog on what good looks like prompted a couple of blog posts that I would like to answer.  My answers are inline and take the pragmatic viewpoint of a practicing, executive-level solution architect…

The first blog was written by Leo de Sousa under the title What good looks like – follow up.  He asked the following questions:

  • How big a project would require this level of artefact creation? For small and possibly medium projects, the work to do the architecture may be more than delivering the project.
    • This is a good point and I should have made the context clearer.  I am not assuming that the artefacts exist as project artefacts but just that they exist.  They may be at enterprise, line of business, program or project level.  In the context of a project, I just want the information.  My opinion is that this information should be created at an enterprise or line of business level and updated by projects and support teams.
  • Is there a subset of these artefacts that would be sufficient for small and medium projects?
    • No!  A smaller project will have fewer decisions and fewer deliverables, and therefore less information will be produced.
  • How would the next solutions architect find and assess the artefacts created?  Need a searchable, secured repository - wiki?, blog?, SharePoint?, network file share?, knowledge base?
    • It doesn’t matter so long as architects, analysts, designers, developers, testers, etc can find what they need when they need it.  All of those approaches listed can work with the appropriate implementations and disciplines and they can all fail.
    • The key to this is not the repository; in most organizations it is the "business as usual" support team.  They are the long term custodians of the IT solution and its relationship with the business.  They keep the knowledge up to date and are responsible for handing that knowledge on to development teams when major changes are required.  Ideally this is well supported by sound processes and tools and not critically dependent on individuals.

He went on to make the points that the key factors for success are:

  • ensuring that there is time for solution architects and enterprise architects to work together to do peer reviews: 1) pre-project, 2) technical reviews in a project and 3) post-project
    • I absolutely agree and I find that this is one thing that can be very difficult to achieve.  Enterprise architects tend to be in short supply and have to ration their time.  Solution architects often do not take an enterprise view and fail to highlight critical enterprise architecture issues. A good triage process addresses the issue of rationing.
  • communication of agreed upon standards and principles is essential to build a common language
    • The standards and principles help address the second issue and educate solution architects on the issues that are of enterprise significance.
    • I would go a little further: we also need common goals and an agreed approach to making rational compromises to help solution architects make the right decisions and recommendations.
  • negotiating with functional managers to ensure time is allocated to every project for architecture
    • This point is very important.  It is easier where there is a recognition that the landscape will change significantly.  It is equally necessary where the project extends an existing capability. 
    • In any significant project there is the potential that a requirement will "break" the architecture (the break could be in the business, applications, data or infrastructure).  My approach to ensuring there is architectural effort has been to ensure that the project has enterprise and long term requirements captured to justify the "extra" effort.
  • regularly demonstrating value to the organization by taking an enterprise, long term view
    • Every project has an enterprise and long term context.  The key is to identify the stakeholders and requirements to express this.  Then demonstrating value becomes meeting stakeholder requirements.

The second blog was by Nick Malik and was entitled Why good doesn't happen.  He states that "there is a flaw in the logic" and the "advice is incomplete".  He examines my list of 10 artefacts (or 14 as he appears to prefer) from the perspectives of:

Viewpoint 1: at the beginning, looking forward, defining project requirements

Viewpoint 2: in the middle, looking back, trying to understand

He goes on to make the following points:

  • If I create an artefact “for the future” that does not mean that the people, in viewpoint 2, will use it.  He states there must be a design process that mandates that the artefact be used. 
    • The existence of a process today does not imply the existence of the same or any process in the future (the reverse applies too), so I am not sure that this point helps us much.
  • If there is not such a process then the artefact should not be produced because there is no business justification.
    • The business justification is simply based on hard experience which tells me from past and current projects that I need this information to take cost, time and risk out of making significant changes to an existing solution.  The project or program, through its governance process, is at liberty to de-scope these proposed deliverables.  But, in the interests of making a fully informed decision, it is passing on cost, time and risk to the next project.
  • In the context of the maintenance process, we should “draw the requirements for documentation from that development process… not from a wish list.”
    • To be clear, the selection of artefacts is mine based on my experience of making large scale changes to existing architectures.  It is not the “wish list” of an enterprise architect imposing his views on developers.
  • The artefacts identified form “some tiny part of a much larger ecosystem of information”.
    • I absolutely agree.  The management of this ecosystem is another subject.  My preference is a federated approach that puts responsibility close to the point of consumption but also ensures coordination where necessary.

Finally, he makes the following suggestions as to how I should complete my advice:

  • The artefacts "need to be findable, consistent, and AUTOMATICALLY linked together in a way that minimizes the 'archaeology expedition'"
    • This is a worthy aim.  In many organizations this is a distant dream – perhaps a fantasy.  If we are not on that road, I think we should do something today that makes things better: let us at least have some information.  And that is what I have tried to describe.  I believe that the issue of inconsistent and out of date information is quicker and cheaper to solve than the problem of no information.
  • “The data describes part of the architecture of the enterprise, and as such, needs to be maintained at the enterprise level, for the sake of engineering.”
    • It does not need to be maintained at enterprise level.  It needs to be maintained where it has value to be maintained.  It needs to be consistent across the enterprise where consistency has value.  We should only engineer to the level where the engineering adds value and no further.  I do not see the need for monolithic centralised approaches that require huge uninformed, unaccountable bureaucracies to operate.