A colleague recently attended a concert at the new 80,000-seat Croke Park football arena in Dublin, Ireland. As she learned, the sparkling state-of-the-art stadium was built in a residential area with little available parking, accessible mainly by public transportation. In fact, during the event, the neighborhood was closed for blocks in all directions, allowing pedestrian traffic only. Tens of thousands of concert-goers, arriving from all directions, faced a long walk through the residential neighborhood.
The center of the stadium — the seats and the “pitch” — is apparently spacious and comfortable. The sound equipment, stage and seating are all top of the line, and my colleague’s experience at the concert itself was fantastic. But once the concert ended, the crowds were herded back out through narrow residential streets. Most walked miles before finding a train, taxi or bus.
Clearly, planning for this new stadium fell short in certain key areas, all of which affected the concertgoers’ ultimate experience and had extraordinary collateral impact on the local community.
Croke Park is a perfect metaphor for something we often see in organizations: inadequate infrastructure technology planning. The latest, greatest mission-critical application is finally in production when something seemingly comes out of nowhere, significantly impacting the infrastructure technology environment and/or the budget.
Recently, a financial services company I know of implemented a new management application that enabled them to push data entry into the field. They followed the traditional methodology in developing this application: defining functional requirements, developing tools, provisioning the storage associated with those tools, and taking the application into production. However, they did not address the burden that the additional 150 remote users would place on the network during periods of peak load from other applications. The network simply could not support this broader user community. Just as the application was placed in production, an immediate fix was required, and the company was forced to invest in extra network bandwidth.
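A back-of-envelope capacity check during planning would have caught this. The sketch below is illustrative only; the per-user demand, link size, and existing peak utilization are hypothetical figures, not data from the company in the story.

```python
# Back-of-envelope check: can the existing WAN link absorb the new remote
# users at peak load? All figures are hypothetical, for illustration only.

NEW_USERS = 150               # remote users added by the new application
KBPS_PER_USER = 64            # assumed average demand per data-entry session
LINK_CAPACITY_MBPS = 45       # assumed WAN link capacity
PEAK_UTILIZATION = 0.80       # assumed existing peak load from other apps

# Headroom left on the link at peak, versus the new demand being added.
headroom_mbps = LINK_CAPACITY_MBPS * (1 - PEAK_UTILIZATION)
new_demand_mbps = NEW_USERS * KBPS_PER_USER / 1000

print(f"Headroom at peak: {headroom_mbps:.1f} Mbps")
print(f"New demand:       {new_demand_mbps:.1f} Mbps")
print("Link can absorb the new users" if new_demand_mbps <= headroom_mbps
      else "Extra bandwidth required")
```

With these assumed numbers, the new demand slightly exceeds the headroom, which is exactly the kind of finding that should surface before go-live rather than after.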
It’s important to note that this financial services company had an extremely sophisticated IT group. This just illustrates that today’s IT environments are so complex that even the best IT organizations can miss things when using a process designed for an earlier generation of infrastructure technology, especially if they haven’t experienced that particular problem in the past.
It happens, but it can be avoided.
Didn’t we already fix this?
This issue (we call it a lack of infrastructure readiness) is a problem many believed was solved 15 years ago. In fact, at a recent client conference, Forsythe polled nearly 40 of our key clients ― all CIOs, VPs of infrastructure, and other top IT leaders at major companies ― and more than 70 percent said they had this problem licked. When pressed to examine the issue further, though, many realized that the processes they believed addressed this problem don’t always work.
We see this every day in data centers and IT departments. In fact, as businesses grow more dependent on IT, and as IT gets increasingly complex and interdependent, horror stories are common. Although most companies follow the traditional application-development methodology to a “T,” they still miss some of the stickier gotchas, which can cause significant unplanned spending. This is because the problem has changed: the established methodology used to develop applications is not effective at preparing shared infrastructure technology to support new applications.
So why does this still happen? It’s not that we don’t have a process. The problem is that the process was built for a model that is 15 years old.
When you design an application, you address a particular function. Fifteen years ago, an infrastructure, like the application it housed, was built to support a particular function or system. The project justification and specification methodology created at that time worked for the architecture then in place, based on standalone systems. It covered both sides: the application and the infrastructure beneath it.
Currently, applications are still built with the same objective: one function. However, today’s infrastructure technology addresses multiple applications and functions. Complexity and inter-dependencies are the issues of the day, especially given virtualization, storage consolidation, and shared services. This complexity will only increase with anticipated changes like Cloud computing, unified data center fabric, and service-oriented architecture (SOA).
The model has changed, but the process hasn’t. The fact is that by optimizing infrastructure through each of these technologies individually, you may be creating a problem as big as the one you’ve solved.
Facing the music
Over the last several years, many CIOs have had to answer this question from their CEO: “What happened to the return on investment (ROI) that I was promised when I invested in those new systems?” To understate the obvious, CIOs aren’t always able to answer this question to the business’s satisfaction.
In the early 2000s, many CIOs got burned when traditional project justification models failed. They could not effectively calculate and measure ROI for projects and systems running on shared infrastructure. IT lost credibility, and discretionary budgets were slashed. Today’s CIOs and CTOs are still jumping through hoops to justify budgets and to obtain money for infrastructure investments.
This all occurred because many IT leaders didn’t yet fully grasp, anticipate or explain the real cost of implementing new systems. It’s more than the cost of developing and implementing software. Metaphorically, going back to Croke Park, it’s the cost of the collateral impact on the Croke Park community, from lost property value to lifestyle infringement.
Okay, so how do we fix the problem once and for all? The answer is easy: we can’t. Technology doesn’t sit still long enough for us to perfect one solution that works for everything. But we can create and implement a process that addresses today’s model and evolves alongside the technologies that optimize infrastructure across multiple applications.
And if we continue to stay on top of evolving technology, we can effectively ensure “infrastructure readiness” on an ongoing basis. Preparing your infrastructure can help you avoid spending more than you planned.
First, you have to understand all of the various ways that a change can impact people, process, and technology. This is a useful exercise not just for application changes, but also for technology changes like virtualization, and business changes like mergers and acquisitions.
We recommend that clients set up an assessment matrix that identifies potential areas of impact for each key technology domain. This matrix should also include baseline performance metrics for each area and a process that identifies changes to these baselines that might come from the business change that is being implemented. This tool then serves as a guide for identifying infrastructure changes to be made in conjunction with the project for implementing the business change.
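The matrix described above can be kept as simply as a table of domains, metrics, and baselines that is re-checked whenever a business change is proposed. The sketch below shows one minimal way to represent it; the domain names, metrics, and values are hypothetical placeholders, not a prescribed list.

```python
# A minimal sketch of the assessment matrix: baseline performance metrics
# per technology domain, plus a check that flags every metric a proposed
# change would move. Domains, metrics, and numbers are hypothetical.

baseline = {
    "network": {"peak_utilization_pct": 60, "remote_users": 400},
    "storage": {"allocated_tb": 120, "annual_growth_pct": 25},
    "compute": {"avg_cpu_pct": 45, "vm_count": 300},
}

def flag_impacts(projected):
    """List every (domain, metric, baseline, projected) pair that changes,
    so each can be reviewed as part of the project plan before go-live."""
    flags = []
    for domain, metrics in projected.items():
        for metric, value in metrics.items():
            base = baseline.get(domain, {}).get(metric)
            if base is not None and value != base:
                flags.append((domain, metric, base, value))
    return flags

# Projected state if, say, 150 remote users are added to the network.
projected = {"network": {"peak_utilization_pct": 78, "remote_users": 550}}
for domain, metric, base, new in flag_impacts(projected):
    print(f"{domain}/{metric}: baseline {base} -> projected {new}")
```

The point is not the tooling (a spreadsheet works equally well) but the discipline: every domain gets a baseline, and every proposed change is compared against it.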
The “potential areas of impact” listed in the chart below are just a few of the major areas that will be impacted during a transition. Consider that for each of these areas, companies typically already have a strategy in place based on previous needs. That strategy should be revisited based on the change occurring.
Second, enterprise architecture initiatives should have the capability and the flexibility to deal with changes or additions to the system portfolio and the technology landscape, such as virtualization, shared storage, etc. The architecture should be flexible enough to handle significantly different technology models as they become viable.
Third, it’s important to set up a funding model that includes infrastructure readiness as part of any system change. This allows you to appropriately associate the cost of a shared infrastructure with the change. For example, the ROI calculation for an investment should note the full cost of the IT infrastructure, including services for the entire life of the application or asset.
This equitable costing model for the new shared infrastructure should also give the user an incentive to optimize the return (think justification to the CEO). If you develop an effective model using a shared network or virtualization, upfront costs might be 20 percent of those for a traditional model, justifying both the short- and long-term investment.
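A worked example makes the comparison concrete. The figures below are hypothetical, and for simplicity the annual run-rate is assumed identical in both cases; only the ratio between shared and dedicated upfront costs (the roughly 20 percent figure cited above) is drawn from the text.

```python
# Illustrative lifetime-ROI comparison: a project charged its share of a
# shared infrastructure versus a dedicated build. All dollar figures are
# hypothetical; the run-rate is assumed equal in both cases for simplicity.

ANNUAL_BENEFIT = 500_000            # assumed business benefit per year
LIFETIME_YEARS = 5                  # assumed life of the application/asset
ANNUAL_OPS = 150_000                # assumed run-rate (support, power, admin)

dedicated_upfront = 1_000_000       # assumed dedicated-infrastructure cost
shared_upfront = 0.20 * dedicated_upfront   # the ~20% figure cited above

def lifetime_roi(upfront, ops, benefit, years):
    """ROI over the full asset life, counting infrastructure and run costs."""
    total_cost = upfront + ops * years
    return (benefit * years - total_cost) / total_cost

ded = lifetime_roi(dedicated_upfront, ANNUAL_OPS, ANNUAL_BENEFIT, LIFETIME_YEARS)
sh = lifetime_roi(shared_upfront, ANNUAL_OPS, ANNUAL_BENEFIT, LIFETIME_YEARS)
print(f"Dedicated-build ROI: {ded:.0%}")
print(f"Shared-model ROI:    {sh:.0%}")
```

Note that the calculation deliberately includes the run costs for the entire asset life, not just the implementation cost, which is exactly the gap that burned CIOs in the early 2000s.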
Joe Wolke is a director in Forsythe’s IT Strategy practice and works with clients to define and deliver projects based on business needs and objectives. He has more than 25 years of business management experience, and spent the last 15 years in executive IT positions defining, communicating, and implementing IT strategies for global Fortune 500 organizations.