Keeping IT Costs Down

So, you think you have problems? Consider Intel: $1.6 billion in IT spending in 2003; 80,000 employees and over 19,000 engineers in 50 countries; email volume up 247% in four years (about 3.3 million messages per day); remote access users up 555% over the same period, now at 52,000; and storage demands grown 30-fold in five years.

In this interview with CIO Update, Busch talks about the changing world of IT and how Intel is coping with it while keeping costs down.

CIO Update: How has IT changed since your early days in the industry?

Busch: The industry has changed immensely. I remember the boss at my first job questioning my PO for two boxes of floppies. “What could you ever save on that much space?” he asked. “That amount of floppy disks will last you forever.” Interestingly, despite the vast increase in capacity since then, there remains an unwillingness to pay for disk space and storage.

What have you done at Intel to bring IT costs down?

The theme of my first two years in the job was disk capacity. We kept running out of disk space. One of the first things we addressed, therefore, was a server upgrade. But even with that, storage needs continued to spiral and disk space remained an issue.

Therefore, one of our most important adjustments has been to move from inefficient direct attached storage to a more centralized storage network. This has improved performance by two to three times, increased availability, decreased TCO, and enabled us to reduce the number of support personnel required to manage an ever-increasing volume of data.

Case in point: our e-business applications consumed 20 TB two years ago. Today they account for over 160 TB. To make IT at Intel cost effective, we had to break the cycle of simply adding more servers to solve disk capacity challenges. We now have 3 PB online but can cope with that by using SAN and NAS technology to decouple storage growth from our servers.

How do you deal with the issue of centralized versus departmental level data?

Over time, we had developed a bottom-up data architecture that was inefficient: the company contained a large number of departmental data stores, and this became unmanageable. We have since transitioned to an enterprise-level data architecture, which has helped contain costs while improving the overall manageability of our information.

So is centralization of storage the mantra?

Not quite. It is a case of achieving the right balance. Centralized storage can make a huge difference in cost and efficiency. Yet having storage distributed around the globe is also an important factor in controlling costs.

Apart from the obvious explosion in storage capacity, what trends do you see emerging in IT data?

A big thing I have noticed in my time at Intel has been the rise of reference data. In my early days, it was all about transactional data. Nowadays, you hear much more about reference data. But you do have to deal with both. We are experiencing 60 percent growth rates on transactional data and 92 percent growth for reference data.
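Those two growth rates compound quickly, which is why they matter for capacity planning. A minimal sketch of the arithmetic, assuming simple annual compounding (the starting capacities and the three-year horizon are illustrative, not figures from the interview):

```python
def project_capacity(start_tb: float, annual_growth: float, years: int) -> float:
    """Compound a starting capacity by a fixed annual growth rate."""
    return start_tb * (1 + annual_growth) ** years

# Illustrative starting points -- not Intel figures.
transactional_tb = 100.0
reference_tb = 100.0

# Growth rates quoted in the interview: 60% transactional, 92% reference.
after_3y_txn = project_capacity(transactional_tb, 0.60, 3)   # ~410 TB
after_3y_ref = project_capacity(reference_tb, 0.92, 3)       # ~708 TB
```

Even from an identical starting point, the reference-data pool overtakes the transactional pool within a single planning cycle, which is what makes the shift Busch describes hard to ignore.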

How important is hierarchical storage and how do you organize it at Intel?

It is more cost effective to use hierarchical storage architectures. I would recommend that people look at a tiered storage environment along these lines:

  • Tier 1 is mission critical, highly available, and highest performance, and thus the most expensive. This tier can include a no-data-loss disaster recovery (DR) setup.
  • Tier 2 contains other production and pre-production SCSI drives. It is mid-range in cost and may include DR, but not a no-data-loss option.
  • Tier 3 covers various archiving and backup options. By using ATA drives you can make this your lowest-cost option. It is important to realize, though, that Tier 3 is not going to match the performance of Tier 1.
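A tiering policy like the one above can be reduced to a simple rule table. A minimal sketch, where the attribute names (`mission_critical`, `needs_no_data_loss_dr`, `archival`) are made-up stand-ins for whatever criteria an organization actually tracks:

```python
def assign_tier(mission_critical: bool,
                needs_no_data_loss_dr: bool,
                archival: bool) -> int:
    """Map workload attributes to one of the three storage tiers described above."""
    if archival:
        return 3   # ATA-based archive/backup: lowest cost, lowest performance
    if mission_critical or needs_no_data_loss_dr:
        return 1   # highest availability and performance, most expensive
    return 2       # mid-range SCSI production/pre-production storage
```

The value of writing the policy down this way is that placement decisions stop being ad hoc: each new workload is classified once and lands on the cheapest tier that meets its requirements.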
How about legacy platforms? Should these be retained or replaced?

I believe it is best to consolidate to newer technology where feasible. Storage frame capacity, for example, has increased greatly, and the maintenance cost of older frames with a fraction of the capacity far exceeds the cost of a new frame. These capacity increases apply both to total disk storage and to port connectivity, which often makes it possible to replace many older frames with a single newer one.

What cost-cutting tips have you learned from experience?

When viewing costs, you have to take into account the total lifecycle cost of a specific system or technology. This includes not only the hardware and software but also data center space, production operations, backup devices, media and off-site storage, system engineering, etc. In addition, it is best to manage utilization by pooling storage and avoiding hard allocation of storage capacity.
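The point about lifecycle cost is easy to miss because hardware is the visible line item. A minimal sketch of the accounting, with purely illustrative dollar figures (none of these numbers come from the interview):

```python
def lifecycle_cost(hardware: float, software: float, datacenter_space: float,
                   operations: float, backup_and_media: float,
                   engineering: float) -> float:
    """Total lifecycle cost: the hardware plus everything around it."""
    return (hardware + software + datacenter_space +
            operations + backup_and_media + engineering)

# Illustrative figures only -- the point is that non-hardware items
# can add up to more than the hardware itself.
total = lifecycle_cost(hardware=100_000, software=20_000,
                       datacenter_space=15_000, operations=40_000,
                       backup_and_media=10_000, engineering=25_000)
# Here the hardware is less than half of the total cost of ownership.
```

Comparing two systems on hardware price alone would miss more than half the spend in this example, which is exactly the trap Busch is warning against.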

How else are you reducing IT costs?

We have an aggressive disk recycle program with automated reports sent to customers and administrators detailing data that has not been accessed in 180 days. This space is then targeted for archiving to tape and reused before new purchases are made. In one organization within IT, we recycle about 8-10 percent of customer storage annually with this program, which is a direct cost savings on capacity purchases.
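The core of such a recycle report is just a scan for files whose last access time is past the threshold. A minimal sketch using file access times (`st_atime`); a real program at this scale would query storage-management metadata rather than walk filesystems, and would handle volumes where access-time tracking is disabled:

```python
import time
from pathlib import Path

STALE_AFTER_DAYS = 180  # threshold quoted in the interview

def stale_files(root, now=None):
    """Return files under root not accessed within the last STALE_AFTER_DAYS days."""
    now = time.time() if now is None else now
    cutoff = now - STALE_AFTER_DAYS * 24 * 3600
    return [p for p in Path(root).rglob("*")
            if p.is_file() and p.stat().st_atime < cutoff]
```

The resulting list is what would feed the automated report: candidates for archiving to tape, after which the freed space is reused instead of purchasing new capacity.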

What about cost avoidance? What tips do you have in that area?

Forecasting and indicators are very important. Forecast demand and continuously revisit the forecast. Also, be aware that the cost of disk space goes down every quarter; by pushing out the purchase of additional disk space, we take advantage of lower costs.
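The deferral argument is a straightforward compounding calculation. A minimal sketch; the 5% per-quarter decline and the $100,000 purchase are assumed numbers for illustration, not rates from the interview:

```python
def deferred_price(price_now: float, quarterly_decline: float,
                   quarters: int) -> float:
    """Price after deferring a purchase, assuming a steady per-quarter decline."""
    return price_now * (1 - quarterly_decline) ** quarters

# Assumed: a 5% per-quarter price decline on a $100,000 capacity purchase.
later = deferred_price(100_000, 0.05, 2)   # buy two quarters later: $90,250
savings = 100_000 - later                  # $9,750 avoided
```

The trade-off, of course, is that deferral only works if the demand forecast says you can run on existing capacity until the later purchase date, which is why Busch pairs the tactic with continuous forecasting.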

Any tips on dealing with vendors?

Regularly benchmarking your costs is very important. Intel pays based on Gartner Group research data on costs, which enables aggressive negotiation with vendors if our pricing falls behind.

What are some of the big lessons you have learned?

Capacity will continue to increase. Willingness to pay for storage will continue to decline. Therefore, the system around the storage technology determines success.

Anything else you wish to add?

Matching different levels of performance expectations is also important to controlling costs. Departments with 25 users, for example, do not typically need super-high-speed systems and should not be made to spend large amounts for high performance. On the other hand, design groups often require a higher-cost solution to achieve the performance levels their work demands.