Overview: Once upon a time, “computer” meant “single mainframe.” Such computers owned all of the storage and, hence, all of the stored information. We have come full circle: now the network is the computer, connecting distributed storage and processors. To the network, processors are resources that run applications and interface with users, and storage is a resource that holds information. In effect, the network is the mainframe. Under this scenario, storage managers will ask two key questions: What are my information resources, and what do I want done with them? That is a huge improvement over the old method of managing the processor first in an effort to get at data. The key questions recognize that, in data processing, “data” comes first, as it should.
Step No. 1: Create a Storage Utility
A storage utility would present a simple “data tone” — as dependable as a phone’s dial tone — and allow any qualified user the fastest possible secure access to stored information regardless of storage mechanism, computing platform type, distance and type of connection between them, or time of day. A software veneer in front of the storage utility would deliver data on demand.
For enterprise IT cost accounting, or for billing end users, a charging/metering mechanism would keep track of who wants what, and from where. Enterprise administration would also provide control points for security and quality of service.
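As an illustration only, here is a minimal Python sketch of such a metering mechanism, using hypothetical names (AccessRecord, MeteringLog) rather than any real product's interface:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class AccessRecord:
        """One metered storage access: who asked, from where, for what."""
        user: str          # qualified user identity
        source: str        # where the request came from (host, site, address)
        object_id: str     # the information object requested
        bytes_moved: int   # volume delivered, the basis for cost accounting
        when: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class MeteringLog:
        """Accumulates access records and totals usage per user for billing."""
        def __init__(self) -> None:
            self.records: list[AccessRecord] = []

        def record(self, rec: AccessRecord) -> None:
            self.records.append(rec)

        def usage_by_user(self) -> dict[str, int]:
            totals: dict[str, int] = {}
            for rec in self.records:
                totals[rec.user] = totals.get(rec.user, 0) + rec.bytes_moved
            return totals

A real utility would hang its security checks and quality-of-service decisions off the same control point that writes these records.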
To approach the storage utility, storage providers have to extend the veneer first across their own products (disk, tape, NAS [network attached storage]), then across multiple suppliers’ products, and then across a WAN (wide area network) — the Internet. They first tackled the delivery veneer (the SAN [storage area network]) and then the administrative veneer (virtualization), with WAN capabilities emerging only partially, via IP (Internet Protocol) support. Metering has barely been addressed.
The Long and Winding Road
The storage utility goal is far from being accomplished. The main barriers are listed below.
Moving SANs across suppliers. Most implemented SANs are effectively single-supplier, which is not so bad while EMC is dominant, but becomes more problematic as real competitors emerge. The SNIA (Storage Networking Industry Association) Common Information Model (CIM) is a positive first step.
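CIM's promise is that one management client can interrogate any compliant supplier's hardware. As a hedged illustration, assuming the open-source pywbem WBEM client and an illustrative endpoint (namespaces and credentials vary by vendor), a cross-supplier volume inventory might look like this:

    # Sketch of vendor-neutral discovery over CIM, using the open-source
    # pywbem library (an assumption; any WBEM client would serve).
    import pywbem

    # Hypothetical endpoint and credentials for a CIM-enabled array.
    conn = pywbem.WBEMConnection('https://array.example.com:5989',
                                 ('admin', 'secret'),
                                 default_namespace='root/cimv2')

    # CIM_StorageVolume is a standard CIM class, so the same call can be
    # pointed at any compliant supplier's array.
    for vol in conn.EnumerateInstances('CIM_StorageVolume'):
        print(vol['DeviceID'], vol['BlockSize'], vol['NumberOfBlocks'])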
Incorporating low-end/workgroup storage. This is important because, with the advent of server farms, server-attached storage (on second-tier PC servers and Sun/Oracle servers) is becoming more entrenched than ever. The hope is that, as blade servers become more popular, there will be a single connection to storage, making it easier to replace server-attached storage with a SAN. Also, NAS (file serving from storage to workgroup-level servers) is now popular and is not easy to combine with a SAN. NAS aggregation, a promising new technology, may either obviate the need for SAN storage or drive SANs into the background as an embedded technology.
SAN storage is truly networked storage: storage connected to servers through a special-purpose network that makes it behave as if it were directly connected peripheral storage. SAN storage is block-level storage. NAS, by contrast, is a misnomer; NAS is not networked storage, but rather a special-purpose computer (or “appliance”) with its own storage that is networked with other computers. NAS storage is file-level storage. The distinction is illustrated below.
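A minimal Python sketch of the difference (device and mount paths are illustrative, not real):

    import os

    # Block-level access (SAN-style): the host addresses raw blocks on a
    # device; any filesystem on top is the host's business, not the storage's.
    fd = os.open('/dev/sdb', os.O_RDONLY)       # illustrative device path
    os.lseek(fd, 100 * 4096, os.SEEK_SET)       # seek to block 100 (4 KB blocks)
    block = os.read(fd, 4096)                   # read one raw block
    os.close(fd)

    # File-level access (NAS-style): the appliance owns the blocks and
    # exports whole files; the host simply names a path on an NFS mount.
    with open('/mnt/nas/reports/q3.txt', 'rb') as f:   # illustrative path
        data = f.read()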
Creating administrative hooks. Block-level storage virtualization, plus a file-virtualization layer built from new software tools, could give the administrator both the ability to manage information objects at a global level and the capability of drilling down to individual devices when necessary.
Migrating data. File-level virtualization could provide a single namespace, and storage virtualization could create a dynamic logical storage pool, so that data moves without disturbing the names users see (sketched below).
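To make the last two items concrete, here is a minimal sketch, with hypothetical names (Extent, VirtualVolume), of a virtualization map that keeps one stable logical name while letting the administrator drill down to, or swap out, the physical devices behind it:

    from dataclasses import dataclass

    @dataclass
    class Extent:
        """A run of blocks on one physical device (hypothetical model)."""
        device: str      # e.g. 'array-A:lun-7'
        start: int       # first block on that device
        length: int      # number of blocks

    class VirtualVolume:
        """One logical volume; its global name stays fixed even as the
        physical extents behind it change."""
        def __init__(self, name: str, extents: list[Extent]) -> None:
            self.name = name
            self.extents = extents

        def devices(self) -> list[str]:
            # The drill-down view: which physical devices back this volume.
            return [e.device for e in self.extents]

        def remap(self, old_device: str, new_device: str) -> None:
            # Migration as a remap: once blocks have been copied to the new
            # device, repoint the map; users addressing the volume by name
            # never see the move.
            for e in self.extents:
                if e.device == old_device:
                    e.device = new_device

Migrating a volume off array A then reduces to copying its blocks and calling remap('array-A:lun-7', 'array-B:lun-2').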
Performance when a WAN is included. In theory, local storage should act as a cache that delivers near-optimal performance on average; in practice, virtualized SANs built on existing storage often demand large bandwidth from remote sites, and providing that bandwidth is not easy over existing low-cost Internet connections.
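A back-of-envelope Python calculation shows why (the figures are illustrative assumptions, not measurements):

    # How long a cold cache takes to fill when blocks must come from a
    # remote site over a low-cost link. All numbers are assumptions.
    working_set_gb = 10        # data not yet held in the local cache
    link_mbps = 1.5            # a T1-class Internet connection
    seconds = working_set_gb * 8 * 1000 / link_mbps
    print(f"{seconds / 3600:.1f} hours to pull {working_set_gb} GB at {link_mbps} Mbps")
    # Roughly 14.8 hours, versus minutes over a local Fibre Channel link,
    # which is why a high local cache hit rate (or far more WAN bandwidth)
    # is essential.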
Security/control concerns when the storage is not within the user’s data center. As storage networks burst data center bounds and become global network resources, data “owners” will demand control over their information resources, as well as proper custodial handling and security.
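One way to picture owner control is an owner-set policy that the utility enforces at every access. A minimal sketch, with hypothetical names (Policy, may_access) and invented identities:

    # Sketch of an owner-set access policy enforced at the utility's
    # control point on each request. Names and fields are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Policy:
        owner: str
        readers: set[str]          # identities the owner has qualified
        allowed_sites: set[str]    # where the data may be delivered

    def may_access(policy: Policy, user: str, site: str) -> bool:
        """Only users and delivery sites the owner has approved get past
        the utility's control point."""
        return ((user == policy.owner or user in policy.readers)
                and site in policy.allowed_sites)

    # Example: the owner qualifies one analyst, restricted to one site.
    p = Policy(owner='finance', readers={'analyst1'}, allowed_sites={'eu-west'})
    assert may_access(p, 'analyst1', 'eu-west')
    assert not may_access(p, 'analyst1', 'us-east')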
Charging users for usage. Database suppliers moved away from charging users for usage when it turned out that people did not want to outsource databases to any great extent, and managing all the different licenses and metering schemes proved burdensome.
Required: The Intelligent Network
Solving the foregoing problems (which are only storage problems; application integration must be added) will require a network that has the intelligence to virtualize and maintain logically consolidated files and storage and that regards processors as resources. Data tone will be Goal No. 1. Generalized networked processing for generally available data — “IT tone” — will be the goal after that.
Aberdeen Conclusions
The goal is nothing less than the holy grail of storage; namely, that any qualified user can access any information:
Goal No. 1 is the storage utility, providing data tone. Goal No. 2 — further in the future — will be to blend the storage utility and general IT utility services to reach the “IT tone.”
In the future, vendors and researchers will grapple with first-step storage problems, as well as the last mile and other bumps in the long, long road.
Dan Tanner is a senior analyst in the Storage and Storage Management practice of Aberdeen Group in Boston. For more information go to www.Aberdeen.com.