As we race through the digital age, the natural byproduct of our high-speed journey accumulates at rates we have only begun to imagine. This byproduct, information, comes in all shapes and sizes—from databases to emails, Web pages to imaging archives. It grows continuously with our push to electronic commerce, 24/7 availability, and our requirement, real or perceived, to record every last transaction in our business exchanges. In our direct interaction with computers over the last 20 years, we witnessed the information explosion as our data storage went from a single floppy disk at 1.44 megabytes (MB) to hard disk drives with capacities of 100 gigabytes (GB)—a staggering seventy-thousand-fold increase. Applying growth of that magnitude to the business world virtually guarantees chaotic planning and mandates a radical restructuring to deal with an apparently uncontrollable byproduct.
The shift from a seemingly controllable to an uncontrollable byproduct occurred as we began to network our computers. In the pre-Internet era, a significant amount of information exchanged in the business environment passed through human gatekeepers. Orders, inventory, customer support, and corporate correspondence, to name a few, typically flowed through an organization only as fast as a person could send or receive them. Today, the human gatekeepers no longer place artificial bottlenecks on the flow of information, and computers within and across organizations now generate communication and data at astronomical rates. Supply-chain management, real-time inventory tracking, and automated monitoring systems of all types perpetually send data that must be tracked and ultimately stored.
Beyond coping with the sheer amount of information they generate, corporations face increasing pressure to accommodate, maintain, and protect it. Whether for customer relationship management software, electronic commerce Web sites, or corporate email infrastructure, today's business expectations place the utmost priority on data accessibility anytime, anywhere. Customers, partners, and employees demand that the information they need to operate be there when they need it. The consequences of unavailable data quickly escalate to lost customers, disgruntled partners, and unproductive employees.
Behind the scenes of every corporation's mission-critical information technology (IT) operations lies the foundation layer of data storage. Typically in the form of large disk subsystems and tape libraries, the media upon which the bits and bytes are stored empower the information flow. Couple this core with an effective software layer, the appropriate network interconnect, and precise administration, and you have the makings of a complete data storage system to drive organizational success.
Storage Takes Center Stage
In the early days of large-scale, enterprise computing, the data storage infrastructure simply came as part of the overall system. If you had an IBM mainframe, you were certain to find IBM storage attached as part of the deal. Few questioned this model, and in centralized data centers, having a single point of contact for both mainframes and storage made sense. At that time, computing equipment purchases focused on the application platform first and the rest followed. This model continued through the migration to more open systems, including mid-range to high-end servers from a variety of vendors. Over time, storage capacity for applications grew as separate dedicated pools, with each application's capacity requirements managed independently within the organization.
As we moved to networked business environments, with more transactions, interactions, and information to capture, storage pools dramatically increased in size. Our push through the digital age caused thousandfold and millionfold increases in required storage capacity. Many companies were locked into buying additional storage from their original mainframe or server vendor, often at inflated prices. Without options for any other kind of storage or the ability to share storage pools, you can imagine the margins that existed, and the vendors' enthusiasm to exploit them.
In the early 1990s, one storage vendor took note of this phenomenon and dove headfirst into the fray. EMC Corporation set out to attack IBM's market for captive storage, specifically in the mainframe market. Within several years, EMC was selling more storage for IBM mainframes than IBM was. The dam had broken, the vendor lock for homogeneous configurations of mainframes and storage disappeared, and the storage market emerged.
Today, a cabal of companies controls the high-end enterprise market and fights daily for treasured data center territory. The bunch includes EMC, IBM, Sun Microsystems, Hewlett-Packard, Dell, and Hitachi Data Systems. The wars within this group for the crown of best "Terabyte Tower" are perhaps best expressed by the vigor of the product names. IBM called its latest salvo into the enterprise storage market Shark. Hitachi Data Systems calls its product line Freedom Data Storage, as if in some way to indicate that it offers choice to the downtrodden.
Now, more than ever, customers face myriad choices for their storage infrastructure. That may not be such a bad thing, especially considering that the storage component of IT deployments comprises as much as 70 percent of the total equipment cost. Management attention naturally gravitates to the critical pieces of a system: the bottlenecks, the cost centers, the sources of complexity, and the competitive differentiators. The scale of IT expenditure on storage indicates the central role it plays in corporate technology direction.
In the wired world, best-practice storage deployments require three focal points. The first is utilization and efficiency. As often the largest share of new IT expenditures, storage must be used to full capacity; a poor utilization rate is nothing short of leaving money on the table. To maintain an optimal storage infrastructure, executives and IT managers alike must maximize storage utilization rates. With storage networked together, the ability to exchange data and available space across the entire infrastructure becomes another routine corporate function.
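The utilization argument can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only; the application names and capacity figures are hypothetical. It compares dedicated per-application storage silos, where headroom in one silo cannot absorb growth in another, against a shared networked pool:

```python
# Hypothetical capacity figures (in terabytes) for three application silos.
# Each silo is provisioned for its own peak, so unused headroom in one
# silo is stranded and cannot serve another application's growth.
silos = {
    "oltp_db":  {"provisioned": 10.0, "used": 4.0},
    "email":    {"provisioned": 8.0,  "used": 3.5},
    "web_logs": {"provisioned": 6.0,  "used": 2.5},
}

def utilization(provisioned, used):
    """Fraction of purchased capacity actually holding data."""
    return used / provisioned

# Silo model: overall utilization suffers because every application
# carries its own safety margin.
total_provisioned = sum(s["provisioned"] for s in silos.values())
total_used = sum(s["used"] for s in silos.values())
silo_util = utilization(total_provisioned, total_used)

# Networked-pool model: the same used capacity is served from a smaller
# shared pool, since free space is fungible across applications.
pooled_provisioned = total_used * 1.25  # keep 25% shared headroom
pool_util = utilization(pooled_provisioned, total_used)

print(f"Silo utilization:   {silo_util:.0%}")    # ~42%
print(f"Pooled utilization: {pool_util:.0%}")    # 80%
print(f"Capacity avoided:   {total_provisioned - pooled_provisioned:.1f} TB")
```

Even with generous shared headroom, pooling nearly doubles utilization in this toy scenario, which is the "money on the table" the paragraph above describes.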
In a wired world, with access to local, metropolitan, wide, and wireless networks 24 hours a day, 7 days a week, data must be everywhere, all the time. The second piece of best-practice deployment is availability. Even the most temporary outage of access to data, such as an unavailable Web page, can have a meaningful impact on customer goodwill. While all companies agree that business continuity must be maintained, few have spent the time and effort to see a thorough plan through to completion.
In a wired world, power and intelligence reside in the network. There is no better example of this than what we have witnessed with the Internet. Its reach, impact, capabilities, and ongoing transformation speak volumes about what can happen when a bunch of computers are linked together, as if engaged in a never-ending dialogue. Storage on a network is no different. Though the development of storage networks is relatively new (5 years, perhaps) compared to the Internet (25 years), the potential can be equal or greater. Harnessing this power in the form of optimized, agile data storage infrastructure will set a class of companies above the rest and provide a platform for competitive differentiation. This third pillar of effective deployment is the overall agility and competence of the system.
The Need for Business Continuance
Data storage lies at the epicenter of corporate mission-critical applications. Given this essential organizational function, there is no time for gaps in storage availability. The data storage infrastructure acts as an information source for customers, partners, and employees, and they all expect access anywhere, anytime.
Downtime, the term typically used when IT systems become unavailable for whatever reason, was once accepted as a necessary evil in dealing with computers. Today availability has taken over, with a range of modifiers from "high," to "24/7," to that additional stretch of "24/7/365." Perhaps our next sustained availability window will approach a decade! But downtime has real consequences, both in business or revenue lost and in the goodwill of any human contact in the disrupted exchange. An interruption of a back-office database replication might delay the launch of a new service, but the replication software doesn't mind trying again...immediately. Yet a corrupted On-Line Transaction Processing (OLTP) exchange with a Web visitor could result in a lost transaction and a lost customer.
In either case, measured in the short term or the long, faulty data storage translates to lost business and productivity. Business continuity has moved beyond a simple customer expectation to a measure of corporate competence. To combat this potential black mark, companies have implemented business continuity plans, ranging from system-level redundancy and backups within the data center to sophisticated off-site protection schemes that replicate mission-critical data instantly.
Business continuity comes packed with dozens of questions about its defined meaning. Do we need 24/7 availability? How long can we afford to be down? Are we protected from system failure, site failure, and natural disasters? What data are we trying to protect, and why? Who is involved? Where should our second site be located? These questions underscore the complexity of a well-executed business continuity plan. The first step to executing such a plan is having the appropriate tools. When it comes to data storage, a networked storage environment provides the easiest path to high availability.
Driving Towards Operational Agility
The rise in current and anticipated storage spending is driven by increasing customer requirements and evidenced by storage vendors' clamoring for market share. Much of the spending increase can be linked to customer requirements to maintain a highly available storage infrastructure that meets today's business continuance requirements. Few CEOs are willing to consider the possibility of a sustained data storage service outage within their company, and they sign checks to protect against these risks.
But too often, the infrastructure investment in storage focuses primarily on protection and business continuity. While a worthy objective for existing or new data storage deployments, this singular focus on risk prevention accomplishes only part of the long-term goals—to establish an effective and nimble data storage infrastructure. IT executives and implementers need to recognize that the storage spending on risk-prevention mechanisms must be coupled with operational agility mechanisms. Only through this combined defensive and offensive approach will a company be able to navigate the chaotic growth and change of data storage infrastructures.
Cost savings and return on investment can be difficult to measure for risk-prevention approaches. The impact of sustained service outages is frequently defined in lost dollars per hour or a similar metric. But can anyone really anticipate the costs of a full-scale disaster that wipes out a data center? On the offensive side, however, more easily measured metrics help justify the spending on upgraded and new data storage deployments.
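The "lost dollars per hour" metric mentioned above can be turned into a rough annual exposure figure from an availability target. A minimal sketch, assuming a hypothetical service that loses $100,000 for every hour it is unavailable (both the availability levels and the hourly cost are invented for illustration):

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_downtime_hours(availability):
    """Expected hours of outage per year at a given availability level."""
    return HOURS_PER_YEAR * (1.0 - availability)

def annual_outage_cost(availability, cost_per_hour):
    """Expected yearly downtime exposure in dollars."""
    return annual_downtime_hours(availability) * cost_per_hour

# Hypothetical: an OLTP service losing $100,000 per hour of outage.
for label, availability in [("99%", 0.99), ("99.9%", 0.999), ("99.999%", 0.99999)]:
    hours = annual_downtime_hours(availability)
    cost = annual_outage_cost(availability, 100_000)
    print(f"{label:>8} availability -> {hours:8.2f} h/yr down, ${cost:,.0f} exposure")
```

The nonlinearity is the point: moving from two nines to five nines shrinks expected downtime from roughly 88 hours a year to under an hour, which is the kind of figure used to justify risk-prevention spending even when a full-scale disaster defies estimation.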
Planning for a more agile enterprise, IT executives have a variety of parameters at their disposal to measure and justify expenditures: application efficiency; capacity efficiency; simplified training, management, and maintenance; and deployment time for new applications and new storage capacity. These productivity considerations link to real cost savings within the IT budget. By including an offensive approach in the storage strategy, IT executives can couple risk prevention with operational agility to build a more robust, flexible data storage system.
Charting a Storage Networking Roadmap
Behind every successful corporation lies a set of core competencies that provide a competitive edge—FedEx has logistics; Wal-Mart, inventory management; Dell, supplier management. Today, with information as a key corporate asset, data storage management is as vital to business success as other operational functions. The starting point for developing data storage management as a core competence is an effective roadmap.
Certain companies are known to "get" technology. They are frequently the ones that have integrated business planning with IT development. By fostering a corporate culture that supports exchanges between business units and technology development, a positive feedback develops that leads to defending, supporting, and expanding existing business. The interaction can also drive new business and broaden the technology capabilities of a more effective organization.
Because it houses the lifeblood information assets of corporations, data storage requires independent plans and strategies that integrate with business objectives and related technology investments. Specifically from the storage perspective, three areas form the basis of an effective roadmap. The first is utilization and efficiency—ensuring adequate, yet not overprovisioned, storage resources to serve the organization. A well-designed infrastructure using networked storage makes this feasible. Business continuity is the second component of the roadmap, guaranteeing data availability to all participants anytime, anywhere. Finally, storage competency, or storage agility, completes the roadmap. Those organizations that develop the best infrastructure, policies, and procedures to deal with explosive storage growth stand to lead the pack. Storage competency translates to business-savvy operational agility for maneuvering ahead of the competition.
The following chapters outline a complete storage networking roadmap, as shown in Figure 1-2, for organizations starting from the ground up or looking to improve upon longstanding installed infrastructure. From architectural basics, to control points, to strategies for continuity and agility, the book serves as both practical guide and navigational companion to understand the world of storage networking.
Audience and Text Overview
1.5.1 Target Audience
IP Storage Networking—Straight to the Core was written to serve IT and business managers. As these two groups need to understand each other's objectives to develop coherent action plans, it makes sense to cover considerations for both sides. On the business side, the book covers strategies for storage network deployment, options for outsourcing, and insight into storage vendor control points. On the technical side, the book presents comprehensive infrastructure explanations along with new and emerging developments in storage networking, specifically the recent integration of storage with mainstream IP networking. This area, which brings networking and storage professionals together for a common cause, presents the greatest opportunities for organizations to deploy innovative and valuable infrastructure.
Beyond senior IT and business management, the book addresses a wider range of constituents involved, directly or indirectly, with the deployment of enterprise storage. CIOs, CEOs, and CFOs will benefit from understanding the mix between technical and financial opportunities, particularly in the case of multimillion dollar deployments with disaster recovery plans that may require board approval. Research and development teams will benefit by seeing new architectural designs for application deployment. Technology investors, industry press, and analysts will benefit from a deeper understanding of the market forces behind the technology innovation.
All of these groups have a stake in the enterprise storage industry, and as such, can be considered "straight to the core" ambassadors, as shown in Figure 1-3.
Figure 1-3. Target audience for IP Storage Networking—Straight to the Core.
1.5.2 Text Overview
Chapters 2 and 3 cover the background of storage hardware and software, respectively. Storage professionals will likely grasp these details easily. However, Chapter 2, "The Storage Architectural Landscape," pays special attention to the differences between network-attached storage, operating at the file layer, and storage area networks, operating at the block layer. As these two useful storage technologies merge to a common communications medium of IP networking, a review of each should prove helpful. Chapter 3, "The Software Spectrum," presents a detailed overview of the software around data storage management. Moving more quickly than other areas of the infrastructure, the software components can make a storage administrator's life easier and provide business value for the corporation. New market segments, specifically virtualization, are particularly interesting given the rapid development underway. While current virtualization software may not be appropriate for all organizations, these powerful capabilities will enhance storage administration, and any organization looking toward new investments must keep this component in the design plan.
Chapter 4, "Storage System Control Points," and Chapter 5, "Reaping Value from Storage Networks," lay the foundation for building effective storage network roadmaps. Chapter 4 presents a history of data storage architectures and how the intelligence piece of the equation has evolved across server platforms, disk subsystems, and now networking fabrics. This dynamic shift, if managed appropriately, can be used to the customer's advantage in balancing multivendor configurations. Building these kinds of strategies into a storage networking roadmap can provide significant cost savings at future junctures. Similarly, Chapter 5 outlines the defensive strategies for data storage most often associated with disaster recovery and business continuity. IP Storage Networking—Straight to the Core goes one step beyond the typical analysis by providing offensive strategies for storage deployment, fostering operational agility, and anticipating future changes to the overall architecture. Finally, the chapter concludes with techniques to measure returns on defensive and offensive tacks, allowing business and IT managers to justify expenditures and be recognized for cost efficiency and profit contribution.
The primary business continuance applications are covered in Chapter 6, "Business Continuity for Mission-Critical Applications." Most organizations have some or all of these applications running at present. The chapter begins by outlining the business continuity objectives for each application, for example, designing "classes" of storage. It also breaks down the differences between server availability and storage availability, and the required interaction. Storage-specific requirements of these applications are reviewed, particularly, mechanisms to optimize performance while maintaining business continuity. Conclusions about enabling uptime allow business and IT managers to draft the appropriate business continuity plans to meet their objectives.
Chapter 7, "Options for Operational Agility," examines means to increase efficiency and productivity, and to create a flexible data storage infrastructure to serve the organization through dramatic and unpredictable information growth. By applying business models of ownership, financing, inventory management, and forecasting, business and IT managers can create solid roadmaps for an agile enterprise. Specific topics covered include the emergence of outsourced storage models, recommendations for managing service level agreements, accelerating application deployment, and incorporating data storage into proven models for corporate asset management.
The tactical roadmap begins in Chapter 8, "Initiating Deployment," and extends through Chapters 9 and 10. From an exploration of the infrastructure design shifts to crafting an effective migration, Chapter 8 assists in this decision making. It also includes guidelines for calculating the total cost of ownership and identifying the true cost components of an enterprise data storage deployment.
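The total-cost-of-ownership exercise Chapter 8 describes can be previewed with a simple sketch. The cost categories below are commonly cited components of an enterprise storage deployment (acquisition, software, administration labor, facilities, expected downtime); the dollar figures themselves are invented purely for illustration:

```python
# Hypothetical annualized cost components ($) for a storage deployment.
tco_components = {
    "hardware_acquisition": 400_000,  # disk subsystems, tape libraries, switches
    "software_licenses":    120_000,  # volume management, backup, replication
    "admin_labor":          250_000,  # storage administrators' time
    "facilities":            60_000,  # floor space, power, cooling
    "expected_downtime":     90_000,  # outage hours x cost per hour
}

total_tco = sum(tco_components.values())
print(f"Total annual cost of ownership: ${total_tco:,}")

# Rank components by share of total cost to see where attention belongs.
for name, cost in sorted(tco_components.items(), key=lambda kv: -kv[1]):
    print(f"  {name:22s} ${cost:>9,}  ({cost / total_tco:.0%})")
```

Even in this toy breakdown, acquisition is well under half of the total, which is why the chapter stresses identifying the true cost components rather than hardware price alone.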
Chapter 9, "Assessing Network Connectivity," serves as a comprehensive review of networking options. In cases of both business continuity and operational agility, the networking component plays a critical role. The ability to network storage devices within and across organizations presents opportunities for innovative architectures. Recently, with the introduction of new IP storage technologies such as iSCSI, choosing the right networking option is more important than ever. Topics such as network technologies, topologies, and carriers are covered thoroughly in this chapter, including security for storage across networks.
Rounding out the tactical storage roadmap, Chapter 10, "Long-Distance Storage Networking Applications," reviews specific applications across wide area networks. Historically, storage applications have been designed for the data center, with little consideration for the associated latency of running these applications across long distance. Today, sophisticated techniques exist that mitigate the effects of distance and enable storage applications to run between virtually any locations. These techniques are outlined in the chapter.
With the appropriate storage infrastructure in place, business and IT managers still have the task of maintaining and managing storage policies. Chapter 11, "Managing the Storage Domain," presents options for creating overall enterprise storage policies with a focus on data protection and disaster recovery scenarios. As these functions are inextricably linked with the need to administer overall growth, aspects of managing storage expansion are included in this section.