Image: Courtesy of Ciena
The expanding use of virtualization in enterprise data centers is reshaping both the economics and the nature of enterprise IT delivery, through physical resource consolidation and increasingly automated self-service support. The private cloud — where enterprise data center resource virtualization enables true IT-as-a-Service — can support all enterprise IT applications, irrespective of design, computing or I/O intensity, or mission-criticality.
Public cloud services, however, remain largely limited to either simple software-as-a-service (SaaS) applications or basic infrastructure services that are little evolved from legacy hosting practices. They often do not share virtualized data center assets among customers for resource efficiency, and connections typically are Internet-based. From an enterprise perspective, these factors tend to restrict public cloud utility to applications such as Web hosting, information archiving, and development and testing. The broader range of operational IT applications remains out of reach of the public cloud.
The next phase of cloud development must bridge the public and private cloud. The public cloud must become capable of supporting operational and mission-critical applications on virtualized infrastructures that mirror, and are connected to, the enterprise private cloud. Only then can the cloud extend improvements in enterprise IT economics and allow public cloud providers to capture more of the enterprise IT budget. This next-generation cloud will require a new architecture: the data center without walls.
Why the Data Center Without Walls?
While virtualization and data center consolidation go hand in hand, many enterprises with virtualized IT architectures continue to employ multiple data centers. This practice is driven primarily by information resiliency requirements, addressed through multi-site storage replication. Moving from storage backup to distributed storage technologies enhances workload availability and performance by supporting the migration and distribution of computing virtual machines among multiple sites.
Cloud service providers also operate multi-data center architectures. This may be driven by a requirement to support high levels of information availability, requiring storage replication between provider data centers. Furthermore, since public clouds often support customers over a wide geographical area, provider architectures are more effective when their data centers are situated in multiple regions, better assuring consistently high application performance through user-to-data center proximity. With active-active storage distribution, fluid workload mobility becomes possible among the set of provider data centers, within distance limits defined by the latency tolerances of processes and applications, enhancing computing service availability.
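The distance limits mentioned above follow directly from propagation delay. As a rough sketch (not a figure from the article): light in fiber travels at roughly 200,000 km/s, or about 5 microseconds of one-way delay per kilometer, so a workload's round-trip latency tolerance caps how far apart the data centers hosting it can be. Switching and queuing delay are ignored here.

```python
# Rough sketch: maximum inter-data center distance implied by a
# round-trip latency budget. Assumes ~5 microseconds of one-way
# propagation delay per km of fiber (light in glass travels at
# roughly 200,000 km/s); switching and queuing delay are ignored.

PROPAGATION_US_PER_KM = 5.0  # one-way delay per km of fiber

def max_distance_km(rtt_budget_ms: float) -> float:
    """Distance limit implied by a round-trip latency budget."""
    one_way_us = (rtt_budget_ms * 1000.0) / 2.0
    return one_way_us / PROPAGATION_US_PER_KM

# Synchronous storage replication often tolerates only a few ms RTT:
print(max_distance_km(5.0))  # 5 ms RTT -> 500.0 km
print(max_distance_km(1.0))  # 1 ms RTT -> 100.0 km
```

This is why latency-sensitive, active-active replication typically keeps sites within a metro or regional footprint, while less sensitive workloads can migrate between more distant data centers.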
By permitting a significant degree of effective asset pooling among data centers, inter-data center workload mobility also allows for a reduction in total provider data center resource needs. We have conducted analyses with our customers indicating that the Data Center Without Walls can reduce total data center physical assets by up to 35 percent, significantly impacting data center capital and operating costs such as real estate, power, cooling, maintenance and administration overhead.
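The intuition behind pooling savings can be sketched numerically. Isolated sites must each be provisioned for their own peak load, while sites sharing workloads need only cover the peak of the combined load, which is smaller whenever the sites peak at different times. The workload figures below are invented for illustration and are not from the analyses cited above.

```python
# Illustrative sketch of statistical pooling across data centers.
# Each site's workload peaks at a different time; isolated sites
# must each be provisioned for their own peak, while pooled sites
# need only cover the peak of the combined load.
# The load figures below are invented for illustration.

site_loads = {
    "east":  [60, 90, 40, 30],   # load per time period (arbitrary units)
    "west":  [30, 40, 85, 60],
    "south": [50, 35, 45, 80],
}

# Isolated: provision every site for its individual peak.
isolated = sum(max(loads) for loads in site_loads.values())

# Pooled: provision for the peak of the aggregate load.
aggregate = [sum(period) for period in zip(*site_loads.values())]
pooled = max(aggregate)

savings = 1 - pooled / isolated
print(isolated, pooled, round(savings, 2))  # 255 170 0.33
```

With these made-up numbers the pooled design needs about a third less capacity; the achievable saving in practice depends on how correlated the sites' peaks are.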
Finally, the seamless connection of private and public clouds, required broadly to support operational enterprise IT in the cloud, takes the Data Center Without Walls across the enterprise-provider data center boundary. Referred to by the industry as the hybrid cloud, this connection effectively enables the extension of the enterprise data center infrastructure into the provider cloud, such that the provider data center resources supporting any application may be “dialed” from zero to 100 percent of the total. Of particular importance, this allows enterprises to size their own data centers to support long-term average or minimum workloads, and simply to “rent the spike” from providers.
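The "rent the spike" model above can be made concrete with a toy calculation: the enterprise provisions its own data center for the long-term average load and bursts any excess into the provider cloud. The demand series below is invented for illustration.

```python
# Sketch of "rent the spike": size the private data center to the
# long-term average load and burst the excess into the provider
# cloud. The hourly demand figures are invented for illustration.

hourly_demand = [40, 45, 50, 55, 120, 180, 150, 60]  # arbitrary units

# Private capacity sized to the average demand.
private_capacity = sum(hourly_demand) / len(hourly_demand)

# Whatever exceeds private capacity in each hour is rented.
burst = [max(0.0, d - private_capacity) for d in hourly_demand]

print(private_capacity)  # capacity the enterprise owns: 87.5
print(sum(burst))        # capacity-hours rented from the provider: 187.5
```

The enterprise owns far less than its peak (180 units here) and pays the provider only for the spike hours, which is the economic appeal of the hybrid model.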
The Cloud Backbone
The evolution of the cloud toward expanded enterprise IT utility is driving the creation of a true Data Center Without Walls, comprising multiple provider and enterprise customer data centers, among which “north-south” (user-to-machine) and particularly “east-west” (machine-to-machine, storage-to-storage, machine-to-storage) traffic is generated and flows. Therefore, a cloud backbone network interconnecting data centers is an integral component of the Data Center Without Walls.
East-west traffic may scale to large volumes and, in general, is sensitive to network latency and connection quality. Maintaining cloud service performance at consistently high levels requires that inter-data center traffic be carried over a network that both minimizes latency and reduces random or bursty frame losses to very low levels. However, maintaining sound economics requires that the network scales cost-effectively. This means avoiding over-dimensioning of the network, which is challenging given the time-varying and unpredictable traffic patterns on the cloud backbone.
Supporting Performance-on-Demand operations will become increasingly important as on-demand, enterprise-class customer applications scale in volume in the cloud, and as policy-driven, automated service and resource optimization practices proliferate. Performance-on-Demand requires a software-driven, and ultimately, a software-defined network. We’ll talk about this in a follow-up blog … watch this space!
Source: wired.com