Archives de catégorie : Cloud

Microsoft dumps Live Mesh in Windows Essentials 2012 with SkyDrive

On Tuesday, Microsoft announced that Windows Essentials 2012, a new version of its collection of small Windows applications for Windows 7 and Windows 8, was available to download. At the time, the company promoted the new versions of Movie Maker and Photo Gallery included in the collection. Other Windows Essentials programs, such as Mail and Messenger, do not appear to have received an update in this release.

However, Microsoft has quietly dumped one of the applications found in the previous versions of Windows Essentials. On an FAQ page on the Windows blog, the company admits that Live Mesh, the file synchronization and remote desktop app, is not a part of Windows Essentials 2012. Microsoft states:

If you have Windows Live Mesh installed, it will automatically be removed if you install the new Movie Maker or Photo Gallery (available as part of Windows Essentials 2012). Microsoft SkyDrive will be installed in place of Windows Live Mesh. To sync folders from the cloud to all of your PCs, you will need to install SkyDrive on all of your PCs or Macs.

So if you really want to keep Live Mesh, your only option is to keep using the previous release found in Windows Live Essentials 2011. Microsoft says, "… you cannot install and run both Windows Essentials 2012 (with the new Movie Maker or Photo Gallery) and Windows Live Mesh on the same PC."

Source: Windows blog

Pano System for Cloud

Pano System for Cloud provides the lowest-cost solution for delivering Google Chrome-based cloud desktops to your end users. Utilizing a fully centralized architecture, it provides a single management console to deploy, control, and secure both endpoints and cloud desktops.
Pano Logic’s field-proven Pano Zero Clients contain no CPU, no memory, and no software – removing all management and support overhead from the user desktops. With over 100,000 Pano Logic Zero Clients and Pano-powered Fujitsu Zero Clients already deployed for virtual desktops, Pano System for Cloud provides a new way to leverage your investment with even better economics and productivity.

Source : http://www.panologic.com/Pano-System/Cloud

Microsoft: the cloud is your future, and ours

Microsoft is doing what it can to promote cloud computing, focusing on the platform at the TechEd annual developer conference in Orlando, Fla., this week. In the event’s opening keynote, Server and Tools chief Satya Nadella told a sold-out crowd of 10,000 that the future of Microsoft is in the cloud.

In the month or so leading up to TechEd, Microsoft has made a fair number of cloud-related announcements. Some of this is out of necessity, because of customer demand, but a lot of it has to do with competitive pressures.

Microsoft finds itself squeezed by Google and Amazon, as both make their plays for the cloud. It could be argued that Google is one of the premier PaaS providers out there, while Amazon pretty much has the IaaS sector locked up. This leaves little space for Microsoft, which long ruled the traditional computing market with an iron fist.

That doesn’t faze Nadella, though; he believes Microsoft is uniquely positioned to make a play for the cloud.

“The operating system does two things: it looks after the hardware, and it provides a platform for applications. The modern datacenter and modern apps put more pressure than ever on infrastructure to become truly cloud-optimized, and that’s where Microsoft builds on our legacy with the OS to help our customers,” Nadella said.

Nadella pointed to Windows Server 2012 Release Candidate as a sign of the company’s seriousness in helping its customers adopt the cloud. The RC is being marketed alongside System Center 2012 as a way to embrace both virtualization and the cloud, both public and private.

The latest update to Windows Azure, and its move from a PaaS play to a combined IaaS and PaaS offering, is another facet of Microsoft’s increasing support for the cloud. Azure now permits users to run persistent virtual machines running both Windows and Linux, as well as to build virtual private networks that extend from on-premises infrastructure into the cloud. Obviously, Microsoft is attempting to take on Amazon Web Services here, which has a very strong presence in the IaaS market.

Finally, Nadella announced a new version of Windows Intune, which includes expanded mobile device management features and upgrade rights to the latest version of Windows as the company moves towards a fall release of Windows 8.

In an accompanying blog post, Nadella explains more about Microsoft’s cloud strategy. "With Windows Server 2012 and Windows Azure, we’ve taken everything we’ve learned from running datacenters and services at global scale and are delivering the next generation of operating systems — the ‘cloud OS’ — to help our customers seize the opportunities of the cloud", he wrote.

Source : betanews

Next-Gen Cloud Needs Data Center Without Walls

Image: Courtesy of Ciena

The expanding use of virtualization in enterprise data centers is reshaping both the economics and the nature of enterprise IT delivery, through physical resource consolidation and increasingly automated self-service support. The private cloud — where enterprise data center resource virtualization enables true IT-as-a-Service — can support all enterprise IT applications, irrespective of design, computing or I/O intensity, or mission-criticality.

Public cloud services, however, remain largely limited to either simple software-as-a-service (SaaS) applications or basic infrastructure services that are little evolved from legacy hosting practices. They often do not share virtualized data center assets among customers for resource efficiency, and connections typically are Internet-based. From an enterprise perspective, these factors tend to restrict public cloud utility to applications such as Web hosting, information archiving, and development and testing. The broader range of operational IT applications remains out of reach of the public cloud.

The next phase of cloud development must bridge the public and private cloud. The public cloud must become capable of supporting operational and mission-critical applications on virtualized infrastructures that mirror, and are connected to, the enterprise private cloud. Only then can the improvements in enterprise IT economics be extended, and only then can public cloud providers capture more of the enterprise IT budget. This next-generation cloud will require a new architecture: the data center without walls.

Why the Data Center Without Walls?

While virtualization and data center consolidation go hand in hand, many enterprises with virtualized IT architectures continue to employ multiple data centers. This practice is driven primarily by information resiliency requirements, addressed through multi-site storage replication. Moving from storage backup to distributed storage technologies enhances workload availability and performance by supporting the migration and distribution of computing virtual machines among multiple sites.

Cloud service providers also operate multi-data center architectures. This may be driven by a requirement to support high levels of information availability, requiring storage replication between provider data centers. Furthermore, since public clouds often support customers over a wide geographical area, provider architectures are more useful when their data centers are situated in multiple regions, better assuring consistently high application performance through user-to-data center proximity. With active-active storage distribution, fluid workload mobility becomes possible among the set of provider data centers within distance limits defined by the latency tolerances of processes and applications, enhancing computing service availability.

By permitting a significant degree of effective asset pooling among data centers, inter-data center workload mobility also allows for a reduction in total provider data center resource needs. Analyses we have conducted with our customers indicate that the Data Center Without Walls can deliver a reduction of up to 35 percent in total data center physical assets, significantly impacting data center capital and operating costs such as real estate, power, cooling, maintenance and administration overhead.

Finally, the seamless connection of private and public clouds, required broadly to support operational enterprise IT in the cloud, takes the Data Center Without Walls across the enterprise-provider data center boundary. Referred to by the industry as the hybrid cloud, this connection effectively enables the extension of the enterprise data center infrastructure into the provider cloud, such that the provider data center resources supporting any application may be “dialed” from zero to 100 percent of the total. Of particular importance, this allows enterprises to size their own data centers to support long-term average or minimum workloads, and simply to “rent the spike” from providers.

The Cloud Backbone

The evolution of the cloud toward expanded enterprise IT utility is driving the creation of a true Data Center Without Walls, comprising multiple provider and enterprise customer data centers, among which “north-south” (user-to-machine) and particularly “east-west” (machine-to-machine, storage-to-storage, machine-to-storage) traffic is generated and flows. Therefore, a cloud backbone network interconnecting data centers is an integral component of the Data Center Without Walls.

East-west traffic may scale to large volumes and, in general, is sensitive to network latency and connection quality. Maintaining cloud service performance at consistently high levels requires that inter-data center traffic be carried over a network that both minimizes latency and reduces random or bursty frame losses to very low levels. However, maintaining sound economics requires that the network scale cost-effectively. This means avoiding over-dimensioning of the network, which is challenging given the time-varying and unpredictable traffic patterns on the cloud backbone.

Supporting Performance-on-Demand operations will become increasingly important as on-demand, enterprise-class customer applications scale in volume in the cloud, and as policy-driven, automated service and resource optimization practices proliferate. Performance-on-Demand requires a software-driven, and ultimately, a software-defined network. We’ll talk about this in a follow-up blog … watch this space!

Source : wired.com

Top 10 Cloud Influencers, Thought Leaders

This morning I read a post in Wired about the 10 most influential people in the cloud.

Here is the list of those people:

Does anything strike you as odd?

There is not a single Microsoft representative on that list.

Nor is there any representative from Google, or from Apple…

After Orange's project, the French government funds the SFR and Bull cloud project

The French government has just backed a second cloud computing project in France. After the consortium formed by Orange and Thales, SFR and Bull will receive 75 million euros through the Fonds national pour la Société Numérique (FSN).
As part of the Andromède project, the government chose to support two competing cloud computing initiatives. Having approved the Orange and Thales pairing last April, the Fonds national pour la Société Numérique (FSN) will now join forces with SFR and Bull to invest 75 million euros in this project.
A joint company will therefore be created, in which SFR will hold 47% of the shares, Bull 20%, and the Caisse des Dépôts, via the FSN, 33%. The goal of this entity will be to "build and operate a trusted digital utility that will provide businesses and public administrations, from SMEs to large organizations, with a range of secure cloud computing services covering their IT resource needs, from the most routine to the most critical," the consortium explains.
According to SFR and Bull, the project represents a total investment of 225 million euros and the direct creation of around 400 new jobs.
In a statement, Jean-Bernard Lévy, CEO of Vivendi (SFR's parent company), explains that "to meet this challenge, we are proud to be building an innovative and open digital factory, housing reliable infrastructure where providers and users will find the resources they need. Our ambition is to bring together the best IT, telecom and security expertise so that cloud computing services can respond effectively and competitively to the constraints and objectives of information systems as a whole."
As a reminder, the government had left the door open to two competing cloud computing projects and had a total envelope of 135 million euros. Having already granted part of these funds to Orange and Thales, the Caisse des Dépôts et Consignations is once again placing its trust in a consortium so that the two can compete in the cloud computing market. The first commercial offerings should be launched by this summer.

Source : http://pro.clubic.com/

DevLabs: Microsoft Code Name Casablanca SDK for C++

Casablanca is a Microsoft incubation effort to support cloud-based client-server communication in native code using a modern asynchronous C++ API design.

Casablanca is a project to start exploring how to best support C++ developers who want to take advantage of the radical shift in software architecture that cloud computing represents.

Here’s what you get with Casablanca:

  • Support for accessing REST services from native code on Windows Vista, Windows 7, and Windows 8 Consumer Preview by providing asynchronous C++ bindings to HTTP, JSON, and URIs
  • A Visual Studio extension SDK to help you write C++ HTTP client side code in your Windows 8 Metro style app
  • Support for writing native-code REST for Azure, including Visual Studio integration
  • Convenient libraries for accessing Azure blob and queue storage from native clients as a first class Platform-as-a-Service (PaaS) feature
  • A consistent and powerful model for composing asynchronous operations based on C++ 11 features
  • A C++ implementation of the Erlang actor-based programming model
  • A set of samples and documentation

REST

With Casablanca, you get support for doing things like developing REST services for Azure, or accessing them from clients via an HTTP library, sending JSON data, accessing Azure blob and queue storage, and using TCP for flexible networking needs, all in a library that takes advantage of modern C++ developments.
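
To make the shape of the API concrete, here is a minimal sketch of an asynchronous GET request that parses a JSON response. It is written against the C++ REST SDK that grew out of Casablanca, so header, namespace and type names may differ slightly in this DevLabs incubation drop, and the endpoint URL is purely illustrative.

    // Minimal sketch: asynchronous GET + JSON parsing, Casablanca-style.
    // Header/namespace names follow the later C++ REST SDK; the incubation
    // release may differ slightly. The URL below is a made-up example.
    #include <cpprest/http_client.h>
    #include <iostream>

    using namespace web::http;
    using namespace web::http::client;

    int main()
    {
        http_client client(U("http://localhost:8080"));

        // Issue the request asynchronously and chain continuations on the result.
        client.request(methods::GET, U("/api/values"))
            .then([](http_response response)
            {
                return response.extract_json();      // task<json::value>
            })
            .then([](web::json::value body)
            {
                std::wcout << body.serialize() << std::endl;
            })
            .wait();   // block only because this is a tiny console sample
        return 0;
    }

Everything between the request and the final wait() runs as chained continuations rather than blocking a thread, which is the point of the asynchronous bindings listed above.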

Asynchrony

Casablanca also gives you a convenient model for composing asynchronous operations. Whether your application is compute-intensive or I/O-driven, its scalability is likely to require careful resource utilization. Asynchronous APIs are great for scalability, but can be very hard to use when all you have is C-level functionality. Fortunately, C++ 11 offers a whole new set of capabilities that can make dealing with asynchronous operations easy, and Casablanca takes advantage of that throughout.
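
As a rough illustration of what that composition looks like, the sketch below chains a continuation onto a background task using the pplx::task type exposed by the shipped SDK; the exact header and namespace in the incubation release may differ.

    // Sketch of composing asynchronous operations with task continuations.
    // pplx::task is the task type used by the C++ REST SDK; assume the
    // incubation drop exposes something equivalent.
    #include <pplx/pplxtasks.h>
    #include <iostream>

    int main()
    {
        auto work = pplx::create_task([]
        {
            return 6 * 7;                 // stand-in for expensive I/O or compute
        })
        .then([](int answer)              // runs only once the value is ready
        {
            std::cout << "result: " << answer << std::endl;
        });

        work.wait();   // a real service would keep composing instead of blocking
        return 0;
    }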

Actors

Another aspect of Casablanca is its implementation of the actor programming model, which has proven itself useful in building reliable and scalable systems. The C++ implementation stays close to the Erlang model; it’s obviously difficult to exactly mimic the model of a pure functional language in a library built with an imperative language that has pointers, but we’ve gotten pretty close.
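
Casablanca's actual actor API is not reproduced here; the standard C++11 sketch below only illustrates the underlying model: an actor owns a private mailbox and handles one message at a time on its own thread, so handlers never race on the actor's state.

    // Generic actor/mailbox illustration in standard C++11 (not the Casablanca API).
    #include <condition_variable>
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    class Actor {
    public:
        Actor() : done_(false), worker_([this] { run(); }) {}
        ~Actor() {
            send([this] { done_ = true; });          // poison-pill message
            worker_.join();
        }
        // A message is just a callable dropped into the mailbox.
        void send(std::function<void()> msg) {
            {
                std::lock_guard<std::mutex> lock(mutex_);
                mailbox_.push(std::move(msg));
            }
            ready_.notify_one();
        }
    private:
        void run() {
            while (!done_) {
                std::function<void()> msg;
                {
                    std::unique_lock<std::mutex> lock(mutex_);
                    ready_.wait(lock, [this] { return !mailbox_.empty(); });
                    msg = std::move(mailbox_.front());
                    mailbox_.pop();
                }
                msg();   // one message at a time: no races on actor state
            }
        }
        std::queue<std::function<void()>> mailbox_;
        std::mutex mutex_;
        std::condition_variable ready_;
        bool done_;
        std::thread worker_;
    };

    int main()
    {
        Actor logger;
        logger.send([] { std::cout << "hello from the actor" << std::endl; });
        return 0;   // ~Actor() lets pending messages run, then joins the worker thread
    }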

Note on Visual Studio 11 and Azure SDK

In this release of Casablanca, we do not provide Visual Studio 11 support for Azure, since the Windows Azure SDK does not yet officially work with Visual Studio 11 (see here). Also, we do not provide Visual Studio 2010 support for Azure on Windows 8, since the Azure SDK is not officially supported on Windows 8 (see here). We will support Azure bindings on Windows 8 and Visual Studio 11 in an upcoming release.

How to Get Casablanca

Try out Casablanca, an incubation effort to support cloud based client-server communication in native code.

Download Casablanca: Visual Studio 2010

Download Casablanca: Visual Studio 11

Get Started

  1. Download and install the Casablanca SDK for Visual Studio 2010 or Visual Studio 11.
  2. Download the Casablanca samples.
  3. Read the documentation to get started with Casablanca.

Requirements

  • Windows 7 or Windows 8 CP
  • Visual Studio 2010 SP1 or Visual Studio 11 Beta

Source : http://msdn.microsoft.com/en-us/devlabs/casablanca

Understanding The Different Roles In A Cloud Computing Setup

If you are migrating from an on-premises enterprise setup to a cloud-based solution, it is necessary to understand the different roles in cloud computing. In this post, I will cover the six major roles in a cloud setup.

Cloud Service Provider

This is the entity that provides the cloud service. The cloud service provider owns and controls the cloud computing platform. The services include SaaS (Software as a Service), PaaS (Platform as a Service), IaaS (Infrastructure as a Service) and iPaaS (Integration Platform as a Service). Based on the services provided, CSPs can be broadly categorized into three types:

  • Application provider – These are providers that directly provide you access to an application without you having to worry about the layers underneath. Thus, if you are running a mail application with Google, you don’t need to worry much about the server infrastructure, resources such as RAM or platforms. Examples include Dropbox, Salesforce.com, Google Apps and Microsoft Office 365.

  • Resource provider – These provide virtualization systems on top of their servers and let you buy resources such as RAM, computing cycles and disk space. Offerings from providers such as Rackspace Cloud and Amazon Web Services (AWS) typically fall under this category.

  • Infrastructure provider – These lease out servers and associated infrastructure from their datacenters. The infrastructure includes servers, storage, bandwidth and the datacenter itself (with power, space and personnel to staff it). Companies such as Rightscale provide you the complete infrastructure to set up your cloud service.

Cloud Consumer

This is the user that is consuming the cloud services. The “Cloud Consumer” could be one of the following:

  • Developers in your enterprise who are building the apps using the cloud infrastructure.

  • Office workers and end consumers who are accessing the storage and productivity applications.

  • IT support team that uses the cloud services such as cloud backups to supplement their resources.

Cloud Service Brokerage

A Cloud Service Brokerage (CSB) provides intermediation services between the consumer and the provider. CSBs provide three major services:

  • Aggregation services – integrating services from different cloud service providers.

  • Intermediation services – helping you identify the right service provider, and providing consistent billing and support.

  • Arbitrage services – getting good deals on what you pay for different cloud services.

Cloud Architects

These are the people who will help you design your cloud solution and develop the right cloud architecture for your needs. The architecture should incorporate your storage, security, computing and compliance requirements into a design that satisfies them.

Cloud Auditor

Security is a very critical aspect of any cloud setup. Since you might be storing confidential enterprise data on a server over which you have less control, it is important to put the right auditing and regulatory processes in place. Cloud auditors are third parties who provide independent assessments of your cloud setup and point out security vulnerabilities, data leaks and performance issues. There are five major types of audits that these companies might offer:

  • Data security audit – making sure that your enterprise data and those of your customers are not leaked out.

  • Regulatory compliance audit – these are audits that make sure your cloud installation satisfies all the major regulations, from the federal level (Sarbanes-Oxley, HIPAA) and state level (California’s data breach law, Massachusetts’ data protection law) to the industry-body level.

  • Performance and Reliability audit – this audit measures how well your setup holds up under various performance tests, which could include stress testing and manual testing to find bottlenecks.

  • DR/BC (Disaster Recovery and Business Continuity) audit – finds out how quickly you can recover from disasters and how much of your data you can salvage.

  • ROI (Return on Investment) audit – this checks the business justification of the cloud setup and makes sure it makes accounting sense.

Cloud Carrier

This is the provider of transport-level infrastructure for the cloud, connecting the cloud service provider and its customers. Although the telecom companies providing these services are assumed to operate at the lowest part of the stack, carriers are these days getting more aggressive about moving up the value chain, providing transport solutions tailored to the needs of various cloud providers and consumers.

By Balaji Viswanathan published in Cloudtweaks

Will Cloud Become the De Facto Standard for Computing?

"The recent TOSCA initiative has made interoperability for cloud computing closer than ever," observed Andrew Hillier, co-founder and CTO of CiRBA, in this exclusive Q&A with Cloud Expo Conference Chair Jeremy Geelan. "However, until players like Amazon and Google join in," Hillier continued, "it will be difficult for organizations to move from one cloud to the other without risks to their data and infrastructure."

Cloud Computing Journal: Agree or disagree? – "While the IT savings aspect is compelling, the strongest benefit of cloud computing is how it enhances business agility."

Andrew Hillier: Although savings and agility are both compelling benefits, it’s usually agility that’s realized first. This isn’t because it is a higher priority, but because it occurs earlier in the cloud adoption process. The push toward standardization and self-service can rapidly increase flexibility and decrease provisioning time, but can actually work against efficiency (much to the surprise of many cloud adopters). The resulting environments are difficult to manage, and many organizations end up with higher spend (for external clouds) or much lower density (internal clouds) than they originally envisioned. Fortunately, by adopting more sophisticated methods of planning and controlling these environments, workload placements and resource allocations can be safely optimized, eliminating over-provisioning once and for all and turning the cloud adoption process into the "win-win" that was originally targeted.

Cloud Computing Journal: Which of the recent big acquisitions within the Cloud and/or Big Data space have most grabbed your attention as a sign of things to come?

Hillier: Cisco’s recent acquisitions, including Newscale and Tidal, are very interesting. Converged infrastructure with a decent "built-for-cloud" management stack could change the face of the hardware market. Add some analytics that manage the complexity by simplifying on-boarding and performing more advanced capacity management, and organizations will be able to leverage these technologies extremely effectively. This type of approach is far superior to strategies that repurpose "old school" tools and frameworks, many of which were designed for physical environments, to attempt to cobble together a cloud ecosystem.

Cloud Computing Journal: In its recent "Sizing the Cloud" report Forrester Research said it expects the global cloud computing market to reach $241BN in 2020 compared to $40.7BN in 2010 – is that kind of rapid growth trajectory being reflected in your own company or in your view is the Forrester number a tad over-optimistic?

Hillier: I think what is interesting about the Forrester projections is the markets they see it happening in. Pure IaaS is projected to almost double and then contract slightly, which makes sense. The largest projected growth is in Virtual Private Clouds, which Forrester defines as a "hybrid business model of the highly standardized public cloud offerings provided over the public Internet and the more customized privately hosted solutions." Because this heavily overlaps with outsourcing and other remote hosting models in use today, this cloud growth will likely come at the expense of other segments, and may more resemble a "lateral shift" in hosting strategies than the explosive growth the numbers would suggest. In fact, one can argue that this growth should be attributed to the outsourcing market, and that the "cloud" prefix may even become meaningless in the next decade as it becomes the de-facto standard for computing.

Cloud Computing Journal: Which do you think is the most important cloud computing standard still to tackle?

Hillier: The recent TOSCA initiative has made interoperability for cloud computing closer than ever; however, until players like Amazon and Google join in, it will be difficult for organizations to move from one cloud to the other without risks to their data and infrastructure. The development of application and infrastructure templates will make the idea of public cloud computing more palatable to larger organizations, and the possibility of experiencing the true cost savings the cloud has promised more of a reality.

Source : cloudcomputing.sys-con.com