Read the Original Article at http://www.informationweek.com/news/showArticle.jhtml?articleID=221901196
With Amazon's EC2, Google's AppEngine, and now Microsoft's Azure, cloud computing looks a lot less like some catch-all concept in the distance and more like a very real architecture that your data center has a good chance of being connected to in the near future.
If that happens, more than the technology must change. The IT organization, and how IT works with business units, must adapt as well, or companies won't get all they want from cloud computing. Putting part of the IT workload into the cloud will require some different management approaches, and different IT skills, from what's grown up in the traditional data center.
These include strategy questions, such as deciding which workloads should be exported to the cloud, which set of standards you want followed for your cloud computing, and how you'll resolve the knotty issues of privacy and security as things move out to the cloud. And there's a big question of how, and how quickly, business units get new IT resources. Should they help themselves, or should IT remain a gatekeeper?
There are different vendor management skills. Staffs experienced in managing outsourcing projects will find parallels to managing work in the cloud, like defining and policing service-level agreements. But there's a big difference in that cloud computing runs on a shared infrastructure, so it's a less-customized deal. Some compare outsourcing to renting a house and the cloud to getting a room at a hotel.
With cloud computing, it may be more difficult to get to the root of any performance problems, like the unplanned outages we've seen this year of Google's Gmail and Workday's human resources apps. Monitoring tools are available to give the cloud customer insight into how well cloud workloads are performing, so customers aren't totally dependent on the say-so of a cloud vendor. But remotely monitoring app performance--seeing the response times and results an employee or customer actually experiences--may be a skill that IT staffs must still develop.
Many existing data center skills will apply to cloud work, in slightly modified form. Since clouds are highly virtualized environments, the x86 and virtual server expertise IT has built in recent years may transfer into creating the "virtual appliances" that are shipped off to run in the cloud.
But that also leads to a change--and a need for increased collaboration across disciplines. When constructing a virtual machine for use in the cloud, it may be critical for a system administrator, network manager, and information security officer to collaborate up-front on the design of specific VM types. These templates of servers--golden images--will become the guide used over and over as thousands of VMs are cloned from the pattern captured in the template.
In the past, these skills have been applied at different times in the provisioning of a server, with the security officer too often coming in at the end to inspect other people's work and impose any overlooked security measures. When virtual machines are cloned by the dozen, there's no opportunity to catch errors just before deployment; any mistake in the template is replicated with every clone. The three crucial disciplines must work together from the start.
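The idea behind up-front collaboration on golden images can be sketched in a few lines: each discipline's requirements get encoded into the template itself and validated once, so every clone inherits a vetted configuration. This is an illustrative sketch only--the class, field names, and policy checks are invented, not tied to any real virtualization API.

```python
# Sketch: a "golden image" template that bakes in requirements from
# system administration, networking, and security before any cloning.
# All names and policies here are hypothetical, not a real API.
from dataclasses import dataclass, field

@dataclass
class GoldenImage:
    name: str
    base_os: str
    cpu_count: int                                    # sysadmin: sizing
    memory_mb: int
    open_ports: list = field(default_factory=list)    # network: allowed ports
    firewall_on: bool = True                          # security: hard requirements
    patches_current: bool = True

    def validate(self) -> bool:
        """Each discipline's checks run once, on the template,
        instead of per-server at deployment time."""
        assert self.cpu_count >= 1 and self.memory_mb >= 512      # sysadmin
        assert all(p in (22, 80, 443) for p in self.open_ports)   # network policy
        assert self.firewall_on and self.patches_current          # security
        return True

    def clone(self, instance_id: int) -> dict:
        """Every clone inherits the vetted configuration."""
        self.validate()
        return {"id": f"{self.name}-{instance_id}", "ports": list(self.open_ports)}

web_template = GoldenImage("web", "linux", cpu_count=2, memory_mb=2048,
                           open_ports=[80, 443])
fleet = [web_template.clone(i) for i in range(1000)]  # clone by the thousand
```

The point of the design is that inspection happens on the pattern, not on each of the thousand servers stamped out from it.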
Understand, cloud computing isn't yet a reality at most companies. But that could change fast. This year, almost half of companies (46%) say they'll use or are likely to use cloud CPU, storage, or other infrastructure services, given the economy, according to an InformationWeek Analytics survey (see chart, p. 27). A year ago, less than a third (31%) had that positive view. For software as a service, 56% will use it or are likely to.
There are still plenty of doubters, who argue that variable computing capacity like Amazon's EC2 has its niche, but that legacy enterprise apps aren't leaving the data center and critical business data can't be sent to the cloud. But looking at the services being offered by Amazon, Google, and Microsoft--whose Azure cloud platform goes into full production Jan. 1--others see new, overpowering economies of scale. Listen and you can already hear authoritative voices saying cloud computing is changing the way they view IT--and how IT will be viewed at their companies.
A Michael Jackson Moment
Greg Taylor, senior systems engineer at Sony Music Entertainment, is responsible for the computing infrastructure behind Web storefronts for hundreds of performers. Earlier this year, he built surplus capacity for MichaelJackson.com and other leading musicians' stores. MichaelJackson.com, for example, could process transactions and record comments from 200 shoppers at a time, well beyond expected traffic levels.
You know what comes next. Upon the star's unexpected death June 25, the site was overwhelmed with people wanting to buy his music or simply commune with other grieving fans. "Our site became the water cooler for everyone wanting to remember Michael Jackson," says Taylor. More than a million people tried to access the online store over the next 24 hours. Many wanted to post comments but could not. The servers stayed up, but not everyone who wanted to find album information and background on Jackson's music could be served. Worse, many would-be purchasers were lost as traffic clobbered what was already "a very database-intensive" site, Taylor says.
Sony Music's top management understood Jackson's death had been an unexpected event but considered it unacceptable for people trying to reach the company's music sites--and spend money--to go unserved. The problem couldn't be solved the conventional way, by throwing more hardware at it: Sony Music had too many stores and no way to predict which artists would be hit next.
In response, Taylor rearchitected Jackson's and other popular artists' stores so that traffic can be split into two streams: for people trying to buy, and for those just seeking information. The transactions remain on the core store site hosted by dedicated Sony servers. During a traffic surge, visitors seeking album or background information may be shunted off to a matching, read-only site powered by servers in the Amazon EC2 cloud. Many companies share those servers, keeping their costs low, while there is always surplus capacity to handle individual store spikes.
Cloud servers under the Amazon agreement can scale up to as many as 3.5 million to 5 million visitors per day, Taylor estimates. In a big traffic spike, visitors still might not be able to immediately buy an album, but they're unlikely to go away miffed at not being able to get any information.
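The read/write split Taylor describes comes down to a routing decision at the front door: purchases always go to the dedicated store servers, while read-only lookups overflow to cloud replicas once traffic passes a threshold. A simplified sketch of that logic, with the hostnames and the threshold invented for illustration:

```python
# Sketch of the two-stream routing Sony Music used: purchases stay on
# dedicated servers; during a surge, read-only traffic overflows to
# cloud-hosted replicas. Hostnames and thresholds are illustrative only.
SURGE_THRESHOLD = 200   # concurrent sessions the core store handles well

def route(request_type: str, active_sessions: int) -> str:
    if request_type == "purchase":
        return "store.example.com"           # transactions always stay in-house
    if active_sessions > SURGE_THRESHOLD:
        return "readonly.cloud.example.com"  # shed read-only load to the cloud
    return "store.example.com"

# Normal load: everything on the core site.
assert route("browse", 50) == "store.example.com"
# Surge: browsers overflow to the cloud replicas, buyers do not.
assert route("browse", 100_000) == "readonly.cloud.example.com"
assert route("purchase", 100_000) == "store.example.com"
```

The design choice worth noting is that the database-intensive work (transactions) is protected by pushing the cacheable, read-only traffic--the bulk of a surge--onto shared, elastic capacity.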
"It changes the way you look at IT," Taylor says of the cloud option. It's no longer a question of having direct control all the time over the resource. It's rather a question of what needs to be under his immediate power in the data center versus what's suitable to be moved off to the "elastic," public cloud, where at a moment's notice he can "fire up 10 more servers."
In the future, any business that needs to both conduct transactions and serve large amounts of content may find itself adopting a similar solution. Transactions and customer data remain in-house, while read-only content that poses no threat to customer privacy or data center security is shipped off to the cloud. If traffic spikes can be offloaded this way, the enterprise can make do with a smaller data center requiring less capital expense.
It's the "magical world of cloud bursting," as Gordon Haff, a virtualization and cloud analyst at Illuminata, describes it. It's difficult to execute today, in part because no one quite trusts shipping off part of the workload of critical applications or customer data. There are also unresolved regulatory issues. For IT leaders, it's going to raise new questions about what's a core competency that must stay in the data center--managing personal data, perhaps, or transactions?--and what should go to a cloud.
Companies have already been making some decisions along those lines, by using software as a service for apps such as HR and employee benefits. "You don't become a market leader through better 401(k) administration," Haff says.
Software testing and quality assurance are two common early uses of Amazon's EC2. A variation is application-migration staging, where an app upgrade is tested in the cloud in a duplicate of the data center's production environment. As cloud computing becomes more commonplace, more testing and software development will likely occur in the cloud.
Ideally, developers can create Web apps using a cloud development platform for the environment in which that app will be deployed. IT then can skip the usually painful transition from developer source code to code that's ready for the deployment environment. Microsoft has picked up on the potential of this model and will offer TeamSystem Server in Azure--it has even reached out to PHP and Eclipse users to try to get them to use Azure development services, such as version control.
What's tougher is to decide which among your business production applications, if any, can move out as well.
Chris Steffens, chief technical architect at Kroll Factual Data, a supplier of credit rating reports and other financial analysis, is well acquainted with that problem. His firm's business consists of analyzing large amounts of financial data and preparing reports, and he copes with large spikes in data center activity. "We would very much like to play in the cloud," he says, but so far, he hasn't seen a way to do so.
For one thing, any customer data must meet an array of national and state privacy regulations, and a cloud service provider somewhere other than Kroll's Loveland, Colo., location would complicate compliance. "There's not a uniform set of guidelines on what standards and systems you can use to secure your data," he says.
There are some de facto cloud security standards--SAS 70 data center security audits, for example--but there are no data-handling standards for the CEO, CFO, and CIO to rely on. For now, sensitive data must remain on premises or be released only to partners known to be operating trusted systems.
Boundaries Will Be Blurred
Even if data handling could be resolved, there's an organizational roadblock: an entrenched division within IT between data center operations and development staffs, with each only partially heeding the interests of the other.
In operations, a system administrator typically gets to know an application and its server well, while a programmer learns network protocols, APIs, and coding languages. In cloud operations, the system administrator's role changes. The admin has to trust someone else to do the conventional work of directly managing a server, and to access one remotely needs programming skills: the SOAP or REST Web services protocols, plus enough PHP, Python, or another scripting language to deal with virtual machines in a distributed environment.
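What that shift looks like in practice: instead of logging in to a physical box, the admin scripts authenticated REST calls against a provider's API. The sketch below builds (but deliberately does not send) such a request; the endpoint URL and JSON body are invented for illustration, since every real provider defines its own API.

```python
# Sketch of the REST-style interaction a cloud-era admin scripts instead
# of managing a server hands-on. The endpoint and JSON shape are invented;
# real providers (EC2 and others) each define their own APIs.
import json
import urllib.request

def start_server_request(server_id: str, token: str) -> urllib.request.Request:
    """Build (but don't send) an authenticated REST call to start a server."""
    body = json.dumps({"action": "start", "server": server_id}).encode()
    return urllib.request.Request(
        "https://cloud.example.com/v1/servers",   # hypothetical endpoint
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = start_server_request("web-42", "secret-token")
assert req.get_method() == "POST"
assert json.loads(req.data)["server"] == "web-42"
```

Nothing here requires deep software engineering, but it is programming--which is the blurring of roles the article describes.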
"The system administrator as something distinct from programmers will collapse," predicts Jason Hoffman, founder and CTO of 6-year-old Joyent, a virtual data center provider. He maintains that the skills needed to run in the cloud are so different from those in the conventional data center that third-party providers such as Skytap, Elastra, and RightScale will manage workloads for their customers in the cloud, like converting a workload from physical to whatever virtual machine image is required by the customer's cloud of choice.
The line between system administrator and programmer is blurring at National Retirement Partners, a startup whose advisers help companies choose 401(k) retirement plans. Adam Sokolic, senior VP of operations, says he can design a tool that he knows a 401(k) plan adviser needs, and a system administrator, without sophisticated programming chops, can develop it for use with Salesforce.com CRM. The admin writes it in Apex, the programming language for Salesforce's Force.com application development platform.
Customizations let advisers do tasks such as integrate information on would-be customers from Microsoft Office and Outlook into Salesforce, and move customer data to mobile applications for smartphone access.
Sokolic steers this IT effort as a tech-savvy accountant. The ability to quickly add new tech tools is a key piece of how the company, No. 2 on Inc. magazine's list of the fastest-growing companies this year, hopes to add to its roster of 150 advisers. "We use Salesforce-based tools to recruit," he says.
Developers at Japan Post are finding they can get new tools out faster on cloud platforms, but first they had to learn agile application development. Japan Post, an insurance, banking, and postal service conglomerate with about 70,000 employees, built customer contact and accident-claim reporting apps on the Force.com platform.
It took about one-fourth the time and cost normally required to develop an app and deploy it on conventional infrastructure, says Yoshihiko Ohta, a senior general manager with Japan Post. A downside to Force.com is that developers can't make every customization they would like. But with core app logic and many interface functions pre-built, some apps can be developed in days. That calls for a more agile approach of quick builds and frequent business-user reviews, and Ohta recommends that teams on Force.com train developers in the agile process first.
Cloud computing could move faster than many business technology operations--the business-unit users and IT teams--are prepared for. That may require changes in policies for IT approvals, project management, portfolio management, and even IT budgets.
When a business-unit manager requests server capacity today, it's often a six-week endeavor: the server must be ordered, unpacked, configured, and deployed. With virtualization, a new server can be spun up from a template--better yet, from a golden master image pulled out of storage. Management software from VMware, Citrix Systems, and Virtual Iron (now part of Oracle) provides a lab manager front end, usually a Web portal, where end users select the type of server they want and, if the system allows it, self-provision it.
Chargeback systems can automatically bill departments for the resources the users consume. Add in the variable capacity of external clouds, and it's easy to see how one IT headache goes away--and another whopper could quickly take its place. IT leaders must learn to set up systems so that users are given rights to provision servers and access resources that match their roles.
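The combination of role-based provisioning rights and automatic chargeback can be reduced to two rules: a request outside the user's role is denied outright (no approval queue), and every granted request is billed to the requester's department. A toy sketch, with the roles, server types, and hourly rates all invented for illustration:

```python
# Sketch of role-scoped self-provisioning with chargeback: users may only
# spin up server types their role allows, and every grant is billed to
# their department. Roles, prices, and departments are illustrative only.
ROLE_RIGHTS = {"developer": {"small"}, "qa": {"small", "large"}}
HOURLY_RATE = {"small": 0.10, "large": 0.40}   # dollars per hour, invented

ledger: dict = {}   # department -> accumulated charges

def provision(user_role: str, dept: str, vm_type: str, hours: int) -> bool:
    """Grant and bill a server request, or deny it based on role."""
    if vm_type not in ROLE_RIGHTS.get(user_role, set()):
        return False   # outside the role's rights: denied, no approval queue
    ledger[dept] = ledger.get(dept, 0.0) + HOURLY_RATE[vm_type] * hours
    return True

assert provision("developer", "marketing", "small", 100)    # allowed, billed
assert not provision("developer", "marketing", "large", 1)  # beyond role's rights
assert abs(ledger["marketing"] - 10.0) < 1e-9               # $0.10 x 100 hours
```

The policy lives in the tables, not in a human approver--which is what lets provisioning keep pace with 60-second server spin-up.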
If the approval process for new servers doesn't match this speed, businesses will miss out, and IT risks a new form of user backlash. "The business processes need to reflect what virtualization is capable of doing," analyst Haff says. "If you can spin one up in 60 seconds, that will do no good if it still requires three weeks of approvals."
What's Inside, What's Outside
The last change we'll point out is that the line between server virtualization inside a company's data center and public cloud computing from the likes of Amazon will also blur.
The skills that data center pros have honed around x86 server virtualization won't be wasted as they push some capacity to public clouds. For the foreseeable future, moving a workload from inside a data center to a public cloud via live migration will take deep knowledge--whether the move is taking place between like servers, down to the specific chipsets. And if you're a VMware user in-house, you may need to learn how to convert images to Amazon's AMI (Amazon Machine Image) format or Microsoft's VHD format.
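Format conversion of this kind is often scripted around a disk-image conversion tool such as qemu-img, where "vpc" is the output-format name for VHD. The sketch below only builds the command line rather than running it; the file names are placeholders, and note that producing a bootable AMI involves additional Amazon-specific packaging beyond a raw disk conversion.

```python
# Sketch: assembling a qemu-img command to convert a VMware disk (VMDK)
# to Microsoft's VHD format ("vpc" in qemu-img's format naming).
# File names are placeholders; the command is built, not executed.
def vmdk_to_vhd_cmd(src: str, dst: str) -> list:
    return ["qemu-img", "convert", "-f", "vmdk", "-O", "vpc", src, dst]

cmd = vmdk_to_vhd_cmd("web.vmdk", "web.vhd")
assert cmd[:2] == ["qemu-img", "convert"]
```

In practice such a helper would be wrapped with validation and handed to subprocess, but the point is that image-format fluency becomes an IT skill in its own right.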
VMware, Citrix, and Microsoft are all vying to take the lead in x86 virtualization, and in doing so they will struggle mightily to make it easier for companies to manage hundreds or thousands of servers through a virtualization management layer. Those management tools will let IT teams run a "cloud center" within the data center with fewer people and less energy. Part of the payoff of cloud computing will come inside the data center as the skills gained permit a "private" cloud to operate as part of the data center with a greater degree of automation than before. Such a private cloud will need to employ virtual machine load balancing, which may include live migrations, and require IT teams to develop a sixth sense for when a server, chugging along just fine, is about to be overworked by its multiple-VM load.
Then the challenge is to make the new combination of public cloud and private cloud work, even when the two environments don't start out as perfectly compatible. Volantis Systems is a U.K. company that produces the Ubik.com server to let consumers and small businesses create free Web pages designed for the small screens of mobile phones. Major telecom carriers including T-Mobile and Norway's Telenor use Volantis' services to build large information sites about their products and services. Telenor uses it for its online store. Rapid growth threatened to outstrip Volantis' storage capacity for hosted services.
Volantis CEO Mark Watson wanted to use cloud storage but ran into the problem that his systems, like many in telecommunications, are Unix-based. His IT staff had little experience dealing with the x86 Dell servers that make up the Microsoft Azure cloud. Working with outsourcer Infosys, Volantis created APIs between its hosted store creation systems and Azure in two weeks. Now it's storing customer sites in the cloud because it's more scalable. Volantis is an early test user of Azure, Microsoft's cloud service that goes into full production in January, with customer billing starting in February.
Microsoft is about to expand what can be done in the cloud by not just opening up computing capacity, but also by placing its development tools in its Azure cloud and building out services that make it easy to launch new applications there. Salesforce, as National Retirement Partners' experience demonstrates, is expanding the platform used by its applications into an easier-to-use cloud platform where those apps get customized and new ones integrated alongside them.
Cloud computing is evolutionary in many ways, in that it frequently builds on development and deployment techniques already familiar to IT organizations. If anything, it removes some of the old obstacles to deploying applications that can scale out to large numbers of users.
At the same time, cloud computing adds its own layer of complexities to master. Learning to work with the cloud, and fine-tuning the IT organization and its policies to the issues the cloud raises, isn't necessarily a requirement this year, depending on your company's adoption. But that distant form in the firmament is taking on concrete shape. And it's looking like a very real, future extension of the data center.