Provisioning an application correctly is difficult. How can a CIO predict how much hardware to buy and provision? In practice, IT managers buy enough hardware to cover high-load scenarios, which means high CAPEX and low utilization rates. Before the “Cloud Computing Era,” planning hardware purchases around high-load scenarios was the only option. Today, however, organizations can use a cloud bursting model to significantly reduce CAPEX while increasing the utilization rates of their datacenters.
What is Cloud Bursting?
Cloud bursting is a deployment model in which an application runs in an organization’s datacenter and bursts into a public cloud when the demand for computing capacity spikes. As mentioned, the advantage of such a hybrid environment is that organizations pay only for extra compute resources when they are needed. One can compare this model to a family that has a single extra bedroom and is hosting, for a single night, seven people. Would it make sense to build two more bedrooms for hosting all seven people for a single night? The answer is probably not. It would seem more logical to host two people in the extra bedroom and the rest of the group in a nearby hotel.
That’s exactly the idea of cloud bursting – handle spikes in capacity more efficiently and more effectively.
Cloud Bursting Considerations
As mentioned in a previous post, some organizations cannot move computing resources and data processing to public clouds because of regulatory and strict security constraints, so these organizations will not be able to deploy a cloud bursting model. Retail is a good example. Online retail stores are known to experience peaks in demand during the holiday shopping season, which would seem to make retailers prime candidates for a cloud bursting model. That said, public cloud service providers do not necessarily offer a PCI DSS-compliant environment, so retailers could be putting sensitive data at risk by moving some of it to a public cloud.
When hosting an application in two different datacenters there are risks of incompatibility. Cloud computing service providers offer virtualization options and management tools that make it possible to burst computing power out of the datacenter. That said, these services often “force” organizations to maintain a homogeneous, provider-specific IT environment that they may not be able to support.
Application developers and architects have always designed their systems to be data-center-resident. No one envisioned that applications would dynamically expand to the cloud. But now cloud computing is becoming an integral part of application design. Software architects now ask: how will data be managed? Where will the cloud get its input data, and where should it store its output data? There are several approaches to this problem: some rely on independent clusters, some on pre-positioning of data, and there are others. But each of these approaches is difficult to implement if the application was not initially designed to run in a hybrid environment.
As of today, most applications are still data-center-centric in both design and implementation.
Hybrid Clouds by Cloud Providers
Leading cloud computing providers recognize the potential of hybrid environments and are therefore offering relatively easy-to-configure, highly secure cloud computing services.
Amazon offers its customers a Virtual Private Cloud (VPC). The idea is that customers can use a secured VPN tunnel to move computing and data resources between their own server farms and Amazon’s. This added security layer comes at an increased cost, but it also satisfies the regulatory restrictions that make cloud bursting a viable option.
GoGrid offers a “Datacenter Expansion Solution” that allows organizations to extend their datacenters by connecting them to GoGrid’s cloud infrastructure via a secure two-way IPsec VPN tunnel. According to GoGrid, the service includes free F5 hardware load balancers, web servers, database servers, virtual and physical infrastructure, cloud storage, a 1 Gbps Internet connection and off-site backup.
Rackspace offers “RackConnect,” a combined traditional and cloud hosting solution that provides a centrally defined network security policy manager and increased security over an encrypted VPN tunnel or a private link.
Server-side enterprise applications are often architected with input queues that collect incoming requests and turn them into tasks, worker threads that listen on these queues, retrieve tasks and process them, and output queues that hold results for additional processing. A heavily used application, one that sees spikes in incoming requests, has to distribute these tasks across several application servers or it will fail.
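The input-queue/worker/output-queue pattern described above can be sketched in a few lines of Python. This is a minimal, single-process illustration (the task itself, doubling a number, is a placeholder), not the architecture of any particular application server:

```python
import queue
import threading

def process(task):
    # Placeholder for real business logic: here we just double the input.
    return task * 2

in_q = queue.Queue()    # collects incoming requests as tasks
out_q = queue.Queue()   # keeps results for additional processing

def worker():
    # A worker thread listens on the input queue, retrieves tasks,
    # processes them, and pushes results to the output queue.
    while True:
        task = in_q.get()
        if task is None:          # sentinel value: shut this worker down
            in_q.task_done()
            break
        out_q.put(process(task))
        in_q.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for i in range(10):               # enqueue ten incoming requests
    in_q.put(i)
in_q.join()                       # wait until every task is processed

for _ in workers:                 # stop the worker pool
    in_q.put(None)
for w in workers:
    w.join()

results = sorted(out_q.get() for _ in range(10))
print(results)  # prints [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

In a real deployment the queues would be a shared message broker rather than in-process objects, which is exactly what lets additional application servers, local or cloud-based, attach to the same stream of tasks.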
The question, then, is: how can this model be architected to cloud burst?
One option is to add a manager component that runs as an independent thread or even as a separate process and constantly watches incoming requests. When it becomes clear that the system is reaching maximum processing capacity, the manager invokes the cloud provider’s API to deploy new virtual application servers. Each server is preconfigured with a worker role that is effectively an image of the original application server. When a cloud server boots, it issues an “I’m alive” message that tells the application’s load balancer to start directing new incoming requests to it. From there the mechanism works the same way: the cloud server has its own incoming task queue, its own worker threads and its own output queue. It is also monitored by the manager component, which decides whether to start more instances or, conversely, shut instances down when capacity demands drop.
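The manager’s scale-out/scale-in decision can be sketched as a simple threshold policy. The `CloudAPI` class below is a hypothetical stand-in for a real provider API (its method names, the image identifier, and the threshold values are all illustrative assumptions, not any vendor’s actual interface):

```python
class CloudAPI:
    """Hypothetical cloud provider API; a real one would make HTTP calls."""
    def start_server(self, image_id):
        print(f"booting instance from image {image_id!r}")
        return object()           # opaque handle to the new instance
    def stop_server(self, instance):
        print("shutting instance down")

class Manager:
    """Watches queue depth and bursts into the cloud past a threshold."""
    def __init__(self, cloud, image_id, high_water=100, low_water=10):
        self.cloud = cloud
        self.image_id = image_id
        self.high_water = high_water   # burst out when backlog exceeds this
        self.low_water = low_water     # shrink when backlog falls below this
        self.instances = []

    def tick(self, queue_depth):
        # Scale out: local servers are reaching maximum processing capacity.
        if queue_depth > self.high_water:
            self.instances.append(self.cloud.start_server(self.image_id))
        # Scale in: demand has dropped, release a cloud instance.
        elif queue_depth < self.low_water and self.instances:
            self.cloud.stop_server(self.instances.pop())

manager = Manager(CloudAPI(), image_id="app-server-image")
for depth in [5, 150, 200, 8, 3]:    # simulated queue-depth samples
    manager.tick(depth)
print(len(manager.instances))  # prints 0
```

A production manager would also need hysteresis or cooldown periods so that short-lived spikes do not cause instances to flap up and down, but the core decision loop is the same.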
As mentioned above, there is still a serious question of how to handle persistent data produced by the various cloud and non-cloud application servers. I’ll be happy to elaborate if readers have questions.