Private Cloud is an Evolution of Dedicated Servers, not Cloud
Tuesday May 1, 2018
There are a variety of computing workloads that support successful business operations. If we were to maximize resource usage for each of these workloads, we would end up with a vast assortment of computer chassis, hard drives, RAM chips, and a host of other components. And although this assortment would cover current needs, it would not keep up with business growth (or shrinkage), nor would it be economical to maintain.
But that is where many on-premises businesses find themselves: a large collection of mismatched gear that was perfect when purchased but has fallen behind in covering business needs economically. Unfortunately, when many of these businesses turn to the cloud, they find that the over-standardization of the market leaves them with fewer options than their needs dictate, and at a higher price tag.
Private cloud was designed to address the needs of businesses in these situations. Private cloud is an efficient methodology for defining workload space within a fixed-cost environment. That said, private cloud is not found within the public cloud business model. As many analysts and industry insiders have noted, private cloud is almost the antithesis of public cloud, because it confines VMs to a much smaller, fixed environment instead of an ever-expanding one. In fact, some analysts have gone so far as to reject the private cloud operating model completely.
So why is it that many businesses are finding more success with private cloud than with public cloud?
What is a workload?
Workload is a very generic industry term that means an independent collection of code, a service (or app), or a similarly packaged process. A defining factor in the definition, especially when looking at today’s infrastructure technologies, is the independent nature. Can I pick up this service, as is, off the current server and run it on a different one? In a cloud environment, would I be able to move the service from one VM to the next?
A few examples of computing workloads (cloud or otherwise) include batch, database, mobile application, backup, website, and analytics workloads.
A batch workload, as an example, involves processing large volumes of data and can be run off-hours at scheduled intervals. Batches include data reconciliations, audits, and system syncing. These workloads rely on predetermined scripts, access to the data, and a pool of compute and memory (whether that pool is fixed, as on a full server, or dynamic, as in the cloud, is irrelevant). As long as the new server has the same access to the data and systems involved, those scripts can be picked up and moved to it.
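To make that independence concrete, here is a minimal sketch of a portable batch reconciliation job. The environment variable names and the use of SQLite are illustrative assumptions, not part of any particular stack; the point is that the only host-specific inputs arrive through the environment, so the same script can be scheduled on a dedicated server, a VM, or a cloud instance that has access to the same data.

```python
# Sketch of a portable batch reconciliation job (hypothetical names and paths).
import os
import sqlite3  # stand-in for whatever data stores the batch actually reconciles

SOURCE_DB = os.environ.get("RECON_SOURCE_DB", "/data/source.db")  # assumed path
TARGET_DB = os.environ.get("RECON_TARGET_DB", "/data/target.db")  # assumed path

def row_counts(path: str) -> dict:
    """Count rows per table -- a trivial stand-in for a real reconciliation step."""
    conn = sqlite3.connect(path)
    try:
        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        return {t: conn.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        conn.close()

if __name__ == "__main__":
    source, target = row_counts(SOURCE_DB), row_counts(TARGET_DB)
    mismatches = {t: (source.get(t), target.get(t))
                  for t in set(source) | set(target)
                  if source.get(t) != target.get(t)}
    # A cron entry such as `0 2 * * * python3 recon.py` would run this off-hours.
    print("mismatched tables:", mismatches or "none")
```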
The Evolution of Dedicated Servers
The reason more businesses are finding success with private cloud is that private cloud is not the evolution of public cloud; it is the evolution of dedicated servers.
Dedicated server environments grew in popularity as the need for root access, dedicated static IP addresses, and dedicated resource pools increased. In the early days of hosting, both root access and static IPs were firmly out of reach in a shared environment. The unfortunate consequence was that those with smaller workloads who required either a dedicated IP or root access had to move to a dedicated server. These scenarios helped push forward VPS and, later, cloud hosting.
On the other end of the spectrum, many dedicated server users had multiple workloads to run, and many placed them on the same server. Although this helped both from a cost standpoint and by reducing complexity, it also increased the possibility of performance issues (compute and storage bottlenecks) as well as security and business continuity problems. The fix was to purchase multiple servers, which increased cost but solved the performance issues.
Although cloud was an answer to these problems, it wasn’t always the most economical option, even compared to purchasing multiple dedicated servers.
Cloud pricing is based on overselling the hardware. If you took 10 VMs from any number of cloud hosts and compared their price to an equal measure of server hardware purchased from a dedicated hosting provider, you would find that, without automated processes for spinning VMs up and down, the cloud host’s pricing was greatly inflated.
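As a back-of-the-envelope illustration, with purely hypothetical placeholder prices rather than real quotes from any provider, the shape of that comparison looks something like this:

```python
# Hypothetical numbers for illustration only -- the point is the math, not the prices.
vm_price_per_hour = 0.10          # assumed cloud price for one VM of a given size
hours_per_month = 730
vm_count = 10

dedicated_server_monthly = 300.0  # assumed price for hardware able to host 10 such VMs

always_on_cloud = vm_count * vm_price_per_hour * hours_per_month
print(f"cloud, VMs left running 24/7: ${always_on_cloud:.0f}/mo")
print(f"dedicated hardware:           ${dedicated_server_monthly:.0f}/mo")

# The cloud bill only matches the dedicated price if automation keeps the VMs
# running no more than this fraction of the time:
print(f"break-even utilization: {dedicated_server_monthly / always_on_cloud:.0%}")
```

Without automation that actually powers VMs down, the always-on cloud figure is the one you pay.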
Now, if we apply what we learned from cloud operations to the dedicated server world, we find something remarkable. The ability to take your current server configuration and streamline it so that each workload receives the proper amount of resources is game-changing in the dedicated server world. It means consolidation of servers. It means real flexibility in provisioning the right-sized resources for each workload. And it gives DevOps engineers and sysadmins the ability to automate provisioning across their servers based on a set of predetermined criteria.
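A rough sketch of what that criteria-driven automation can look like is below; the host names, sizes, and the simple placement rule are illustrative assumptions, not any specific tool’s API.

```python
# Criteria-driven placement: given fixed hosts and a workload request, pick the
# first host whose free CPU and RAM satisfy the request.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpus: int
    ram_gb: int
    allocated: list = field(default_factory=list)  # (cpus, ram_gb) per placed VM

    def free(self):
        used_cpu = sum(c for c, _ in self.allocated)
        used_ram = sum(r for _, r in self.allocated)
        return self.cpus - used_cpu, self.ram_gb - used_ram

def place(hosts, cpus, ram_gb):
    """Place a VM on the first host meeting the criteria, or return None."""
    for host in hosts:
        free_cpu, free_ram = host.free()
        if free_cpu >= cpus and free_ram >= ram_gb:
            host.allocated.append((cpus, ram_gb))
            return host.name
    return None

hosts = [Host("node-a", cpus=16, ram_gb=64), Host("node-b", cpus=32, ram_gb=128)]
for workload, (c, r) in {"database": (8, 32), "web": (4, 8), "batch": (16, 64)}.items():
    print(workload, "->", place(hosts, c, r))
```

A real deployment would pull free-capacity numbers from the hypervisor rather than tracking them in memory, but the decision logic stays the same.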
With Dedicated Private Cloud, a user can take a dedicated server environment and carve it up into the VMs necessary to handle their current workloads, with no need to pay for overhead or per-VM licensing costs.
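A minimal sketch of that carving-up step, assuming an illustrative 32-core, 256 GB server and made-up workload sizes, might look like this:

```python
# Carving one fixed server into right-sized VMs (all sizes are assumptions).
HOST = {"cpus": 32, "ram_gb": 256}

# Planned VM layout: each workload gets just the resources it needs.
plan = {
    "database": {"cpus": 12, "ram_gb": 128},
    "website":  {"cpus": 8,  "ram_gb": 48},
    "batch":    {"cpus": 8,  "ram_gb": 64},
}

used_cpu = sum(vm["cpus"] for vm in plan.values())
used_ram = sum(vm["ram_gb"] for vm in plan.values())

assert used_cpu <= HOST["cpus"] and used_ram <= HOST["ram_gb"], "plan exceeds the fixed host"
print(f"headroom: {HOST['cpus'] - used_cpu} vCPUs, {HOST['ram_gb'] - used_ram} GB RAM")
# Resizing a workload is a change to the plan, not a new server purchase.
```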
Whereas public cloud is amorphous, with nigh-limitless resource pools, private cloud is more like slapping a perfectly formed organizer on your resources. Some companies need the ability to grow and shrink in seconds, or the lowered cost of operating a few cloud servers. Most companies, however, just need a way to organize their workloads, providing each with the right balance of resources to keep the engine running at peak efficiency.
Learn more about the GigeNET cloud or receive a free consultation.