February 16, 2016
FedEx, the shipping conglomerate, started with several converted business jets. These small planes flew packages quickly to multiple destinations. However, as demand increased, the small aircraft proved uneconomical. That is, at scale it is less expensive to consolidate packages onto fewer, larger aircraft than to keep adding smaller ones to the fleet.
Not too long ago, most internet startups either purchased or rented servers. A newly formed company would rent or co-locate a few servers, set them up, and pay a third party to manage them. However, much like shipping on small airplanes, this model did not scale easily. Scaling up required more server purchase agreements, time to bring the new machines online, and many hours of setup.
Running on bare metal was optimal for performance. However, any company expecting an increase in traffic needed to rent or buy more machines than it usually used, just to ensure that a quick spike in visitors would not break the site. Couple this with the large expense of co-location or maintaining a data center, and costs quickly skyrocketed.
The late 1990s were boom years for the now-defunct Sun Microsystems. This maker of high-end servers and workstations sold many of the machines that powered the early years of the commercial internet. Oracle purchased the company in 2010 after several years of losses. The reason for those losses was a change in how companies hosted web applications.
Virtualization was the first big change. Instead of purchasing multiple small to mid-range servers, companies bought fewer big ones. Then, running virtualization software, they would carve the big server into many smaller ones inside the same box. For Sun, this meant fewer support contracts and far fewer service sales.
A virtual server is a complete server implemented in software, allocated a finite amount of RAM, storage, and processor cores on the host machine. The problem is that it must be configured to a fixed size and, once set up, cannot be easily changed. For example, a four-core machine may have four virtual servers running on it. If you need a fifth one, then it is time for a second machine.
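The fixed-allocation arithmetic above can be sketched in a few lines of Python. The function and its numbers are illustrative only, not drawn from any particular virtualization product:

```python
# Illustrative sketch: classic virtualization carves a host into
# fixed-size virtual servers, so capacity is a hard ceiling.

def hosts_needed(total_vms, cores_per_vm, cores_per_host):
    """Return how many physical hosts are required when every VM
    gets a fixed, dedicated slice of the host's cores."""
    vms_per_host = cores_per_host // cores_per_vm  # fixed partitioning
    # Round up: a partially filled host still counts as a whole machine.
    return -(-total_vms // vms_per_host)

# A four-core host split into four one-core virtual servers fits on one box:
print(hosts_needed(4, 1, 4))  # -> 1
# The fifth VM forces the purchase of a second physical machine:
print(hosts_needed(5, 1, 4))  # -> 2
```

This is exactly the step function that made pre-cloud capacity planning expensive: demand grows by one unit, but supply can only grow by a whole machine.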
In 2008, a company called Slicehost was acquired by Rackspace. Slicehost changed hosting forever by allowing people to rent virtual server space in increments and later change the size without having to move code. Thus, with the click of a mouse, one could expand or contract a server to meet current demand.
Many years ago, goods were manually loaded onto ships in containers of various sizes and types. Then, in 1956, Malcom McLean launched the modern shipping container: a standard-sized steel box that could be loaded by machine onto a ship, truck, or train, significantly reducing the time and cost involved in shipping goods.
Today, goods are shipped in standard-sized containers that are loaded onto a ship, sailed to a destination, and then transferred to a waiting truck or train for delivery to the final location. It is a very efficient system that, for better or worse, has made international trade much less expensive.
In 2013, dotCloud, a platform-as-a-service company, introduced an interactive demo of Docker, its portable container system. A month later, 10,000 developers had downloaded the software. The reason is that it solves the major problem of dependencies: code can be written and deployed to a Docker instance without concern for the underlying operating system.
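As a concrete illustration, a minimal Dockerfile packages an application together with its dependencies, so the resulting image runs unchanged on any host with a Docker engine. The base image, file names, and command below are hypothetical placeholders, not taken from the article:

```dockerfile
# Hypothetical example: the application files and versions are placeholders.
FROM python:3

# Copy the application and its declared dependencies into the image.
WORKDIR /app
COPY requirements.txt app.py /app/
RUN pip install -r requirements.txt

# The container carries its own runtime, so the host OS no longer matters.
CMD ["python", "app.py"]
```

Building and running this image produces the same environment on a developer laptop, a QA box, or a production cluster, which is the dependency problem the paragraph above describes.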
Furthermore, Docker containers can be moved from one infrastructure to another without having to change code or manually build a server. However, it is not a proverbial free lunch. There are performance issues, potential security threats, and plenty of growing pains as the software strives to meet the expectations of a very jaded group of DevOps professionals.
Despite the shortcomings, companies such as Red Hat and Amazon are waving the container banner. The reason? It is currently the best way to separate code from infrastructure while ensuring that scaling an application is as easy as launching more containers. Not perfect, but far easier than building new servers, virtual or otherwise. It also makes it quick to spin up development, QA, and production environments.
Even internet giants such as PayPal and Yelp are using containers with publicized success. The reason such companies would risk performance issues and security threats is the ease of deployment. Sure, C code running on bare metal is the fastest, but that comes with its own problems.
In closing, containerization has come a long way in a short amount of time. I suspect it will continue to improve, as so many big tech companies are investing in it. Eventually it will become a standard, and the current list of negatives will get shorter. However, just like every technology, it is not for everyone, and some applications are still best suited to bare-metal performance.