History of Cloud Computing

The concept of cloud computing came into existence in the 1950s with the implementation of mainframe computers, accessible via thin/static clients. Since then, cloud computing has evolved from static clients to dynamic ones, and from software to services.

Some accounts trace the birth of the term to 2006, when large companies such as Google and Amazon began using “cloud computing” to describe the new paradigm in which people are increasingly accessing software, computer power, and files over the Web instead of on their desktops.

Others trace the term back a decade earlier, to late 1996, when Netscape’s Web browser was the technology to be excited about and the Yankees were playing Atlanta in the World Series. Inside the offices of Compaq Computer, a small group of technology executives was plotting the future of the Internet business and calling it “cloud computing.”

Whatever the origin of the term, the technology itself was a gradual evolution that started in the 1950s with mainframe computing:

When we start to think about when the cloud was invented and explore cloud computing history, we have to venture back almost 70 years. Multiple users could access a central computer through dumb terminals, whose only function was to provide access to the mainframe. Because of the cost of buying and maintaining mainframe computers, it was not practical for an organization to provide one for every employee. Nor did the typical user need the large (at the time) storage capacity and processing power that a mainframe provided. Providing shared access to a single resource was the solution that made economic sense for this sophisticated piece of technology.

In the 1950s and 1960s, computers were enormous, expensive, and only a reality for corporations and large organizations like universities. This was the age of the mainframe. Each mainframe was a multiuser computer — massively powerful by the standards of the time — which human operators interacted with via a terminal. Throughout this period and into the 1970s, the way operators interacted with computers evolved from punch cards, through teletype printers, to primitive screen terminals that were the ancestors of today’s command line terminals. Mainframe clients had almost no computing power: they were simply interfaces for the mainframe computer which could be located a long way from the operator, connected over a dedicated network.

The Birth of the Internet:

At the tail end of the 1960s, DARPA was hard at work on ARPANET — a packet switching network that was the proving ground for the principles and technologies that power web services on the internet today.

The internet as we know it was still a decade or more in the future. There was no web and no email, but as ARPANET and its successors developed through the 1970s, joining together institutions and corporations that used mainframes and minicomputers, still connected to users via terminals, we begin to see something that almost bears a resemblance to modern cloud computing.

In the 1970s, the concept of virtual machines (VMs) was created:

Today’s elastic compute cloud services wouldn’t be possible without the virtualization that allows us to run many virtual servers on a single physical server. The first glimmerings of virtualization appeared at IBM in the 1970s. Using the VM operating system, mainframe owners could run virtual machines, much as we do today. Interestingly, although it was first released in 1972, the VM operating system is still in use today by companies with mainframes. It’s often used to run virtual machines with Linux or a commercial Unix variant.

Using virtualization software such as VMware, it became possible to run one or more operating systems simultaneously, each in its own isolated environment. Complete virtual computers could be executed inside a single piece of physical hardware, which could itself run a completely different operating system. The virtual machine (VM) operating system took the 1950s’ shared-access mainframe to the next level, permitting multiple distinct computing environments to reside on one physical machine. Virtualization became a driving technology and was an important catalyst in the evolution of communication and information.
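
As a modern analogue of that idea, here is a minimal sketch in Python, assuming a Linux host running KVM/QEMU with the libvirt Python bindings installed. The connection URI and any guest names are host-specific assumptions, not part of the history above; the sketch simply lists the isolated guest operating systems sharing one physical machine.

    import libvirt  # requires the libvirt-python bindings and a libvirt daemon on the host

    # Connect to the local QEMU/KVM hypervisor (the URI is host-specific).
    conn = libvirt.open("qemu:///system")

    # Each "domain" is an isolated virtual machine sharing the same physical host --
    # conceptually the same trick IBM's VM operating system performed on mainframes.
    for domain in conn.listAllDomains():
        state, _reason = domain.state()
        status = "running" if state == libvirt.VIR_DOMAIN_RUNNING else "not running"
        print(f"{domain.name()}: {status}")

    conn.close()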

In the 1990s, telecommunications companies started offering virtualized private network connections:

The World Wide Web was invented in 1989 at CERN by Tim Berners-Lee, an English researcher. The first web browser was released a couple of years later.

The web was, and is, a technology for linking hypertext documents and other resources. The web rides atop the internet, and has been called the internet’s killer application. The development of the web led directly to the massive expansion of the internet, huge investment in networking technology, and a wave of social changes we’re still riding today.

In the early years of the web, the available bandwidth was still meagre, but as businesses and consumers embraced the web, the infrastructure hosting industry was born. Many larger companies at this time used on-premises data centers, but throughout the late 90s and early 2000s, the data center industry boomed, and shared hosting and dedicated servers became the hosting platforms of choice.

It was also in this period that the first Software as a Service applications were released. One of the first big successes in the SaaS space was Salesforce, which used the improved bandwidth and hosting technology to provide enterprise-grade CRM software accessed with a web browser.

Until this period, telecommunications companies had offered only single dedicated point-to-point data connections. The newly offered virtualized private network connections provided the same quality of service as dedicated lines at a reduced cost. Instead of building out physical infrastructure so that more users could have their own connections, telecommunications companies were now able to provide users with shared access to the same physical infrastructure.

2000s: Infrastructure-as-a-Service And The Modern Cloud

As cloud computing gained momentum during the mid-2000s, many organizations struggled to understand what exactly 'cloud computing' was. A memorable example came in 2008, when Larry Ellison, then founder and CEO of Oracle Corporation, shared his provocative thoughts on cloud computing. While Ellison’s comments highlighted his lack of understanding of cloud computing at the time, most people were in the same boat and did not yet fully realize its benefits either.

One of the contributors to the confusion was the common practice of 'cloudwashing,' where vendors took their legacy software solutions, made them accessible over the internet, and marketed them as cloud solutions.

The first recognizable Infrastructure-as-a-Service platforms became publicly available in 2006. Offering on-demand compute and storage, Amazon Web Services and its competitors radically changed the way businesses pay for, think about, and manage their infrastructure, as well as powering rapid innovation in the startup space. Several years before the first Infrastructure-as-a-Service platforms were released, ServerMania was founded to provide inexpensive infrastructure hosting to small and medium businesses. Over the next decade and a half, ServerMania embraced virtualization and the cloud.

Public clouds: When we say “cloud” in casual conversation, we usually mean public cloud. Public cloud computing services use virtualization and modern network technology to provide on-demand, scalable compute and storage. Many of the largest infrastructure users in the world depend on cloud infrastructure, as do hundreds of thousands of smaller businesses.

Private clouds: In response to the demands of enterprise infrastructure hosting clients for privacy and control, IT service providers began offering private clouds, which have many of the same benefits as the public cloud. The major difference is that one organization owns and controls all the cloud servers. Custom private clouds allow companies to leverage the benefits of virtualization in a completely secure and private environment which can be tailored to the specific requirements of their workloads.

Hybrid clouds: In the early days of public and private clouds, pundits pitted the two against each other, wondering which would achieve dominance. In reality, public and private clouds are complementary technologies, and many enterprise infrastructure users find a place for both. Thus developed hybrid clouds, which integrate public and private cloud platforms, as well as bare metal dedicated servers, which still have a major role to play.

In 2007, IBM, a company with a long legacy in virtualization and cloud computing dating all the way back to the 1950s, announced that it planned to build clouds for enterprises and provide additional services on top. IBM released several software and hardware solutions for the cloud (such as IBM CloudBurst in 2009) but officially launched its own cloud computing offering, IBM SmartCloud, in 2011.

In 2008, a new cloud challenger emerged: Google. Google’s first public PaaS offering, Google App Engine, was introduced that year. Similar to AWS’s approach, Google App Engine focused on the development and hosting of web applications on Google’s infrastructure.

The same year, another major player ascended: Microsoft. In October 2008, at the Professional Developers Conference, Microsoft announced Windows Azure (in technical preview), allowing users to host web applications in Microsoft data centers.

In 2009, yet another cloud provider was born out of an e-commerce company, this time from China: Alibaba Group. Alibaba Cloud was founded in 2009 and opened its first data center the following year.

Another noteworthy event from that decade was the founding of VMTurbo, Inc., now Turbonomic. VMTurbo was incorporated in December 2008 and opened for business in 2009 after raising $7.5M in Series A funding.

When VMTurbo launched, it focused on optimizing virtualized environments running VMware and other hypervisors, but the founders had a clear vision of the future: to assure the performance of any workload regardless of where it runs, including in a multicloud deployment model.

In 2009, AWS introduced many of the services we know today. Some noteworthy examples include the following (a brief usage sketch follows the list):

  • Amazon CloudWatch – introduced in May 2009, it offered a real-time monitoring service for EC2 instances using metrics gathered from the underlying Xen-variant hypervisor. In 2017, AWS released a CloudWatch agent for guest OS-level and custom metrics, which also supported servers running on-premises.
  • Amazon Virtual Private Cloud (VPC) – announced in August 2009, allowing users to create logically isolated networks on AWS.
  • Amazon Relational Database Service (RDS) – announced in October 2009, it is a cloud-based relational database service originally based on MySQL. Today the service offers various database engines, including Amazon Aurora, PostgreSQL, MariaDB, Oracle Database, and Microsoft SQL Server. Database-as-a-Service (DBaaS) is considered a type of SaaS.
  • Auto Scaling – unveiled in May 2009, along with ELB and CloudWatch, the service allowed users to scale the number of EC2 instances out and in by leveraging CloudWatch metrics and scaling policies.
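
As a rough, present-day illustration of how CloudWatch metrics and scaling policies fit together, the sketch below uses the boto3 SDK for Python. The namespace, metric, Auto Scaling group name, and target value are hypothetical placeholders, and today’s API shapes postdate the 2009 launches described above.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Publish a custom application metric to CloudWatch (namespace and metric are made up).
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{"MetricName": "QueueDepth", "Value": 42.0, "Unit": "Count"}],
    )

    # Attach a target-tracking scaling policy to an existing Auto Scaling group
    # ("my-web-asg" is a placeholder): keep average CPU utilization near 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-web-asg",
        PolicyName="keep-cpu-near-50-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,
        },
    )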

In 2009, Amazon also unveiled two new pricing models for its EC2 service, which until then had offered only an On-Demand pricing model. The first was Amazon EC2 Reserved Instances (RIs for short), which allowed users to reserve capacity and reduce costs relative to On-Demand rates. Reserved Instance commitments also assisted Amazon with capacity planning for its global infrastructure, especially in the early days of the business, when capital investments in infrastructure had to be made meticulously.

The second new pricing model was Amazon EC2 Spot Instances, which allowed users to “bid” on unused compute capacity at a fraction of the On-Demand cost. Spot Instances are still considered the cheapest pricing model on AWS (savings of up to 90%); however, because they can be interrupted with only two minutes’ notice (originally, whenever the market price rose above the bid price), they are mostly used for fault-tolerant and flexible applications.
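
As a rough illustration of the Spot model in today’s terms, the sketch below uses boto3’s run_instances call with InstanceMarketOptions to request Spot capacity. The AMI ID, instance type, and maximum price are placeholders; the original 2009 mechanism used a separate bid-based RequestSpotInstances API.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Request a single Spot-priced instance instead of On-Demand capacity.
    # The AMI ID, instance type, and maximum price below are placeholders.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "MaxPrice": "0.005",  # the most we are willing to pay per hour, in USD
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    )
    print(response["Instances"][0]["InstanceId"])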

Even now, the purchase of bare metal servers can follow the commercial cloud model (for example, billing by usage, or, put another way, physical server billing by the hour). The result is that a bare metal server with all the resources needed, and nothing more, can be delivered within a matter of hours.

But the story is not finished here. The evolution of cloud computing has only begun.