To be successful in the cloud, IT organizations should abandon their ‘Engineering’ mentality and embrace a ‘Cloud’ mentality. I’ve worked with quite a few companies that struggle with this change in mindset, and as a result they tend to face challenges of their own making. As if moving to the cloud weren’t complicated enough, the last thing organizations need is additional hurdles to overcome.
Before I get into that, though, let’s look at what an ‘Engineering’ mentality is. Picture yourself in a physical data-center with row after row of racks, each rack stuffed to the max with power distribution, network cabling, servers, firewalls, load balancers, RAID storage, and so on. Years of work and many hands were involved in building this data-center. Imagine them all: from HVAC to lighting, power, network cabling, servers, and systems, to the DBAs and sysadmins. Many teams need to cooperate to build infrastructure that works well with all the other facets of the data-center. Sustaining such a system requires that you ‘go with the flow’ and continue the momentum of the existing status quo. Organizations can’t introduce revolutionary concepts and ideas, or rewire the entire data-center, with each project. Changes are introduced gradually and usually integrate with the existing systems. In fact, most projects are slipstreamed behind previous successful implementations, reusing many of the same components.
When an admin is tasked with on-boarding a new application, the ‘engineer’ mentality kicks in and the application is viewed as a piece of the puzzle that must successfully fit into the larger puzzle. The application must play nice with the other parts of the data-center, and it has to use things that are familiar to the teams involved. Also, re-using hardware and maximizing capacity come into play. There’s a good chance that when the new application is integrated into the data-center, it will leverage existing switches, routers, firewalls, load balancers, web servers, database servers, etc. It’s unlikely that each new application is provided with a dedicated set of infrastructure unless the organization has unlimited budget and data-center space (I’ve yet to meet that customer).
Therefore, the ‘engineer mentality’ can be defined as trying to add new features and applications by fitting them into existing infrastructure. The thought is that the data-center is an engineered solution that can handle whatever the organization needs from it. New technology gets integrated into the existing infrastructure, maximizing the capital expenditures made in the data-center. The easiest way to do this is to establish a system of silos, each with a particular function, including checks and balances on other parts of the process. To see an example of this way of thinking, one could look at a typical software release life-cycle and the silos that it runs through.
Now, enter DevOps, defined as a combination of Software Development, IT Operations, and Quality Assurance. DevOps erases the lines that used to separate developers from production. No longer does one team develop code for another team to execute on infrastructure deployed by a third team; in the DevOps world, all 3 roles are integrated into a single team. This is a common methodology among lean start-ups and works well for organizations that need a higher degree of agility.
To accommodate the DevOps way of thinking, the infrastructure needs to keep up with constant deployments and continuous delivery. Infrastructure that supports high levels of automation, automated deployment, and configuration management is well suited for this task. Add elasticity and scalability, and you’re in the realm of Cloud Computing. When a DevOps team backed by cloud computing receives a request to on-board an application, its thought process is quite different from a typical engineering team’s. Instead of finding ways to reduce cost, power, and space requirements by efficiently reusing existing hardware (load-balancers, firewalls, etc.), the team takes the opposite approach and creates dedicated, purpose-built cloud infrastructure to run the application as well as possible. Each cloud application or function is built to stand on its own, with a dedicated set of supporting infrastructure. There’s not a lot of overlap between technology services. This is the definition of a ‘Cloud’ mentality: purpose-building scalable and elastic resources to accomplish a function.
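To make this concrete, here’s a minimal sketch, assuming Python with boto3 and hypothetical template URLs, stack names, and parameters, of what driving one purpose-built CloudFormation stack per application might look like:

```python
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# Hypothetical: each application gets its own stack, built from its own
# purpose-built template (dedicated load balancer, web tier, and database).
applications = ["website", "mobile-api", "user-comments"]

for app in applications:
    cloudformation.create_stack(
        StackName=f"{app}-stack",
        # Placeholder URL: each app points at its own template rather
        # than a shared, general-purpose one.
        TemplateURL=f"https://s3.amazonaws.com/example-templates/{app}.json",
        Parameters=[
            {"ParameterKey": "Environment", "ParameterValue": "production"},
        ],
    )
```

Because each application owns its own template and stack, a change made for one application can’t ripple into another.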
In a sidebar conversation on this topic with fellow blogger Peter Lee, he offered this perspective, which I thought was very relevant:
“I think there’s a real killer point here which is that the reason people can do this in the cloud is because there is a ton of overlap and reuse happening at the layer below. That’s what makes cloud such a winning proposition – it lets IT guys build these nicely separated, non-overlapping infrastructures on top while creating even greater cost efficiencies because you’re not just talking economy of scale across a single data center or organization, you’re talking massive economy of scale.”
Re-using a load-balancer, switch, or firewall for 10 different applications is something that would be done in a physical data-center, where rolling out 10 dedicated sets of infrastructure is cost prohibitive. Organizations maximize their investment by stacking applications, databases, and other functions on top of each other, leveraging their data-center investment to the max. In contrast, a DevOps team doesn’t have to worry about running 10 load-balancers in the cloud; it’s just as easy as running 1. Space, power, cooling, cost, etc. are of no concern to DevOps. The best possible design is now achievable without the restrictions that used to prevent such a dedicated, isolated architecture. If DevOps needs to roll out 10 applications in the cloud, they will roll out 10 load-balancers, 10 switches, 10 servers, 10 databases, etc. If 1 application gets high usage, scaling can occur on just that 1 app. If an outage occurs on a database server, it affects just 1 function, not all 10 apps. You can see this in practice by reviewing the Amazon AWS Reference Architectures and noticing how each set of infrastructure is designed to provide a specific function or purpose. There’s not a lot of overlap between applications, and no reuse of system components. You won’t find an Amazon Elastic Load Balancer (ELB) that’s managing the load for different, disparate applications.
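As a rough sketch of the ‘10 apps, 10 load balancers’ approach, here’s what it might look like with Python and boto3 against the classic ELB API; the application names, region, and availability zones are illustrative placeholders:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

# One dedicated load balancer per application: no sharing, no overlap.
for app in ["app1", "app2", "app3"]:  # ...and so on, up to app10
    elb.create_load_balancer(
        LoadBalancerName=f"{app}-elb",
        Listeners=[{
            "Protocol": "HTTP",
            "LoadBalancerPort": 80,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
        }],
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )
```

An extra load balancer costs a few API calls, not rack space, and an outage or scaling event behind one of them touches only that one application.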
If we look at 3 applications (for example, a website as App1, a mobile app as App2, and user comments as App3) using the engineer mindset in a physical data-center, we would see something that resembles this, with many areas of overlap and reuse:
By contrast, if we look at the same 3 applications using a cloud mentality, we would see a less rigid design, less component re-use, and a more streamlined approach:
Each layer is designed independently to handle scaling events, only 1 application is serviced by the infrastructure, and subsequent applications that are on-boarded follow a similar method. When it’s time to build App1, App2, or AppX, the same method is applied, resulting in X segmented, isolated, scaling environments, each dedicated to its task.
My advice to customers looking to use the cloud is to abandon thinking like engineers and think like DevOps – even if your shop isn’t a DevOps shop. This will make it easier to transition to a cloud mentality later. When designing your cloud infrastructure, don’t overlap technology services. Build for each purpose, and dedicate infrastructure to fulfilling that purpose. In the end, your environment will be more resilient, agile, and live in a more natural state within the cloud.
An example of how this translates to SecureSphere deployments would be an admin who tries to reproduce his entire data-center in the Amazon AWS cloud. He has 10 applications, so following his engineer mindset, he uses as little infrastructure as possible and ends up deploying 1 load balancer, 1 web server, and 1 database server to run all 10 apps, because that’s how his physical data-center is deployed. Although he used fewer infrastructure components, he introduced an additional and unnecessary challenge to overcome: coexistence. A change to part of the infrastructure to suit 1 application will probably have unintended consequences for the other applications. From trying to stack multiple SSL certs on an Amazon ELB (not possible) to trying to separate traffic that was previously combined (complex), planners can face many unexpected challenges due to the engineering mindset.
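The SSL point is a good illustration: a classic ELB HTTPS listener accepts only a single certificate, so the cloud-mentality answer is a dedicated HTTPS load balancer per application rather than certificate stacking. A hedged sketch, assuming boto3 and hypothetical certificate ARNs:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Hypothetical certificate ARNs, one per application. A classic ELB
# HTTPS listener accepts exactly one SSLCertificateId, which is why
# stacking multiple certs on a single shared ELB doesn't work.
certs = {
    "app1": "arn:aws:iam::123456789012:server-certificate/app1",
    "app2": "arn:aws:iam::123456789012:server-certificate/app2",
}

for app, cert_arn in certs.items():
    elb.create_load_balancer(
        LoadBalancerName=f"{app}-elb",
        Listeners=[{
            "Protocol": "HTTPS",
            "LoadBalancerPort": 443,
            "InstanceProtocol": "HTTP",
            "InstancePort": 80,
            "SSLCertificateId": cert_arn,
        }],
        AvailabilityZones=["us-east-1a"],
    )
```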
This also holds true for any products, such as SecureSphere, that are added into the mix. For example, although SecureSphere handles multi-layered application traffic (HTTP traffic that mixes multiple sites on 1 IP/port by leveraging HTTP Host headers) in physical and virtual data-centers, the cloud is a different beast. Knowing this, Imperva embraced the DevOps and cloud mentality to ensure SecureSphere for AWS was released with the features required to be successful in the Amazon AWS cloud: scaling, CloudFormation support, Elastic Load Balancers, IAM, automation, APIs, and CloudWatch. With this in mind, customers are more successful at moving into the cloud when they abandon the engineer mentality of rebuilding their existing data-centers and move to building technology services that leverage the capabilities offered by cloud computing.
Here are a few high-level pointers to keep the Cloud Mentality in the foreground and ensure success during your migration:
- Design purpose-built systems with little or no overlap with other systems
- Plan each tier for scaling and elasticity (see the sketch after this list)
- Avoid rigid, intertwined designs; they don’t work well in the cloud
- Avoid technology partners and vendors that sell products and services in the cloud based on the engineer mentality
- Avoid using infrastructure that doesn’t offer scaling and/or elasticity
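On the per-tier scaling point above, here’s a minimal sketch, assuming Python with boto3 and a hypothetical Auto Scaling group for one application’s web tier, of wiring a tier to scale on its own CloudWatch signal without touching any other application:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical: the web tier of one application, scaled on its own signal.
asg_name = "app1-web-asg"

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=asg_name,
    PolicyName="app1-web-scale-out",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,  # add one instance each time the alarm fires
)

# Alarm on this tier's CPU only; other applications' tiers are unaffected.
cloudwatch.put_metric_alarm(
    AlarmName="app1-web-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": asg_name}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

Because each tier of each application has its own group, policy, and alarm, a traffic spike on App1 grows only App1’s web tier.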