Over the last 10–15 years, there’s been a constant barrage of new DevOps tools and techniques popping up. You might wonder how much of this is just buzz and how much of it will actually move the needle for your business. And if certain things do move the needle, how would you even go about implementing them?
At Logicata, we’ve seen our customers come up against this problem time and time again, which is why we’re now helping customers with DevOps, in addition to our InfrAssure service. But first, let’s look at exactly what DevOps is and what outsourcing DevOps might mean.
What is DevOps?
There is a lot of ambiguity around the term “DevOps.” Some say it’s so broad that it doesn’t really mean anything, some say it’s just automation, some say it’s a cultural movement about responsibility for the delivery of code, and some say it’s whatever that one techie in your company does who doesn’t work on your products.
There is truth to all of these, but here’s how we think about it. As Agile software development became more popular, there was pressure to release software faster and faster, with the eventual goal of continuous deployment. And as this methodology spread through the industry, it became clear that siloed development and operations teams were the biggest obstacle to faster software delivery.
DevOps was a cultural change in software development that arose to address this problem. In order to move towards the goal of continuous deployment, two major changes to software development needed to happen: the development and operations teams had to be much more tightly integrated and those teams had to leverage automation much more.
In practice, DevOps is anything that enhances the speed and quality of software deliverables, which often involves working on code to automate business processes rather than the product itself. This includes unit tests that run automatically when code is checked in, automated deployment of code to a testing environment for integration testing, automated packaging and shipping of applications for customers, and much more.
How we see DevOps
We see DevOps as the automation of key development and operations tasks outside of normal product feature development. And this DevOps code should be managed under source control for organization, reliability, and replicability, just as you would manage the code for your product.
This is infrastructure as code (IaC) and is an industry best practice. The alternative is to click through a cloud provider’s website menus to provision infrastructure, which is aptly named ClickOps. IaC is a much more robust process because it minimizes human error during setup and provides a history of changes to configurations through source control.
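To make the contrast with ClickOps concrete, here’s a minimal sketch of what infrastructure as code looks like: a tiny, hypothetical CloudFormation-style template (the bucket name and description are made up) defined in Python and serialized to JSON, so it can live in source control and be reviewed like any other code change.

```python
import json

# A minimal, hypothetical CloudFormation-style template kept in source
# control. Every infrastructure change becomes a reviewable commit
# instead of an untracked click in a web console.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Example: a single S3 bucket managed as code",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-build-artifacts"},
        }
    },
}

# Serializing the template is all it takes to hand it to a
# provisioning tool; the history of this file *is* your change log.
template_json = json.dumps(template, indent=2)
print(template_json)
```

Because the template is just text, a `git diff` shows exactly what changed between two versions of your infrastructure, which is precisely the audit trail that ClickOps can’t give you.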
The idea is to build a system to automatically integrate and deliver code without really having to think about it, so that your team is spending the vast majority of its time working on feature requests and bug fixes.
However, it’s important to point out that perfect automation is an ideal to strive for rather than a realistic expectation: your automated pipelines will occasionally break. When they do, focus on repairing the system and making it more robust rather than resorting to manually pushing changes through yourself. That way, you still let the automation do the heavy lifting.
Your journey to continuous deployment
Accomplishing all of this is easier said than done. A lot of companies start off without any DevOps processes in place. They might have one server or a small server farm, manually deploy their code, and have an “if it breaks, we’ll fix it” attitude. The two major components of DevOps that could benefit their workflows the most are heavy automation and different environments for development, staging, and production.
In other words, companies like this need Continuous Integration/Continuous Delivery (CI/CD) pipelines in place. Automating all this can be daunting, but you can do it in stages. Most companies start with continuous integration and only once this is in place do they start thinking about continuous delivery or deployment.
Continuous integration is when developers merge their code into a particular branch of a source control repository on a regular basis. You should write your unit tests so they can be automatically run whenever developers want to merge new code into the main branch. Generally, merging code into the main branch is disabled until these tests pass to ensure that nobody can check in code that doesn’t compile or has serious errors (which would break the code of everyone else pulling from the main branch).
DevOps engineers can set up webhooks to automatically run the test suite when code is checked in and enable merging if the tests pass. This is truly hands-off and allows developers to look through the logs of their automated unit tests and constantly debug and push new changes themselves until the unit tests pass.
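The gating logic a webhook handler implements is simple at its core: run the test suite, and only enable merging if it passes. Here’s a rough sketch with hypothetical names (real CI systems like GitHub Actions or Jenkins provide this out of the box; the trivial `print` command stands in for an actual test suite).

```python
import subprocess
import sys

def on_push(branch, test_command):
    """Hypothetical webhook handler: run the unit-test suite when code
    is pushed, and only allow merging to main if the tests pass."""
    result = subprocess.run(test_command, capture_output=True, text=True)
    tests_passed = result.returncode == 0
    if tests_passed:
        print(f"Tests passed on {branch}: merge to main enabled")
    else:
        # Developers read these logs, fix the failure, and push again --
        # no DevOps engineer needs to be in the loop.
        print(f"Tests failed on {branch}:\n{result.stdout}{result.stderr}")
    return tests_passed

# Example: a trivially passing command stands in for a real test suite.
merge_allowed = on_push("feature/login", [sys.executable, "-c", "print('1 test passed')"])
```

The key design point is that the exit code of the test run is the single source of truth for whether a merge is allowed, which is exactly what makes the process hands-off.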
On top of these automated unit tests, it is also beneficial to have a pull-request review process in which another developer manually reviews the changes before approving and merging.
After you reach a point where you can reliably control the code that is merged into the main branch with little to no extra effort, you are ready to focus on delivering the software. To achieve continuous delivery, you want to automatically deploy the latest version of the main branch to a testing environment and run a more sophisticated set of integration tests (that often take hours rather than minutes) to validate key functionality in the software.
Generally, these tests run on a nightly cadence, but they could run more or less often as well. If the tests fail, the test infrastructure can alert the team, who can then fix any bugs; if the tests pass, the pipeline can instead deploy the code to a staging environment. One big benefit of a nightly run is that it doesn’t matter if your test suite takes more than 12 hours (since it’s running while everyone is asleep). This means that you can continually build on your integration test suite to make it more and more sophisticated without slowing down development time during the day.
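The nightly decision described above boils down to a single branch: alert the team on failure, promote to staging on success. A hypothetical sketch, with stand-in callables so the control flow is the focus:

```python
def nightly_pipeline(run_integration_tests, deploy_to_staging, alert_team):
    """Hypothetical nightly job: run the (long) integration suite and
    either promote the build to staging or alert the team."""
    passed, report = run_integration_tests()
    if passed:
        deploy_to_staging()
        return "deployed-to-staging"
    alert_team(report)
    return "team-alerted"

# Example wiring with stand-in callables simulating a failed run.
events = []
outcome = nightly_pipeline(
    run_integration_tests=lambda: (False, "checkout flow test failed"),
    deploy_to_staging=lambda: events.append("deploy"),
    alert_team=lambda report: events.append(f"alert: {report}"),
)
```

In a real setup the three callables would be replaced by your test runner, your deployment tooling, and your paging/notification system, but the shape of the decision stays the same.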
In theory, if your code has passed all tests, you could move on to the next step of automating the deployment to your production environment. However, this is the final step in your journey to continuous deployment, and you have to be completely ready for it. Until then, you will need to switch over to a manual approach to complete your production deployment.
A lot of teams won’t trust their integration tests enough to completely automate deployment to customers, especially early on while they’re still developing their integration test suite. For this use case, it is helpful for a CI/CD pipeline to stop automatically after running the integration tests. Then, the next morning, quality assurance (QA) engineers can put the code through more rigorous manual tests and sign off on the software once it passes.
After manual approval, the pipeline can then continue where it left off to deploy the code to your customers in your production environment. This method still leverages automation for 90% of the deployment tasks, but also leaves room for manual intervention in the decision to release to customers. We think that when it comes to DevOps, everybody should want to get to at least this point.
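This stop-and-resume behaviour can be modelled as a pipeline that halts at a manual-approval stage and continues only after sign-off. A minimal sketch, with hypothetical stage names:

```python
from dataclasses import dataclass, field

@dataclass
class Pipeline:
    """Hypothetical CD pipeline that halts at a manual-approval stage."""
    stages: list = field(default_factory=lambda: [
        "build", "unit-tests", "deploy-test-env",
        "integration-tests", "manual-approval", "deploy-production",
    ])
    completed: list = field(default_factory=list)

    def run(self):
        for stage in self.stages:
            if stage == "manual-approval":
                # Automation stops here; QA signs off the next morning.
                return "awaiting-approval"
            self.completed.append(stage)
        return "released"

    def approve(self):
        # After sign-off, continue from where the pipeline left off.
        remaining = self.stages[self.stages.index("manual-approval") + 1:]
        self.completed.extend(remaining)
        return "released"

pipeline = Pipeline()
status = pipeline.run()      # automation handles everything up to approval
status = pipeline.approve()  # QA sign-off releases to customers
```

Notice that the only human action in the whole flow is the `approve()` call: everything before and after it is automated, which is the “90% automation” split described above.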
After you are confident in your continuous integration and continuous delivery, you can look at your integration test coverage to decide if you’re ready to automate deployment to customers. If you take this approach, you should try to automate all of the manual tests that the QA team is running and integrate them into the automated test suite. Then you could automate the decision to deploy to customers if the integration test suite passes.
Two common methods of deploying to customers are deploying your application into a production environment that serves your web application, and packaging your application’s binary for users to download. Whichever method you choose, your pipeline configuration and the results of your automated tests determine what is shipped to customers, without any manual intervention at all.
CI/CD is absolutely the 80/20 of DevOps, but there’s room for your setup to get a lot more sophisticated.
One way to improve your DevOps processes is to move your DevOps processes to the cloud, for the same reason that you might switch other infrastructure to the cloud: it is cheaper and more reliable than trying to rig it up yourself.
In this paradigm, you would still have your normal CI/CD pipelines and different environments for staging and production. They would just be in the cloud rather than on premises, so you don’t have to worry about your build server going down and wrecking your productivity. This is DevOps as a Service.
AWS DevOps outsourcing
Major cloud service providers like AWS support this functionality through services like CodePipeline and CodeDeploy. Although there is a learning curve when adopting cloud-based CI/CD, the concepts are similar to traditional CI/CD tooling. In our experience, however, many businesses are not yet familiar with these technologies, so automating their software deployments in the cloud remains an aspiration.
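To give a feel for what a CodePipeline setup involves, here is a heavily simplified sketch of a pipeline definition expressed as Python data. The names are hypothetical and the real schema needs much more (a role ARN, artifact stores, and per-action configuration; consult the AWS documentation), but it illustrates the point that the pipeline itself is just more code to keep in source control.

```python
# Simplified, hypothetical sketch of a CodePipeline-style definition.
# The real AWS schema is richer (roleArn, artifactStore, actionTypeId
# blocks, etc.); this only captures the stage structure.
pipeline_definition = {
    "name": "example-web-app-pipeline",  # hypothetical pipeline name
    "stages": [
        {"name": "Source",   "provider": "CodeCommit"},
        {"name": "Build",    "provider": "CodeBuild"},
        {"name": "Approval", "provider": "Manual"},
        {"name": "Deploy",   "provider": "CodeDeploy"},
    ],
}

stage_names = [stage["name"] for stage in pipeline_definition["stages"]]
print(" -> ".join(stage_names))
```

The stages mirror the journey described earlier in this article: source and build are fully automated, a manual approval gate sits before release, and deployment to production is the final stage.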
After moving to the cloud, DevOps can get even more complicated. At this point, the line between DevOps and architecture starts to blur entirely. For example, you could have:
- Containerized deployments through orchestration services like Amazon ECS, or Kubernetes via EKS
- Infrastructure as code (CloudFormation)
- Resilient self-healing deployments
The rabbit hole goes deep, so it can be helpful to check out the AWS Certified DevOps Engineer exam material to understand the different possibilities. At Logicata, we like to keep up with all of the latest changes in the field by having our team members prepare for and pass certifications like this.
The pros and cons of outsourcing DevOps
It might make sense for some companies to build in-house teams to implement some or all of these DevOps techniques across the organization, become experts, and stay up to date on the industry. But we think it usually makes sense to collaborate with a managed service provider (MSP) that you trust, especially for small and medium enterprises (SMEs). However, there are different pros and cons to consider for your specific use case when outsourcing DevOps.
Pro #1: Simplicity
One major issue with DevOps is the sheer complexity of the systems that need to be set up and managed. It’s not just one feature in the codebase: DevOps engineers have to understand the entire system, as well as the up-and-coming technologies best suited to implement a solution.
There are often significant hidden costs to implementing the latest fad you’ve seen somewhere else, as that could affect the system in ways that are difficult to understand. You’re going to have to pay these costs one way or another, whether that’s by dealing with the pain of not having strong DevOps knowledge in your org, by going through years of expensive mistakes to become an expert yourself, or by working with an expert that you trust.
Pro #2: Reliability
For SMEs especially, the “DevOps team” usually consists of one person. This could be a problem, especially considering that DevOps engineers tend to change jobs quite frequently due to burnout (there are only so many times they’ll tolerate being woken up at 3 am to jump on a critical problem). It can be very risky to have this single point of failure in your business.
In the same way it might make sense to de-risk your infrastructure by working with a team with a diverse range of expertise that is on call 24/7, it can also make sense to de-risk your DevOps. When comparing the cost of a full-time salary for a DevOps engineer and an MSP’s services, the MSP is often cheaper as well.
Pro #3: Focus
This is the same reason to outsource anything. Your team has limited time every day and an unlimited number of tasks to serve your customers well. This means that you often have to make tradeoffs to specialize in what you’re truly great at and let others handle the rest. If you outsource DevOps to a team that you trust, then your team can apply its full focus to your own business, which is what your customers are actually paying you for.
Con #1: Infrastructure Awareness
If you outsource your DevOps processes, you might underestimate the true cost of runtime environments that seem “simple,” because there’s much more complexity below the surface that your internal team is not directly involved with.
Therefore, it’s important to work with a provider who is transparent about how everything works and what the business implications of each component are for you. This can also make it easier to switch providers if you ever need to, since you will actually understand how the processes that you outsourced work.
Con #2: Communication
When an internal team completely handles certain business processes, there is generally less fragmentation of the team, especially for small businesses. Everyone knows everyone else and uses the same tooling across the company.
However, when working with an external team, sometimes the people actually doing the work can be hidden behind other client-facing representatives, which causes important information to get lost in translation.
Therefore, it is important to work with a professional partner that understands your business and has great communication. This way, your internal team can effectively collaborate with them and always understand what’s going on.
Making the right choice when outsourcing DevOps
At Logicata, we understand what it’s like to be a small business and we strive to give you all the attention and service that we value when working with others. We also have deep expertise in AWS, infrastructure, DevOps, and application development, so we can help you along your journey to continuous deployment.
If you’re trying to get to this point in your business, or you’re considering outsourcing your DevOps, and you think you could use a partner to help you on your journey, feel free to reach out! We’re always happy to talk.