VOLUME 01

Tech digest: The deployment pipeline.

Our regular wrap-up of trending topics in engineering, development and technology.

Photo: Victor Garcia, Unsplash

Welcome to the Tech Digest – our regular wrap-up of the issues, trends and themes affecting engineering, technology and digital product development. This issue, we’re discussing how the deployment pipeline has changed over time, from copy/paste to automation.

The way products are deployed has changed a great deal over the past few years. There are more options than ever to automate workflows, and the idea of continuous integration fits well within an agile methodology. In this post, we’ll look at why today’s methods are so useful, how they give you security and control over your product, and the value of having multiple pipelines for development, testing, approval and production.

The story so far

Back in the golden days of the 1990s, updating a website meant uploading the new files via the File Transfer Protocol (FTP) and overwriting the old files on the server. This would often lead to disasters, such as file permissions being set to public read/write/execute, live fixes being overwritten with in-development code, and code not performing as well on the live server as it did on the developer’s local machine.

If the developer was particularly battle-worn, they would know to keep a copy of the original files so they could quickly revert when disaster struck…

Web development technology has improved so much over the last three decades that we have been able to solve these problems with ease. Version control enables developers to review each other’s code, avoid overwrites and manage deployments, while giving the opportunity to roll code back to an earlier working version, all at the press of a button.
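To make that “press of a button” rollback a little more concrete, here is a loose sketch in Python – the tag-naming scheme and helper functions are hypothetical, and a real setup would usually lean on the version control platform rather than a hand-rolled script:

```python
# Hypothetical sketch: tag each release so any of them can be restored quickly.
# Assumes git is installed and the script runs inside the project repository.
import subprocess

def tag_release(version: str) -> None:
    """Mark the current commit as a release, e.g. 'release-1.4.2'."""
    subprocess.run(["git", "tag", f"release-{version}"], check=True)
    subprocess.run(["git", "push", "origin", f"release-{version}"], check=True)

def roll_back_to(version: str) -> None:
    """Check out a previously tagged release when a deployment goes wrong."""
    subprocess.run(["git", "checkout", f"release-{version}"], check=True)

if __name__ == "__main__":
    tag_release("1.4.2")      # after a successful deployment
    # roll_back_to("1.4.1")   # if the new release misbehaves
```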

New environments

The next evolution in website deployment was the use of multiple environments. Instead of just having the developer’s local environment and the live environment hosting the website, web developers started creating extra environments for testing, quality control and showcasing new features to stakeholders.

Years of experience showed that limited environments led to further deployment issues, including, but not limited to, releasing features ahead of approval, failing to catch faults introduced at deployment, and having no opportunity to perform regression testing.

But with this new and improved process came unforeseen chaos.

If a team was working on multiple features in parallel, deploying the code to multiple environments and making sure each environment was up to date with the correct code became a herculean task for developers.

Enter automation

The solution to this issue was the automation of deployments. DevOps would create multiple environments for a client website, using the most financially viable method that fit the client’s needs. This could be a single hosting server with multiple virtual hosts, or a dedicated server for production and separate servers for non-production. DevOps would then write a bash script or utilise an existing deployment service; third-party services are often a fast and robust way to create deployment pipelines.
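As a rough illustration of what such a script might look like (the host names, paths and environment labels below are invented, and Python stands in for the bash script mentioned above):

```python
# Minimal deployment sketch. Host names, paths and environment labels are
# hypothetical; a real pipeline would add build steps, locking and logging.
import subprocess

ENVIRONMENTS = {
    "testing":    {"host": "test.example.com",    "path": "/var/www/site/"},
    "staging":    {"host": "staging.example.com", "path": "/var/www/site/"},
    "production": {"host": "www.example.com",     "path": "/var/www/site/"},
}

def deploy(environment: str, build_dir: str = "build/") -> None:
    """Copy the built site to the chosen environment with rsync."""
    target = ENVIRONMENTS[environment]
    subprocess.run(
        ["rsync", "-az", "--delete", build_dir,
         f"deploy@{target['host']}:{target['path']}"],
        check=True,
    )

if __name__ == "__main__":
    deploy("testing")
```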

The developers working on the website wouldn’t need to know how the deployment process worked. They could either click a button to deploy their code, or the process could be automated further by having their code deployed as soon as it was merged into the correct branch in the code repository.
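To sketch the “deploy on merge” idea: most CI services expose the branch that was just updated through an environment variable (the exact variable name varies by provider – BRANCH_NAME below is a stand-in), and the pipeline simply maps branches to environments. The deploy() helper is the hypothetical sketch from the previous example:

```python
# Sketch of a merge-triggered deployment step. BRANCH_NAME stands in for
# whichever variable your CI provider uses to expose the merged branch.
import os

from deploy import deploy  # the hypothetical deploy() sketch above

BRANCH_TO_ENVIRONMENT = {
    "develop": "testing",
    "release": "staging",
    "main":    "production",
}

def on_merge() -> None:
    branch = os.environ.get("BRANCH_NAME", "")
    environment = BRANCH_TO_ENVIRONMENT.get(branch)
    if environment is None:
        print(f"No deployment configured for branch '{branch}', skipping.")
        return
    deploy(environment)

if __name__ == "__main__":
    on_merge()
```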

Automated deployment on merge works particularly well for projects using an agile methodology, as it more closely supports the continuous integration that agile developers strive for.

With the latest features or fixes deployed to the test environment, QA can jump in and start testing the developer’s work to see if anything has been missed in the client requirements, or if there are any particular scenarios where the code doesn’t work as expected. As QA have their own environment, they can change the content of the site as much as they see fit to test the client requirements and isolate any edge cases that developers haven’t accounted for.

When QA has signed off on the work done, that deployment can be replicated on a staging environment. The product manager and client can see and approve the changes, or request more work to be done before sign off.

Once the change request has been approved to go live, it’s time to deploy to the production environment. This is as simple as deploying to the non-production environments, and a separate deployment pipeline gives extra control over the process. Should anything unforeseen happen, developers can quickly roll back the deployment to a previously working version, ensuring the production server always shows the latest stable and approved release.
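One common pattern for making those rollbacks near-instant (not necessarily the exact setup described above) is to keep each release in its own timestamped directory on the server and point a “current” symlink at whichever one is live – rolling back is then just repointing the symlink. A rough sketch, with illustrative paths:

```python
# Sketch of a symlink-based release layout on the production server.
# Paths are illustrative; release directories are assumed to be named so
# that they sort chronologically (e.g. timestamps).
import os
from pathlib import Path

RELEASES_DIR = Path("/var/www/releases")  # one sub-directory per release
CURRENT_LINK = Path("/var/www/current")   # the web server serves this path

def activate(release_name: str) -> None:
    """Point the 'current' symlink at the given release."""
    tmp_link = CURRENT_LINK.with_suffix(".tmp")
    if tmp_link.is_symlink() or tmp_link.exists():
        tmp_link.unlink()
    tmp_link.symlink_to(RELEASES_DIR / release_name)
    os.replace(tmp_link, CURRENT_LINK)    # atomic swap on POSIX systems

def roll_back() -> None:
    """Switch back to the release deployed immediately before the live one."""
    releases = sorted(p.name for p in RELEASES_DIR.iterdir() if p.is_dir())
    live = CURRENT_LINK.resolve().name
    earlier = [name for name in releases if name < live]
    if earlier:
        activate(earlier[-1])
```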

Looking back at the evolution of website development and deployment, developers have spent the last 30 years adopting simple best practices to avoid commonplace problems that could negatively impact businesses for hours, if not days. However, there is always room for improvement, and with new technologies like Docker and Kubernetes, new deployment practices will need to be explored and adopted.

Like what you see? Sign up to get the next issue straight to your inbox.
