The term DevOps appeared in the mainstream software development community about 10 years ago. It represented a set of tools and techniques that helped solve many relevant issues of the time, namely responsibility for code in production and the lack of a positive feedback loop. As of today, DevOps is a huge movement. It is considered the default way in which software development should be practiced. It has allowed even small teams to operate as efficiently as large teams. Many engineering teams have adopted DevOps-derived approaches to suit their specialized workflows.
The dark side of DevOps
In practice, DevOps brings a lot of tangible benefits to software engineering teams. Automation, transparency, and speed are the main advantages. From a business perspective, engineers are now more invested in and accountable for the user experience. The “Your Success Is Our Success” era for engineers has finally arrived, and teams couldn’t be happier.
For the individual developer, however, responsibilities kept piling up: more services to monitor in production, and as more features were built and deployed, alarms from internal systems went off ever more frequently. The engineer became individually responsible for the behavior, performance, security, compliance, and many other aspects of the application and its platforms. All too often, this became a distraction and began to defeat the very purpose of the DevOps methodology: removing obstacles from the developers’ way.
This is DevOps done wrong. Immaturity within software engineering teams has led many down a toxic DevOps path that burns engineers out quickly. The core principle of DevOps is to enable engineers to do more, not to stretch them across every possible duty. DevOps still recognizes two distinct forms of specialized effort: software development and platform operations. Depending on the nature of the services, their scale, and their security requirements, the degree to which development and operations responsibilities converge may vary.
DevOps gone wrong: problems accrue over time
Interestingly, engineering teams often view DevOps as development plus “everything else”. From the moment a developer checks code into version control, a huge variety of workflows kicks off: ideally testing, integration, more testing, release, and feedback. This is a large body of work, often underestimated and underrepresented. Placing this responsibility on the members responsible for development alone is where things start to go wrong. Often, as a product ages, incident management, maintenance, and other operational efforts build up, leaving fewer cycles available for product development over time.
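The post-commit workflow described above can be pictured as a staged pipeline where every stage is work the team owns. The sketch below is purely illustrative; the stage names and the `run_pipeline` helper are assumptions, not any particular CI system’s API.

```python
# Hypothetical sketch of the post-commit workflow: each stage must pass
# before the next runs, and each stage is effort someone must own.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run stages in order; stop at the first failing stage."""
    completed = []
    for name, check in stages:
        if not check():
            break
        completed.append(name)
    return completed

stages: List[Stage] = [
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("release", lambda: True),
    ("collect feedback", lambda: True),
]
print(run_pipeline(stages))
```

Even in this toy form, it is clear that “everything after the commit” is a chain of distinct responsibilities, which is exactly the work that gets underestimated.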
There is also the complementary case, where engineering teams focus on product development and assign most of their free cycles to it. Platform operations are forced into a reduced effort, consisting of the minimum it takes to keep the lights on, so to speak. This is a common observation in small teams and startups racing toward a new product release.
When implemented correctly, DevOps should allow engineers to take responsibility for every aspect of a feature, from development through management in production. Programming the business logic, writing declarative and non-declarative configuration, wiring up services, building the artifact, and finally deploying to production all fall within the developer’s chain of responsibility. In addition, the feedback loop created by adding observability into the mix provides insight into the status of the service (or feature), requiring the engineer to stay on top of operational aspects such as performance and reliability.
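The declarative part of that chain can be illustrated with a toy reconcile loop: the engineer states a desired state, and a reconciler computes the actions needed to reach it from what is observed in production. The `reconcile` function and the dictionaries below are assumptions for illustration only, not any real orchestrator’s interface.

```python
# Toy illustration of declarative configuration: compare desired state
# against observed state and emit the actions needed to close the gap.
def reconcile(desired: dict, observed: dict) -> list:
    """Return the scaling actions needed to move observed toward desired."""
    actions = []
    for service, replicas in desired.items():
        have = observed.get(service, 0)
        if have < replicas:
            actions.append(f"scale {service} up to {replicas}")
        elif have > replicas:
            actions.append(f"scale {service} down to {replicas}")
    return actions

desired = {"api": 3, "worker": 2}
observed = {"api": 1, "worker": 2}
print(reconcile(desired, observed))  # one scale-up action for "api"
```

This is the shape of the feedback loop in miniature: observability supplies `observed`, the engineer owns `desired`, and the platform closes the gap.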
The definition and current practices of DevOps grew out of Agile methodologies. But typical Agile teams, releasing in “sprints”, still kept dev and ops efforts separate and were neutral to the underlying technology. DevOps, on the other hand, is closely tied to the technology in use, for example microservices and automation. It facilitates continuous delivery and a high release frequency. More importantly, it converges development and platform operations. How a team interprets this “convergence” is what distinguishes good DevOps implementations from bad ones.
Right kind of left shift
At the heart of healthy DevOps adoption lies a technology shift complemented by a cultural shift. The core principle that distinguishes successful teams is a great developer experience. Beyond the infrastructure that supports a web application’s technical stack, the software engineers who work on it need to be able to develop and deploy with ease. Let’s explore how.
Examples of tools that help engineers during the development phase include IDEs, source control, TDD tooling, documentation generators, and much more. A huge amount of research and product development is currently underway to help developers program better and work more efficiently.
Different skill sets and focus areas are needed to improve different parts of the scope of operations. In large corporate organizations, the division is obvious. Platform operations teams are responsible for keeping all systems running by providing automated means to set up and support trusted server-side systems. Application development teams are tasked with creating applications, API endpoints, integrations, and many other tools that enable users to carry out their tasks.
Because both of these jobs share a common core, writing software, they tend to cross-pollinate. This merger is welcome, provided it is well managed. This is where abstractions come in handy, in particular PaaS abstractions that are flexible and opinionated in equal parts: flexible enough to accommodate the customized workflows and needs of software products and teams, yet opinionated enough to prevent unnecessary osmosis between dev and ops responsibilities, ultimately putting a superior developer experience at the core of the transformation.
PaaS with flying colors
The right balance can only come from PaaS systems built on a modular architecture that provide both application build and deployment functionality. The ability to work across multiple languages and frameworks, along with packaging and deployment best practices, is a key principle. Abstracting the underlying infrastructure in favor of supporting and orchestrating container-based artifacts at scale is the next most important component of a PaaS framework. A batteries-included approach to automating continuous delivery, security best practices, and other workflows goes a long way toward ensuring that the PaaS works for all kinds of developer needs. When using abstractions, the greatest risk of opacity comes from a lack of production parity. If the PaaS can work across all stages (dev, test, pre-production, prod, or any other environment), it saves the huge effort that arises from asymmetry.
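The parity concern above can be made concrete with a small check that flags configuration keys present in one environment but missing from another, since such asymmetry is exactly where staging/production drift creeps in. The `parity_gaps` function and the environment names are hypothetical, a minimal sketch rather than any PaaS feature.

```python
# Hypothetical parity check: for each configuration key, report which
# environments are missing it, so drift is caught before deployment.
def parity_gaps(envs: dict) -> dict:
    """Map each config key to the sorted list of environments missing it."""
    all_keys = set().union(*envs.values())
    return {
        key: sorted(name for name, cfg in envs.items() if key not in cfg)
        for key in all_keys
        if any(key not in cfg for cfg in envs.values())
    }

envs = {
    "dev":  {"DB_URL", "CACHE_URL"},
    "test": {"DB_URL", "CACHE_URL"},
    "prod": {"DB_URL", "CACHE_URL", "TLS_CERT"},
}
print(parity_gaps(envs))  # TLS_CERT is missing from dev and test
```

A PaaS that runs the same artifact and configuration model in every stage makes a check like this trivially empty, which is the point of production parity.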
Bonus points for being open source, understanding compliance and policy mandates, and supporting customization to suit the dynamic needs of different organizations. An active community that sustains the ecosystem, drives innovation, and manages change makes the case for a PaaS tool even more compelling.