Application Lifecycle Management (ALM) is a top-down process resented by the developers, testers, and IT personnel forced to use it. With its roots in industrial manufacturing practices such as Product Lifecycle Management (PLM), ALM has never been a natural fit for the software industry, and in recent years processes such as continuous deployment have started to eclipse traditional ALM practices and tools. This article looks at the reasons behind the growth of continuous deployment and why you should adopt it.
Waterfalls and Silos: ALM Process & Practice
ALM covers the planning, design, implementation, testing, delivery, and maintenance stages of an application’s lifecycle. Let’s look at the ALM process and the tools used to make that process work.
Each stage of the ALM process is performed in sequence by distinct teams following the waterfall development model. In larger organizations, this process is often initiated by business analysts who determine that there is a need for a specific product or service. The analysts gather business requirements, which are then handed over to the design team and translated into functional requirements used to design a system architecture and specifications. When the design is complete, it is assigned to a development team to write the code and implement the design. After the code is written and the application built, it enters the testing phase, after which the application is handed over to the deployment team for release.
At this point, we can already see a number of common problems with implementing the ALM process. For starters, it is based on an industrial supply-chain/production-line model that assumes key components will be delivered according to strict deadlines. Anyone involved in software development knows that deliverables rarely arrive exactly on schedule, and a delay at any stage cascades into every stage that follows. To make matters worse, the ALM process depends on specialized teams to handle each stage. Ideally, the process should be horizontal, with information and deliverables flowing from one group to the next. But over time, the individual teams of analysts, designers, developers, and testers compartmentalize and form vertical silos. The teams stop communicating and often develop hostility toward each other.
One common way to overcome the inherent problems of ALM is to create or buy tools. But this often leads to a number of new and different problems, including those encountered when organizations choose between best of breed or integrated systems for their ALM solution.
The best of breed approach is based on the assumption that you should be able to choose the best product in each category or subcategory. In theory, the result is a system in which the whole exceeds the sum of its parts. In practice, this rarely happens, since integrating products from different vendors is hard and frustrating. As is common in the software industry, there is a period of consolidation in which smaller vendors are acquired by the dominant player in the space. In theory, this should let you choose best of breed products from a single vendor that works to integrate, maintain, and support each item in its vast and growing portfolio. Unfortunately, in reality this outcome, albeit possible, is extremely rare.
An integrated ALM system tries to provide all the key parts of the ALM process in a single package, allowing users to manage requirements, tests, and defects with a single product. But the integrated approach is a "jack of all trades, master of none." For example, if the product was developed by a company that specializes in software testing tools, the test management module is likely far better than the requirements and defects modules. As a result, users must choose the lesser of two evils: try to work with the inferior tools from their chosen vendor or integrate additional third-party tools. In the latter case, an organization that opts for an integrated approach ends up maintaining a de facto best of breed solution anyway.
Continuous Everything: Integration, Delivery, and Deployment
Over the last twenty years, a very different, grassroots approach has evolved to manage the software development process. Following the release of the Agile Manifesto, a number of new methodologies and practices came out of the development community, including agile, DevOps, lean startup, open source, and test-driven development. In parallel, new technologies and services, such as Git, GitHub, package management, test automation, virtual machines, and containers, changed the way software was developed. In time, these developments led to new ways of continuously integrating and delivering software, culminating in continuous delivery and deployment.
Continuous delivery builds on continuous integration and aims to shrink the gap between writing application source code and deploying that code live to the users of a software-based product or service. Achieving this goal involves building a "delivery pipeline" that provides all the necessary infrastructure to build, test, release, and deploy production software with high frequency. Many practitioners of continuous deployment not only build but also update production code multiple times a day. To do this, the process relies on integrating open-source testing, source control, build, and deployment software to create a highly automated and effective process.
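To make the pipeline idea concrete, here is a minimal sketch of the build-test-release-deploy stages described above. All the function names are hypothetical placeholders; a real pipeline would run each stage as a separate, automated CI job triggered on every commit, not as an in-process script.

```python
# Minimal sketch of a delivery pipeline (all stage names are hypothetical).
# Each stage stands in for a real CI job: compiling, running test suites,
# publishing an artifact, and rolling it out to production.

def build() -> str:
    # e.g. compile sources and resolve dependencies
    return "built"

def run_tests() -> str:
    # e.g. run unit and integration suites
    return "tested"

def release() -> str:
    # e.g. tag and publish a versioned artifact
    return "released"

def deploy() -> str:
    # e.g. roll the artifact out to production
    return "deployed"

def run_pipeline() -> list[str]:
    # Stages run strictly in order; any exception aborts the pipeline,
    # just as a failing job stops a real CI/CD run.
    return [stage() for stage in (build, run_tests, release, deploy)]

print(run_pipeline())  # ['built', 'tested', 'released', 'deployed']
```

The key design point is that the stages form a single ordered chain with a fail-fast guarantee: code reaches the deploy stage only after every earlier stage has succeeded.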
In general, continuous processes (integration, delivery, and deployment) have produced better and more consistent results than ALM-based processes. One key reason for the widespread adoption of these new practices is the general acceptance of agile development. Agile is no longer a controversial fringe movement; it has moved into the mainstream, with practitioners everywhere from the smallest startups to large corporations and, in some cases, even the government sector. Now that agile practices are broadly accepted, developers have created tools based on agile ideals. Many of these tools are built and maintained by the people who use them daily, making them easy both to extend and to integrate with other open-source tools.
Over time, these tools have matured and become the basic building blocks of modern application development. As a result, developers can today focus on delivering products and producing results instead of dealing with complicated processes. And customers and users get new products and features with the added ability to give their feedback in almost real time. This means that once a user discovers an issue with an application and notifies the developers, the developers can fix the issue and release a new, improved version of the product faster than ever before.
Delivering Continuous Improvement
In many ways, ALM was an attempt to tame the development process by trying to assert total control over it. One method for asserting this control was by gathering requirements and ensuring that the final implementation fulfilled them. Another key method was for project managers to always have their fingers on the pulse of a project by defining a range of metrics and monitoring key performance indicators (KPIs).
In today’s agile world, formally managing requirements and metrics has largely fallen out of favor, but these two areas are still vitally important to building and delivering quality software. To be successful, an application or service must still meet the requirements of its users. Furthermore, the delivery toolchain used to create modern applications is highly automated and requires instrumentation, in other words, monitoring and metrics. This is where an Enterprise Agile Delivery (EAD) solution, like Panaya’s Release Dynamix (RDx), can help. RDx enables you to manage requirements and control the entire delivery process using agile methods along with DevOps continuous tooling to produce great software.
Summary: Combining the Best of Both Worlds
ALM may have been a flawed methodology that at best produced mixed results, but it was an attempt to impose order on the chaos that then prevailed in managing software development projects. As this post has shown, ALM processes exacerbated existing problems, such as communication breakdowns and compartmentalization, and many organizations tried to overcome the complexities of ALM with expensive and complicated tools. Neither the processes nor the tools made things better, and in many cases they actually made things worse.
Today, we are in a new world of continuous integration, delivery, and deployment, with DevOps processes seen as the final realization of the agile movement. Still, there are lessons to be learned from ALM that can benefit teams and organizations as they implement and expand the use of continuous processes. By adopting new frameworks like Enterprise Agile Delivery and new tools such as Panaya’s Release Dynamix, you can combine the best of both worlds to create groundbreaking, high-quality applications.
Read more about how Agile and DevOps drive success, along with best practices for achieving Enterprise Agile Delivery, in the complimentary Forrester report, The State of Agile 2017: Agile at Scale.