Tired of Playing Ping-Pong with Dev, QA and Ops?

Editor’s Note: While Thoughtworks Studios is affiliated with the IT consultancy ThoughtWorks, its aim is the development and marketing of automation tools that enable more agile delivery through DevOps practices.

IT organizations are under ever-increasing pressure to deliver software faster and more reliably. On the one hand, businesses are being squeezed by faster-moving competition. On the other hand, IT often spends upwards of 70 percent of its budget on operations, much of it on maintaining mission-critical systems on heterogeneous and legacy platforms.

As businesses attempt to become more responsive to change, project teams that have successfully adopted an agile development approach are requesting releases at an ever-increasing frequency that IT operations simply cannot support without compromising quality and stability. Because of this, IT practitioners have begun to coalesce around a new approach to this problem that involves organizational change, as well as embracing new practices. This approach is known as DevOps.

At its heart, DevOps focuses on enabling rapid, reliable releases to users through stronger collaboration between everyone involved in the solution delivery lifecycle. One of the important results of this collaboration is the application of agile practices to the work of operations, in particular aggressive automation of the build, deploy, test, and release processes for hardware and system environments.

In practice, DevOps means that owners, developers, testers and operations personnel collaborate on the evolution of systems throughout the service lifecycle. New releases are deployed to production-like environments (or even to production, depending on business needs) continuously throughout the development process. This requires operations people to work with developers from early on to automate provisioning, deployment and monitoring of environments and systems under development. Meanwhile, testers work with developers — throughout the delivery process — to create comprehensive suites of automated regression tests that validate that every change to the system meets established business needs and can be deployed with minimal risk of defects or system failure.
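To make the regression-testing practice concrete, here is a minimal sketch of the kind of automated check such a suite would contain. The business rule, function name and discount threshold are all hypothetical — the point is that the rule is validated automatically on every change, not by a manual QA pass before release:

```python
import unittest

# Hypothetical business rule under test: orders above 100 units receive a
# 10% volume discount. Names and thresholds are illustrative only.
def order_total(unit_price: float, quantity: int) -> float:
    """Compute an order total, applying a 10% discount above 100 units."""
    total = unit_price * quantity
    if quantity > 100:
        total *= 0.9
    return round(total, 2)

class OrderTotalRegressionTests(unittest.TestCase):
    """The kind of automated check a pipeline runs on every commit."""

    def test_small_order_has_no_discount(self):
        self.assertEqual(order_total(2.50, 10), 25.00)

    def test_large_order_gets_volume_discount(self):
        self.assertEqual(order_total(2.50, 200), 450.00)
```

A delivery pipeline would run such a suite (for example with `python -m unittest`) against every candidate build, so a change that breaks an established business rule is rejected long before it reaches production.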

While this approach has proven effective at improving the reliability and stability both of releases and of operational environments, it can pose several challenges to organizations wishing to adopt it. First, it requires an organizational culture where developers, testers and operations teams collaborate throughout all phases of the service lifecycle. Second, it requires the adoption of agile techniques by operations teams, such as aggressive automation, comprehensive environment configuration management and test-driven systems evolution. And, finally, it involves the adoption of new tools such as cloud computing stacks, test automation frameworks and systems monitoring tools.
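The environment configuration management mentioned above rests on a simple idea: the desired state of an environment is declared as data, and the live environment is checked against that declaration automatically. This is a minimal sketch of that comparison, not tied to any real configuration tool; the setting names and values are assumptions:

```python
# Desired environment state, declared as data (illustrative keys/values).
desired = {"java_version": "11", "heap_mb": 2048, "tls": "enabled"}

def find_drift(desired: dict, actual: dict) -> dict:
    """Return the settings where the live environment differs from the declaration."""
    return {
        key: {"desired": value, "actual": actual.get(key)}
        for key, value in desired.items()
        if actual.get(key) != value
    }

# A live environment that has drifted on one setting.
live = {"java_version": "11", "heap_mb": 1024, "tls": "enabled"}
print(find_drift(desired, live))
# {'heap_mb': {'desired': 2048, 'actual': 1024}}
```

Real configuration management tools go further and converge the environment back to the declared state, but the core discipline is the same: environments are described in version-controlled data rather than hand-configured.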

How DevOps enables effective risk management

Many of these changes can be controversial in an organization. For a start, they appear to fly in the face of established control concepts such as segregation of duties and least privilege. These controls have been adopted from the accounting world as a way to meet regulations and standards such as Sarbanes-Oxley and PCI and are intended to reduce errors and fraud.

In the traditional view, the controls are usually applied manually and can become an obstruction to the IT organization’s ability to respond to the business’s needs, due to onerous review and approval processes and functional silos that do not collaborate on solutions. However, the DevOps approach to delivery can apply automated processes with controls in an equally effective manner to manage risk and thus meet regulatory goals, while still enabling frequent releases of new functionality to IT systems that deliver value to the business.