Continuous Pipeline Configuration

The continuous integration pipeline evolves over time as architecture and design changes are made. Manually configuring the pipeline can be a costly activity if the project contains a large number of modules. Setting up a ‘Continuous Pipeline Configuration’ process may become a necessity, both for reducing this time-consuming effort and for reducing the risk that important bug fixes or other changes are left out of the final product build due to an improper pipeline configuration.

Developing a CPC process to regularly validate and adjust the pipeline is something I highly recommend for a project of any size, but especially for large, complex modular projects. CPC is actually quite easy once you know what kinds of source code changes will require corresponding changes in the pipeline configuration.

The CPC process works much the same as a CI server watching a version control system for changes. The pipeline configurator watches for changes to the build dependency graph for the full product build and makes the appropriate adjustments to the build and test triggers in the CI server. When a build or test dependency is removed from the graph, the corresponding CI trigger is removed; when a dependency is added, a trigger is added.

The Continuous Pipeline Configuration process:

  1. Capture the product build and test dependency graph.
  2. Capture the existing Continuous Integration pipeline configuration.
  3. Compare the dependency graph to the pipeline configuration.
  4. Update the CI triggers as needed (see the sketch below).
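
Here is a minimal sketch of steps 1 through 3 in Python, assuming both the dependency graph and the trigger configuration can be captured as sets of (consumer, producer) pairs. The module names are hypothetical, and step 4 would be carried out against whatever API your CI server exposes.

    from typing import Iterable, Set, Tuple

    # An edge (consumer, producer) records "consumer depends on producer".
    Edge = Tuple[str, str]

    def diff_pipeline(dependency_graph: Iterable[Edge],
                      ci_triggers: Iterable[Edge]) -> Tuple[Set[Edge], Set[Edge]]:
        """Steps 1-3: compare the captured dependency graph against the
        captured CI trigger configuration; return (to_add, to_remove)."""
        desired = set(dependency_graph)
        existing = set(ci_triggers)
        return desired - existing, existing - desired

    # Hypothetical module names, purely for illustration.
    graph = [("app", "core-lib"), ("app", "ui-lib"), ("installer", "app")]
    triggers = [("app", "core-lib"), ("app", "old-lib")]

    to_add, to_remove = diff_pipeline(graph, triggers)
    print("triggers to add:", to_add)        # app/ui-lib and installer/app
    print("triggers to remove:", to_remove)  # app/old-lib

Treating both sides as plain sets keeps the comparison in step 3 down to two set differences, which is why it pays to capture the graph and the trigger configuration in the same shape.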

The product build and test dependency graph is the complete list of ‘A depends on B’ relationships. I like to think of this as a producer-consumer relationship, where B is the producer and A is the consumer if A depends on B. Both A and B can be any type of process, activity, or event, such as build, test, bundle, or deploy. It is not mandatory for these producers to actually publish binary artifacts. In some cases, the only output is a pass or fail result – enough to determine whether a corresponding CI trigger should or should not execute the next event in the pipeline. If you are using Artifactory as your artifact repository, you can use it to store and retrieve the dependency graph via the Artifactory Query Language (AQL).
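
As a sketch of what that capture might look like, Artifactory accepts AQL queries posted as plain text to its /api/search/aql REST endpoint. The server URL, credentials, and build name below are placeholders, and the exact query depends on how your builds publish their dependency metadata.

    import requests

    # Placeholder coordinates -- substitute your own server and credentials.
    ARTIFACTORY = "https://artifactory.example.com/artifactory"
    AUTH = ("ci-user", "api-key")

    # AQL queries are posted as plain text. This one lists builds by name;
    # mapping the results onto 'A depends on B' edges depends on the
    # metadata your builds publish alongside their artifacts.
    query = 'builds.find({"name": "product-build"}).include("name", "number")'

    response = requests.post(
        f"{ARTIFACTORY}/api/search/aql",
        data=query,
        headers={"Content-Type": "text/plain"},
        auth=AUTH,
    )
    response.raise_for_status()

    for record in response.json()["results"]:
        print(record)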

If the consumer module’s dependency is a static constraint, the dependency is a ‘pull’ connection in the pipeline; if the dependency is a dynamic constraint, it is a ‘push’ connection. Pull connections don’t need a CI trigger, since the change to pick up new versions will be a code change that triggers the ‘consumer’ build or test. Push connections require finish-build triggers in the CI server so that new ‘producer’ builds will trigger ‘consumer’ builds if they succeed.
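
A rough way to automate that classification, assuming Ivy/Gradle-style version notation where exact pins are static constraints and ranges or ‘latest’ selectors are dynamic (the module names and constraints here are purely illustrative):

    # Classify each dependency edge as a 'pull' or 'push' connection
    # based on its version constraint.

    def connection_type(version_constraint: str) -> str:
        # Ranges, wildcards, and 'latest' selectors are dynamic constraints.
        dynamic_markers = ("+", "latest", "[", "(")
        if any(marker in version_constraint for marker in dynamic_markers):
            return "push"  # needs a finish-build trigger in the CI server
        return "pull"      # picked up by an ordinary code change; no trigger

    deps = {
        ("app", "core-lib"): "2.4.1",           # static pin -> pull
        ("app", "ui-lib"): "1.+",               # dynamic    -> push
        ("installer", "app"): "latest.release", # dynamic    -> push
    }

    for (consumer, producer), constraint in deps.items():
        print(f"{consumer} -> {producer}: {connection_type(constraint)}")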

Continuous Pipeline Configuration is not limited to Continuous Integration. It can be used for configuring all of the pipelines in the Continuous Delivery process. My thoughts on CD are very similar to those of James Betteley, as he describes his views here: Methods and Tools. Continuous Delivery extends the core principles of CI all the way through to the end of the software development life cycle.

We’re automation experts.  Let’s take Continuous Integration – and ultimately Continuous Delivery – to the next level with Continuous Pipeline Configuration.

Continuous Delivery is a Solution

The rush is on! Software development organizations everywhere are rushing to implement Continuous Delivery. It seems as though being ‘continuous’ is the answer to every company’s success story. Successful adoption of the continuous movement is treated as synonymous with product development success, but an important distinction is missing: Continuous Delivery is a solution, not a development process.

I’ve been around long enough to see a few ‘one size fits all’ solutions go too far. One time in the late ’90s, someone proclaimed to me that object-oriented programming would replace every bit of procedural code in existence. COBOL was out; C++ and Java were in. OO was all the rage. But the last time I checked, my bank transfers are still not entirely real-time operations. The debit side happens right away, but the deposit happens during an evening batch processing run. Some kind of procedural program is still running in there.

Puppet and Chef are great tools, but what process do you follow when deciding if you need them? Is it simply a ‘follow the herd’ decision? Did the loudest, most passionate developer make the decision for you? Is continuous deploy the right solution for your build environment? Did you create a set of requirements, analyze them, and select accordingly?

Of course, it’s not that these continuous solutions are bad.  Quite the opposite; they are great solutions.  It’s the notion of a single solution being right for every problem that doesn’t sit well with me.  Herd mentality appears to have kicked in and everyone, it seems, is on a rampage to pursue this great new silver bullet.

Continuous integration is something many of us have been working toward for the past 15 years of software engineering. I think of continuous integration, continuous test, continuous deploy, and all the others as nothing more than pieces of a puzzle. They are great pieces, and suitable for many organizations. CI is an easy way to say, “one requirement of my SCM environment is end-to-end, unattended build automation.” Are these other continuous solutions anything more than the same principles applied to the rest of the development processes? Is continuous test the right way to test every software product? Is continuous deploy the right solution for your deployments?

A better choice is the pursuit of a process to determine the best solution for creating and delivering our software. If the solution produced by the process is Continuous Delivery, and it just might turn out that way, then awesome. But if it’s not, you will be well on your way to figuring out the best solution for your situation.

My suggestion is to use an SDLC approach to determine whether one or all of the continuous solutions are right for your project. Start by understanding the stakeholders and defining the requirements instead of rushing to the answer. Follow the process through implementation and maintenance, and listen to the feedback. Using an SDLC to determine our internal development processes means our stakeholders are internal. Some are easy to find, but others are quiet and you will have to look for them.

This site will never be a one-stop shopping destination for everything related to the internal SDLC. It is my intent to start conversations regarding various ways to effectively use, and perhaps even formalize, an SDLC to manage our internal development processes. I think it’s prudent to view all of the internal development processes together as a single system and to evaluate their effectiveness using the same tried and proven techniques we use for evaluating how successfully our business products satisfy the needs of our customers.