Quality Intelligence for DevOps

We have moved

This is a legacy version of the Copado Robotic Testing help. This page is no longer updated.

See https://docs.copado.com/ for up-to-date documentation.

Introduction to Quality Intelligence for DevOps

Quality Intelligence (QI) for DevOps is a DevOps data analytics and integration solution that provides tried-and-true DevOps metrics, allowing you to:

  • Understand the current status across all phases of the DevOps (development and operations) process

  • Understand how different factors affect each other (leading indicators)

  • Make your value stream visible, measure cycle times, and observe where the bottlenecks and queues are

  • Solve issues proactively rather than after-the-fact

  • Start measuring DevOps on Day 1 by just configuring the dashboard and integrations – without any massive infrastructure development projects

QI for DevOps gathers data from your DevOps data sources and presents it in a form that can be easily understood by everyone in the organization. You don’t need an in-depth understanding of every area of measurement; it’s enough to understand the traffic light colors (is it good or bad?) and index values (how good or bad is it?). All metrics are available to view in one place at any time without the need to log in to different tools.
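The "traffic light plus index" idea can be sketched as a simple threshold mapping. Note that the 0–100 scale and the thresholds below are illustrative assumptions for the sketch, not the actual scoring used by QI for DevOps.

```python
def traffic_light(index: float) -> str:
    """Map a metric index (assumed 0-100 scale, higher is better)
    to a traffic-light color. Thresholds are illustrative only."""
    if index >= 80:
        return "green"   # good: no action needed
    if index >= 50:
        return "yellow"  # warning: worth investigating
    return "red"         # bad: needs action

# Example: summarize a few (made-up) metric indexes at a glance
metrics = {"Release quality": 91, "Velocity": 62, "Technical debt": 34}
for name, index in metrics.items():
    print(f"{name}: {index} -> {traffic_light(index)}")
```

The point of the two-level view is that anyone can scan the colors for status, while the index values preserve the magnitude for those who want to dig deeper.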

The essential elements of Quality Intelligence for DevOps are:

  1. The Copado Robotic Testing Data Warehouse, which stores all measurement data. All test data from Copado Robotic Testing test runs is also stored in the Data Warehouse.

  2. Pull Services, which pull data from other DevOps tools and store it in the Data Warehouse.

  3. Push Services, which can be implemented to push data to the Data Warehouse using its API.

  4. Insights – the data analytics application that processes the data stored in the Data Warehouse and lets you define metrics, data sources, and dashboards for visualizing the data.
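To illustrate the push model, the sketch below assembles a measurement record that a Push Service might send to the Data Warehouse API with an HTTP POST. The endpoint URL, field names, and authorization header are hypothetical placeholders; consult the actual Data Warehouse API documentation for the real schema and authentication.

```python
import json
from datetime import datetime, timezone
# from urllib import request  # uncomment to actually send the payload

def build_measurement(metric: str, value: float, source: str) -> dict:
    """Assemble one measurement record. All field names here are
    hypothetical -- the real Data Warehouse API schema may differ."""
    return {
        "metric": metric,
        "value": value,
        "source": source,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

payload = json.dumps([build_measurement("deployment_frequency", 4.0, "ci-server")])

# Hypothetical send (endpoint and auth header are placeholders):
# req = request.Request(
#     "https://example.invalid/datawarehouse/api/measurements",
#     data=payload.encode(), method="POST",
#     headers={"Content-Type": "application/json",
#              "Authorization": "Bearer <api-token>"})
# request.urlopen(req)
```

A push integration like this is typically triggered from a CI pipeline step, so each build or deployment reports its own measurements as they happen.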

How to find actionable DevOps metrics?

It is difficult to implement a useful, effective software measurement system. Perhaps the most common problem is focusing on vanity metrics: metrics that sound cool or produce nice scores that make your team look good, but that don’t help you improve anything. Metrics might be irrelevant, easy to manipulate (e.g. lines of code, velocity as story points), or based on data that is unreliable or skewed in a way that flatters your scores. If you can’t derive any actions from your metrics scorecard, you probably have this issue. The QI for DevOps dashboard comes with a pre-defined set of proven, actionable metrics into which you can integrate your data sources.

Another typical measurement challenge is the lack of leading indicators. Your metrics might be valid as such, but you don’t know what factors caused the results. For example, one day you notice that your Quality in Use metric has dropped by 20% in two weeks and your Deployment Frequency keeps slowing down. What is causing these changes? Which levers should you pull to fix the situation?

QI for DevOps metrics have been derived from a system model that illustrates how DevOps creates value and depicts the assumed causalities among DevOps outcomes. We call this visualization the DevOps Value Creation Model (VCM).

[Figure: DevOps Value Creation Model (VCM)]

The DevOps process is often depicted as a never-ending, fast-paced infinity loop, as shown above. One full DevOps cycle can take a month or just one day. PLAN refers to agile planning, and BUILD to implementing the code of the next release, which is integrated and tested in the CONTINUOUS INTEGRATION phase. DEPLOY is about delivering the release candidate to production, and OPERATE is the Ops phase of each production release, where its performance is monitored. We also want to LEARN from the software we build as well as from the DevOps process itself, and continuously improve both. Understanding the core logic of how DevOps works and measuring the essential indicators helps us here.

The arrows and their colors indicate assumed causalities. A blue arrow in the value creation model means that the measured items move in the same direction: the higher (or lower) the Production deployment frequency, the higher (or lower) the Flow of value will be. A red arrow denotes the opposite direction: increasing Technical debt leads to decreasing Code quality and Velocity.

The goals of DevOps are to 1) accelerate the flow of value deliveries and 2) keep the quality in use (i.e. quality as perceived by the customer) high. The causality chains that contribute most to these DevOps goals are called value paths. The value paths give you leading indicators that help you fix quality and productivity issues proactively, before they escalate.

We have identified two value paths for DevOps:

  1. Path of Flow Velocity: Quality in use → Defect Inflow → Unplanned work → Flow load → Delivery predictability → Firefighting → Technical debt → Velocity → Flow of value; and Release quality → Production deployment frequency → Flow of value.

  2. Path of Release Quality: Technical debt → Code quality → Functional quality, Performance and Security → Release quality → Quality in use.
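A value path can be read as a signed directed graph: each blue arrow carries sign +1 (same direction) and each red arrow −1 (opposite direction), and multiplying the signs along a path gives the net direction of a change's effect. The sketch below encodes a simplified slice of the Path of Release Quality this way; the edge signs are an interpretation inferred from the arrow colors described above, not an official encoding of the VCM.

```python
# Signed edges: +1 = blue arrow (same direction), -1 = red arrow (opposite).
# Simplified slice of the Path of Release Quality; signs are an
# interpretation of the VCM description, not an official encoding.
EDGES = {
    ("Technical debt", "Code quality"): -1,          # red: more debt, worse quality
    ("Code quality", "Functional quality"): +1,      # blue
    ("Functional quality", "Release quality"): +1,   # blue
    ("Release quality", "Quality in use"): +1,       # blue
}

def net_effect(path: list) -> int:
    """Multiply edge signs along a path: +1 means the endpoints move
    in the same direction, -1 means they move in opposite directions."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= EDGES[(a, b)]
    return sign

release_quality_path = ["Technical debt", "Code quality",
                        "Functional quality", "Release quality",
                        "Quality in use"]
# Increasing Technical debt ultimately *decreases* Quality in use:
print(net_effect(release_quality_path))  # -1
```

This is what makes the upstream nodes leading indicators: a change at the head of a path tells you, ahead of time, which direction the downstream outcomes will move.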

The logic of these two value paths can be summarized as follows:

  • Path of Flow Velocity answers the question "How can I accelerate the flow of value?". It highlights what you can do to make sure you deliver valuable new features or fixes to your end-users as fast as you can.

  • Path of Release Quality answers the questions 1) "Is the Release Candidate OK for Production?" and 2) "What should I fix to prevent service outages?". It highlights what affects the quality of your deliverables and their current status.

  • You need to release good-quality software to production so that the team can focus on new development rather than just fixing bugs (ref. Downward spiral of IT).

  • Good enough Release quality is also a must-have enabler for frequent production deployments. The value from software development efforts is realized only when the software is deployed to production.

  • Achieving consistently high release quality requires explicit investment in paying down technical (and process automation) debt, as well as solid technical practices for software development and testing.