
Twelve Key CTO and CIO Metrics of Codebase Health

Codebase Health – the quality, security and other non-functional requirements of the code, along with the development process and the developers who build the code – is a critical determinant of organizational success.

Oct 5, 2023

Introduction

Without sufficient codebase health, the product roadmap can’t be delivered with predictability and appropriate velocity. Security breaches could cost the organization millions or more. Customer retention is at risk.

Sema has analyzed our dataset of 4 billion lines of code, representing $1T of enterprise value across all team sizes and private-sector industries, to identify twelve key metrics across Engineering functions that assess codebase health.

You can read a whitepaper quantitatively analyzing many of these metrics here.

And you can use an anonymous calculator to estimate your organization's codebase health here. It takes about five minutes.


These twelve metrics can be divided into two main categories:

  • Product Risk, affecting the ability to deliver code with the right requirements predictably and with suitable velocity.
  • Compliance Risk, the degree to which the codebase is susceptible to security breaches and legal risk.

The twelve metrics can further be broken into six functional areas, three related to Product Risk and three to Compliance.

Codebase Health Framework

The six functional areas are listed below along with questions that Engineering leaders should have answers to at their fingertips.

Product Risk:

  1. Code Quality: Is the code good enough to be expanded? How much investment will be required to clean up technical debt?
  2. Development Process: How disciplined is the software development activity, and is the observed variation expected and desired?
  3. Team: Who are the code’s subject matter experts, and do they still work at the organization?

Compliance Risk:

  1. Open Source: Could the organization be legally required to give its code away for free because of the third-party code it uses?
  2. Code Security: Are there risks to the code or client information being hacked?
  3. Cyber Security: Have code or user credentials been hacked or accidentally shared? Is email set up to prevent phishing and spoofing?

Here are the twelve metrics and why they matter.


Product Risk Metrics

Number of repositories needing refactoring (Code Quality):

  • What it is: refactoring changes the code’s structure without changing its functionality.
  • Why does it matter: Refactoring can improve the code’s maintainability and extensibility, and ultimately velocity. But it imposes a “tax” on the product roadmap by reducing capacity for new features and functionality.
  • Implications: Engineering leaders should view potential refactorings skeptically given their cost, and present a clear ROI when making the case to the C-Suite. The rest of the C-Suite should share that skepticism, but be open to refactorings when the case is clear.

Language composition (Code Quality):

  • What it is: the number of software languages used by a codebase, and the riskiness of the individual languages used. Language riskiness includes the number of developers who know the language, whether the language is still maintained, and how unpopular that language is with developers.
  • Why does it matter: having too many languages, no matter how strong they are individually, makes it harder to find developers and balance workloads across the Engineering team. Having languages that are rare, unpopular, or unsupported puts the maintainability or extension of the code at risk.
  • Implications: Engineering teams should have a formal process to approve new languages in the core codebase (not counting Proofs of Concepts). The total list of languages should be reviewed annually. Engineering teams should make contingency plans when an individual language is too risky.
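As a rough illustration, a language census can start from nothing more than file extensions. The sketch below is a minimal, assumption-laden version (the extension map and file list are hypothetical); dedicated tools such as GitHub's Linguist use far richer heuristics.

```python
from collections import Counter
from pathlib import Path

# Hypothetical mapping for illustration only; real detectors go well
# beyond file extensions (shebangs, content heuristics, vendoring rules).
EXT_TO_LANG = {
    ".py": "Python", ".js": "JavaScript", ".ts": "TypeScript",
    ".java": "Java", ".go": "Go", ".rb": "Ruby", ".pl": "Perl",
}

def language_composition(paths):
    """Count source files per language from an iterable of file paths."""
    counts = Counter()
    for p in paths:
        lang = EXT_TO_LANG.get(Path(p).suffix)
        if lang:  # ignore extensions we don't recognize
            counts[lang] += 1
    return counts
```

Running this over a repository file listing gives a quick census to feed the annual language review.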

Core technical debt (Code Quality):

  • What it is: “imperfections” in the code that are common across languages, including insufficient unit testing, duplicate blocks of code, excessive complexity, and the presence of line-level warnings (analogous to what Grammarly flags in English prose).
  • Why does it matter: All code has technical debt. If it didn’t, nothing would ever get released—the code would be technically perfect but practically useless. Too much technical debt, however, hurts developer productivity and can cause performance and reliability issues for customers.
  • Implications: measure technical debt regularly and agree on the appropriate level for the organization’s size and stage. Fund the Engineering team to reduce technical debt to that level and train them on optimizing that debt.
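One of the debt signals above, duplicate blocks of code, can be approximated with a sliding window over source lines. This is a naive sketch for illustration only; production analyzers normalize tokens rather than compare raw text.

```python
from collections import defaultdict

def duplicate_blocks(lines, window=4):
    """Return normalized blocks of `window` lines that appear more than once."""
    seen = defaultdict(int)
    for i in range(len(lines) - window + 1):
        block = tuple(l.strip() for l in lines[i:i + window])
        if all(block):  # skip windows containing blank lines
            seen[block] += 1
    return {b: n for b, n in seen.items() if n > 1}
```

Counting duplicated windows across a repository gives one crude but trackable technical-debt number.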

Commit Analysis (Development Process):

  • What it is: how much development activity is going on, measured by the number of commits the entire Engineering team makes over time.
  • Why does it matter: Predictable Engineering activity leads to predictable product releases.
  • Implications: Measure total development activity over time. Explore the variations and see what the causes are. If development activity is unexpectedly fluctuating or declining, identify and resolve any blockers the Engineers are facing.
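The steps above can be sketched in a few lines: bucket commit dates by month, then look for unexpected dips or spikes. The commit log here is hypothetical; in practice it might be parsed from `git log --pretty=format:"%ad %an" --date=short`.

```python
from collections import Counter
from datetime import date

# Hypothetical commit log: (date, author) pairs.
commits = [
    (date(2023, 7, 3), "ada"), (date(2023, 7, 21), "grace"),
    (date(2023, 8, 2), "ada"), (date(2023, 8, 15), "ada"),
    (date(2023, 9, 9), "grace"),
]

def commits_per_month(log):
    """Bucket commit counts by (year, month) to spot trends and dips."""
    return Counter((d.year, d.month) for d, _ in log)
```

Plotting the resulting counts over time makes fluctuations easy to investigate with the team.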

Development team size (Development Process):

  • What it is: total count of developers working on the codebase in a given time period.
  • Why does it matter: Developers craft the code that is eating the world. In a good way.
  • Implications: Understand who is working on the team and whether the change over time is expected. If it is unexpected, get to the root cause. Note: this metric is especially important when working with third-party development shops.

Average Developer Activity (Development Process):

  • What it is: the average amount of code (measured by commits) produced per developer over a given period of time.
  • Why does it matter: Consistent or increasing average developer activity increases the likelihood of meeting product roadmap goals.
  • Implications: Track the average alongside team size; a sustained decline often signals blockers that need to be identified and resolved.
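A minimal sketch of the metric itself, assuming you already have per-period commit and headcount tallies (the period labels and sample numbers below are hypothetical):

```python
def average_activity(monthly_commits, monthly_devs):
    """Commits per active developer for each period with a nonzero team."""
    return {m: monthly_commits[m] / monthly_devs[m]
            for m in monthly_commits if monthly_devs.get(m)}
```

Dividing activity by headcount separates "the team shrank" from "each developer slowed down", which call for different responses.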

Developer Retention (Team):

  • What it is: percentage of developers who’ve written a meaningful portion of the code who still work at the organization.
  • Why does it matter: Trying to maintain or write code without the coders who created it is like trying to finish a novel without the novelist. There is no substitute for having devs who understand the mental model and the nuances of the code.
  • Implications: Know who the key subject matter expert developers are. Keep them happy. When areas of the code have too little subject matter expertise, invest in cross-training and knowledge codification.
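One hedged way to operationalize this: call a developer a subject matter expert if they authored at least some threshold share of the commits, then measure how many are still employed. The 5% threshold and the sample data below are illustrative assumptions, not Sema's method.

```python
def retention(contribution_share, current_staff, threshold=0.05):
    """Share of 'meaningful' contributors (>= threshold of commits)
    who still work at the organization; None if there are none."""
    key = [d for d, share in contribution_share.items() if share >= threshold]
    if not key:
        return None
    return sum(d in current_staff for d in key) / len(key)
```

Segmenting the same calculation per repository highlights the areas of the code that have lost their experts and need cross-training first.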


Compliance Risk Metrics

Third-Party Code Referenced Package IP Risk (Open Source):

  • What it is: the number of high-risk licenses from Open Source (third-party) code that is managed with a package manager.
  • Why does it matter: All code, Open Source included, comes with a license. Some licenses are permissive and present no legal risk. Others, such as “Copyleft” licenses, generate legal risk for the business if used the wrong way, in particular the risk that the organization’s code must be given away for free.
  • Implications: train developers to review licenses when considering which Open Source code to use. Periodically review the code for at-risk licenses and remediate the “true positives.”
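A periodic review like the one above can be sketched as a filter over a dependency-to-license map. The license data and copyleft list below are illustrative assumptions; a real scan would pull SPDX license identifiers from the package manager's metadata.

```python
# Hypothetical "strong copyleft" watch list (SPDX-style identifiers).
COPYLEFT = {"GPL-2.0", "GPL-3.0", "AGPL-3.0"}

def flag_high_risk(dependencies):
    """Return dependencies whose declared license is strong copyleft."""
    return {name: lic for name, lic in dependencies.items()
            if lic in COPYLEFT}
```

Each flagged dependency is a candidate "true positive" to confirm with counsel before remediating.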

Third-Party Code In-File IP Risk (Open Source):

  • What it is: similar to #8, but concerning the legal risk from Open Source code copied directly into the repositories rather than referenced through a package manager.
  • Why does it matter: The organizational risks are the same as #8. What makes this metric riskier is that in-file license risk is an iceberg: the vast majority is hidden from sight, since it is not neatly organized by a package manager. Across our dataset, organizations have on average 77 times more In-File risk than Referenced risk.
  • Implications: Same as #8, but tooling is necessary to find in-file risk; it can’t be done manually.
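Tooling for in-file detection ultimately reduces to matching license fingerprints inside source files. The sketch below uses a tiny, hypothetical phrase list; real scanners match full license texts and their many variants, not just headline phrases.

```python
# Hypothetical signature phrases mapped to license families.
SIGNATURES = {
    "GNU General Public License": "GPL",
    "GNU Affero": "AGPL",
    "Mozilla Public License": "MPL",
}

def scan_source(text):
    """Return license families whose signature phrase appears in a file."""
    return sorted({family for phrase, family in SIGNATURES.items()
                   if phrase in text})
```

Running this over every tracked file surfaces the copied-in license text that a package-manager scan would never see.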

In-File Security Warnings (Code Security):

  • What it is: risks to the security of the code and data inside the code itself.
  • Why does it matter: Code security is a multi-billion-dollar problem.
  • Implications: Invest in tooling (SAST/DAST) and training for Engineers. And even more important, the C-Suite must permit Engineering to allocate development time to remediate high risk warnings.

Third-Party Security Warnings (Code Security):

  • What it is: risks to the security of the code and data stemming from the use of third party / Open Source packages.
  • Why does it matter: See above.
  • Implications: Same as #10, but with a CVE detection tool, and giving Engineers time to upgrade packages to reflect security patches.
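Conceptually, a CVE check compares each installed package version against the first fixed version in an advisory feed. The advisory data and package name below are made up for illustration; real checks consume feeds such as the NVD or OSV and handle far messier version schemes.

```python
def parse(version):
    """Naive dotted-numeric version parser (assumes e.g. '2.4.1')."""
    return tuple(int(x) for x in version.split("."))

# Hypothetical advisories: package -> first fixed version.
ADVISORIES = {"examplelib": "2.4.1"}

def vulnerable(installed):
    """Packages installed below the first fixed version of a known advisory."""
    return [pkg for pkg, ver in installed.items()
            if pkg in ADVISORIES and parse(ver) < parse(ADVISORIES[pkg])]
```

Everything this returns is an upgrade task: bump the package to the patched release and rerun the check.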

Number of Cyber Security attack vectors (Cyber Security):

  • What it is: count of risk areas stemming from the setup of organization domains and subdomains, and sensitive data available on the dark web.
  • Why does it matter: See above.
  • Implications: Implement a broad-based cyber security tool, or tools, and invest time in triaging and remediation.

P.S. Curious how your codebase health compares? You can use a free, anonymous calculator to estimate your score.

