Stack Divergence


The visualizations described on this page were sunset in IQ Server/Lifecycle release 170.

Please refer to Integrated Enterprise Reporting for the latest version of Data Insights, available from release 171 onwards.

The Stack Divergence analysis compares the popularity of components in your Java apps to the industry at large. This comparison helps you identify where you meet the de facto standard of the industry, and where you don't.

Stack Divergence is presented in a series of data tables, as in the examples below.

Example of the Stack Divergence data table


Stack divergence is an important part of open-source component management. Your applications, and the software development community as a whole, benefit from standardization on a small set of high-quality, well-maintained open-source components.

The intended outcome of the Stack Divergence analysis is to prompt discussion about the component choices in your applications. Ideally, it encourages you to adopt popular components that your peers are already using and avoid, or migrate away from, unpopular, out-of-date components.

Accessing the Stack Divergence Analysis

To see the Stack Divergence analysis on your IQ Server, you'll need to be a Lifecycle customer.

Click on the cogwheel icon in the top right-hand corner of the browser UI, then click Data Insights at the bottom.

You'll be brought to the Data Insights landing page. Click Stack Divergence on the left.

Reading and Understanding the Analysis


Currently, Stack Divergence only measures your Java apps.

Because the Stack Divergence analysis is being actively developed, some details about how it's generated are subject to change.

Stack Divergence updates once a month, on the first of the month.

The Stack Divergence analysis measures the rate of change between two sets of data. The first set of data is the popularity of components across the industry, which is pulled from known SBOMs across a variety of sources. The second set of data is the components in your applications, which are pulled from regular Lifecycle scans.

The central metric for the Stack Divergence analysis is Max Divergence, which appears on the first and second levels of the data table. Green bars and positive numbers indicate convergent behavior, meaning that your component usage and the industry's are becoming more similar. Purple bars and negative numbers indicate divergent behavior, meaning that your component usage and the industry's are becoming less similar. Ideally, you want to see positive numbers and green bars, because they indicate that your apps are using the same popular components as the rest of the industry.

Because the Max Divergence metric measures two moving targets, the industry's component adoption and your own, it is bounded at +/- 200. To quickly orient yourself, keep in mind the following hypotheticals:

  • A score of positive 200 would indicate that every app in the industry, including yours, had converged on a single set of components within the last month.

  • A score of negative 200 would indicate that every app in the industry, including yours, had chosen entirely unique components with no overlap in the last month.

  • A score of 0 would indicate that there was no change to the industry's component usage or to your own apps in the last month.
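To make the hypotheticals above concrete, here is a toy sketch of how a convergence score bounded at +/- 200 might behave. Sonatype does not publish the Max Divergence formula, so everything here — the use of Jaccard overlap, the scaling, and the component names — is an illustrative assumption, not the actual calculation:

```python
# Purely illustrative: Sonatype does not publish the Max Divergence formula.
# This toy version scores the month-over-month change in overlap between
# your component set and the industry's, scaled to the +/- 200 range.

def jaccard(a: set, b: set) -> float:
    """Overlap between two component sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def toy_max_divergence(yours_prev, industry_prev, yours_now, industry_now) -> float:
    """Positive = converging on the industry stack; negative = diverging."""
    before = jaccard(set(yours_prev), set(industry_prev))
    after = jaccard(set(yours_now), set(industry_now))
    # (after - before) ranges from -1 to +1, so the score ranges -200 to +200.
    return (after - before) * 200

# Hypothetical example: your apps drop a niche logger and adopt the
# popular one, so overlap with the industry rises -- a convergent score.
score = toy_max_divergence(
    yours_prev={"log4j", "obscure-logger"},
    industry_prev={"log4j", "slf4j"},
    yours_now={"log4j", "slf4j"},
    industry_now={"log4j", "slf4j"},
)
```

Under this toy formulation, no change on either side yields 0, and a jump from zero overlap to identical stacks in one month yields the full +200, matching the hypotheticals above.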

Three Table Levels

There are three levels of data tables in the Stack Divergence analysis.

The Category level is the first level of the data table. It's what you see when you first access the analysis.

The Subcategory level is the second level of the data table. You access this table by clicking on a row on the first level.

Example of the second level of the data table

The Component level is the third level of the data table. You access this table by clicking on a row on the second level.

Example of the third level of the data table.

The third level can be expanded to show more details for a specific component. Click on a component to view these additional details.

Example of the expanded details.


Ultimately, the value you receive from the Stack Divergence analysis will depend on your organization, its goals, and its overall maturity. It will also depend on your knowledge of your organization's apps and your ability to effect change.

However, what follows is a simple workflow for quickly realizing some value from the analysis. In this workflow, you'll identify where your applications are most "out of sync" with your peers and determine whether selecting a different component for new apps, or migrating to a different component in your existing apps, is a good strategy. Every two months:

  • Access the analysis to see the first-level data table. Select the bottom-most category. The second-level data table will appear. Select the bottom-most sub-category.

  • The third-level data table will appear. Sort by Popularity in descending order. The components at the top of the list are the most popular components for that category and subcategory in market segments like yours.

  • Review the top three or four most popular components. Think about the following factors.

    • Are your apps already using these components? Check the My Usage column. 0% means that none of your applications are using that component. 50% means that half of your applications are using that component.

    • Are these components high-quality? Review the Quality Rating column. Bars that are mostly filled indicate a higher-quality component. Components without bars mean that Sonatype doesn't have enough data about them to make a determination.

    • Are these components in use by the majority? Review the percentage in the Popularity column. Components can be the most popular without being adopted by the majority.

    • Are these components gaining or losing popularity? Again, review the Popularity column. The arrow and the number alongside it show the change in popularity. Take note of double-digit changes; small changes of a few points are inevitable.

    • What do you know about these components? Do you recognize the name? Do you know them to be problematic or frequently vulnerable?

  • From those top three or four most popular components, identify good candidates for migration and/or preferential selection in new apps.

    • If one of the components is high-quality and is widely popular or rapidly gaining popularity, it's possibly a good candidate.

    • If a component is low-quality, is not widely popular, or is rapidly losing popularity, it's a bad candidate.

  • Identify the teams in your organization that are responsible for apps that could use these good candidates, but aren't.

  • Schedule a 5-minute call or attend a roundtable with those teams. Publicize what you've learned.

  • Determine whether the good candidate makes sense for the app. Does it provide a function that's truly equivalent to the currently used component?

  • Discuss the possibility of migrating to a good candidate.

    • Not every app will be able to migrate. Ask why and document the response. Some examples may include:

      • It's a legacy app, and there's no development budget.

      • The app is part of a complicated architecture, and migration would put other apps in that architecture at risk.

      • A good candidate carries nonobvious risks or drawbacks.

Speaking broadly, if the app can migrate, then it probably should. However, finding time for a migration is difficult and may not be a priority. Focus on enabling development teams to migrate when the opportunity presents itself. Migration is naturally easier at certain points in an app's life cycle.

Above all, be a good listener when discussing possible migrations with development teams. Standardizing is smart, but it's not always easy, and the Stack Divergence analysis can't capture the nuances of your organization, its business goals, and its value streams.