Tools and Templates

Material that may be helpful in understanding the dynamics of the SCM software business

Industry Segmentation


Successful supply chain management (SCM) software must be capable of adapting to the needs of multiple industries. SCM software companies typically target certain industries, based on their capabilities and their desire to grow, using segmentation strategies. Full lists of industries are available through the North American Industry Classification System (NAICS) or the Standard Industrial Classification (SIC) system. A simplified set of industries that SCM software companies typically target is shown in the diagram below. 

But these delineations are general. Within industries, you are likely to encounter a variety of processes that span multiple industries. Even within a single manufacturing facility, you may have front-end process-oriented manufacturing feeding back-end discrete manufacturing. Semiconductor manufacturers have morphed from B2B companies to a combination of B2B and B2C, as they have moved into consumer products. 

The challenge for software companies is to achieve this adaptability without creating a feature-bloated behemoth so complex that it is costly to implement and maintain. This is no small feat. Over the years, most software companies have attempted to create templates: industry-specific variants that come out of the box preconfigured for a specific industry, or even a sub-industry within an industry. In this approach, the software is still loaded with features, but configuration "switches" are turned on and off based on the presumed needs of a specific industry. System integrator firms have also jumped into this game by providing their own "best practice" templates, based on the implementations they have done in a specific industry. 

For software companies, templates are challenging because they require significant investment to be done right. Templates need to be treated the same as any other software product - they need to be updated, tested, documented, and released through the same processes as the core software product. Whenever the core product is released, the templates must also be released. Software companies heading down this route must therefore make a long-term commitment and also figure out things like release timing and industry-specific roadmaps. 

Manufacturing Industry Characteristics


There are various ways to look at industries to understand the structural dynamics of their supply chains. The diagram below is an example of how to look at industry supply chains by generalizing the characteristics for discrete manufacturing industries. The chart shows product, process, and volume characteristics that define how discrete manufacturing industries operate. 

Product, process, and volume complexity say a lot about how supply chains are organized and operate. High product complexity is usually associated with products that have complex bills of materials. These types of products typically have material-intensive supply chain challenges - meaning lots of suppliers, engineered content, and potentially a diversity of upstream processes. Process complexity, on the other hand, is a measure of the number of steps, the routings, bills of distribution, assets, and labor it takes to produce and distribute a product. As the diagram shows, most discrete manufacturing industries have high process complexity - meaning lots of steps, lots of assets, and/or lots of labor to produce products. High product complexity typically comes with high process complexity simply because a high number of parts requires a lot of processes to bring them together into a final product. Volume adds another dimension of complexity: high-volume operations that make complex products require a lot of precision to maintain high levels of quality.

For example, the automotive industry is the only major industry that makes a highly complex product, with complex processes, at high volume. To give a sense of product complexity, a subjective way to look at things would be to say that if a computer has a complexity of 10, then an automobile would have a complexity of 100, and a commercial airplane a complexity of 1000. An automobile contains some 20,000 detailed parts, with roughly 2,000 components coming together at final assembly. At the same time, an automobile draws on a diversity of industries, with a diversity of processes, including metals, electronics, industrial, rubber, textile, and semiconductor. And, with industry volume exceeding 80 million vehicles per year globally, it is obviously a high-volume industry. 

Manufacturing industries that have high process complexity are typically associated with lots of assets and thus have the need to keep those assets busy in order to make money. In any case, the discrete industries shown in the chart below can be contrasted with consumer packaged goods (CPG) industries, which produce less complex products using less complex processes. Supply chain problems in CPG industries are typically distribution-oriented, meaning they are dominated by warehousing and transportation. These are also the dominant supply chain problems in the retail sector. 

Comparisons of supply chains across industries are very tricky, and generally not useful when it comes to metrics comparisons. This is not to say that industries cannot learn from each other; at the process level, individual process elements in one industry can be lifted and used in other industries. It is just that comparisons of things like inventory turns are academic at best. Even within industries, comparisons of metrics like inventory turns may not be very useful. For example, in the automotive parts industry, there are companies with relatively high inventory turns and others with relatively low inventory turns. The structural dynamics of companies and the type of products they make have a big impact on their operating models. Thus, it is really important to understand the structural dynamics of companies in order to draw comparisons. 

For example, the semiconductor industry is an asset-intensive industry; it takes large investments in property, plant, and equipment (PP&E) to make semiconductor devices. Traditionally, in this industry, it takes about one dollar's worth of PP&E to produce about one dollar's worth of revenue. Having said that, one needs to be careful to also understand other trends. For example, in semiconductor and other industries, the past twenty years have seen a general trend to go "asset light," which means outsourcing manufacturing to someone else. This offloads assets upstream into the supply chain. In semiconductor, this is called going "fabless," meaning a company designs and engineers the semiconductor devices and then hands off the manufacturing to someone else. Thus, Taiwan Semiconductor Manufacturing Company (TSMC), the world's largest pure-play semiconductor manufacturing company, has a revenue-to-PP&E ratio of 0.95, while Skyworks, a fabless semiconductor company, has a revenue-to-PP&E ratio of 11.8.
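To make the asset-intensity contrast concrete, here is a quick sketch. The revenue and PP&E figures below are illustrative round numbers, not actual financials; they are chosen only to reproduce the ratios cited above:

```python
# Revenue-to-PP&E as a rough gauge of asset intensity.
# Figures (in $B) are hypothetical; the resulting ratios mirror those in the text.
companies = {
    "TSMC (pure-play foundry)": {"revenue": 47.5, "ppe": 50.0},
    "Skyworks (fabless)":       {"revenue": 4.72, "ppe": 0.40},
}

for name, fin in companies.items():
    ratio = fin["revenue"] / fin["ppe"]
    print(f"{name}: revenue/PP&E = {ratio:.2f}")
```

A low ratio means a dollar of revenue carries roughly a dollar of plant and equipment; a high ratio is the signature of an asset-light operating model.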

At the end of the day, someone has to own the assets necessary to produce stuff. Having said that, over the course of the past twenty years, managers and boards have learned that stock markets and investors reward those with fewer assets. This has been widely discussed and is a central topic of a recent HBR article, which provides empirical data supporting the assertion. Thus, even 3PLs and service providers - the companies other companies used to offload their assets to - are looking to offload assets. Of course, software continues to eat the world of physical assets, and this is the idea behind multiple virtual supply chains.

One could argue that a semiconductor product such as a microprocessor is a complex product, but from a supply chain perspective, it is not material intensive in the same sense as a complex BOM product like an automobile. It is produced by a highly complex set of manufacturing processes, thus its manufacturing process complexity is high. 

The chart also shows the typical gross margins and inventory turns for each industry sub-segment. The relationship between gross margin and inventory turns follows a pattern similar to that seen in other industries, including retail. This pattern is driven to some extent by product complexity, process complexity, and volume. 

The 20% Improvement Rule-of-Thumb


After analyzing hundreds of companies to determine how their supply chains perform and where they could perform better, it is apparent that all companies have the ability to improve their operating performance. While it is certainly true that some companies have a lot more opportunity for improvement than others, there is a certain level of improvement that all companies can achieve. Based on years of analyses, we have established what we call the 20% rule of thumb: in most companies, there is a 20% opportunity to improve operating income through improved supply chain management. The size and mix of the opportunities may vary significantly from company to company based on current challenges and competitive threats. The mix of opportunities shown in the diagram below is specific to manufacturing companies. 

The diagram shows how this might play out for a normalized $10B manufacturing company, with $7.5B in cost of goods sold (COGS), $1.6B in selling, general, and administrative (SG&A) expense, and $400M in R&D spend. These numbers would be pretty typical for an industrial manufacturing company. The company has a $500M operating profit, implying something like a $300M net operating profit, or 3% of sales, which would be typical in an industrial manufacturing environment. Using the rule of thumb, this normalized company probably has a $100M operating profit improvement opportunity in its supply chain, spread across the categories shown in the diagram. Of course, the devil is in the details, but an opportunity of this size is out there for most companies. 
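The arithmetic behind this normalized example can be laid out in a few lines (all figures in $M, taken from the text):

```python
# Normalized $10B manufacturing company from the text (all figures in $M).
revenue = 10_000
cogs    = 7_500   # cost of goods sold
sg_a    = 1_600   # selling, general, and administrative expense
rnd     = 400     # R&D spend

operating_profit = revenue - cogs - sg_a - rnd
print(operating_profit)   # 500 -> the $500M operating profit cited above

# The 20% rule of thumb: operating income improvement opportunity
# attainable through improved supply chain management.
opportunity = 0.20 * operating_profit
print(opportunity)        # 100.0 -> the $100M opportunity
```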

What are Best Practices?


"Best practices" is a term that gets thrown around like it's some sort of gold standard against which to compare performance. In order to understand best practices, here we discuss and contrast "breakthrough" practices with best practices. 

Breakthrough practices can also be called disruptive practices, since they are completely different from status quo practices, which heretofore were the best practices. Those initiating the breakthrough practices may be unknown, and once they arrive on the scene, others try to figure out what they are doing; many pooh-pooh them as a fad that will go away. Many continue to increment off of their current supposed best practices, believing that is the better approach for a whole host of reasons, including risk, investment, inertia, or just plain laziness. For many, by the time they come around to adopting the new practices, they are so far behind that they have become obsolete. We can all point to myriad business examples of this. 

Over the years, one of the ways I have described best and breakthrough practices in a business context is to use the analogy of the Fosbury Flop. Many know all about the Fosbury Flop, but here's a short summary. Dick Fosbury was an American track athlete who competed in the high jump event in the 1960s. He tried unsuccessfully to make his high school team in the high jump event using the prevailing technique, which was known as the straddle technique. The straddle technique involves jumping over the bar face down while the legs swing in a fashion that straddles the bar. The straddle technique, and its variants, was the best practice of its time, and the prevailing practice for the previous fifty years. 

Fosbury was 6'4" tall, which hindered his ability to adopt the straddle technique. Like many breakthroughs, necessity was the mother of invention. He started experimenting with an opposite technique - going over the bar head-first, face up and back down - which, upon landing, looked like a flop. 

In a matter of five years, he went from not being able to make his high school team to winning the gold medal at the 1968 Mexico City Olympics. Still, while there were many early adopters after that, others hung on to the old techniques, and it took a good decade or more for the Fosbury Flop to become the dominant technique. The adoption curve of the Fosbury Flop follows a pattern very similar to that of disruptive practices in the corporate world: first, they are unknown; then there is success, but not enough to attract attention; then there is attention-grabbing success, followed by denial by others; then there are early adopters and more success; then more jump on the bandwagon; and finally there is capitulation by the entire business community. The new practice becomes the best practice and then moves along on a continuous improvement curve. 

One of the characteristics of the Fosbury breakthrough, which is also true of business breakthroughs, was that it was not only a breakthrough in its own right, but it also adjusted the improvement rate upward, as shown in the diagram below. This is an important characteristic of breakthroughs - they don't just shift the curve upward, they also change the slope of the new curve. In this case, the new technique doubled the rate of improvement of the old technique. If this were true for a business breakthrough and you adopted it five years after your competitor, you would be significantly behind because of the cumulative compounding effect of your competitor's head start. 
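The compounding effect of a late start can be sketched numerically. The annual rates below are hypothetical assumptions (the text says only that the breakthrough doubled the improvement rate); the point is the gap that opens when one party compounds the higher rate five years longer:

```python
# Hypothetical illustration: a breakthrough doubles the annual improvement
# rate (1% -> 2%; both rates are assumptions, not figures from the text).
def performance(base, rate, years):
    """Compound an annual improvement rate over a number of years."""
    return base * (1 + rate) ** years

base = 100.0
old_rate, new_rate = 0.01, 0.02

# Competitor adopts the breakthrough at year 0; you adopt at year 5,
# improving at the old rate in the meantime.
competitor = performance(base, new_rate, 10)
late_adopter = performance(performance(base, old_rate, 5), new_rate, 5)

print(f"competitor after 10 years:   {competitor:.1f}")
print(f"late adopter after 10 years: {late_adopter:.1f}")
```

Even though both parties end up improving at the same new rate, the late adopter never closes the gap; it persists and compounds as long as both keep improving in lockstep.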

So, consultants like to say they will bring best practices, and we all like to be leveraging best practices, but it is important to adopt them sooner, rather than later. Furthermore, it is also important to look for breakthrough practices because of the time-value impact of first-mover advantage.  

The Planning-Execution Funnel


The planning-execution funnel (also commonly called the "planning funnel"), depicted in the diagram below, is a classic tool for understanding synchronization of time and function across supply chain processes. It can be used at a macro level for discussing synchronization from long-term product development to short-term order-to-delivery and has also been commonly used in traditional master production scheduling time-fence discussions. This tool was used by software pioneers in the 1990s, not just for explaining the problem, but for explaining how new technologies and approaches were going to solve the problem. That promise remains unfulfilled. 

The funnel is an apt analogy, because on the left side at the mouth of the funnel, things are less granular and become increasingly detailed as you move from left to right towards the throat of the funnel. At the entrance to the funnel on the left, you are planning new products and planning placement of assets and capacities to support these new products. At the throat of the funnel is where execution occurs; this is where you are producing, distributing, and delivering products to customers.


The traditional way to describe the time dimension is in terms of three horizons - strategic, tactical, and operational. In the strategic horizon, companies make decisions on product and capital investments over the next five years consistent with their vision and aspirations. As time rolls forward, these decisions typically go through stage-gates each requiring additional capital and operational cost outlays. These stage-gates, like other decisions throughout the funnel, are tied to lead-time requirements. For example, commitment to a new product may require capacity commitment to a supplier 24 months ahead of unit one of production. Tactical horizon functions are typically associated with the planning of operations. These functions decide what and where to produce and distribute, how much labor is needed, and make decisions on flexing capacity, running promotions, increasing or decreasing inventory, and running overtime. These functions set the boundary conditions under which the operational horizon functions operate. Operational horizon functions are involved with production, distribution, delivery, order management, and customer management. 

Most supply chain transformation programs involve many of the business processes shown in the diagram below, within and across time horizons. Much of the time in planning and executing these programs is associated with "integration." As much as 30-50% of SCM transformation program cost goes towards integration. But integration is not synchronization. Integration merely means that data moves from function A to function B; integration is a necessary, but insufficient, condition for synchronization.


Synchronization also requires timing, data freshness, and decision interlock. Receiving processes require data to be delivered by a certain time. This is typically part and parcel of integration, but the timing aspects of integration are often overlooked until things don't seem to be working as planned. Data freshness is equally important; sending two-day-old data to a receiving process that then takes a day to process it and return decisions means that such decisions might be three days out of phase with reality. These decisions will then have to be massaged (or "post-processed," to use a term often heard in the back-and-forth movement across the funnel) to steer them back to reality. But this post-processing also takes time, so the act of trying to steer decisions back to reality may adversely affect answers in a weird Heisenberg-uncertainty-principle sort of way. In lean manufacturing terminology, this is part of the "hidden supply chain," representing huge hidden costs. 
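The staleness arithmetic in this example adds up stage by stage. A minimal sketch (the two-day and one-day figures come from the example; the post-processing duration is a hypothetical addition):

```python
# Accumulated phase lag between decisions and reality, in days.
stages = [
    ("data age at hand-off", 2),        # two-day-old data sent downstream
    ("receiving-side processing", 1),   # one day to process and return decisions
    ("post-processing / steering", 1),  # hypothetical: massaging decisions back
]

lag = 0
for name, days in stages:
    lag += days
    print(f"after {name}: decisions are {lag} day(s) out of phase")
```

After the first two stages the lag is already three days, matching the example; every additional corrective step pushes decisions further out of phase with the reality they are meant to steer.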

Therefore, synchronization between business functions in a company does not happen in the same way as synchronization of the gears in an automobile transmission, even if most consultants and software providers use that pictorial analogy to explain how they are going to provide that level of precise interlock. Rather, state-of-the-art synchronization continues to be one of time-lagged data hand-offs made possible by expensive, brittle integrations, including lots of post-processing. The reality is that the gears still work, but they grind, and the transmission often slips. Propulsion is still forward, but it is a lot less smooth than desired. Embedded in this lack of smoothness is a lot of value loss, not to mention organizational frustration as the vehicle jerks forward.

There is hope. Some software providers have resurrected this challenge and taken it head-on, even if only for certain parts of the funnel. This is a great start that might eventually lead to technology-enabled business process convergence. This concept posits that the synchronization promise put forth by software pioneers in the 1990s will finally start to be realized, first within certain horizons of the funnel, and then across horizons.