Moderator

Apr 15, 2013
 

Hiller Associates received a question this week from a business school asking us what the revenue of the product cost management market is. That was a very interesting question, and one that we have thought about before. However, we’ve never actually sat down to think about the question formally. So rather than answer the person privately, we thought it might be helpful to everyone to discuss this in a public forum.

 

Companies included in the Revenue Aggregate (click to enlarge)

There’s good news and there is bad news with respect to estimating the size of the Product Cost Management market. The good news is that the market is fairly self-contained, i.e. there are only a limited number of players in the software market. However, that’s where the good news ends. There are several challenges to estimating the market size:

  • Private companies – 80% of the players in the product cost management software market are privately held companies, either venture-funded or held by a small group of founding owners and managers. Therefore, their revenue numbers are closely guarded information that is not publicly available. This includes the PCM company that our managing partner, Eric Hiller, founded and at which he was the CEO and then the Chief Product Officer for many years (aPriori).
  • Bundling – the second challenge is that some of the larger players bundle product cost functionality into the price of another, larger product. For example, Solidworks Cost is a free, bundled option that is included whenever someone buys the professional or premium level Solidworks CAD product.

There is one other good piece of news, which is that Hiller Associates knows most of the players in the market and speaks with them regularly. For some of them, we do know the revenue, and for others, we have a good idea. Obviously, we cannot share the revenue numbers of an individual company, but this inside information helps us move the estimate from a wild guess to an educated guess.

Taking a look at the figure on the left, you will see the companies that we have included in the estimate. These are all the main players that we know of in the market. If there are others that we’ve missed, we’re very happy to learn about them and consider if we should add them to the market sizing.

2012 Revenues in the Product Cost Management Software Market

Those of you who follow this blog or have worked with Hiller Associates know that our philosophy is that point estimates are very dangerous and, often, not even that useful. Knowing the uncertainty around a cost number is just as important as having a point estimate of that number, and we feel this holds true for any financial quantity. Therefore, we will provide a range for the size of this market. Please see the figure on the right, which shows our estimate of the total 2012 revenue of the companies above. Given the uncertainty factors we have discussed, we feel the total market has a large range: total revenue could be as low as $60 million or as high as, perhaps, $115 million.

Current market revenue of Product Cost Management (click to enlarge)

Other questions that people should ask are how much of this revenue comes from services and how much comes from actual licensing of product. Some of the companies included are primarily product companies, and most of the services they offer are tightly bound to the product. Such services include training on the software, implementation, and customization. However, others in the group also maintain general consulting businesses. For example, several of these companies offer classes about product cost or subsets of product cost management, such as design for manufacturing & assembly (DFM/DFA). These are general classes that relate only peripherally to their products.

Estimate Methodology

To produce our estimate, we estimated the revenue of each included company individually, along with the services percentage of each company’s revenue, and then summed the results. You will notice from the estimate figure on the right that when all the numbers for service and product revenue are aggregated, there is approximately a 60/40 split between product and services, respectively. This ratio of product vs. services revenue seems approximately correct, per our experience in the market.
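The bottom-up aggregation described above can be sketched in a few lines. Note that the company names and per-company figures below are invented placeholders for illustration only; they are not Hiller Associates’ actual estimates of any real firm:

```python
# Hypothetical bottom-up market sizing. Each entry holds a revenue
# range in $M (low, high) and an estimated services share of revenue.
# All per-company numbers are made up for illustration only.
companies = {
    "Company A": {"low": 10, "high": 20, "services_pct": 0.30},
    "Company B": {"low": 15, "high": 30, "services_pct": 0.45},
    "Company C": {"low": 5,  "high": 12, "services_pct": 0.50},
}

# Aggregate the low and high ends to get a market-size range
total_low = sum(c["low"] for c in companies.values())
total_high = sum(c["high"] for c in companies.values())

# Weight each company's services share by its midpoint revenue
mid = {name: (c["low"] + c["high"]) / 2 for name, c in companies.items()}
services_share = sum(
    mid[name] * companies[name]["services_pct"] for name in companies
) / sum(mid.values())

print(f"Market: ${total_low}M-${total_high}M; services ~{services_share:.0%}")
```

With these placeholder inputs the product/services split comes out near 60/40, mirroring the aggregate ratio described above.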

Software vs Services in Product Cost Management (click to enlarge)

It’s important to understand that this estimate is for the actual revenue of these companies in 2012. It does not reflect the total addressable market for product cost management software, which we believe is woefully under-realized at the moment. This is the first of our discussions about revenue in the product cost management software market.

NEXT TIME: Growth Rate in the Product Cost Management Software Market

 

Mar 18, 2013
 

There are universalities that seem to cross people and cultures, such as: it’s polite to say “please” and “thank you.” These universalities also occur numerically. For example, designs that follow the Golden Ratio pop up all over the world. Many other aspects of one group versus another may vary, but there are these universal touchstones that pervade the world. The same is true with companies. Granted, one might argue that one company simply imitates another, and that is why they share a certain practice or the importance of a certain number. We believe that this is, indeed, true in most cases. Still, there are a couple of numbers that seem to arise independently in all companies. We are going to talk about some of those universal and independent numbers today with respect to Product Cost Management.

The great universal number in PCM is “10%.” I have met hundreds of companies over the years, both in consulting and when I was the Founder, CEO, and then Chief Product Officer of one of the product cost management software companies. Invariably, a meeting with a company will occur, in which one of the customers will utter the “a” word: accuracy. The dialogue proceeds similar to the following:

CUSTOMER: So, how accurate is your software?
PCM TOOL COMPANY: What do you mean by “accurate?”
CUSTOMER: Uhh, um, well, ya know. How ‘good’ is the number from your software?
PCM TOOL COMPANY: Do you mean do we have miscalculations?
CUSTOMER: No, no, I mean how accurate is your software to the ‘real’ cost?
PCM TOOL COMPANY: What do you consider the real cost?
CUSTOMER: Uhh, um, well, I guess the quotes that I get from my purchasing department from our suppliers.
PCM TOOL COMPANY: Oh, I don’t know, because it depends on how close the quote is to the true cost of manufacturing the part plus a reasonable margin. Are you confident your quotes are correct proxies for the true cost of manufacturing?
CUSTOMER: Hhmmmmmmm… yeah, I think so
PCM TOOL COMPANY: OK, how close do you expect the costs from our PCM software to be to your quote [or internal factory cost or whatever source the customer believes is truth]?
CUSTOMER: Oh, you know, I think as long as you are +/-10% of the quote, that would be alright.

 

Ding! Ding! Ding! – no more calls, we have a winner! The customer has uttered the universal expectation for all costs produced by a product cost management tool, with respect to the “source of true cost”: +/-10%

The universal expectation of customers of Product Cost Management software is that the PCM tool is accurate to within +/-10% of whatever forecast the customer considers the “true cost.”

This expectation is so common that you would think every customer in the world had gone to the same university and been taught the same expectation. Of course, that is not the case, but it is a ubiquitous expectation. How did this universal convergence of expectations come to be? We will probably never know; it’s one of the great mysteries of the universe, such as: why do drivers in Boston slow down to 4 mph at the slightest sign of snow or rain?

The more important question is: Is the expectation that the cost from a PCM tool should be +/-10% of a quote realistic? To answer this question, we first have to ask:  How truthful is “the truth?” The truth in this case is supposedly the quote from the purchasing department. The reader may already be objecting (or should be), because there is not just one quote, but multiple quotes. How many quotes does a company get? Well, that depends, but we all know how many quotes a typical company gets: THREE!

The universal number of quotes that purchasing gets is 3, and they believe the “true cost” is within +/-10% of whichever of these 3 quotes they select as truth. 

Three shall be the number of the quoting, and the number of the quoting shall be three. Four shalt thou not quote; neither shalt thou quote two, excepting that thou then proceed to three.

Variance Within Supplier Quotes

Do all the quotes have the same price for the quoted part or assembly? No, of course not. If they were the same, purchasing would only get one quote. So what is the range among these quotes? That is a fascinating question, and one that I am currently investigating. So far, my research indicates that the typical range among a set of three quotes is 20-40%. That seems about right from my personal experience.

But, is the “true” price (cost + reasonable margin) contained within the range of the 3 quotes? Not necessarily. If we assume that quotes are normally distributed (another assumption that I am researching), the range would be much bigger in reality. For example, if we had three quotes evenly distributed with the middle at $100 and a 30% range, the high quote would be $115 (+15%) and the low, $85 (-15%). This gives us a sample standard deviation that, conveniently, is $15. At two standard deviations (~95% confidence, or “engineering” confidence), we predict that the “true cost” of the part is between a high of $130 and a low of $70. This is a range of 60% (+/-30%). You can see this in Figure 1.
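The arithmetic in the example above can be checked with a short script. The $85/$100/$115 quotes are the hypothetical numbers from the text, and `stdev` computes the sample standard deviation:

```python
from statistics import mean, stdev

# Three quotes evenly distributed around $100 with a 30% range
quotes = [85.0, 100.0, 115.0]

mu = mean(quotes)      # 100.0
sigma = stdev(quotes)  # sample standard deviation: 15.0

# Two standard deviations (~95%, or "engineering", confidence)
low, high = mu - 2 * sigma, mu + 2 * sigma

print(f"Predicted true-cost band: ${low:.0f} to ${high:.0f} "
      f"(+/-{2 * sigma / mu:.0%} around the middle quote)")
```

This reproduces the $70-$130 band (+/-30%) described above.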

OK, but what if we just source from a single supplier? Well, there will be variance within that single supplier as well. This variance breaks down into two types: physical noise and commercial noise.

  • Physical noise – the difference in cost that could occur due to physical reasons, such as choosing a different machine (i.e. different overheads), a different routing (the sequence of machines), or even simple human variation from part to part or day to day.
  • Commercial noise – differences in pricing driven by the market, emotions, and transient conditions.

Figure 1 – Variance in the Cost Forecast from Quotes (click to enlarge)

Physical noise can easily account for a range of 20% (+/-10%) in the quote that a supplier might provide to an OEM. However, physical noise can be quantified and discovered: a supplier can share what routing or machine they are using. The problem is that commercial noise is very difficult to quantify. How do you quantify that the supplier believes you hurt him in the last negotiation and is now going to repay you for it, or that he needs your company as a new strategic customer and will underbid to get initial business? Worse yet, commercial noise is often LARGER than physical noise in the quote! How big is commercial noise? That is difficult to say, because we cannot measure it very well, but from our discussions with purchasing groups, commercial noise adds at least another +/-10%.

| | Physical Noise | Commercial Noise |
|---|---|---|
| Source | Selection of different machines and routings | Market conditions, emotions, and transient conditions |
| Quantifiability | Generally quantifiable by understanding the selections | Very difficult to quantify |
| Magnitude | +/-10% of the “factory average” | +/-20%+ on top of Physical Noise |

 

Supplier quotes are just one forecast of true cost. There are other forecasts the organization has.

Cost Estimation Experts

What about those people in the organization with the most manufacturing and product cost knowledge? What is the noise in their estimates compared to a source of alleged truth, such as a quote? We are not sure, but we have asked these experts another question about variance. When asked, “How close are you as a cost estimator to the estimates of other cost estimators in your company?” people most often reply, “Probably +/-10-20%, depending on the complexity of the part or assembly cost estimate.” So, we might say that the cost estimators have at least a 30% range in their own estimates.

Historical Costs in ERP

What about the historical costs in ERP? How “accurate” are they? There are actually at least two problems with data in corporate databases. First, sometimes it is just plain wrong from the beginning of its life in the database. Second, even if it is correct initially, it gets out of date very quickly: material costs, labor rates, efficiency, etc. change. Go ask your purchasing buyer how close a re-quote of a part that has been in the database for three or four years will be to the original quote. To give you an idea of the magnitude of this problem, consider these findings:

The Accuracy (i.e. Variance to Quote) of Product Cost Management Software

Variance in Different Forecasts for Product Cost (click to enlarge)


So, after all of the discussion of the variance within other cost forecasts, how “accurate” are the forecasts from a product cost management software? Well, if the internal variance among expert cost estimators independently estimating is 30%, the BEST the PCM tool could do would be +/-15%… IF it is controlled by experts. What happens when non-experts use this software? How much does the range increase? Who knows? Obviously, the more automatic and intelligent the PCM Tool, the less the added variance would theoretically be. But, is this added variance +/-5%, +/-10%, +/-20%?  That is hard to say.

The Reality of Accuracy and Variance in Product Cost Forecasts

Regardless of the answer to the above question, the bigger questions are:

  1. What is your EXPECTATION of how “accurate” your PCM Tool’s cost forecast is to the quote forecast?
  2. Is your expectation reasonable and realistic?

We know the answer to question 1: be within +/-10% of my selected quote.

To answer the second question, let’s quickly review what we know:

| Source of the Cost Forecast | Common Variance Inherent in the Forecast |
|---|---|
| Range among 3 quotes | +/-15% |
| 95% confidence interval (engineering confidence in quotes) | +/- (15%+15%) |
| Physical noise within one single supplier | +/-10% |
| Physical noise plus commercial noise within one single supplier | +/- (10%+20%) |
| Internal range among cost experts | +/-30% |
| Best-case PCM tool used by experts | +/-30% |
| Non-cost expert using a PCM tool | +/- (30%+5%?) |
| Common [universal] expectation of PCM tool cost forecast | +/-10% |

 

Hhhmmmmmmm… Houston, I think we have a problem.

It just doesn’t seem that +/-10% is a reasonable expectation.

Bringing Sanity Back to Product Cost Management Expectations

What can you do in your company to help reset these unrealistic expectations? There are three things.

  1. First, make your colleagues (engineering, purchasing, etc.) aware of the reality of the cost forecasting world. Don’t let them develop uninformed and unrealistic expectations.
  2. Don’t focus exclusively on the end cost, but on the physical and immutable concepts that cost is supposed to quantify: mass, time, tooling.
  3. Start to quantify the internal variance in your own firm’s cost forecasts. Your firm’s internal cost ranges in quotes, internal estimates, etc. may be lower or higher than the numbers presented here. However, you won’t know until you start to investigate this.

Is this a painful realization?  Perhaps, but you are already living with the situation today.  It is not a new problem in the organization.  If you don’t acknowledge the potential problem, you run the risk of misleading yourself.  If you acknowledge the potential problem, you may be able to solve it, or at least make it better.

 

Mar 04, 2013
 

 

We are still on our epic quest to find the DARPA study (a.k.a. the legendary seminal study reported to say that ~80% of product cost is determined in the first ~20% of the product lifecycle). However, during our search we have been aided by Steve Craven from Caterpillar. No, Steve did not find the DARPA study, but he did send us a study attempting to refute it.

 

Design Determines 70% of Cost? A Review of Implications for Design Evaluation
Barton, J. A., Love, D. M., Taylor, G. D.
Journal of Engineering Design, March 2001, Vol. 12, Issue 1, pp 47-58
 

Here’s a summary of the paper and our comments and thoughts about this provocative article.

Where’s DARPA and Can We Prove this 70-80% number?

First, the authors question the existence of the DARPA study and note that most studies supporting DARPA’s findings reference other corporate studies that are alleged to support them. Most of these corporate studies are difficult to trace. The authors then analyze a Rolls-Royce study (Symon and Dangerfield 1980) that investigates “unnecessary costs.” In the Rolls-Royce study, Symon and Dangerfield find that the majority of unnecessary costs are induced in early design. However, Barton, Love, and Taylor make the point that unnecessary costs are NOT the same as the TOTAL cost of the part itself. That’s fair.

The authors then go into a more “common sense” line of discussion about how the costs induced at different stages of the product lifecycle are difficult to disaggregate.  The difficulty occurs  because design choices often depend on other upstream product cost choices and the knowledge or expectation of downstream supply chain and manufacturing constraints.  This section of the paper concludes with a reference to a study by Thomas (The FASB and the Allocation Fallacy from Journal of Accountancy) which says that “allocations of this kind are incorrigible, i.e. they can neither be refuted nor verified.”

We at Hiller Associates agree with these assertions in the sense that these statements are tautologically true. Maybe someone should have given this study to Bob Kaplan of Harvard Business School before he invented Activity Based Costing in the 1980s in collaboration with John Deere? After all, wasn’t ABC all about the allocation of costs from indirect overhead? However, Kaplan’s attempt illustrates the reality of the situation outside of academia. We in industry can’t just throw up our hands and say that it’s impossible to allocate precisely. We have to make a reasonable and relevant allocation, regardless. If it is not ‘reliable’ from a canonical accounting definition point of view, we just have to accept this.

Is DARPA Actually Backwards in Its Cost Allocations?

What if the DARPA study’s 80/20 claim is more than an allocation problem? What if DARPA is actually promoting the opposite of the truth? The authors reference a paper by Ulrich and Pearson that may reverse DARPA’s conclusion. Ulrich and Pearson investigated drip coffee makers and concluded that the design effect accounted for 47% of product cost, whereas manufacturing accounted for 65% of product cost variation. They did, of course, make their own assumptions about the types of possible manufacturing environments that could have made the 18 commercially available coffee makers.

Considering the pre-Amazon.com world in 1993 when the Ulrich and Pearson study was done, it brings a smile to my face thinking of MIT engineering grad students at the local Target, Kmart, or Walmart:

CLERK:  Can I help you?
GRAD STUDENTS: Uh, yeah, I hope so.  We need coffee makers.
CLERK:  Oh, well we have a lot of models, what is your need…
GRAD STUDENTS: Awesome, how many do you have?
CLERK:  Uhh… I guess 17-18 models, maybe.
GRAD STUDENTS: Score!  We need 18.
CLERK:  18 of which model?
GRAD STUDENTS: Oh, not 18 of one model.  One of each of the 18 models.
CLERK:  What!  Huh… wha-why?
GRAD STUDENTS:  We’re from MIT.
CLERK:  Ooohhhhh…. right…
GRAD STUDENTS:  Uhh… Say, what’s your name?
CLERK:  Um… Jessica… like my name tag says.  You say you go to MIT?
GRAD STUDENTS:  Um, yeah, well Jessica, we’re having a party at our lab in Kendall Square this Friday.  If you and your friends want to come, that would be cool.    What do you say?
CLERK:  Uh, yeah right… how about I just get you your “18” different coffee makers.  Good luck.

 

… but we digress. Is product cost determined over 50% by manufacturing technique rather than design?   That seems a bit fishy.

Design for Existing Environment

With the literature review out of the way, the authors get to business and propose their hypothesis:

That consideration of decisions further down the chain are beneficial can be illustrated with a new ‘design for’ technique, Design For the Existing Environment (DFEE) that aims to take into account the capacity constraints of the actual company when designing the product… This contrasts with the conventional DfX techniques that take an idealized view of the state of the target manufacturing system.

They then describe a simulation that they built, which they hope takes into account inventory, profit, cash flow, missed shipments to customers, etc. They run 5 scenarios through their simulation:

  1. A baseline with New Design 1, which lacks the capacity needed to meet customer demand
  2. New Design 2, which uses DFEE to exploit the existing manufacturing environment and can meet customer demand
  3. Making New Design 1 by buying more capacity (capital investment)
  4. Pre-building New Design 1 to meet demand
  5. Late delivery of New Design 1

Not surprisingly, the authors show that scenario 2, using their DFEE technique, beats the other alternatives, considering all the metrics that they calculate.

Thoughts from Hiller Associates

This article is from over ten years ago, but it is thought provoking.  Is 80% of the cost determined in the first 20% of design?  We don’t know.  We certainly believe that over 50% of the cost is determined by design.  In our professional experience, a large part is controlled by design, even allowing for the relationships between design, purchasing, manufacturing, and supply chain.  We’ve personally observed cases in which moving from one design to another allowed for the use of another manufacturing process that reduced total cost by 30%-70%.

Overall, the authors bring up a valid point that goes beyond the traditional ringing of the Total Cost of Ownership (TCO) bell. They present a simulation in which they claim to calculate Total Cost of Ownership in a rigorous way. The problem is that the calculation is too rigorous (it took them 4 hours per simulation). That kind of time and, moreover, the complexity underlying such a model, is likely not practical for most commercial uses. A more simplified estimation of Total Cost of Ownership is more appropriate. In fact, Hiller Associates has helped our clients’ teams use flexible tools like Excel, along with a well-designed process, to estimate Total Cost of Ownership. Is that an end point? No, but it is a beginning. Later, as a client’s culture, process, and team improve, more advanced Product Cost Management tools can be added into the mix. And we do mean TOOLS in the plural, because no single tool will solve a customer’s Product Cost Management and Total Cost of Ownership problems.

Hopefully, we will see some more academic work on the product cost problem.  But, until then, we’re still searching for the original DARPA Study.  Anyone know where it is?

References
  1. Barton, J.A., Love, D.M., and Taylor, G.D., 2001, “Design Determines 70% of Cost? A Review of Implications for Design Evaluation,” Journal of Engineering Design, Vol. 12, Issue 1, pp. 47-58.
  2. Symon, R.F. and Dangerfield, K.J., 1980, “Application of design to cost in engineering and manufacturing,” NATO AGARD Lecture Series No. 107, The Application of Design to Cost and Life Cycle Cost to Aircraft Engines (Saint Louis, France, 12-13 May; London, UK, 15-16), pp. 7.1-7.17.
  3. Thomas, A.L., 1975, “The FASB and the Allocation Fallacy,” Journal of Accountancy, 140, pp. 65-68.
  4. Ulrich, K.T. and Pearson, S.A., 1993, “Does product design really determine 80% of manufacturing cost?” Working Paper WP#3601-93 (Cambridge, MA: Alfred P. Sloan School of Management, MIT).
Feb 21, 2013
 

Last week we talked about the struggle in corporate strategy between Core Competency structures and Lean manufacturing. Whereas Core Competency thinking naturally leads to more outsourcing and extended supply chains, Lean manufacturing would advocate for a geographically tight supply chain, often with more vertical integration.

So, what does this have to do with Product Cost Management? The answer is “knowledge.”

The Lack of Manufacturing Knowledge In Design

One of the biggest complaints that I get from my clients is that their teams have lost, or are rapidly losing, product cost knowledge over the last 10 years. This is especially acute within design engineering teams, but it also affects other parts of the organization, such as purchasing. Years ago, the engineering curriculum at universities became so overloaded that manufacturing began to be pushed aside in the education of most engineers (excepting the specific “manufacturing” engineering major). In fact, at most top engineering schools today, only one high-level survey course in manufacturing is part of the required curriculum.

However, manufacturing and its evangelistic design missives (design-to-cost, design-for-manufacturability, design-for-assembly, etc.) were still learnable skills that young engineers and others could pick up on the job, over time. This was because most product companies were not only in the business of final assembly, but also in the business of sub-assembly, as well as manufacturing components from raw materials. These companies employed large numbers of manufacturing engineers who were resources for the design and purchasing teams. Even for parts and subassemblies that were purchased, the suppliers were likely close to the design centers and had long-standing relationships with the OEMs.

Designers and purchasing people could literally walk down to a manufacturing floor in an internal plant or drive a few minutes to a supplier. Conversely, the manufacturing engineer could walk upstairs to question engineering about a design. This is nearly impossible when suppliers are often in different countries and the firm that designs does little manufacturing itself.

The Effect of Lack of Manufacturing Knowledge on PCM Tools

One of the ways that industry has tried to remedy this situation is with sophisticated Product Cost Management software. This software codifies much of the tribal knowledge that resided in manufacturing engineers’ heads. However, these tools assume that their users have (1) the will and (2) the skill set to use PCM software properly.

There is no doubt that the PCM and DFM/DFA tools today are far more advanced than they were, even ten years ago.  However, the value one derives from a tool is not a function of the tool’s capability alone.   There is a bottleneck problem of using a tool to its full potential.  We could say that the value the PCM tool actually gives to the organization equals:

Value of PCM Tool = (Will to use tool) * (Ability to use tool) * (Potential of the tool)
 

People often forget about the ability component, but this is true of any tool. People buy expensive tools, e.g. golf clubs, hoping to improve their performance. However, 90% of the time, they cannot even use the set of clubs they have to its full potential. Worse yet, more expensive or sophisticated tools are often more powerful and have the potential to give more value, but they are often less forgiving of errors. If you don’t know how to use them, they will HURT your performance.

In the past, with a Lean (vertically integrated and geographically close) supply chain, people used primitive PCM tools (often only spreadsheets). On a scale of 1 (worst) to 10 (best), what I hear from industry is that, on average, users’ capability to use the tool was higher, but the tool itself was limited and cumbersome. The users, including design engineers, knew what decisions to make; the tool just slowed them down. Currently, we have more of the opposite problem. The PCM tools are better and much easier to use, but most design engineers are somewhat baffled about how to make what seem like the simplest of manufacturing input decisions in the tools. The “Will to use the Tool” is another problem altogether that is beyond our discussion today. However, my experience, in general, would be represented by the following table:
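The multiplicative value relationship above can be sketched as a toy model. The factor scores below are illustrative guesses on a 0-1 scale, not measured data; the point is simply that the weakest factor caps the value delivered:

```python
def pcm_tool_value(will: float, ability: float, potential: float) -> float:
    """Value delivered = will * ability * potential, each scored 0 to 1."""
    return will * ability * potential

# Hypothetical comparison: a powerful modern tool used by baffled
# engineers vs. a limited spreadsheet used by knowledgeable ones.
modern_tool = pcm_tool_value(will=0.6, ability=0.3, potential=0.9)
spreadsheet = pcm_tool_value(will=0.6, ability=0.8, potential=0.35)

print(f"modern tool: {modern_tool:.3f}, spreadsheet: {spreadsheet:.3f}")
```

Under these made-up scores, the two eras deliver nearly identical value: raising a tool’s potential buys little if the users’ ability falls at the same time.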

Tool effectiveness: past vs. present (click to enlarge)

These results will vary company to company, and even from design team to design team within a company. Regardless, I wonder whether today we are merely at a breakeven point relative to the past in the value we get from PCM tools… or maybe we have even lost ground. The sad thing is that the PCM tools today ARE more user friendly and require less of an expert to use. However, is the loss of manufacturing knowledge in design engineering so bad that it has overwhelmed the PCM tools’ ease-of-use improvements?

What Can You Do to Help the Situation in Your Company?

Obviously, nothing is as good as the osmosis of manufacturing learning that occurs from a tightly coupled, geographically close, and vertically integrated supply chain.  However, the state of your firm’s supply chain is likely out of your control personally. There is some positive movement with the re-shoring and re-integration trends in industry, in general. However, there are steps you can take to improve the value your firm derives from PCM tools.

  • Send Engineers Back-to-School – do you offer (or better yet, mandate) classes in Product Cost Management, DFM/DFA, Target Costing, etc. for your design team? This should be part of the continuing education of the design engineer. I am not talking about training on the PCM tools themselves (although that is needed, too), but general classes on how different parts are made, the different buckets of cost, the design cost drivers for each manufacturing process, etc.
  • Design Cost Reviews – This is a very low tech way to create big wins. Design reviews in which design engineers review each other’s work and offer cost saving ideas should be a regular facet of your PCM process. Even better: include the engineer’s purchasing counterpart, company manufacturing experts, and a cost engineer to lead the review.
  • Embed Experts – Does the design team have at least one advanced manufacturing engineer or cost engineering expert for every 20 design engineers? If not, you should consider funding such a resource. Their salary will easily be paid for by (a) the cost reductions they help your team identify for products already in production, (b) the costs they help the team avoid in designs before production, and (c) the speed their efforts will add to time-to-market by helping the team avoid late changes and delays due to cost overruns.

In the past, vertically integrated, geographically close supply chains helped Product Cost Management in a passive way. The pendulum may be swinging back to that structure. However, even if it does, don’t rely on “passive” Product Cost Management to help. Take the active measures described above and get more value out of your PCM software investment.

Feb 19, 2013
 

IndustryWeek.com has just published a new article authored by Hiller Associates titled:

Product Selection versus Product Development (What the product development team can learn from shopping on Amazon.com)

Synopsis:

The process of product selection that people do in their personal lives (e.g. shopping on Amazon) is strikingly similar to the process of product development that people encounter in their professional lives. Interestingly, people are often better at making the complex decisions associated with product selection than they are at similar decisions in product development.

There are three things we can learn professionally from our product selection experience on Amazon:

  • Make the priority of your product attribute needs clear.
  • Simultaneously and continuously trade off attributes to optimize the product’s value.
  • Information about the product only matters if it is urgent, relevant and/or unique, not just “new.”

To read the whole article, click on the link above to go to www.IndustryWeek.com, or simply continue reading the full text below.

—————————————————————————————————————————————————————————-

Product Selection versus Product Development

What the product development team can learn from shopping on Amazon.com

We just finished the biggest shopping season of the year, from Thanksgiving to Christmas.  A lot of people were making a lot of decisions about where to spend their hard-earned money – mostly for the benefit of others, with gifts.  During that same period, design engineers around the world were rushing to finish up pressing projects – probably as fast as possible, because they had a lot of vacation left to use before the end of the year.

We make decisions every day in our personal and professional lives.  But, do we make decisions the same way in both worlds?   I don’t believe so.   People might argue that decisions made at work involve much more complexity.  After all, how much product development is really going on in most homes?  However, a lot of product selection is going on in people’s personal lives.  When considering complex product purchases, product selection starts to resemble product development in many ways.  Let’s take a look at how people (including design engineers) make decisions when shopping (product selection) vs. how they make decisions in the corporate world (product development).

Consider the ubiquitous Amazon.com.  Customers’ product selection experience on Amazon is overwhelmingly positive:  Amazon scores 89 out of 100 in customer satisfaction, the top online retailer score in 2012.  But product selection is *easy* right?  Wrong.  Look at Figure 1.  Product Selectors on Amazon must consider multiple product attributes and, moreover, these attributes mirror the considerations of a product developer very closely.   Product Selectors must consider performance, cost, delivery time, quality, capital investment, etc., without any salesman or other expert to guide them.   But the really amazing thing is that the Product Selectors using Amazon are able to both prioritize these product attributes and consider them simultaneously.

Product selection on Amazon Hiller Associates

CLICK ON PICTURE TO ENLARGE

So, how do the same people who are Amazon customers typically consider product attributes in the corporate world?  Very differently is the short answer, as we see in Figure 2.  First of all, people at work do not tend to trade off product attributes simultaneously, but in series.  Moreover, each functional group in a product company (marketing, engineering, manufacturing, etc.) often tends to be concerned with one dominating attribute, almost to the exclusion of other product attributes.  How does the typical series-based consideration of product attributes that is common in the corporate world compare to the simultaneous trade-off approach that the customers of Amazon use?  Exact numbers are difficult to find, but some sources say only 60% of products are successful.  While not a precise comparison, the difference seems meaningful:  Amazon scores 89 out of 100 on the customer satisfaction index, whereas only about 60% of products succeed.

product development in series hiller associates

CLICK ON PICTURE TO ENLARGE

Why is this?   Don’t people get college degrees to be great product engineers, buyers, etc.?  Don’t they get paid well to do these jobs?  In contrast, most people have limited knowledge of the products they select on Amazon and spend hundreds or thousands of their own dollars to buy them.

There are at least three reasons why the product selection and product development processes differ, and the corporation can learn from all three.

Clear attribute prioritization

Which product attribute is more important:  time-to-market, product cost, or performance?  There’s no right or wrong answer in general, but there is a right answer for any given situation.  The question is: does the product developer KNOW the priorities of the different attributes?  As an individual shopper, you may not explicitly write down the prioritization, but you know it.  Your preferences and value system are welded into your DNA, so it is clear.  However, companies are not individuals, but collectives of them.  It is the responsibility of the product line executive to make these priorities clear to everyone.

This is similar to requirements engineering, but at a strategic level.  Requirements are typically specific and apply only to a narrow aspect of the product.  I am talking about the high-level product attribute priority.  Ask your product line executive:  “In general, as we go through our development, what should typically ‘win’ in a trade-off decision?”  If the executive cannot give you a concise and simple answer, he has some work to do to earn his big salary.  For example, he should be able to say something such as, “We must meet all minimum requirements of the product, but we need to get to market as fast as possible, so we meet or beat our delivery date.  Oh, and we need to keep an eye on product cost.”

The product executive needs to write the priorities down and share them with all.  In this case, he might write:

  1. First Priority: Time-to-market
  2. Constraint: minimum performance requirements met
  3. Second Priority:  Product Cost

This doesn’t mean the product executive will not change the priority later as the conditions change, but for now, the whole organization is clear on the priorities.  This sounds very simple, but most people in product development are unsure of what the clear priorities are.  Therefore, they make up their own.

Simultaneous attribute trade-off and value optimization

The second thing that we learn from Amazon shopping is to consider all the constraints and targets for product attributes simultaneously.  As we see looking at Figure 1 versus Figure 2, people naturally do this on Amazon, but organizations typically let a different priority dominate in each functional silo.  There are often arbitrage points (optimums in the design space) that allow excellent results in one attribute while sacrificing only minimally on another.  For example, the product executive may have said time-to-market is the first priority, but he is likely happy to sacrifice one unit of time-to-market for an increase of ten units of performance.  This doesn’t mean that the organization is changing its priorities, but that the strategic priorities discussed above simply function as a guiding light that the product development team pivots around to find the point of maximum value for the customer.
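As a sketch of what simultaneous trade-off could look like in practice, the toy calculation below scores two design options against weighted attribute priorities. The weights, option names, and scores are all invented for illustration; a real team would set them from the executive’s stated priorities.

```python
# Hypothetical attribute weights reflecting the executive's priorities
# (time-to-market first, then cost; performance is a softer factor).
weights = {"time_to_market": 0.5, "performance": 0.3, "product_cost": 0.2}

# Candidate design options scored 0-10 on each attribute (10 = best).
# design_2 gives up one unit of time-to-market for three more units
# of performance -- the kind of arbitrage point discussed above.
options = {
    "design_1": {"time_to_market": 9, "performance": 5, "product_cost": 6},
    "design_2": {"time_to_market": 8, "performance": 8, "product_cost": 6},
}

def value(scores):
    """Weighted value of one option, trading off all attributes at once."""
    return sum(weights[attr] * s for attr, s in scores.items())

best = max(options, key=lambda name: value(options[name]))
print(best, round(value(options[best]), 2))
```

Considering the attributes in series (time-to-market only, then cost only) would pick design_1; the simultaneous weighted view surfaces the better overall value.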

Filter for relevant information, not just more or new information

Recent research is revealing the dangerous downsides of our always-on, always-new, always-more information society.  To be sure, social media, like all technologies, has the potential to add a lot of value.  The question is: do you control the information, or does it control you?  The research featured in Newsweek shows three problems that have grown in importance over the last decades:

  • “Recency” Overpowering Relevance – The human brain tends to vastly overweight new information in decisions vs. older information, and our modern digital world throws tons of new information at us.
  • Immediacy vs. Accuracy – the flip side of the first problem is that the real-time nature of our online world pushes people to make quick decisions.  Accuracy and thoughtfulness are seen as inefficient delays, especially in today’s corporate environment.
  • Information Overload – More information does not lead to better decisions according to research.  Humans quickly reach a point where people make bad decisions because they have too much information.  They cannot process it all and do not properly filter it.  The brain literally shuts off certain processing centers, which causes bad decisions.

What can Your Product Development Team Do to Promote Better Decisions?

To answer this, let’s first ask how Amazon is able to overcome these challenges.  To overcome the Recency vs. Relevance challenge, Amazon ensures that recency is not the default sort order for customer reviews.  Instead, helpfulness of the review (relevance), as judged by other customers, is the default order.  Amazon does not push immediacy.  There are no annoying pop-ups offering a deal if you buy in ten minutes.  Certainly, Amazon does make buying easy and fast, but shopping at Amazon from the comfort of one’s home is a relaxing experience that promotes thoughtfulness.  Finally, Amazon does not overload the customer with information.  This is no small task, given that Amazon may offer literally hundreds of items among which the customer must decide.  Amazon does this by presenting the information on a huge variety of products in a standard way, and by providing simple and powerful filters that discard large amounts of extraneous information.

In order to overcome these new information challenges in your own product development work, ask yourself these three questions:

Information relevance in product cost hiller associates

CLICK ON PICTURE TO ENLARGE

  1. Relevancy – How relevant is this new information?  If I had received this information on day one of my design cycle, how much of a difference would it have made in my decisions up until now?  Is the information relevant or just “new”?
  2. Urgency – Do we need to make this decision today?  How long do we have to consider the problem before the decision must be made?
  3. Uniqueness – Is this new piece of information truly unique or just a variation of something I know already?  If it is a repeat, file it mentally and/or physically with the original information, and forget about it.  If it is truly unique, consider whether the new information would be a primary influencer of your design or not.  Most of the time information is just that: information, not relevant unique knowledge.  In this case, once again, file and forget.

The world of online journals, social media, corporate social networks, and interconnected supply-chain applications is here to stay.  It brings a world of new opportunity for better and more up-to-date information for product development.  It also brings a deluge of extraneous information, and we need to accept this and learn to manage it.  Amazon.com manages these challenges well.  Your product development team can manage them too, using the principles outlined above.

Share
Feb 142013
 

I just read the following article and was smiling wryly while experiencing a BFO (blinding flash of the obvious).

The Death of Core Competence Thinking

This article talks about the slow… the FAR too slow… death of “Core Competence” thinking.  This is the concept that organizations should only focus on the 1-2 things that they are best at, e.g. marketing, and everything else should be “outsourced.”  This idea was pushed with a passion after the release of The Core Competence of the Corporation by C.K. Prahalad and Gary Hamel in the Harvard Business Review.  I guess as an HBS graduate myself I have to now bear the sins of my forefathers?  Actually, these concepts go back further, to the work of the famous British economist David Ricardo, who in the early 19th century perfected the idea of “Comparative Advantage.”

Comparative Advantage Competence Hiller Associates

CLICK TO ENLARGE IMAGE

Comparative Advantage says that there is a difference between an absolute advantage in making something and a relative advantage.  For example, in the graph to the right, we see that Country A has an absolute advantage in manufacturing BOTH guns and wheat, but the gap (Comparative Advantage) between Country A and Country B is bigger in guns than in wheat.  Therefore, Ricardo would say that Country A should use ALL its resources to make guns until diminishing returns lowers its Comparative Advantage below Country B’s in producing wheat.
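Ricardo’s logic can be checked with a few lines of arithmetic. The production figures below are made up for illustration (they are not taken from the chart); the point is that comparative advantage hinges on opportunity cost, not absolute output.

```python
# Hypothetical production possibilities (units per worker-year).
output = {
    "Country A": {"guns": 12, "wheat": 6},
    "Country B": {"guns": 2,  "wheat": 4},
}

for country, o in output.items():
    # Opportunity cost of one gun = wheat forgone per gun produced.
    opp_cost_gun = o["wheat"] / o["guns"]
    print(f"{country}: 1 gun costs {opp_cost_gun:.2f} wheat")

# Country A out-produces Country B in BOTH goods (absolute advantage),
# but its opportunity cost per gun is lower, so its comparative
# advantage is in guns -- it should specialize there and trade for wheat.
```

Here Country A gives up only 0.5 wheat per gun while Country B gives up 2.0, so even though Country A is better at everything, both countries gain if A specializes in guns.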

If the chart isn’t clear, here’s a great video teaching explanation of Comparative Advantage to a background soundtrack of AC/DC; I bet your micro economics class was never this fun.

You can see how close Ricardo’s Comparative Advantage is to Core Competence theory.  In fact, Core Competence builds on Ricardo’s work as a foundation.

The opposite of core competence thinking is the “Conglomerate” (a company that makes many products in many different industries) and/or “Vertical Integration” (owning the whole supply chain for a given product).  Core Competence theory rails against these strategies, and American business bought it hook, line, and sinker.  Why?  Because Wall Street bought it hook, line, and sinker.  Why?

  1. When Conglomerates are split up into individual companies, they must report their financials individually, so Wall Street gets better information.
  2. When Vertical Integration is stopped and companies begin outsourcing, they will often see an immediate (though often short-term) improvement in Cost of Goods Sold.  And, even if they don’t see this product cost improvement, Wall Street will give the company kudos by raising the company’s stock price anyway.

Case closed, right?

Wrong.

Until the last ten years, companies were terrible at calculating, realizing, and internalizing the Total Cost of Ownership (TCO) of outsourcing.  That is, product cost is not only the price paid for the part, but also shipping, logistics, the quality and inventory risk of having six weeks of parts on the ocean, inventory carrying costs, the friction of dealing with foreign cultures with different languages in vastly different time zones, etc.  Even when firms began to calculate these costs, TCO started a cultural conflict with the proponents of outsourcing and core competence.
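To make the TCO point concrete, here is a deliberately simplified comparison. Every number and cost bucket below is hypothetical; real TCO models include many more buckets (duty, tooling amortization, expedited freight, travel, etc.).

```python
# Illustrative piece prices quoted for the same part ($/part, hypothetical).
piece_price = {"domestic": 10.00, "offshore": 7.50}

# Hypothetical per-part adders that a naive quote comparison ignores.
extras = {
    "domestic": {"freight": 0.20, "inventory_carry": 0.10, "quality_risk": 0.05},
    "offshore": {"freight": 0.90, "inventory_carry": 0.60, "quality_risk": 0.45,
                 "coordination_overhead": 0.55},
}

for source in piece_price:
    tco = piece_price[source] + sum(extras[source].values())
    print(f"{source}: piece price ${piece_price[source]:.2f}, TCO ${tco:.2f}")
```

On piece price alone the offshore quote looks 25% cheaper; once the hidden buckets are added, the gap nearly disappears, which is exactly why TCO thinking clashed with naive outsourcing math.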

Enter the low-key and industrious Japanese.  Actually, the Japanese had been a force on the scene since the 1980s, but people did not really internalize the Japanese concept of “Lean” (click for video explanation) manufacturing until the mid-1990s.  Lean has many useful concepts, but two striking features of Lean manufacturing are (1) to have one’s suppliers geographically as close as possible to the upstream plant and (2) to often own part or all of the suppliers.

In other words, Lean ~= Vertical Integration.

But how does Vertical Integration = better Product Cost Management?  Check back next week!

(or, you can subscribe by email, facebook, twitter, RSS, or linked-in to the right, and we’ll check back for you!)

 

Share
Feb 112013
 

There were a lot of comments last week on the article we posted with the title: Only 17% of Companies Meet Product Cost Targets

Many people complained about the design engineer’s dearth of knowledge of Design for Manufacturability.  In the discussion, we also started to propose some solutions to overcome this problem.  However, one comment that sparked my interest was about WHY design engineers often overtolerance parts, a reason that went beyond “they don’t know any better.”  The comment, paraphrased, was:

A big problem we have is that we are making parts directly from the CAD model. A lot of Catia-based models have a general tolerance of +/- .005 [in.] on the entire part, including fillet radii and edge breaks. …these features have to be penciled off with a ball end mill instead of using a standard tool with a radius on it, which can kill profit on a job when you miss it when quoting.

That is a fascinating observation.  There is no doubt that the Product Lifecycle Management companies will be pleased as punch that people are finally taking their drum beating on “model is master” seriously.  FYI – I agree that the model should be master and that drawings should be generated from the 3D master data.  However, this improvement in PLM adherence highlights what happens when a new technology (a tool) is foisted upon a problem without understanding the current processes and outcomes that the incumbent tool satisfies.  In this case, the old tool is paper drawings.  With the incumbent tool, companies had a standard title block, and that title block would give helpful bounding constraints such as:

Unless otherwise specified:

All dimensions +/- 0.01 inches

All radii and fillets +/- 0.02 inches

Etc.

That helpful and protective title block may not be there with a 3D, model-only strategy.  All the evangelism on “tolerance by exception” goes right out the window when the CAD system has default values that are overtoleranced by definition.  The CAD system itself becomes… The Evil Robot Overtolerancer.

The good news is that the Evil Robots can be controlled, and you don’t even need anyone named Yoshimi to help you do it.  However, it will require some thought before you change the default tolerances in your CAD system.  Some considerations to think about are:

Yoshimi Product Cost Hiller Associates

  • What were the default tolerances in the title block on your drawings when the drawing was master?
  • Can these tolerances be reduced?
  • How surgically will your CAD system allow you to set default tolerances?
  • Do you need different tolerance ‘templates’ depending on the primary manufacturing process?  E.g., tolerance defaults may be very different for a casting that is machined than for a piece of sheet metal.
  • How will you make your design engineers aware of these new default tolerances?
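One way to picture the considerations above is as a process-specific lookup that mimics the old “unless otherwise specified” title block. The process names and tolerance values below are purely hypothetical placeholders; your own defaults should come from the cross-functional review described next.

```python
# Hypothetical default-tolerance templates by primary manufacturing process.
# All values in inches; applied only where the model specifies no tolerance.
DEFAULT_TOL = {
    "machined_casting": {"linear": 0.010, "fillet_radius": 0.030},
    "sheet_metal":      {"linear": 0.020, "fillet_radius": 0.060},
    "cnc_machining":    {"linear": 0.005, "fillet_radius": 0.020},
}

def default_tolerance(process: str, feature: str) -> float:
    """Return the 'unless otherwise specified' tolerance for a feature,
    replacing the protective title block of a paper drawing."""
    return DEFAULT_TOL[process][feature]

print(default_tolerance("sheet_metal", "fillet_radius"))
```

The key design choice is tolerancing by exception: the template supplies a loose, process-appropriate default, and the engineer tightens only the features that truly need it, instead of the CAD system silently applying one tight tolerance to every edge break.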

Whatever decision you make, make sure all the right people are at the table to make it together, including design engineering, the drafting team (if separate from design), purchasing, and manufacturing (including suppliers, if parts are made externally).  If done thoughtfully and correctly, setting default tolerances will bring the Evil Robot Overtolerancer under control.  If these changes are made in a vacuum or carelessly, you may find that you have made the Evil Robot 10x more powerful as an agent of chaos and profit destruction.

You want to be dealing with the friendly Autobots, not the Decepticons, right?

transformers product cost hiller associates

That’s today’s Product Cost Killing Tip!

If you have Product Cost Killing tips that you would like to share, please send them to answerstoquestions@hillerassociates.com.


Share
Feb 042013
 

People complain about the profitability of products, especially early in production, but how often do products actually miss their profitability at launch?

According to the latest research by Hiller Associates, most companies miss product cost targets.  We asked almost forty people from a variety of corporate functions, “How often do you meet or beat product cost targets at launch?”  The results follow the familiar 80/20 rule of many business phenomena.  Only 17% of respondents said that their companies meet cost targets Often or Very Often.

Product Cost Results goals at launch Hiller Associates

CLICK TO ENLARGE

That is not an impressive showing.  We would not accept a 17% pass completion percentage from an NFL quarterback.  That’s not even a good batting average in baseball.  So why do we put up with this in our companies?  It’s also interesting that almost the same percentage of respondents (15%) don’t know enough about product profitability to even guess how well their companies are doing.

Companies are understandably careful with releasing actual product profit numbers.  Still, it would be great to have a more in-depth academic study done, in which actual financials were analyzed to answer the same question.

Percent meeting product cost summary Hiller Associates

How often does your company meet its product cost targets?  Does anyone in your company know?  These are questions you cannot afford not to ask.  Is your firm in the 17%… or the 83%?  If you are in the 83%, consider starting or improving your efforts in Product Cost Management.

 

 

Share
Jan 312013
 

One of my fellows in the world of product cost and design, Mike Shipulski, just posted the following:

The Middle Term Enigma

 

 

The general synopsis of it is:

  1. Firms focus more and more on the short term.
  2. The “short term” is getting shorter and shorter.
  3. Short-term thinking leads to minimization and typically damages long-term success.
  4. On the other hand, firms (especially execs) fear the long-term plan as expensive and risky.
  5. So why not focus on the “medium term”?

Our Opinion:

Mike is right.  Short-term thinking kills companies and, paradoxically, wastes a lot of time and money.

I would offer the following addition:  Short, Medium, and Long term all have their places, but there has to be a thoughtful and maintained plan for each.  You can’t just make a plan today and then look at it in a year.  Every 2-3 months, you should be re-assessing and adjusting the plan accordingly.  However, you should not see whipsawing, just gentle, organic fine-tuning as you gain more information.

I also would like for Mike to define the Short, Medium, and Long term.  I realize that this changes product to product, but a general guideline would be helpful.

 

Share
Jan 292013
 

Hiller Associates recently was the keynote speaker at aPriori’s first customer conference.  It was a great opportunity to both teach and learn from experts that came from a wide range of industries and geographies.

Hiller Associates’ President, Eric Hiller, discussed several topics, of which we’ll mention two here.  The entire presentation can be downloaded for FREE.  Just click on the slide below and get the presentation:  Best Practices for Starting Your Product Cost Management Journey or Improvement.

Variance in Cost Numbers

One of the main themes discussed was the possibility of getting an “accurate” cost, meaning how possible is it to get a cost that is within a certain percentage of a fixed point of reference, such as a supplier quote.  There are several ways to look at this problem that we may discuss in subsequent weeks on this blog.

Eric Hiller at aPriori STARS 1 2012 product cost Hiller Associates

Eric Hiller presenting the keynote speech at the aPriori Customer Conference

However, in summary, the presentation asked the question:  what cost variance is inherent in your system already?  For example, if your three quotes from suppliers have a range of 30% from highest to lowest, is it realistic to expect the cost you calculate in a product cost management software to be closer than 30% to any one quote?
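A quick back-of-the-envelope check makes the point. The quote values below are hypothetical; plug in your own to see the variance your quoting process already carries before any software estimate enters the picture.

```python
# Three hypothetical supplier quotes for the same part ($/part).
quotes = [8.40, 9.75, 10.90]

# Spread from lowest to highest quote, as a fraction of the lowest.
spread = (max(quotes) - min(quotes)) / min(quotes)
print(f"Quote spread: {spread:.0%}")

# If the incumbent process already varies ~30% quote-to-quote, expecting
# a PCM cost model to land within a few percent of any ONE quote is
# holding the tool to a tighter standard than the baseline it replaces.
```

The useful comparison, then, is not “estimate vs. one quote” but “estimate vs. the band your quotes already occupy.”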

It was refreshing to see how open the audience was to these concepts.   The reactions to the variance concept went from wide-eyed amazement from people who were new to the cost management field, to thoughtful reflection from the veterans.  In fact, the veterans reacted like men who had been reminded of a truth that they knew all along.  Often, such common truths are forgotten due to immersion in the day-to-day challenges of keeping a company profitable.  We call this concept having a “blinding flash of the obvious” – a BFO.  Everyone in the room had that BFO, and no one wanted to argue about it.  Instead, there were many comments throughout the conference that further explored this concept.

Culture is the biggest loser

Another theme of the presentation was driven by the latest research in Product Cost Management done by Hiller Associates.  Those who follow us regularly know that we segment problems in our consulting work into four root causes:  Culture, Process, Roles/People, and Tools.

Our latest research shows that cultural problems are the clear bottleneck in most firms’ Product Cost Management journeys.  The respondents overwhelmingly agreed.  When Eric asked the attendees which area was their firm’s biggest PCM bottleneck, the conference participants voted as follows, based on a rough count of hands in the air:

Best Practices for Product Cost Management Hiller Associates

CLICK TO GET FULL PRESENTATION

  • Culture: 60-70%
  • Process: 20-30%
  • People/Roles: 0-5%
  • Tools: 5-10%

That’s fairly shocking at a conference organized by a Product Cost Management TOOL vendor.  [Next time HA will have to set our honorarium higher for taking the pressure off of any problems with the vendor’s product!]  Joking aside, culture is obviously the biggest problem, and it is not an easy thing to change.  In fact, companies often buy a PCM software tool hoping that it will somehow magically fix their bigger cultural problems.

It reminds one of obesity problems.  Many companies have a culture of binging on product cost during design.  In purchasing and manufacturing, they continue with cost obesity denial, not knowing what the cost calorie count is until the parts arrive at the door with an invoice.  However, instead of changing their cost eating and exercise habits, they look for a magical cure in the form of a software tool.  Let’s call this “the shake-weight approach” to product cost management.

We’re not disparaging the shake-weight or any other home exercise equipment.  Certainly, all home exercise equipment can help you lose weight, just as we are sure that all of the PCM tools can help one reduce cost.  But you have to use these tools regularly and properly.  PCM software tools are too much like home exercise equipment.  People buy them thinking that the tool will magically solve cost obesity.  They use the tool twice, and then it sits in the corner unloved, unused, and unmaintained… and yet people wonder why they are still product cost obese!  It’s not the tool that is the problem; it’s your culture.  Much like changing your eating lifestyle, changing the PCM culture is really hard and tricky to do.  That’s why cultural issues are at the forefront of most of the engagements we do with clients at Hiller Associates.

However, it was refreshing to see that the attendees at aPriori’s conference did seem to understand this problem, or at least were very open to the idea.  So, maybe we are making progress on this point.  Or maybe HA needs a TV show, “The Biggest Cost Loser,” in which Hiller Associates works with companies to increase product profit with weekly product cost “weigh-ins.”  What TV viewer wouldn’t watch that kind of riveting drama…

Now, get out there and do some product cost push-ups!

If you would like to see the entire presentation from the conference, just click on the slide image above and get the presentation:  “Best Practices for Starting Your Product Cost Management Journey or Improvement.” 

 

Share
Skip to toolbar