Apr 08, 2020
 

So… the bad news is that you (and/or your colleagues) missed Eric Hiller’s webinar class on Design-to-Value, sponsored by GLG.  

But, the good news is, it was video captured and is available on-demand and for FREE, any time you want to watch it!  (Just fill out the registration form at the bottom and the video will start.)

And, let’s be honest, you’ve probably exhausted all the “A” content on your streaming services.   Why not spend 1 hour learning about the impact that DtV has on companies from one of the world’s top practitioners?   And, Eric will touch on should-cost / how to create cost models to support the DtV effort, too.

(Plus, this actually *IS* real work and research to help your company.)

Agenda for the presentation

  • 2 min – Who’s teaching today?
  • 5 min – What is Design-to-Value and what impact can it have?
  • 30 min – An Introduction to the DtV process
    • Scoping the project and choosing products / services
    • Assembling the team
    • Sourcing candidates and competitors
    • Fact packs & tear downs
    • Ideation workshops
    • Prioritization & ROI biz cases
    • Clustering & Chartering
  • 5 min – A quick look at should-cost (bottom-up cost models)
  • 5 min – How to get started with a DtV project
  • 15 min – Questions, answers, and discussions

CLICK HERE to learn about Design-to-Value!

 

Jun 05, 2014
 

NEW ARTICLE in Industryweek.com by Hiller Associates

Synopsis:  No matter how badly you think you are pinned down in a pricing negotiation, there are always tools for leverage that can help you improve your position. Relative should-costing is one of these powerful tools.

To read the article at Industryweek.com, click here.

Or, you can read the full article below

==================

Relative Cost Power – How to not know the cost of your products and win negotiations, anyway

With product cost accounting for 70 to 90% of every revenue dollar on the income statement, it’s not hard to understand why cost is such a big deal to many companies. In the last decade, there has been a renewed focus on the world of Product Cost Management, including techniques like Design for Manufacturing (DFM), Should-Costing, Spend Analytics, etc. Many of these techniques are used (or are intended to be used) *before* the sourcing phase of the Product Life Cycle. While procurement professionals should be involved up front in product design, a lot of the responsibility for success with these techniques will rest on the design engineering staff.

Regardless of whether your company is best-in-class in Design-To-Cost, or whether you have the most cost-oblivious design staff in the world, your designs must eventually be made or bought. With the dominance of outsourcing, it’s more difficult than ever to know what the should-cost of a product is. Purchasing agents are told that they should know the should-cost of products before they walk into a negotiation with a supplier. However, that is not easy, and the fact is:

Your supplier will typically know more about their costs than you do.

So, how do you proceed in a negotiation where your supplier has more and better information? Are you completely at their mercy? And, what if your supplier holds oligopoly, or even monopoly, supplier power? Are you just a price taker?

ANSWER: No, you don’t have to be a price taker, at least in the case of buying multiple variants of a similar spend item.  Recently, I wrote a blog post about how the human brain does not like non-linearity, discontinuity, or non-monotonic functions. This is a fancy way of saying that people are very good at detecting pricing inconsistencies within multiple similar products.

This inconsistency is the key to being able to better control a negotiation, in which you really don’t know what the absolute cost of a product should be, or where you know the should-cost, but have little buyer power. Let’s explain this with an example:

Case Study in Relative Should-cost: Pick-up Truck Driveshafts

In the late nineties, I had the privilege of being the product development owner for the drivelines (axles and drive shafts) used in the full-size pick-up that was the best-selling vehicle in the world for over 30 years. This truck made over 100% of the profit for my employer (making up for losses on other vehicle lines). I was very young and inexperienced at the time, so it was a great experience, but a big challenge.

Within the first few weeks in the assignment, I was told that it was time to negotiate the contract for the driveline commodity (over $450 million annually) for the upcoming 2004 vehicle. I was told that this was a very complex process, but that the purchasing group would lead it. However, my first meeting with our purchasing team lasted about 10 minutes… long enough for the purchasing team to say: “Oh, your supplier is *that* supplier? We don’t get involved with them. Have a nice day.” And, they walked out of the room.

After the shell shock of the experience wore off, my manager explained that our supplier was the former components division of our employer (the OEM) and that our CEO had desperately wanted to spin-off the component divisions from the OEM. To do this, our CEO had negotiated a deal with the powerful automotive union, agreeing that the union members would technically work for us (the OEM), but be leased to the newly spun-off supplier. If the OEM did not give enough business to the components supplier to keep the union members employed, the OEM was responsible for paying them a large portion of their wages, while they waited to be deployed somewhere else.

Effectively, this removed almost all negotiation power from us, the OEM. Therefore, our purchasing team had made a decision at the executive level to not participate in negotiations with this particular supplier. Instead, these negotiations were dumped into the laps of the product development team, with some support from finance.

Suppliers with Complete Supplier Power

The product development team had its own problems already. The marketing team was demanding much better performance and quality out of our parts. But, the finance team demanded that those parts be cheaper than the previous generation of parts! And now, we had an AWOL purchasing team. In terms of the old Porter’s Five Forces framework, our supplier had tremendous supplier power! What to do?

I certainly was not an expert in drivelines yet, but I had just completed a master’s thesis focused on product cost. So, I first reached out to the cost estimation team within the OEM. They helped us understand what the absolute cost might be for a driveshaft. We were also able to negotiate with the purchasing team to do an “unofficial” price study with other driveline suppliers.

Our first negotiation with our supplier (the OEM’s former component divisions) was pleasant, but utterly futile. I excitedly explained what we thought the absolute part cost of these driveshaft parts should be, and hinted that we had quotes from other suppliers to prove it. Our supplier, being quite shrewd, politely explained why their product was, obviously, so much more valuable, combined with a tangible undercurrent of, “Well that’s nice that you have should-costs and quotes from other suppliers, but we really don’t care, because you have to use us as a supplier regardless.”

Relative Should-Cost to the Rescue

Now what were we going to do? These lines of argumentation (absolute should-cost and competitive quoting) were not prevailing on the supplier. So I tried something different: Relative Should-Costing.

A driveshaft is complex in many ways, but in reality, it is a modular part design. It’s constructed mostly of end yokes coupled with an extruded or seam-welded tube between those yokes. If we had a longer truck, we simply extended the tube. (For technical safety reasons, we had three variants: a 1-piece steel driveshaft, a 1-piece aluminum, and a 2-piece steel.)   In total, we had over 70 part numbers across these three designs, and we knew the price quote for each variant.

Using a spreadsheet, I simply estimated a reasonable cost for one driveshaft tube for each of the 3 variants. With this estimate, it was easy to calculate a per-inch cost of that tube. By subtracting, I knew what all the other parts in the driveshaft (e.g. end yokes) approximately cost. That was all I needed. I didn’t need to know what the absolute cost of each driveshaft should be.

I just needed to know what each similar part should cost RELATIVE to another part.

In the second negotiation, we politely questioned the supplier on their confidence in their pricing ability. They professed with great certainty that they knew how to price and stood by those prices. Then we coyly pulled out the part numbers for the three drive shafts for which I had estimated the absolute cost, and asked if they stood by those prices. They eagerly declared they stood by those prices. Gotcha!

At that point, we started asking how Part B, which had 5 more inches of extruded tube than Part A, could cost $10.00 more than Part A, when the tube extension was only worth $0.30. The supplier was not sure and asked for time to investigate. We had several more meetings on the topic, but in the end, the supplier could not give any logical reason why their pricing for similar drive shafts varied in bizarrely non-linear ways.
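The arithmetic behind that challenge is simple enough to sketch in a few lines. The part numbers, tube lengths, quoted prices, and the per-inch tube cost below are illustrative stand-ins, not the actual figures from the negotiation:

```python
# Relative should-cost check: flag quoted prices that are inconsistent
# with one another, given an estimated per-inch cost of driveshaft tube.
# All numbers are illustrative, not the actual negotiation data.

TUBE_COST_PER_INCH = 0.06  # assumed cost of extruded tube, $/inch

# (part number, tube length in inches, quoted price) for one variant
quotes = [
    ("A", 50, 42.00),
    ("B", 55, 52.00),   # 5 more inches of tube, but $10.00 more?
    ("C", 60, 42.75),
]

def non_tube_cost(length, price):
    """Implied cost of everything except the tube (end yokes, etc.)."""
    return price - length * TUBE_COST_PER_INCH

# Within one design variant, the non-tube content is the same set of
# parts, so the implied non-tube cost should be roughly constant.
baseline = non_tube_cost(quotes[0][1], quotes[0][2])
for part, length, price in quotes:
    gap = non_tube_cost(length, price) - baseline
    if abs(gap) > 1.00:  # tolerance in dollars
        print(f"Part {part} is priced ${gap:.2f} out of line with Part A")
```

Only Part B gets flagged: its extra 5 inches of tube explains $0.30 of the price difference, leaving roughly $9.70 unexplained.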

The supplier reduced the cost of the entire driveshaft commodity by about 8%, resulting in about $35 million a year, straight to the bottom line of my employer, the OEM.

Should the discount have been bigger? Yes. Did my calculations show that the supplier should have given us more money? Yes. But, did we get a significant concession from a supplier who held every card in this negotiation? Yes we did!

In reality, the supplier still could have refused to reduce their costs. However, the point of these Relative Should-Cost negotiation techniques is to bring logic and facts to bear to increase leverage in a situation where you seemingly have no leverage.

The win occurred when the supplier just couldn’t answer why their own pricing was internally inconsistent with itself.

This is a good lesson for suppliers to learn, as well. When quoting a basket of similar parts, it’s wise to make sure that you understand your own pricing and reflect the underlying costs in a logical and linear way to your customer. This greatly reduces the risk of your customer casting doubts and driving your pricing down, perhaps unfairly.

Diagnostic vs. Leverage Tools and Absolute vs. Relative Should-Cost

The case study above is an example of the difference between using a Relative Should-Costing technique versus an Absolute Dollar Should-Costing technique. If this sounds interesting to your company, you may want to read more in my previous article in Industryweek.com, Your Should-cost Number is Wrong, But It Doesn’t Matter. In that article, we talk further about using should-costing, not only as a diagnostic gauge to tell you what the cost is, but as a tool for leverage to move the cost down.

Remember, no matter how badly you think you are pinned down in a pricing negotiation, there are always tools for leverage that can help you improve your position. Relative Should-Costing is one of these powerful tools that should be in your toolbox.

Mar 20, 2014
 

Hiller Associates posted the following article at ENGINEERING.COM yesterday.  You can read it there at this link, or just keep reading below!

=============================================================

Another solid piton in the cliff of making product cost mainstream in CAD / PLM Products?

CATIA users can now get a faster and more effective way to design and source composites products with the highest profit by bringing the estimation ability closer to the designer’s native environment. Galorath Incorporated debuted its newest integration of SEER® for Manufacturing with the Dassault Systems 3DEXPERIENCE® Platform in CATIA at the JEC Composites conference in Paris. The new product is called the SEER Ply Cost Estimator.

Who is involved?


Galorath Incorporated grew out of a consulting practice focused on product cost estimation, control, and reduction that began in the late 1970s. Over the last 30 years, Galorath built their costing knowledge into the SEER suite of software products. SEER is one of the independent product cost estimating software companies.

Dassault Systems is one of the “big 3” Product Lifecycle Management (PLM) companies in the world.

Hiller Associates spoke with Galorath CEO Dan Galorath, Vice President of Sales & Marketing Brian Glauser, and SEER for Manufacturing product lead Dr. Chris Rush and got a full product demo.

What does this integration do?

The integration allows users of CATIA to use SEER’s costing software for composite materials within the CATIA environment. In CATIA, the engineer designs a lay-up for a composite part, generating a Ply Table (a critical design artifact for composite parts that specifies material, geometry, and some manufacturing info). That activates the integrated SEER Ply Cost Estimator so that the designer (or the cost engineer or purchasing person aiding him) can set up more part-specific costing choices and preferences within the CATIA environment.

When ready, the user pushes the cost analysis button. The information is processed by SEER Ply Cost Estimator which sends the ply table data and other information to the interfacing SEER-MFG software to compute cost. The cost data is returned and presented to the user, once again within a native CATIA screen.

How broad is the capability?


Currently, the integration of SEER is applicable for parts made of composite materials. It’s a strong starting point for the integration partnership because SEER has a long experience in the field of costing composites, working with companies in the defense and aerospace verticals. Composites are also becoming more mainstream in other industries, such as automotive and consumer products. Galorath has been a major player in the US Government’s Composites Affordability Initiative (CAI), a 50/50 government/industry funded consortium including Boeing, Lockheed and Northrop Grumman that was formed to drive down the costs of composites. Galorath has also worked with Airbus in the area of composites parts costing.

Galorath’s Brian Glauser says that the SEER Ply Cost Estimator has hundreds of man-years invested, much from previous work with CAI and with aerospace companies that resulted in several of the modules already in the SEER-MFG standalone product.

The first version of the SEER Ply Cost Estimator handles many composites manufacturing processes, materials, concepts of complexity, and both variable and tooling costs. However, it does not yet directly cost the assembly of one part to another.

The initial integration will be to CATIA v5, but SEER and CATIA have signed a v6 agreement as well. That integration will follow later.

Galorath (and likely Dassault) are hoping that the SEER Ply Cost Estimator will be well received by customers and help drive many product cost benefits. If this happens, there may be demand from Dassault’s end customers not only to improve the SEER Ply Cost Estimator, but to expand the SEER/CATIA integration to other manufacturing processes covered in SEER’s stand-alone software products such as machining, plastics, sheet metal and assembly processes.

What does it mean for Functional Level Groups?

Philippe Laufer, the CEO of CATIA, was quoted as saying:

“Using Galorath’s SEER for Manufacturing in CATIA…will help companies perform early trade-off analysis on the use of various materials and composites processes before manufacturing even takes place.”

Well, that has been one of the goals in Product Cost Management for a long time. If your company uses composites, the tool has the following possibilities:

  • Engineering – identify which design choices are driving cost and by how much
  • Purchasing/Manufacturing – get an early warning of what to expect from suppliers before asking for quotes or estimates (should-cost)
  • Cost Engineering – provide a focal point for the cross-functional discussion about cost to drive participation, especially from engineering

It’s important to realize that this integration will have its limitations, as with any costing product. First, the current integration applies only to composites. While expensive, composites are only one type of part on the Bill of Material (BOM). You will have to go beyond the current integration of SEER/CATIA to cost the full BOM, perhaps to SEER’s standalone costing product or to those of its competitors.

Second, remember that cost is far harder to “accurately” estimate than many physical performance characteristics. While meeting an absolute cost target is important, even more important are the following:

  1. Continuous Design Cost Improvement – If your company consistently designs lower cost products because you have superior cost estimation information, you WILL beat your competitors.
  2. Correct Cost and Margin Negotiation – If your company is better at negotiating quotes because it can give suppliers a better understanding of what it will cost them to make your part and you can negotiate a margin that is not too high, but adequate to keep your suppliers healthy, you WILL beat your competitors1.

What does it mean for the C-Level?

Philippe Laufer of CATIA also says:

“This [the SEER Composites integration] leads to finding out the most efficient way of manufacturing a product while meeting cost, performance, functionality, and appearance requirements.”

The C-suite doesn’t really care about composites or ply tables in and of themselves, but it does care about revenue and profit. Of course every well-marketed product will claim to improve these metrics. Regarding product cost, the good news is that Galorath and Dassault are aiming at a big target. Companies that use a lot of composites can have very high costs. For example, Boeing and Airbus have Cost of Goods Sold of 84.6% and 85.5% and Earnings before Tax of 7.2% and 3.6%, respectively2. Those COGS figures are big targets on top of a highly leveraged COGS/Profit ratio.
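The “highly leveraged” point is just arithmetic: when COGS is roughly 85% of revenue and pre-tax earnings are only a few percent, a small relative cut in COGS produces a large relative jump in profit. A quick sketch using the Boeing figures above, normalizing revenue to 100:

```python
# Profit leverage of a small COGS reduction, using the Boeing figures
# cited above (COGS = 84.6% of revenue, earnings before tax = 7.2%).
cogs = 84.6   # points of revenue
ebt = 7.2     # points of revenue

cogs_cut_pct = 1.0                    # shave just 1% off COGS...
savings = cogs * cogs_cut_pct / 100   # ...worth 0.846 points of revenue
new_ebt = ebt + savings

print(f"EBT rises from {ebt} to {new_ebt:.3f} points of revenue,")
print(f"a {100 * savings / ebt:.1f}% increase in pre-tax profit.")
```

In other words, at those margins a 1% cost reduction is worth roughly a 12% profit increase, which is why COGS is such an attractive target.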

What does it mean for Product Cost Management becoming mainstream in the enterprise software stack?

We asked Dan Galorath how long it would be before Product Cost Management was as much a part of the PLM ecosystem as finite element analysis, manufacturing simulation, or environmental compliance. He replied:

“Cost estimation software will never be on every designer’s workstation, but I don’t believe that is what it means for Product Cost Management to be considered ‘mainstream.’ It’s not fully mainstream yet, but a greater need is being driven by outsourcing and the tight business environment. The procurement folks can’t only rely on internal manufacturing knowledge like they used to. They need tools like SEER to fill the gap and move the cost knowledge base forward.”

We agree with Mr. Galorath. This is another step, another piton to secure Product Cost Management onto the PLM cliff, as PCM continues to climb this steep hill.

This is the first integration point between independent Product Cost Management software companies and the PLM/ERP world since September 2012, when Siemens PLM purchased Tsetinis Perfect Costing3. PTC has built some level of cost tracking ability into Windchill, and Solidworks (owned by Dassault) has developed the first couple versions of a costing calculation module for their product.

There is still a lot of ground to cover. There are quite a few independent product cost management software tools that have costing intellectual property that can accelerate the process, especially if the big PLM companies acquire them or partner with them. When that will happen is anybody’s guess, but for now it looks like CATIA users, at least, have a viable solution for composites costing… and maybe more in the future.

1 For more information, see the Hiller Associates Industry Week Article: Your Should-cost Number is Wrong, But It Doesn’t Matter

2 Per www.morningstar.com, trailing 12 months COGS, 3/13/2014

3 Siemens buys Perfect Costing Solutions (Tsetinis), Hiller Associates, September 2012

Apr 17, 2013
 

 

Hello Internet and Product Cost Management industry! We’ve had strong interest in our latest article on the 2012 revenues of the Product Cost Management market. There have been several good questions that have made us want to clarify some of the assumptions in the analysis, so that people are clear on what is and is NOT included in the estimate.

 

Most importantly, this is an estimate of the REVENUE of the group of included vendors in 2012. This does NOT represent the Total Addressable Market for Product Cost Management software.

The total addressable market is many times larger than the included vendors’ estimated 2012 revenue.  In fact, we plan to write follow-up articles that discuss the total addressable market, as well as the growth rate of the current vendors in reaching that total addressable market.  Here are some of the assumptions we made in the analysis:

  1. Only the vendors noted are estimated – The uncertainties do not explicitly take into account other vendors. They reflect the fact that most of these are private companies whose numbers are not public and whose revenues vary from year to year. Obviously, it matters where one draws the line in the analysis.
  2. Focused on the estimation of manufactured products, not construction – there are many products in the market that focus on construction estimation, for example estimates for building an office building, an oil platform, a refinery, etc.  We consider this a wholly different market. In our own experience, we have rarely, if ever, seen the companies that specialize in construction estimation compete for the same customers as the companies listed in this Monday’s article.
  3. Not focused on job shops – there is also a separate market for software used by small “job shops.” These are small, mostly family-owned businesses that typically manufacture one type of part, for example sheet metal, machined castings, etc.  Some of the included vendors may sell to a few job shops, but their software is capable of being used by bigger enterprises.
  4. Generally Available Software – We only included vendors whose primary business involves selling a legitimate “generally available” software product (not a consulting business with internal tools)

Given these constraints, we believe this group represents over 90% of the revenue in the market today. If you know of other competitors who meet the criteria above and make over $2 million USD a year in revenue, let us know.

Keep the questions coming! We are glad there’s so much interest.

 

Mar 04, 2013
 

 

We are still on our epic quest to find the DARPA study (a.k.a. the legendary seminal study reported to say that ~80% of product cost is determined in the first ~20% of the product lifecycle).  However, during our search, we have been aided by Steve Craven from Caterpillar.  No, Steve did not find the DARPA study, but he did send us a study attempting to refute it.

 

Design Determines 70% of Cost? A Review of Implications for Design Evaluation
Barton, J. A., Love, D. M., Taylor, G. D.
Journal of Engineering Design, March 2001, Vol. 12, Issue 1, pp 47-58
 

Here’s a summary of the paper and our comments and thoughts about this provocative article.

Where’s DARPA and Can We Prove this 70-80% number?

First, the authors question the existence of the DARPA study and say that most studies that support DARPA’s findings reference other corporate studies that are alleged to support those findings.  Most of these corporate studies are difficult to trace.  The authors then analyze a Rolls-Royce study (Symon and Dangerfield 1980) that investigates “unnecessary costs.”  In the Rolls-Royce study, Symon and Dangerfield find that the majority of unnecessary costs are induced in early design.  However, Barton, Love, and Taylor make the point that unnecessary costs are NOT the same as the TOTAL cost of the part itself.  That’s fair.

The authors then go into a more “common sense” line of discussion about how the costs induced at different stages of the product lifecycle are difficult to disaggregate.  The difficulty occurs  because design choices often depend on other upstream product cost choices and the knowledge or expectation of downstream supply chain and manufacturing constraints.  This section of the paper concludes with a reference to a study by Thomas (The FASB and the Allocation Fallacy from Journal of Accountancy) which says that “allocations of this kind are incorrigible, i.e. they can neither be refuted nor verified.”

We at Hiller Associates agree with these assertions in the sense that these statements are tautologically true.  Maybe someone should have given this study to Bob Kaplan of Harvard Business School before he invented Activity Based Costing in the 1980s in collaboration with John Deere?  After all, wasn’t ABC all about the allocation of costs from indirect overhead?  However, Kaplan’s attempt illustrates the reality of the situation outside of academia.  We in industry can’t just throw up our hands and say that it’s impossible to allocate precisely.  We have to make a reasonable and relevant allocation, regardless.  If it is not ‘reliable’ from a canonical accounting definition point of view, we just have to accept this.

Is DARPA Actually Backwards in Its Cost Allocations?

What if the DARPA study’s 80/20 claim is more than an allocation problem?   What if DARPA is actually promoting the opposite of the truth?   The authors reference a paper by Ulrich and Pearson that may reverse DARPA.  Ulrich and Pearson investigated drip coffee makers and concluded that the design effect accounted for 47% of product cost, whereas manufacturing accounted for 65% of product cost variation.  They did, of course, make their own assumptions about the types of possible manufacturing environments that could have made the 18 commercially available coffee makers.

Considering the pre-Amazon.com world in 1993 when the Ulrich and Pearson study was done, it brings a smile to my face thinking of MIT engineering grad students at the local Target, Kmart, or Walmart:

CLERK:  Can I help you?
GRAD STUDENTS: Uh, yeah, I hope so.  We need coffee makers.
CLERK:  Oh, well we have a lot of models, what is your need…
GRAD STUDENTS: Awesome, how many do you have?
CLERK:  Uhh… I guess 17-18 models, maybe.
GRAD STUDENTS: Score!  We need 18.
CLERK:  18 of which model?
GRAD STUDENTS: Oh, not 18 of one model.  One of each of the 18 models.
CLERK:  What!  Huh… wha-why?
GRAD STUDENTS:  We’re from MIT.
CLERK:  Ooohhhhh…. right…
GRAD STUDENTS:  Uhh… Say, what’s your name?
CLERK:  Um… Jessica… like my name tag says.  You say you go to MIT?
GRAD STUDENTS:  Um, yeah, well Jessica, we’re having a party at our lab in Kendall Square this Friday.  If you and your friends want to come, that would be cool.    What do you say?
CLERK:  Uh, yeah right… how about I just get you your “18” different coffee makers.  Good luck.

 

… but we digress. Is product cost determined over 50% by manufacturing technique rather than design?   That seems a bit fishy.

Design for Existing Environment

With the literature review out of the way, the authors get to business and propose their hypothesis:

That consideration of decisions further down the chain are beneficial can be illustrated with a new ‘design for’ technique, Design For the Existing Environment (DFEE) that aims to take into account the capacity constraints of the actual company when designing the product… This contrasts with the conventional DfX techniques that take an idealized view of the state of the target manufacturing system.

They then describe a simulation that they built, which they hope takes into account inventory, profit, cash flow, missed shipments to customers, etc.  They run 5 scenarios through their simulation:

  1. A baseline with New Design 1, which lacks the capacity needed to meet customer demand
  2. A New Design 2 that uses DFEE to work within the existing manufacturing environment and can meet customer demand
  3. Making New Design 1 by buying more capacity (capital investment)
  4. Pre-building New Design 1 to meet demand
  5. Late delivery of New Design 1

Not surprisingly, the authors show that scenario 2, using their DFEE technique, beats the other alternatives, considering all the metrics that they calculate.

Thoughts from Hiller Associates

This article is from over ten years ago, but it is thought provoking.  Is 80% of the cost determined in the first 20% of design?  We don’t know.  We certainly believe that over 50% of the cost is determined by design.  In our professional experience, a large part is controlled by design, even allowing for the relationships between design, purchasing, manufacturing, and supply chain.  We’ve personally observed cases in which moving from one design to another allowed for the use of another manufacturing process that reduced total cost by 30%-70%.

Overall, the authors bring up a valid point that goes beyond the traditional ringing of the Total Cost of Ownership (TCO) bell.  They present a simulation in which they claim to calculate Total Cost of Ownership in a rigorous way.  The problem is that the calculation is too rigorous (it took them 4 hours per simulation).  That kind of time and, moreover, the complexity underlying such a model are likely not practical for most commercial uses.  A simpler estimation of Total Cost of Ownership is more appropriate.  In fact, Hiller Associates has helped our clients’ teams use flexible tools like Excel, along with a well-designed process, to estimate Total Cost of Ownership.  Is that an end point?  No, but it is a beginning.  Later, as a client’s culture, process, and team improve, more advanced Product Cost Management tools can be added into the mix.  And, we do mean TOOLS in the plural, because no one tool will solve a customer’s Product Cost Management and Total Cost of Ownership problems.

Hopefully, we will see some more academic work on the product cost problem.  But, until then, we’re still searching for the original DARPA Study.  Anyone know where it is?

References
  1. Design Determines 70% of Cost? A Review of Implications for Design Evaluation, Barton, J. A., Love, D. M., Taylor, G. D., Journal of Engineering Design, March 2001, Vol. 12, Issue 1, pp. 47-58
  2. Symon, R.F. and Dangerfield, K.J., 1980, Application of design to cost in engineering and manufacturing.  NATO AGARD Lecture Series No. 107, The Application of Design To Cost And Life Cycle Cost to Aircraft Engines (Saint Louis, France, 12-13 May; London, UK, 15-16), pp. 7.1-7.17
  3. Thomas, A.L., 1975, The FASB and the Allocation Fallacy, Journal of Accountancy, 140, pp. 65-68
  4. Ulrich, K.T., and Pearson, S.A., 1993, “Does product design really determine 80% of manufacturing cost?” Working Paper WP#3601-93 (Cambridge, MA: Alfred P. Sloan School of Management, MIT)
Feb 21, 2013
 

Last week we talked about the struggle in corporate strategy between Core Competency structures and Lean manufacturing. Whereas Core Competency thinking naturally leads to more outsourcing and extended supply chains, Lean manufacturing would advocate for a geographically tight supply chain, often with more vertical integration.

So, what does this have to do with Product Cost Management? The answer is “knowledge.”

The Lack of Manufacturing Knowledge In Design

One of the biggest complaints that I get from my clients is that their teams have lost, or are rapidly losing, product cost knowledge over the last 10 years. This is especially acute with design engineering teams, but it also affects other parts of the organization, such as purchasing. Years ago, the engineering curriculum at universities became so overloaded that manufacturing began to be pushed to the side in the education of most engineers (excepting the specific “manufacturing” engineering major). In fact, at most top engineering schools today, there is only one high-level survey course in manufacturing that is part of the required curriculum.

However, manufacturing and its evangelistic design missives (design-to-cost, design-for-manufacturability, design-for-assembly, etc.) were still learnable skills that young engineers and others could pick up on the job, over time. This was because most product companies were not only in the business of final assembly, but also in the business of sub-assembly, as well as manufacturing components from raw materials. These companies employed large numbers of manufacturing engineers who were resources for the design and purchasing teams. Even for parts and subassemblies that were purchased, the suppliers were likely close to the design centers and had long-standing relationships with the OEMs.

Designers and purchasing people could literally walk down to a manufacturing floor in an internal plant or drive a few minutes to a supplier. Conversely, the manufacturing engineer could walk upstairs to question engineering about a design. This is nearly impossible when suppliers are often in different countries and the firm that designs does little manufacturing itself.

The Effect of Lack of Manufacturing Knowledge on PCM Tools

One of the ways that industry has tried to remedy this situation is with sophisticated Product Cost Management software. This software codifies a lot of the tribal knowledge that resided in the manufacturing engineer's head. However, these tools assume that the users have (1) the will and (2) the skill set to use PCM software properly.

There is no doubt that the PCM and DFM/DFA tools today are far more advanced than they were even ten years ago.  However, the value one derives from a tool is not a function of the tool's capability alone.  There is a bottleneck problem in using a tool to its full potential.  We could say that the value the PCM tool actually gives to the organization equals:

Value of PCM Tool = (Will to use tool) * (Ability to use tool) * (Potential of the tool)
 

People often forget about the ability component, but this is true of any tool.  People buy expensive tools, e.g. golf clubs, hoping to improve their performance.  However, 90% of the time, they cannot even use the set of clubs they already have to its full potential.  Worse yet, more expensive or sophisticated tools are often more powerful and have the potential to give more value, but they are also less forgiving of errors.  If you don't know how to use them, they will HURT your performance.
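The multiplicative relationship above can be sketched in a few lines of code. This is a hypothetical illustration: the 0.0-to-1.0 ratings and the sample numbers are invented, not measured data.

```python
# Hypothetical illustration of Value = Will x Ability x Potential.
# Ratings are on a 0.0-1.0 scale; all numbers below are invented.

def pcm_tool_value(will, ability, potential):
    """Value realized from a PCM tool; any factor near zero kills the total."""
    for factor in (will, ability, potential):
        if not 0.0 <= factor <= 1.0:
            raise ValueError("each factor must be between 0.0 and 1.0")
    return will * ability * potential

# A powerful modern tool (potential 0.9) used by a team with eroded
# manufacturing skill (ability 0.3) can deliver LESS value than a crude
# spreadsheet (potential 0.5) used by a skilled team (ability 0.9).
modern = pcm_tool_value(will=0.7, ability=0.3, potential=0.9)
legacy = pcm_tool_value(will=0.7, ability=0.9, potential=0.5)
assert legacy > modern
```

The multiplication is the point: a weak factor cannot be averaged away by a strong one.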

In the past, with a Lean (vertically integrated and geographically close) supply chain, people used primitive PCM tools (often only spreadsheets).  On a scale of 1 (worst) to 10 (best), what I hear from industry, on average, is that users' capability to use the tool was higher, but the tool itself was limited and cumbersome.  The users, including design engineers, knew what decisions to make in the tool.  Currently, we have the opposite problem.  The PCM tools are better and much easier to use, but most design engineers are somewhat baffled by how to make what seem like the simplest of manufacturing input decisions in the tools.  The "Will to use the Tool" is another problem altogether that is beyond our discussion today.  However, my experience, in general, would be represented by the following table:

[Table: Tool effectiveness, past vs. present]

These results will vary from company to company, and even from design team to design team within a company.  Regardless, I wonder whether we are today at breakeven with where we were in the past in the value we get from PCM tools... or whether we have even lost ground.  The sad thing is that the PCM tools today ARE more user friendly and require less of an expert to use.  However, is the loss of manufacturing knowledge in design engineering so bad that it has overwhelmed the PCM tools' ease-of-use improvements?

What Can You Do to Help the Situation in Your Company?

Obviously, nothing is as good as the osmosis of manufacturing learning that occurs from a tightly coupled, geographically close, and vertically integrated supply chain.  However, the state of your firm’s supply chain is likely out of your control personally. There is some positive movement with the re-shoring and re-integration trends in industry, in general. However, there are steps you can take to improve the value your firm derives from PCM tools.

  • Send Engineers Back-to-School – do you offer (or better yet, mandate) classes in Product Cost Management, DFM/DFA, Target Costing, etc. for your design team? This should be part of the continuing education of the design engineer. I am not talking about training on the PCM tools themselves (although that is needed, too), but general classes on how different parts are made, the different buckets of cost, the design cost drivers for each manufacturing process, etc.
  • Design Cost Reviews – This is a very low-tech way to create big wins. Design reviews in which design engineers review each other's work and offer cost-saving ideas should be a regular facet of your PCM process. Even better: include the engineer's purchasing counterpart, company manufacturing experts, and a cost engineer to lead the review.
  • Embed Experts – Does the design team have at least one advanced manufacturing engineer or cost engineering expert for no more than 20 design engineers? If not, you should consider funding such a resource. Their salary will easily be paid for by (a) the cost reductions they help your team identify for products already in production, (b) the costs they help the team avoid in designs before production, and (c) the speed their efforts add to time-to-market by helping the team avoid late changes and delays due to cost overruns.

In the past, vertically integrated, geographically close supply chains helped Product Cost Management in a passive way.  The pendulum may be swinging back to that structure.  However, even if it does, don't rely on "passive" Product Cost Management to help. Take the active measures described above and get more value out of your PCM software investment.

Feb 19, 2013
 

IndustryWeek.com has just published a new article authored by Hiller Associates, titled:

Product Selection versus Product Development (What the product development team can learn from shopping on Amazon.com)

Synopsis:

The process of product selection that people do in their personal lives (e.g. shopping on Amazon) is strikingly similar to the process of product development that people encounter in their professional lives. Interestingly, people are often better at making the complex decisions associated with product selection than they are at similar decisions in product development.

There are three things we can learn professionally from our product selection experience on Amazon:

  • Make the priority of your product attribute needs clear.
  • Simultaneously and continuously trade off attributes to optimize the product's value.
  • Information about the product only matters if it is urgent, relevant and/or unique, not just “new.”

To read the whole article, click on the link above to go to www.IndustryWeek.com, or continue reading the full text below.

—————————————————————————————————————————————————————————-

Product Selection versus Product Development

What the product development team can learn from shopping on Amazon.com

We just finished the biggest shopping season of the year, from Thanksgiving to Christmas.  A lot of people were making a lot of decisions about where to spend their hard-earned money, mostly on gifts for the benefit of others.  During that same period, design engineers around the world were rushing to finish pressing projects, probably as fast as possible, because they had a lot of vacation left to use before the end of the year.

We make decisions every day in our personal and professional lives.  But, do we make decisions the same way in both worlds?   I don’t believe so.   People might argue that decisions made at work involve much more complexity.  After all, how much product development is really going on in most homes?  However, a lot of product selection is going on in people’s personal lives.  When considering complex product purchases, product selection starts to resemble product development in many ways.  Let’s take a look at how people (including design engineers) make decisions when shopping (product selection) vs. how they make decisions in the corporate world (product development).

Consider the ubiquitous Amazon.com.  Customers’ product selection experience on Amazon is overwhelmingly positive:  Amazon scores 89 out of 100 in customer satisfaction, the top online retailer score in 2012.  But product selection is *easy* right?  Wrong.  Look at Figure 1.  Product Selectors on Amazon must consider multiple product attributes and, moreover, these attributes mirror the considerations of a product developer very closely.   Product Selectors must consider performance, cost, delivery time, quality, capital investment, etc., without any salesman or other expert to guide them.   But the really amazing thing is that the Product Selectors using Amazon are able to both prioritize these product attributes and consider them simultaneously.

[Figure 1: Product selection on Amazon]

So, how do the same people who are Amazon customers typically consider product attributes in the corporate world?  Very differently is the short answer, as we see in Figure 2.  First of all, people at work do not tend to trade off product attributes simultaneously, but in series.  Moreover, each functional group in a product company (marketing, engineering, manufacturing, etc.) tends to be concerned with one dominating attribute, almost to the exclusion of the other product attributes.  How does the typical series-based consideration of product attributes common in the corporate world compare to the simultaneous trade-off approach that Amazon customers use?  Exact numbers are difficult to find, but some sources say only 60% of products are successful.  While not a precise comparison, the difference seems meaningful:  Amazon scores 89 of 100 on the customer satisfaction index, whereas product companies succeed with only 60% of their products.

[Figure 2: Product development in series]

Why is this?   Don’t people get college degrees to be great product engineers, buyers, etc.?  Don’t they get paid well to do these jobs?  In contrast, most people have limited knowledge of the products they select on Amazon and spend hundreds or thousands of their own dollars to buy them.

There are at least three reasons why the product selection and product development processes differ, and the corporation can learn from all three.

Clear attribute prioritization

Which product attribute is more important: time-to-market, product cost, or performance?  There's no right or wrong answer in general, but there is a right answer for any given situation.  The question is: does the product developer KNOW the priorities of the different attributes?  As an individual shopper, you may not explicitly write down the prioritization, but you know it.  Your preferences and value system are welded into your DNA, so the priority is clear.  However, companies are not individuals, but collectives of them.  It is the responsibility of the product line executive to make these priorities clear to everyone.

This is similar to requirements engineering, but at a strategic level.  Requirements are typically specific and only apply to a narrow aspect of the product.  I am talking about the high-level product attribute priority.  Ask your product line executive:  "In general, as we go through our development, what should typically 'win' in a trade-off decision?"  If the executive cannot give you a concise and simple answer, he has some work to do to earn his big salary.  For example, he should be able to say something such as: "We must meet all minimum requirements of the product, but we need to get to market as fast as possible, so we meet or beat our delivery date.  Oh, and we need to keep an eye on product cost."

The product executive needs to write the priorities down and share them with all.  In this case, he might write:

  1. First Priority: Time-to-market
  2. Constraint: minimum performance requirements met
  3. Second Priority:  Product Cost

This doesn’t mean the product executive will not change the priority later as the conditions change, but for now, the whole organization is clear on the priorities.  This sounds very simple, but most people in product development are unsure of what the clear priorities are.  Therefore, they make up their own.
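As a hypothetical sketch, a published priority list like the one above can even be encoded so trade-off decisions are made the same way every time. The attribute names, performance floor, and candidate designs below are all invented for illustration:

```python
# Hypothetical encoding of the executive's priorities: hard constraints
# are checked first, then ranked priorities decide (lower is better for
# both time and cost in this sketch).

PRIORITIES = ["time_to_market", "product_cost"]   # first entry wins ties
CONSTRAINTS = {"performance": 80}                 # minimum required score

def pick_design(candidates):
    """Return the candidate meeting all constraints that best serves
    the published priority order."""
    feasible = [c for c in candidates
                if all(c[a] >= floor for a, floor in CONSTRAINTS.items())]
    if not feasible:
        raise ValueError("no candidate meets the minimum requirements")
    return min(feasible, key=lambda c: tuple(c[p] for p in PRIORITIES))

designs = [
    {"name": "A", "performance": 85, "time_to_market": 9,  "product_cost": 100},
    {"name": "B", "performance": 90, "time_to_market": 12, "product_cost": 90},
    {"name": "C", "performance": 70, "time_to_market": 6,  "product_cost": 80},
]
best = pick_design(designs)  # C fails the performance floor; A wins on speed
```

The point is not the code; it is that once the priorities are written down, everyone in the organization resolves trade-offs the same way.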

Simultaneous attribute trade-off and value optimization

The second thing that we learn from Amazon shopping is to consider all the constraints and targets for product attributes simultaneously.  As we see looking at Figure 1 versus Figure 2, people naturally do this on Amazon, but organizations typically let a different priority dominate in each functional silo.  There are often arbitrage points (optimums in the design space) that allow excellent results in one attribute while sacrificing only minimally on another.  For example, the product executive may have said time-to-market is the first priority, but he is likely to be happy to sacrifice one unit of time-to-market for an increase of 10 units of performance.  This doesn't mean that the organization is changing its priorities; the strategic priorities discussed above simply function as a guiding light that the product development team pivots around to find the point of maximum value for the customer.
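One minimal way to sketch such a simultaneous trade-off is a weighted score, where the weights encode the arbitrage rates the team would accept. The weights and sample designs below are purely illustrative assumptions:

```python
# Illustrative weighted-score trade-off: the weights are assumptions that
# encode how much of one attribute the team would trade for another.

WEIGHTS = {"time_to_market": -10.0,  # each extra month costs 10 points
           "performance":      1.0,  # each performance unit adds 1 point
           "product_cost":    -0.5}  # each extra dollar costs 0.5 points

def score(design):
    """Score all attributes simultaneously instead of letting one silo's
    favorite attribute dominate in series."""
    return sum(WEIGHTS[attr] * design[attr] for attr in WEIGHTS)

# Sacrificing one month (-10 points) to gain 20 performance units (+20
# points) is a net win, even with time-to-market as the top priority.
fast   = {"time_to_market": 10, "performance": 60, "product_cost": 100}
better = {"time_to_market": 11, "performance": 80, "product_cost": 100}
assert score(better) > score(fast)
```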

Filter for relevant information, not just more or new information

Recent research is revealing the dangerous downsides of our always-on, always-new, always-more information society.  To be sure, social media, like all technologies, has the potential to add a lot of value.  The question is: do you control the information, or does it control you?  The research featured in Newsweek shows three problems that have grown in importance over the last decades:

  • “Recency” Overpowering Relevance – The human brain tends to vastly overweight new information in decisions vs. older information, and our modern digital world throws tons of new information at us.
  • Immediacy vs. Accuracy – the flip side of the first problem is that the real-time nature of our online world pushes people to make quick decisions.  Accuracy and thoughtfulness are seen as inefficient delays, especially in today's corporate environment.
  • Information Overload – More information does not lead to better decisions according to research.  Humans quickly reach a point where people make bad decisions because they have too much information.  They cannot process it all and do not properly filter it.  The brain literally shuts off certain processing centers, which causes bad decisions.

What can Your Product Development Team Do to Promote Better Decisions?

To answer this, let's first ask how Amazon is able to overcome these challenges.  To overcome the Recency vs. Relevance challenge, Amazon ensures that recency is not the default option for the display of customer reviews.  Instead, helpfulness of the review (relevance), as judged by other customers, is the default order.  Amazon does not push immediacy.  There are no annoying pop-ups offering a deal if you buy in ten minutes.  Certainly, Amazon does make buying easy and fast, but shopping at Amazon from the comfort of one's home is a relaxing experience that promotes thoughtfulness.  Finally, Amazon does not overload the customer with information.  This is no small task, given that Amazon may offer literally hundreds of items among which the customer must decide.  Amazon does this by presenting information on a huge variety of products in a standard way, and by providing simple and powerful filters that discard large amounts of extraneous information.

In order to overcome these new information challenges in your own product development work, ask yourself these three questions:

[Figure 3: Information relevance in product cost]

  1. Relevancy – How relevant is this new information?  If I had received this information on day one of my design cycle, how much of a difference would it have made in my decisions up until now?  Is the information relevant or just "new"?
  2. Urgency – Do we need to make this decision today?  How long do we have to consider the problem before the decision must be made?
  3. Uniqueness – Is this new piece of information truly unique or just a variation of something I already know?  If it is a repeat, file it mentally and/or physically with the original information, and forget about it.  If it is truly unique, consider whether the new information would be a primary influencer of your design or not.  Most of the time, information is just that: information, not relevant, unique knowledge.  In this case, once again, file and forget.
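The three questions above can be sketched as a simple filter. The item fields and example messages are hypothetical:

```python
# Hypothetical filter applying the three questions: relevance first,
# then urgency or uniqueness; everything else is filed and forgotten.

def worth_acting_on(item):
    """Keep an item only if it is relevant AND (urgent OR unique)."""
    if not item["relevant"]:
        return False
    return item["urgent"] or item["unique"]

inbox = [
    {"msg": "supplier price change on a key part",
     "relevant": True,  "urgent": True,  "unique": True},
    {"msg": "third repost of last week's memo",
     "relevant": True,  "urgent": False, "unique": False},
    {"msg": "industry gossip",
     "relevant": False, "urgent": True,  "unique": True},
]
actionable = [i["msg"] for i in inbox if worth_acting_on(i)]
# only the supplier price change survives the filter
```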

The world of online journals, social media, corporate social networks, and interconnected supply-chain applications is here to stay.  It brings a world of new opportunity for better and more up-to-date information for product development.  It also brings a deluge of extraneous information, and we need to accept this and learn to manage it.  Amazon.com manages these challenges well.  Your product development team can manage them too, using the principles outlined above.

Feb 11, 2013
 

There were a lot of comments last week on the article we posted with the title: Only 17% of Companies Meet Product Cost Target

Many people complained about the dearth of Design for Manufacturability knowledge among design engineers.  In the discussion, we also started to propose some solutions to overcome this problem.  However, one comment that sparked my interest addressed WHY design engineers often overtolerance parts, beyond "they don't know any better."  The comment, paraphrased, was:

A big problem we have is that we are making parts directly from the CAD model. A lot of Catia-based models have a general tolerance of +/- .005 [in.] on the entire part, including fillet radii and edge breaks. … These features have to be penciled off with a ball end mill instead of using a standard tool with a radius on it, which can kill profit on a job when you miss it when quoting.

That is a fascinating observation.  There is no doubt that the Product Lifecycle Management companies will be pleased as punch that people are finally taking their drum beating on "model is master" seriously.  FYI – I agree that the model should be master and that drawings should be generated from the 3D master data.  However, this improvement in PLM adherence highlights what happens when a new technology (a tool) is foisted upon a problem without understanding the current processes and outcomes that the incumbent tool is satisfying.  In this case, the old tool is paper drawings.  With the incumbent tool, there was a standard title block for companies, and that title block would give helpful bounding constraints such as:

Unless otherwise specified:

All dimensions +/- 0.01 inches

All radii and fillets +/- 0.02 inches

Etc.

That helpful and protective title block may not be there with a 3D, model-only strategy.  All the evangelism on "tolerance by exception" goes right out the window when the CAD system has default values that are overtoleranced by definition.  The CAD system itself becomes… The Evil Robot Overtolerancer.

The good news is that the Evil Robots can be controlled, and you don't even need anyone named Yoshimi to help you do it.  However, it will require some thought before you change the default tolerances in your CAD system.  Some considerations to think about are:

  • What were the default tolerances in the title block on your drawings when the drawing was master?
  • Can these tolerances be reduced?
  • How surgically will your CAD system allow you to set default tolerances?
  • Do you need different tolerance 'templates' depending on the primary manufacturing process?  E.g., tolerance defaults may be very different for a casting that is machined than for a piece of sheet metal.
  • How will you make your design engineers aware of these new default tolerances?

Whatever decision you make, make sure all the right people are at the table to make it together, including design engineering, the drafting team (if separate from design), purchasing, and manufacturing (including suppliers, if parts are made externally).  If done thoughtfully and correctly, the setting of default tolerances will bring the Evil Robot Overtolerancer under control.  If these changes are made in a vacuum or carelessly, you may find that you have made the Evil Robot 10x more powerful as an agent of chaos and profit destruction.
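One hedged way to picture the outcome of such a discussion is a set of default-tolerance templates, keyed by primary manufacturing process, that replaces the old title block. The process names and inch values below are invented placeholders, not recommendations:

```python
# Invented default-tolerance templates mirroring the old "unless
# otherwise specified" title block, keyed by manufacturing process.

TOLERANCE_TEMPLATES = {
    "machined_casting":    {"linear_in": 0.010, "fillet_radius_in": 0.060},
    "sheet_metal":         {"linear_in": 0.030, "fillet_radius_in": 0.030},
    "precision_machining": {"linear_in": 0.005, "fillet_radius_in": 0.020},
}

def default_tolerance(process, feature):
    """Return the default for a feature, so that tight tolerances stay
    the exception rather than the CAD system's blanket setting."""
    try:
        return TOLERANCE_TEMPLATES[process][feature]
    except KeyError:
        raise KeyError(f"no default for {process}/{feature}; "
                       "specify the tolerance explicitly") from None
```

The deliberate KeyError is the "tolerance by exception" principle: anything outside the agreed templates must be called out explicitly rather than inherited silently.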

You want to be dealing with the friendly Autobots, not the Decepticons, right?


That’s today’s Product Cost Killing Tip!

If you have Product Cost Killing tips that you would like to share, please send them to answerstoquestions@hillerassociates.com.
Dec 10, 2012
 

I just read an article on the site “Strategy + Business” called Building Cars by Design.  It caught my eye for two reasons.  First, the fact that a strategy site would deign to talk about engineering concepts was a pleasant surprise.  Second, the article discussed Design-to-Cost and Design-to-Value.

If we strip off the automotive context, the main premises of the article from a Product Cost Management point of view are as follows:

  • Design-to-Cost means designing a car to a specific absolute cost
  • Design-to-Cost is bad because it does not take into account “value”
  • Design-to-Value needs to be used instead of Design-to-Cost, i.e. the product company needs to think about which features customers value and then deliver them.

I applaud the authors for opening up a discussion on these topics.  However, I feel this article is incomplete and does not tell the whole story about these concepts.  It also doesn't really say how to do any of these things or point the reader to somewhere he can learn more.  Here are my specific suggestions for improvement.

  • Define Design-to-Cost properly, please – Maybe this is just a bit of nomenclature nit-picking, but I have never thought Design-to-Cost means designing a product to a specific cost.  That is what "Target Costing" advocates.  Design-to-Cost is about considering cost as a design parameter in your product development activities, i.e. the design engineer balances cost against other goals (performance, quality, etc.) with the goal of delivering any group of features at the lowest cost possible.
  • Define How to Calculate "Value" to the Customer – The authors say [paraphrasing] that a company should *just* find out what customers value and then design a product that delivers those things.  I am sure most companies do want to do this, but they don't know HOW.  I realize that how to calculate value is too complex for the article, but the authors don't even provide a resource for the reader to learn more.  For example, I studied under Dr. Harry Cook and I am a friend and business colleague of Dr. Luke Wissmann.  At the very least, the authors could have pointed the reader to a book on the subject, such as the one Wissmann and Cook wrote:  Value Driven Product Planning and Systems Engineering.
  • What if the Customer Can't Afford the Value? – It's difficult to know what the authors mean (even theoretically) by design-to-value.  Regardless, the authors seem to assume that the customer can always afford this value, but I don't believe this is true, especially in a second- or third-world context, which is the focus of the article.

Regarding the last point, I will do my best to illustrate the problem.  Take a look at the figure below, in which I graph the value the customer gets from the product versus the price the customer pays for it.  Presumably, the authors of the article are saying that customers would be willing to pay up to the point where the slope of the value/price curve decreases substantially (the curve flattens).  But that assumes the customer has the money to spend – kind of a Field of Dreams strategy, i.e. "If you build in value, they will pay."

[Figure: Product value versus features and cost]

But, what if the customer truly does value a set of features, but he just doesn't have the funds to purchase all of the value?  In this case, we have to concede that there is a Minimum Viable Product (MVP) needed for the customer to purchase.  This term, MVP, is most often used in software development and start-ups.  It is the minimum set of features and functionality that a product must have to deliver ANY value to the customer.  If you can't master design-to-cost so that your product both includes the MVP features the customer needs and allows you to make adequate profit under your customer's price ceiling, the product will not be successful.

If the customer has fewer funds than the MVP requires, he can't afford the product.  Similarly, even if the customer has more funds than the MVP requires, but less than the point where the value/cost curve flattens, you cannot employ a blind strategy of maximizing value to the flattening point of the curve and pricing near it.  You will still have to set your price below your customer's funds to succeed.
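The affordability logic above can be sketched in a few lines; the cost, margin, and funds figures are hypothetical:

```python
# Hypothetical affordability check: the price is capped by the customer's
# funds no matter where the value curve flattens. All figures invented.

def viable(mvp_cost, margin, customer_funds):
    """True only if the MVP can be sold with adequate margin at or
    below the customer's price ceiling."""
    minimum_price = mvp_cost * (1.0 + margin)
    return minimum_price <= customer_funds

# An $80 MVP at 20% margin fits under a $100 ceiling; a $90 MVP does not.
assert viable(mvp_cost=80, margin=0.20, customer_funds=100)
assert not viable(mvp_cost=90, margin=0.20, customer_funds=100)
```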

So, are the authors of the article talking about design-to-value to the point where the value/price curve flattens, or to the point where the customer's price ceiling intersects the curve?

Anyone? Anyone?  Bueller? Bueller? Bueller?

 

Jul 9, 2012
 

It's been a couple of weeks since we discussed the Voices series, so if this post is interesting to you, you may want to go back and read the first two:

In these first two articles we introduced several of the voices that are always present in the Product Cost Management conversation, including:

  • The Voice of Hopefulness – the Pollyanna voice that assumes product cost will just work itself out in the end.  It is a voice of justification to ignore Product Cost Management, because the team is just too busy at XYZ point in the development process to seriously consider product cost.  Hope is NOT a strategy.
  • The Voice of Resignation – the nihilist voice that assumes you must accept high prices because the three suppliers that purchasing quoted gave you pricing far higher than what seems reasonable.
  • The Voice of Bullying – the seemingly unreasonable scream of the customer telling you what your product should cost — not based on reality, but based on the customer’s own financial targets.

However, there is another voice in the conversation that can bring some reason to the cacophony.  It is a voice of reason – the Voice of Should-cost.

Buck Up, Cowboy: The Voice of Should-cost Can Help

Should-cost is just what it sounds like, using one or more techniques to provide an independent estimate of what the cost of a part or product “should” be.  The question is, what does “should” really mean?  For many, the definition depends on the type of cost being calculated, as well as personal should-cost calculation preferences.   I will provide my own definition here, mostly targeted at providing a should-cost for a discretely manufactured part.

Should-Cost – The process of providing an independent estimate of cost for a part, assembly, component, etc.  The should-cost is based on a specific design that is made with a specific manufacturing process at a supplier with a specific financial structure.  Alternatively, the should-cost is calculated assuming a fictitious supplier in a given region of the world that uses the best manufacturing technology and efficiency, operating at maximum sustainable capacity.

I realize that this is a broad definition, but as I said, it depends what you want to estimate.  For instance, do you know the supplier’s exact manufacturing routing, overhead and labor rates, machine types, etc.?  In this case, do you want to estimate what it “should” cost to manufacture the part under these conditions?  OR… do you want to know what the cost “should” be for a new supplier who is well-suited to manufacture your design and has a healthy but not overheated order book?  Although you could make many other assumptions, the point is:   KNOW YOUR ASSUMPTIONS.  You will note that I said nothing about margin.  Some people call this a “Should-Price,” while others call it a “Should-Cost” referring to what they will pay vs. what the part costs the supplier to make.  The only difference is that you will also make an assumption for a “reasonable” margin for a Should-Price.

The important point is that the team relying on the should-cost information must define the scenario for which they want a should-cost estimate.  There is nothing wrong with wanting an answer for all these scenarios. In fact, it’s preferable. Run the calculation / estimate more than once.
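As a sketch of running the estimate under more than one set of assumptions, consider the following; the scenario names, rates, and the simple cost formula are illustrative assumptions, not a real should-cost engine:

```python
# Simplified cost model run under explicitly named scenarios, so every
# estimate carries its assumptions. Names, rates, and the cost formula
# are invented for illustration.

SCENARIOS = {
    "incumbent_supplier": {"labor_rate": 38.0, "overhead": 1.8, "margin": 0.12},
    "best_in_region":     {"labor_rate": 22.0, "overhead": 1.4, "margin": 0.10},
}

def should_price(material, labor_hours, s):
    """Cost = material + burdened labor; price adds the assumed margin."""
    cost = material + labor_hours * s["labor_rate"] * s["overhead"]
    return round(cost * (1.0 + s["margin"]), 2)

estimates = {name: should_price(material=12.50, labor_hours=0.4, s=s)
             for name, s in SCENARIOS.items()}
# each number is labeled by its scenario; there is no single "true" answer
```

KNOW YOUR ASSUMPTIONS, in code form: every estimate is tied by name to the scenario that produced it.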

Should-cost Should Be a Choir, Not a Solo Act

Manufacturing cost is a very tricky thing to calculate.  I often say that the true cost of the economic resources to make a part or product is a number known but to God.  Put statistically, you can't know the true mean or standard deviation of a population; you can only estimate them from the samples that you take.  People take two common approaches to should-cost.

The Pop Star Solo Act

The popular solution that too many people pursue is the one pictured below.

[Image: There's no easy button to should-cost]

They want the easy button: the single source of truth.  They want the plasticized, overproduced, solo pop star version of should-cost, i.e. the easy-button tool.  There's nothing wrong with this, and there are some really good should-cost solutions available, but none of them is infallible.  In addition, it is not appropriate to put the same should-cost effort into every part or assembly in a problem; one should focus where the money is.  However, too many people, especially cost management experts, become sycophants of one particular tool to the exclusion of others.

[Figure: The lonely world of a solo should-cost voice]

 

Looking at the diagram above, you can see what the landscape looks like when you make your comparisons to a single point in cost space. It is an uncertain, scary world when you have only one point of reference.  In this case, all one can do is try to force a supplier to match the should-cost output of your favorite tool.

The Andrews Sisters, Competitive Trio Quoting

The other very popular approach comes from the purchasing department:  three competitive quotes.  If the auto-tuned single pop star should-cost is too uncertain, purchasing will listen to a trio instead.  Why three quotes?

[Figure: The trio of should-cost quoting]

No one seems to know, but in EVERY purchasing department with which I have ever worked, three shall be the number of the quoting, and the number of the quoting shall be three.  [If you are an engineer, you know my allusion.  If not, watch the video!]  The trio of quotes in the diagram do help clarify the picture a little, but there is still too much uncertainty and what I call "commercial noise" to really believe that the quotes alone bound the should-cost plus a reasonable margin in reality.

An Ensemble of Should-Cost Estimates

Returning to our statistics example, one of the first things you learn in statistics is that it takes roughly 30 samples to characterize a bell-curve distribution; at that point, you can start to approximate the true mean and standard deviation of the actual population.  I am not saying that one needs 30 estimates of should-cost to triangulate on the true cost, but you should get as many as you can within a reasonable time frame.  Have a look at the diagram below to see this illustrated.  Instead of the single pop star approach or the Andrews Sisters trio of quotes, hopefully what you get is a well-tuned small chorus of voices that starts to drown out the Voices of Resignation, Hope, and Bullying.  The chorus of should-cost estimates starts to bound the "true" should-cost of the part or product and can give the team a lot more confidence.

[Figure: Chorus of should-cost estimates]
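A minimal sketch of triangulating from an ensemble of estimates, using invented sample numbers:

```python
from statistics import mean, stdev

# Invented sample: seven independent should-cost estimates ($/part) from
# quotes, parametric models, and analogous parts.
estimates = [4.10, 4.45, 3.90, 4.70, 4.25, 4.05, 4.55]

center = mean(estimates)   # the chorus's consensus
spread = stdev(estimates)  # how loudly the voices disagree
low, high = center - spread, center + spread

# Negotiate toward `center`; treat quotes far outside [low, high] as
# commercial noise worth challenging rather than accepting.
```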

Sometimes the team does not have time to assemble all the voices of should-cost, and not every part or product is worth assembling the full choir for.  More often than not, though, the organization is either unaware of the should-cost voices at its disposal or just too lazy to assemble them.

Don’t let your organization be lazy or sloppy with respect to should-cost, and remember that the best music is made when groups of instruments and voices work together, not when one person sings in isolation.

 

p.s. Bonus PCM points if you can guess which a cappella group is pictured in the thumbnail to the post.
