Moderator

Feb 14, 2013
 

I just read the following article and was smiling wryly while experiencing a BFO (blinding flash of the obvious).

The Death of Core Competence Thinking

This article talks about the slow… the FAR too slow… death of "Core Competence" thinking.  This is the concept that organizations should focus only on the 1-2 things they are best at, e.g. marketing, and "outsource" everything else.  This idea was pushed with a passion after the release of "The Core Competence of the Corporation" by C.K. Prahalad and Gary Hamel in the Harvard Business Review.  I guess as an HBS graduate myself I now have to bear the sins of my forefathers?  Actually, these concepts go back further, to the work of the famous 19th century British economist David Ricardo, who formalized the idea of "Comparative Advantage."

[Figure: comparative advantage of Country A vs. Country B in producing guns and wheat]

Comparative Advantage says that there is a difference between an absolute advantage in making something and a relative advantage.  For example, in the graph to the right, we see that Country A has an absolute advantage in manufacturing BOTH guns and wheat, but the gap (the Comparative Advantage) between Country A and Country B is bigger in guns than in wheat.  Therefore, Ricardo would say that Country A should put ALL its resources into making guns, until diminishing returns lower its comparative advantage below Country B's in producing wheat.

If the chart isn't clear, here's a great video explanation of Comparative Advantage set to a background soundtrack of AC/DC; I bet your microeconomics class was never this fun.
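If you would rather see the arithmetic than the chart, here is a minimal sketch in Python of how comparative advantage falls out of relative opportunity costs. All the output numbers are invented for illustration; they are not from the chart above.

```python
# A minimal sketch of Ricardo's arithmetic; all output numbers are invented.
# Country A is absolutely better at BOTH goods, but relatively better at guns.
output_per_worker = {
    "A": {"guns": 12, "wheat": 8},
    "B": {"guns": 3, "wheat": 6},
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone for each unit of `good` produced."""
    return output_per_worker[country][other] / output_per_worker[country][good]

for country in output_per_worker:
    for good, other in [("guns", "wheat"), ("wheat", "guns")]:
        oc = opportunity_cost(country, good, other)
        print(f"Country {country}: 1 unit of {good} costs {oc:.2f} units of {other}")

# A's opportunity cost of guns (0.67 wheat) beats B's (2.00), so A has the
# comparative advantage in guns; B's opportunity cost of wheat (0.50 guns)
# beats A's (1.50), so B should specialize in wheat and the two should trade.
```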

You can see how close Ricardo's Comparative Advantage is to Core Competence theory.  In fact, Core Competence builds on Ricardo's work as its foundation.

The opposite of core competence thinking is the "Conglomerate" (a company that makes many products in many different industries) and/or "Vertical Integration" (owning the whole supply chain for a given product).  Core Competence theory rails against these strategies, and American business bought it hook, line, and sinker.  Why?  Because Wall Street bought it hook, line, and sinker first.  Why did Wall Street buy it?

  1. When Conglomerates are split up into individual companies, they must report their financials individually, so Wall Street gets better information.
  2. When Vertical Integration is stopped and companies begin outsourcing, they often see an immediate (though often short-lived) improvement in Cost of Goods Sold.  And even if they don't see this product cost improvement, Wall Street will often reward the company with a higher stock price anyway.

Case closed, right?

Wrong.

Until the last ten years, companies were terrible at calculating, realizing, and internalizing the Total Cost of Ownership (TCO) of outsourcing.  That is, product cost is not only the price paid for the part; it also includes shipping, logistics, the quality and inventory risk of having six weeks of parts on the ocean, inventory carrying costs, the friction of dealing with foreign cultures in different languages across vastly different time zones, etc.  Even when firms began to calculate these costs, TCO sparked a cultural conflict with the proponents of outsourcing and core competence.
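To make the point concrete, here is a back-of-the-envelope TCO sketch using the cost buckets named above. The function, the bucket names, and every figure are hypothetical; real models add duty, tooling, expediting, travel, and more.

```python
# A rough, hypothetical TCO sketch; all figures are invented for illustration.

def total_cost_of_ownership(piece_price, freight_per_part, weeks_of_inventory,
                            annual_volume, carrying_rate=0.20,
                            quality_risk_per_part=0.0, overhead_per_part=0.0):
    """Approximate landed cost per part, not just the quoted piece price."""
    # Carrying cost of parts tied up in the pipeline (e.g., 6 weeks on the
    # ocean), charged at an annual inventory carrying rate.
    pipeline_value = piece_price * annual_volume * (weeks_of_inventory / 52)
    carrying_per_part = pipeline_value * carrying_rate / annual_volume
    return (piece_price + freight_per_part + carrying_per_part
            + quality_risk_per_part + overhead_per_part)

domestic = total_cost_of_ownership(piece_price=10.00, freight_per_part=0.10,
                                   weeks_of_inventory=1, annual_volume=100_000)
offshore = total_cost_of_ownership(piece_price=8.00, freight_per_part=0.75,
                                   weeks_of_inventory=6, annual_volume=100_000,
                                   quality_risk_per_part=0.80,  # rework, in-transit risk
                                   overhead_per_part=0.90)      # time zones, language, travel

print(f"Domestic TCO: ${domestic:.2f} vs. offshore TCO: ${offshore:.2f}")
# In this invented example, the 20% piece-price "saving" disappears entirely
# once the other buckets are counted.
```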

Enter the low-key and industrious Japanese.  Actually, the Japanese had been a force on the scene since the 1980s, but people did not really internalize the Japanese concept of "Lean" manufacturing (click for video explanation) until the mid-1990s.  Lean has many useful concepts, but two striking features of Lean manufacturing are (1) having one's suppliers geographically as close as possible to the upstream plant and (2) often owning part or all of those suppliers.

In other words Lean ~= Vertical Integration.

But how does Vertical Integration = better Product Cost Management?  Check back next week!

(or, you can subscribe by email, Facebook, Twitter, RSS, or LinkedIn to the right, and we'll check back for you!)

 

Feb 11, 2013
 

There were a lot of comments last week on the article we posted titled: Only 17 Percent of Companies Meet Product Cost Targets

Many people complained about design engineers' dearth of knowledge of Design for Manufacturability.  In the discussion, we also started to propose some solutions to overcome this problem.  However, one comment that sparked my interest addressed WHY design engineers often overtolerance parts, going beyond "they don't know any better."  The comment, paraphrased, was:

A big problem we have is that we are making parts directly from the CAD model. A lot of Catia-based models have a general tolerance of +/- .005 [in.] on the entire part, including fillet radii and edge breaks. … These features have to be penciled off with a ball end mill instead of using a standard tool with a radius on it, which can kill profit on a job when you miss it when quoting.

That is a fascinating observation.  There is no doubt that the Product Lifecycle Management companies will be pleased as punch that people are finally taking their drum beating on "model is master" seriously.  FYI – I agree that the model should be master and that drawings should be generated from the 3D master data.  However, this improvement in PLM adherence highlights what happens when a new technology (a tool) is foisted upon a problem without understanding the current processes and outcomes that the incumbent tool satisfies.  In this case, the old tool is the paper drawing.  With the incumbent tool, there was a standard title block, and that title block gave helpful bounding constraints such as:

Unless otherwise specified:

All dimensions +/- 0.01 inches

All radii and fillets +/- 0.02 inches

Etc.

That helpful and protective title block may not be there with a 3D, model-only strategy.  All the evangelism on "tolerance by exception" goes right out the window when the CAD system has default values that are overtoleranced by definition.  The CAD system itself becomes… The Evil Robot Overtolerancer.

The good news is that the Evil Robots can be controlled, and you don't even need anyone named Yoshimi to help you do it.  However, it will require some thought before you change the default tolerances in your CAD system.  Some considerations to think about are:

  • What were the default tolerances in the title block on your drawings when the drawing was master?
  • Can these tolerances be reduced?
  • How surgically will your CAD system allow you to set default tolerances?
  • Do you need different tolerance "templates" depending on the primary manufacturing process?  E.g., tolerance defaults may be very different for a casting that is machined than for a piece of sheet metal (see the sketch after this list).
  • How will you make your design engineers aware of these new default tolerances?
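As a thought-starter, here is a hypothetical sketch of what such process-specific default-tolerance templates might look like, mirroring what the old drawing title block provided. The process names, feature classes, and all values are invented for illustration; yours will differ.

```python
# Hypothetical default-tolerance "templates," keyed by primary manufacturing
# process. All values (inches) are illustrative only.
DEFAULT_TOLERANCES_IN = {
    "machining":        {"linear": 0.010, "radii_and_fillets": 0.020},
    "machined_casting": {"linear": 0.015, "radii_and_fillets": 0.060},
    "sheet_metal":      {"linear": 0.030, "radii_and_fillets": 0.060},
}

def default_tolerance(process: str, feature_class: str) -> float:
    """Look up the default +/- tolerance so only true exceptions need an
    explicit tolerance called out in the model."""
    try:
        return DEFAULT_TOLERANCES_IN[process][feature_class]
    except KeyError:
        raise ValueError(f"No default for {process}/{feature_class}; "
                         "tolerance it explicitly in the model.")

print(default_tolerance("sheet_metal", "radii_and_fillets"))  # 0.06
```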

Whatever decision you make, make sure all the right people are at the table to make it together, including design engineering, the drafting team (if separate from design), purchasing, and manufacturing (including suppliers, if parts are made externally).  If done thoughtfully and correctly, setting default tolerances will bring the Evil Robot Overtolerancer under control.  If these changes are made in a vacuum or carelessly, you may find that you have made the Evil Robot 10x more powerful as an agent of chaos and profit destruction.

You want to be dealing with the friendly Autobots, not the Decepticons, right?

[Image: Transformers, Autobots vs. Decepticons]

That’s today’s Product Cost Killing Tip!

If you have Product Cost Killing tips that you would like to share, please send them to answerstoquestions@hillerassociates.com.


Feb 4, 2013
 

People complain about the profitability of products, especially early in production, but how often do products actually miss their profitability targets at launch?

According to the latest research by Hiller Associates, most companies miss their product cost targets.  We asked almost forty people from a variety of corporate functions, "How often do you meet or beat product cost targets at launch?"  The results follow the familiar 80/20 rule of many business phenomena: only 17% of respondents said that their companies meet cost targets Often or Very Often.

[Figure: how often companies meet product cost targets at launch]

That is not an impressive showing.  We would not accept 17% as a pass completion percentage from an NFL quarterback.  It's not even a good batting average in baseball.  So why do we put up with it in our companies?  It's also interesting that almost the same percentage of respondents (15%) don't know enough about product profitability to even guess how well their companies are doing.

Companies are understandably careful about releasing actual product profit numbers.  Still, it would be great to have a more in-depth academic study done, in which actual financials were analyzed to answer the same question.

How often does your company meet its product cost targets?  Does anyone in your company know?  These are questions you cannot afford not to ask.  Is your firm in the 17%… or the 83%?  If you are in the 83%, consider starting or improving your efforts in Product Cost Management.

 

 

Jan 31, 2013
 

One of my colleagues in the world of product cost and design, Mike Shipulski, just posted the following:

The Middle Term Enigma


The general synopsis of it is:

  1. Firms focus more and more on the short term.
  2. The "short term" keeps getting shorter and shorter.
  3. Short-term focus leads to minimization and typically damages long-term success.
  4. On the other hand, firms (especially execs) fear the long-term plan as expensive and risky.
  5. So why not focus on the "medium term"?

Our Opinion:

Mike is right.  Short-term thinking kills companies and, paradoxically, actually wastes a lot of time and money.

I would offer the following addition:  Short, Medium, and Long term all have their places, but there has to be a thoughtful and maintained plan for each.  You can't just make a plan today and then look at it again in a year.  Every 2-3 months, you should be re-assessing the plan and adjusting it accordingly.  However, you should not see whipsawing, just gentle, organic fine-tuning as you gain more information.

I also would like Mike to define the Short, Medium, and Long term.  I realize that this changes from product to product, but a general guideline would be helpful.

 

Jan 29, 2013
 

Hiller Associates recently gave the keynote speech at aPriori's first customer conference.  It was a great opportunity both to teach and to learn from experts who came from a wide range of industries and geographies.

Hiller Associates' President, Eric Hiller, discussed several topics, of which we'll mention two here.  The entire presentation can be downloaded for FREE.  Just click on the slide below to get the presentation:  Best Practices for Starting Your Product Cost Management Journey or Improvement.

Variance in Cost Numbers

One of the main themes discussed was the possibility of getting an "accurate" cost, meaning: how possible is it to get a cost that is within a certain percentage of a fixed reference point, such as a supplier quote?  There are several ways to look at this problem that we may discuss on this blog in subsequent weeks.

[Photo: Eric Hiller presenting the keynote speech at the aPriori Customer Conference]

However, in summary, the presentation asked the question:  what cost variance is inherent in your system already?  For example, if your three quotes from suppliers span a range of 30% from highest to lowest, is it realistic to expect the cost you calculate in a product cost management software to land closer than 30% to any one randomly chosen quote?
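As a quick illustration, with invented numbers, the spread already present in a set of quotes is easy to quantify before you judge a cost model against any single quote:

```python
# Quantifying the spread already present in a set of quotes (numbers invented).
quotes = [4.20, 4.95, 5.46]  # three supplier quotes for the same part, in $

spread = (max(quotes) - min(quotes)) / min(quotes)  # high-to-low range
mean = sum(quotes) / len(quotes)
print(f"Quote spread: {spread:.0%} of the lowest quote; mean quote ${mean:.2f}")
# Prints: Quote spread: 30% of the lowest quote; mean quote $4.87
# If the quotes themselves span ~30%, holding a cost model to within a few
# percent of any single, randomly chosen quote is an unrealistic benchmark.
```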

It was refreshing to see how open the audience was to these concepts.  Reactions to the variance concept ranged from wide-eyed amazement from people who were new to the cost management field to thoughtful reflection from the veterans.  In fact, the veterans reacted like people who had been reminded of a truth they knew all along.  Often, such common truths are forgotten amid the day-to-day challenges of keeping a company profitable.  We call this having a "blinding flash of the obvious," a BFO.  Everyone in the room had that BFO, and no one wanted to argue about it.  Instead, many comments throughout the conference further explored the concept.

Culture is the biggest loser

Another theme of the presentation was driven by the latest research in Product Cost Management done by Hiller Associates.  Those who follow us regularly know that we segment problems in our consulting work into four root causes:  Culture, Process, Roles/People, and Tools.

Our latest research shows that cultural problems are the clear bottleneck in most firms' Product Cost Management journeys.  The respondents overwhelmingly agreed.  When Eric asked the attendees which area was their firm's biggest PCM bottleneck, the conference participants voted as follows, based on a rough count of hands in the air:

[Slide: Best Practices for Product Cost Management. Click to get the full presentation.]

  • Culture: 60-70%
  • Process: 20-30%
  • People/Roles: 0-5%
  • Tools: 5-10%

That's fairly shocking at a conference whose organizer is a Product Cost Management TOOL vendor.  [Next time HA will have to set our honorarium higher for taking the pressure off of any problems with the vendor's product!]  Joking aside, culture is obviously the biggest problem, and it is not an easy thing to change.  In fact, companies often buy a PCM software tool hoping that it will somehow magically fix their bigger cultural problems.

It reminds one of obesity problems.  Many companies have a culture of binging on product cost during design.  In purchasing and manufacturing, they continue with cost obesity denial, not knowing what the cost calorie count is until the parts arrive at the door with an invoice.  However, instead of changing their cost eating and exercise habits, they look for a magical cure in the form of a software tool.  Let's call this "the shake-weight approach" to product cost management.

We're not disparaging the shake-weight or any other home exercise equipment.  Certainly, all home exercise equipment can help you lose weight, just as we are sure that all of the PCM tools can help one reduce cost.  But you have to use these tools regularly and properly.  PCM software tools are too much like home exercise equipment: people buy them thinking that the tool will magically solve cost obesity.  They use the tool twice, and then it sits in the corner unloved, unused, and unmaintained… and yet people wonder why they are still product cost obese!  It's not the tool that's the problem; it's your culture.  Much like changing your eating lifestyle, changing the PCM culture is really hard and tricky to do.  That's why cultural issues are at the forefront of most of the engagements we do with clients at Hiller Associates.

However, it was refreshing to see that the attendees at aPriori's conference did seem to understand this problem, or at least were very open to the idea.  So maybe we are making progress on this point.  Or maybe HA needs a TV show, "The Biggest Cost Loser," in which Hiller Associates works with companies to increase product profit through weekly product cost "weigh-ins."  What TV viewer wouldn't watch that kind of riveting drama…

Now, get out there and do some product cost push-ups!

If you would like to see the entire presentation from the conference, just click on the slide image above and get the presentation:  “Best Practices for Starting Your Product Cost Management Journey or Improvement.” 

 

Jan 29, 2013
 

 

If you would like to download the presentation, "Best Practices for Starting Your Product Cost Management Journey or Improvement," for FREE, please fill out the form below and click the button.

A link to the .pdf of the presentation will then appear below for you to click.


After you fill out the form and click "Download File," the link to the file will appear above.

Jan 16, 2013
 

It's one of the most famous studies in the world of product development and probably the most famous study in the history of Product Cost Management.  It was done in the 1960s (reportedly) by DARPA (the Defense Advanced Research Projects Agency of the United States government).  It's so famous that it is typically referred to as "The DARPA Study."  Some claim that the entire software category of Product Lifecycle Management has used the study as its primary justification for being.

But, if it is so famous… where is it?

I have searched for the study a couple of times on the internet.  It is easy to find "The DARPA Study" of product cost referenced loosely, but I could not find the original study, or even a formal citation… and as we know, if you can't find something after thirty minutes of searching on Google, it does not exist, right?

Fortunately, I was able to dust off my engineering master's thesis and find the following studies that corroborate DARPA's alleged claim.

[Figure: cost committed vs. cost spent across the product development cycle]

DARPA's claim was bold and powerful; if we paraphrased it, we would say:

“80% of a product’s cost is determined in the first 20% of activities in design and development”

I have even found a great webpage that lists FOUR PAGES of references to studies corroborating DARPA's results:

Design Phase Cost Rational

… but it does not have a reference to the original DARPA Study.

So, can anyone help?  Can someone send me:

  1. A link to “The DARPA Study” on the web? AND/OR
  2. A .pdf of the study? AND/OR
  3. At least the proper academic citation to the study?

Anyone, Anyone, Bueller, Bueller, Bueller…

[Figure: cost committed vs. cost spent for product cost]

 

Jan 2, 2013
 

I was just reading a really interesting article by Matthew Littlefield called Cost of Quality Definition.  I applaud the article for several reasons.  It is straightforward, clear, and short.  I especially like that Matthew acknowledged that Cost of Quality lies not only in the negative things that are avoided (warranties, recalls, scrap, etc.), but also in the costs paid to prevent those negative consequences (the costs of appraisal and prevention).

This sounds like a trivial thing, but I remember living through the 1990s, when some academics and practitioners had a cultic obsession with quality.  They would hammer you with the idea of the cost of "poor" quality.  As a university student and engineer, I would say, "Well, yes, but obviously you pay something to ensure good quality and avoid recalls, customer satisfaction loss, etc., right?  I mean, there is a level of quality that is not worthwhile attaining, because the customer does not value it and will not pay for it."  The quality-obsessed would look at me like I had just uttered vile heresy and inform me that good quality NEVER cost the organization anything – only poor quality did.  Mr. Littlefield's definition makes a lot more sense.

What does not make sense is Mr. Littlefield's engaging but definition-less graph in the article.  The axes are not labeled, either with specific financial units or with general conceptual terms.  Furthermore, in the paragraphs before and after, his discussion is about the trade-off needed to find the minimum between the Cost of Good Quality and the Cost of Poor Quality… but the graph has three axes?  Maybe one axis is Total Cost and the others are Cost of Good Quality and Cost of Poor Quality?
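For what it's worth, here is a sketch, with invented curve shapes and constants, of the classic two-curve trade-off that the graph presumably intends; if the article's graph means something else, the question below stands.

```python
import numpy as np

# Invented curve shapes and constants, purely to illustrate the trade-off.
q = np.linspace(0.01, 0.99, 99)   # quality level, from poor (0) to perfect (1)
cost_good = 50 * q / (1 - q)      # appraisal + prevention: explodes near perfection
cost_poor = 400 * (1 - q)         # warranty, recalls, scrap: falls as quality rises
total = cost_good + cost_poor     # the curve whose minimum you want to find

i = int(np.argmin(total))
print(f"Minimum total cost ≈ {total[i]:.0f} at quality level ≈ {q[i]:.2f}")
```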

Can someone explain?

Dec 18, 2012
 

 

Hiller Associates has teamed up with CIMdata, a global leader in Product Lifecycle Management consulting and PLM industry analyst coverage, to bring you the first annual Product Cost Management Survey.

It takes less than 10 minutes, and you will be rewarded with a free copy of the results and a report on what we learn about product cost.

Product Cost Management Survey

http://www.esurveyspro.com/Survey.aspx?id=03984de4-e017-455d-914b-507bb529c308

 

Dec 10, 2012
 

I just read an article on the site “Strategy + Business” called Building Cars by Design.  It caught my eye for two reasons.  First, the fact that a strategy site would deign to talk about engineering concepts was a pleasant surprise.  Second, the article discussed Design-to-Cost and Design-to-Value.

If we strip off the automotive context, the main premises of the article from a Product Cost Management point of view are as follows:

  • Design-to-Cost means designing a car to a specific absolute cost.
  • Design-to-Cost is bad because it does not take into account "value."
  • Design-to-Value should be used instead of Design-to-Cost, i.e., the product company needs to think about what features customers value and then deliver them.

I applaud the authors for opening up a discussion on these topics.  However, I feel the article is incomplete and does not tell the whole story about these concepts.  It also doesn't really say how to do any of these things or point the reader somewhere he can learn more.  Here are my specific suggestions for improvement.

  • Define Design-to-Cost properly, please – Maybe this is just a bit of nomenclature nit-picking, but I have never thought Design-to-Cost means designing a product to a specific cost.  That is what "Target Costing" advocates.  Design-to-Cost is about considering cost as a design parameter in your product development activities, i.e., the design engineer balances cost against other goals (performance, quality, etc.), with the goal of delivering any group of features at the lowest cost possible.
  • Define How to Calculate "Value" to the Customer – The authors say [paraphrasing] that a company should *just* find out what customers value and then design a product that delivers those things.  I am sure most companies do want to do this, but they don't know HOW.  I realize that how to calculate value is too complex for the article, but the authors don't even provide a resource for the reader to learn more.  For example, I studied under Dr. Harry Cook and am a friend and business colleague of Dr. Luke Wissmann.  At the very least, the authors could have pointed the reader to a book on the subject, such as the one Wissmann and Cook wrote:  Value Driven Product Planning and Systems Engineering.
  • What if the Customer Can't Afford the Value? – It's difficult to know what the authors mean (even theoretically) by design-to-value.  Regardless, the authors seem to assume that the customer can always afford this value, but I don't believe this is true, especially in a second- or third-world context, which is the focus of the article.

Regarding the last point, I will do my best to illustrate the problem.  Take a look at the figure below, in which I graph the value the customer gets from the product versus the price the customer pays for it.  Presumably, the authors are saying that customers would be willing to pay up to the point where the slope of the value/price curve decreases substantially (the curve flattens).  But that assumes the customer has the money to spend – kind of a Field of Dreams strategy, i.e., "If you build in value, they will pay."

[Figure: product value to the customer vs. price paid]

But what if the customer truly does value a set of features but just doesn't have the funds to purchase all of that value?  In this case, we have to concede that there is a Minimum Viable Product (MVP) needed for the customer to purchase at all.  The term MVP is most often used in software development and start-ups.  It is the minimum set of features and functionality that a product must have to have ANY value to the customer.  If you can't master design-to-cost so that your product both includes the MVP features the customer needs and allows you to make adequate profit under the customer's price ceiling, the product will not be successful.

If the customer's funds won't cover even the MVP, they can't afford the product.  Similarly, even if the customer has more funds than the MVP requires, but less than where the value/cost curve flattens, you cannot employ a blind strategy of maximizing value to the flattening point of the curve and pricing near it.  You still have to set your price below your customer's funds to succeed.
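To put invented numbers on the budget-ceiling argument, consider adding features in descending value order and checking the resulting price against the customer's funds. Every name and figure below is hypothetical, and the pricing rule is a deliberate simplification.

```python
# All names and numbers invented. Features are added in descending
# value-per-dollar order; price is a simple margin on cumulative cost.
features = [                      # (name, value to customer $, cost to build $)
    ("engine_and_brakes", 6000, 3000),   # part of the MVP
    ("basic_safety",      3000, 1500),   # part of the MVP
    ("air_conditioning",  1200,  900),
    ("infotainment",       500,  600),   # the value curve is flattening here
]
budget = 6500   # the customer's price ceiling
margin = 1.2    # price = cumulative cost * margin (a simplifying assumption)

cumulative_cost = 0.0
for name, value, cost in features:
    cumulative_cost += cost
    price = cumulative_cost * margin
    print(f"+{name:<18} price ${price:>7.0f}  affordable: {price <= budget}")
# If the MVP alone prices above the budget, no amount of extra "value" closes
# the sale; and if the budget sits below the curve's flattening point, you must
# stop adding (and pricing in) value well before the curve flattens.
```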

So, are the authors of the article talking about designing to value up to the point where the value/price curve flattens, or up to the point where the customer's price ceiling intersects the curve?

Anyone? Anyone?  Bueller? Bueller? Bueller?

 
