“It is always cheaper to do the job right the first time.”

May 16th, 2016

This quote from businessman Philip Crosby has often served as the rallying cry for advocates of modern management practices.  At its heart is a simple truth: it takes less time and effort to improve a process than it does to revise each unit created from that process.  In fact, Crosby’s work showed that measurement systems geared toward improving manufacturing processes returned far more than they cost; hence the title of his bestselling book, Quality Is Free.

But does this logic hold for the advertising development process?  As the case study below demonstrates, the answer is a resounding ‘Yes!’  The advertiser involved installed an inexpensive, early-stage copy-test system containing both sales-calibrated behavioral measures and diagnostics for understanding consumers’ conscious and unconscious motivations.  At the start of the process, only half the ads produced annually met their sales effectiveness target.  But with each new test, the brand team and their agency uncovered insights to apply in the next round of creative.  By the fourth year, the rate of ads meeting their target had grown to nearly two-thirds and was still climbing.

The cost savings from needing to produce fewer ads not only covered the cost of testing but also fueled a shift from non-working to working dollars.  Furthermore, each working dollar worked harder, as the average sales effectiveness of the ads on air also improved.  By the fourth year, sales per media dollar spent had increased by over 30% versus the first year.


Categories: Ad Pre-Testing

MASB’s Game Changing Brand Investment and Valuation Project – Part IV

November 30th, 2015

In parts I, II, and III of this blog series we discussed the added benefits and technical details of incorporating brand preference throughout the brand building process.  Along the way we have received numerous requests for more details, the most common being of the form, “Are brand preferences important for my category?”  Typically this question has come from brand stewards competing in categories where the products or services are either not bought on a recurring, individual basis, are not “bought” at all, or have dynamically changing competitive sets.  So we wanted to take a brief moment to reiterate the breadth of category representation designed into the MASB BIV project and to provide examples from published cases demonstrating the importance of brand preference in other category types.

The MASB-sponsored, multi-year longitudinal study was conducted with the cooperation of six blue-chip corporations from a variety of industries, including fast-moving consumer goods, food, beverages, and autos.  Each of these participants chose two categories to be included.  The resulting twelve categories represented a wide variety of product types and market conditions.  Individual unit prices ranged from under one dollar to over thirty thousand dollars.  Some of the product categories lent themselves to spontaneous purchase, while others required greater deliberation, which could include third-party influencers in the decision-making process.  Some of the categories were highly fragmented, while others had only a small number of competing brands.  Typical consumer purchase cycles varied from a week to a decade.

Despite these category differences, brand preferences were shown to be the strongest predictor of individual brand unit share both across and within the twelve categories examined.




Unit share was chosen as the dependent variable for two reasons.  One is that marketing is primarily focused on creating conscious (cognitive) and unconscious (affective) predispositions to choose the advertised brand over competitors.  So for a measure of brand strength to be relevant, it must explain the percent of choices allocated to the brand.  The second, equally important reason is that for all of the categories included in the study, units sold drive financial cash flow models.  By combining an estimate of a brand’s unit share of market at a given price point and cost of production with assumptions about future category size based on population and category penetration trends, a projection of cash flow can be made.  A discounted cash flow calculation can then be used to arrive at a brand valuation.  Hence, explaining unit share for these categories is fundamental to brand valuation.
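The cash flow logic just described can be sketched in a few lines of code.  To be clear, this is a minimal illustration with invented numbers (category size, share, margin, and discount rate are all hypothetical), not the actual model used in the study:

```python
# Sketch of the valuation logic described above, with purely hypothetical
# numbers: project units from category size and unit share, convert unit
# margins into yearly cash flows, then discount them back to the present.

def brand_valuation(category_units, unit_share, price, unit_cost, discount_rate):
    """Discounted cash flow over a multi-year category-size projection."""
    value = 0.0
    for year, units in enumerate(category_units, start=1):
        cash_flow = units * unit_share * (price - unit_cost)
        value += cash_flow / (1 + discount_rate) ** year
    return value

# Hypothetical inputs: a five-year category projection growing 2% per year,
# a 20% unit share, a $1.50 unit margin, and an 8% discount rate.
category = [1_000_000 * 1.02 ** t for t in range(5)]
print(round(brand_valuation(category, 0.20, 3.50, 2.00, 0.08)))
```

The point is simply that unit share sits at the heart of the calculation: holding everything else fixed, the valuation scales directly with it.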

But there are other categories where the cash flows do not stem from unit sales within a relatively stable competitive set.  For these categories the dependent variable changes, but the role of brand preference remains the same.  Here are some examples drawn from previously published MSW•ARS studies.

Web Search Engines

In our first example the service isn’t bought at all!  When a person uses a search engine like Google or Bing they aren’t charged.  Rather, the revenue stream comes primarily from advertising on the searches conducted.  So the key variable to understand is the share of searches.


Credit Card Networks

Credit card networks are similar to search engines in that the users don’t purchase them.  Plus there is an added complication: the network (e.g. MasterCard, Visa, American Express, Discover) oftentimes shares the value proposition with partner brands in the cards’ issuance (e.g. financial institutions, retailers).  But even with this complication, brand preference for the networks themselves plays a key role in determining their share of the cards in circulation.


Restaurants and Retailers

For restaurants and retailers, brand preference exerts itself in the number of visits and/or the amount purchased on each visit.  Collectively this translates into receipts for products acquired through each brand’s outlets.  The following graph shows the relationship between brand preferences for casual dining restaurants and their percent of receipts captured.



Pharmaceuticals

Pharmaceutical brands are unique in that the ultimate decision of which to use is necessarily made in partnership with an expert, the patient’s doctor.  Still, patient brand preference plays an important role in the process, as demonstrated by this meta-analysis comparing preferences for pharmaceutical brands for five afflictions to their corresponding shares of prescriptions.


Auto Insurance

Every subscription-based service faces a moment of truth in which a customer’s decision to switch to a competing service will likely lock the spurned brand out of that customer’s consideration for a period of time, sometimes several years.  This makes consistently maintaining brand preference critical not only for growing share but also for combating churn, as demonstrated by this chart of auto insurance brands.


Movie Box Office Openings

When it comes to movies, predicting opening weekend ticket sales is of paramount importance.  But this is difficult given the constantly changing theater environment – each week the mix of competitors changes, with up to one half being entirely new!  As a meta-analysis covering one hundred fifty-three movie releases shows, not only is brand preference the single most important element for determining a new release’s share of a weekend’s total box office receipts, but when combined with other elements it can accurately project weekend gross well before the opening!



Please contact your MSW•ARS representative to learn more about how our brand preference approach has been integrated across our entire suite of solutions.

Categories: MASB

MASB’s Game Changing Brand Investment and Valuation Project – Part III

October 13th, 2015

In Part I and Part II of this blog series we discussed the empirical strengths and corporate needs driving brand preference’s adoption.  But one aspect that pleasantly surprises those new to the technique is how easy it is to deploy relative to other measures.

Most common brand metrics are collected through the use of a closed-ended question followed by a Likert- or intention-style scale.  An example is the common stated purchase intent question:

How likely are you to buy [INSERT BRAND] in the next [INSERT TIME PERIOD]?

  1. Definitely will buy
  2. Probably will buy
  3. Might or might not buy
  4. Probably will not buy
  5. Definitely will not buy

While on the surface this looks fairly simple, in practice it is difficult to extract meaningful, sales-calibrated information from it.  Since this is a stated measure, it is subject to each respondent’s subjective interpretation and cognitive bias.  One respondent’s understanding of “Definitely”, “Probably”, and “Might or might not” can vary dramatically from another’s.  And while this effect can be averaged out across large samples, it makes subgroup comparisons very difficult, with psychographic and demographic groups oftentimes exhibiting substantial mean differences.  Without strong normative data (which is oftentimes very difficult to achieve) this can lead to false relative conclusions.

Worse yet, differences can also be introduced by seemingly innocuous changes in survey deployment, like changing question order or sample sources.  As demonstrated in the ARF Foundations of Quality project, different panels produce substantially different response levels even when great effort is applied to demographic balancing.  This occurs for even the most straightforward stated questions, such as reported product usage, at rates that exceed those expected from sampling error.


And even when the above factors are rigorously controlled, the stated questions still require a scale translation to calibrate the results with in-market performance.  This translation, which is itself subject to estimation error, results in a ‘black box’.  This slows down the analytic process and can also reduce end users’ confidence in the results because the linkage is no longer intuitive.
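To make the idea of a scale translation concrete, here is a minimal sketch of a classic weighted purchase-intent conversion.  The weights below are invented purely for illustration; they are not any vendor’s actual calibration, and in practice such weights must be estimated empirically, which is exactly where the ‘black box’ problem arises:

```python
# Hypothetical 'black box' translation from stated purchase intent to a
# projected trial rate: each scale point gets an assumed conversion weight
# (the fraction of respondents at that point expected to actually buy).

INTENT_WEIGHTS = {
    "definitely will buy": 0.75,
    "probably will buy": 0.35,
    "might or might not buy": 0.10,
    "probably will not buy": 0.03,
    "definitely will not buy": 0.01,
}

def projected_trial(share_of_responses):
    """Weight each response category's share by its assumed conversion."""
    return sum(INTENT_WEIGHTS[answer] * share
               for answer, share in share_of_responses.items())

responses = {  # illustrative survey result; shares sum to 1
    "definitely will buy": 0.10,
    "probably will buy": 0.25,
    "might or might not buy": 0.30,
    "probably will not buy": 0.20,
    "definitely will not buy": 0.15,
}
print(round(projected_trial(responses), 3))
```

Any error in the assumed weights flows straight into the projection, and the end user has no intuitive way to inspect it.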

Brand preference, by comparison, is much more robust.  The incentivized act of choosing from a competitive set replicates much of the dynamics of an actual purchase occasion.  Respondents therefore intuitively understand the exercise, and the results naturally calibrate to sales performance.  This makes it an ideal method for subgroup comparisons, as no norms or translations are needed for interpretation.
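The contrast with the intent-scale translation is easy to see in code: with choice-based preference there is no weighting layer at all, just a share of choices.  The choice data below is made up for illustration:

```python
# With brand preference there is no translation layer: the measure is simply
# the share of incentivized choices, read directly as an estimate of share.
from collections import Counter

choices = ["Brand A", "Brand B", "Brand A", "Brand C", "Brand A", "Brand B"]
counts = Counter(choices)
preference_share = {brand: n / len(choices) for brand, n in counts.items()}
print(preference_share["Brand A"])  # Brand A was chosen in half the trials
```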


But perhaps most exciting is how respondents react to the brand preference exercise.  Surveys consisting of closed-ended and open-ended questions can quickly disengage respondents, leading to straightlining, speeding, satisficing, and other poor survey-taking behaviors.  In an attempt to combat this, insight teams have been compelled to continually reduce the number of questions asked in a survey and the number of options, especially brands rated, included within attribute tables.  Essentially, depth of research is being traded off for response quality.

Including a brand preference exercise within such surveys counteracts this trend.  Not only does it provide valuable information for each brand within a category in a very time-efficient manner, but the nature of the exercise improves engagement in much the same manner as gamification.  In fact, when brand preference is added to a survey, it is common to see self-reported survey length drop while survey satisfaction ratings rise.

As an example, we recently created for a client a first-of-its-kind brand-preference-based, behavioral in-store shelf optimization testing platform.  Respondents have often viewed traditional approaches to this type of research as tedious and not worthwhile.  By contrast, the results for this new approach have been outstanding.  On a ten-point scale, 98% of respondents rated the system a 5 or higher and 55% rated it a perfect 10.



But perhaps more impressive than this quantitative assessment is the open-ended survey feedback respondents chose to share.  Comments like these were common:

“LOVE that it was short and to the point, no dragging it out.”

“…the ease of instructions. They were not confusing.”

“There was not a lot of ambiguous stuff. Well prepared.  User friendly.”

“It was very different than other surveys I’ve taken, and I appreciated that variety!”

“This survey was very different, fun, interesting, and relevant. I like the conciseness of it and that it didn’t ask the same questions over and over again. Nice survey and great topic.”

“I was actually a little disappointed when the end questions came up. I wanted to shop more.”

Simply put, when it comes to survey deployment, MSW•ARS brand preference is unlike any other metric.  It can be incorporated into a wide variety of research and can even become a standard key performance indicator in your reporting, particularly in your tracking data.

Please contact your MSW•ARS representative to learn more about how our brand preference approach has been integrated across our entire suite of solutions.

Categories: MASB