Archive for the ‘MASB’ Category

MASB’s Game Changing Brand Investment and Valuation Project – Part III

October 13th, 2015

In Part I and Part II of this blog series we discussed the empirical strengths and corporate needs driving brand preference’s adoption.  But one aspect that pleasantly surprises those new to the technique is how easy it is to deploy relative to other measures.

Most common brand metrics are collected through a closed-ended question paired with a Likert or intention-style scale.  An example is the common stated purchase intent question:

How likely are you to buy [INSERT BRAND] in the next [INSERT TIME PERIOD]?

  1. Definitely will buy
  2. Probably will buy
  3. Might or might not buy
  4. Probably will not buy
  5. Definitely will not buy

While on the surface this looks fairly simple, in practice it is difficult to extract meaningful, sales-calibrated information from it.  Since this is a stated measure, it is subject to each respondent’s subjective interpretation and cognitive bias.  One respondent’s understanding of “Definitely”, “Probably”, and “Might or might not” can vary dramatically from another’s.  And while this effect can be averaged out across large samples, it makes subgroup comparisons very difficult, as psychographic and demographic groups oftentimes exhibit substantial mean differences.  Without strong normative data (which is oftentimes very difficult to assemble), this can lead to false relative conclusions.
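To see why subgroup comparisons suffer, consider a minimal simulation.  All thresholds and group labels here are hypothetical: two groups share identical underlying purchase probabilities but map them onto the five-point scale differently, producing different stated means.

```python
import random

random.seed(42)

# Hypothetical illustration: two subgroups share the same distribution of
# true purchase probability, but map it onto the 5-point scale with
# different thresholds (i.e., different ideas of "Definitely" vs "Probably").
def rate(true_prob, thresholds):
    """Convert a latent purchase probability to a 1-5 scale point
    (1 = definitely will buy ... 5 = definitely will not buy)."""
    for point, cutoff in enumerate(thresholds, start=1):
        if true_prob >= cutoff:
            return point
    return 5

lenient = [0.80, 0.55, 0.35, 0.15]   # quick to say "definitely"
strict  = [0.95, 0.75, 0.50, 0.25]   # reserves "definitely" for near-certainty

group_a = [rate(random.random(), lenient) for _ in range(10_000)]
group_b = [rate(random.random(), strict) for _ in range(10_000)]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
print(f"Group A mean: {mean_a:.2f}  Group B mean: {mean_b:.2f}")
# The groups differ on the stated scale even though their underlying
# purchase probabilities are identically distributed.
```

With a large enough total sample the two interpretations average out, but any cut by subgroup reintroduces the gap.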

Worse yet, differences can also arise from seemingly innocuous changes in survey deployment, such as question order or sample source.  As demonstrated in the ARF Foundations of Quality project, different panels produce substantially different response levels even when great effort is applied to demographic balancing.  This occurs for even the most straightforward stated questions, such as reported product usage, at rates which exceed those expected from sampling error.


And even when the above factors are rigorously controlled, the stated questions still require a scale translation to calibrate the results with in-market performance.  This translation, which is itself subject to estimation error, results in a ‘black box’.  This slows down the analytic process and can also reduce end users’ confidence in the results because the linkage is no longer intuitive.

Brand preference, by comparison, is much more robust.  The incentivized act of choosing from a competitive set replicates many of the dynamics of an actual purchase occasion.  Therefore respondents intuitively understand the exercise and the results naturally calibrate to sales performance.  This makes it an ideal method for subgroup comparisons, as no norms or translations are needed for interpretation.


But perhaps most exciting is how respondents react to the brand preference exercise.  Surveys consisting of closed-ended and open-ended questions can quickly disengage respondents, leading to straightlining, speeding, satisficing, and other poor survey-taking behaviors.  In an attempt to combat this, insights teams have been compelled to continually reduce the number of questions asked in a survey and the number of options, especially brands rated, included within attribute tables.  Essentially, depth of research is being traded off for response quality.

Including a brand preference exercise within such surveys counteracts this trend.  Not only does it provide valuable information for each brand within a category in a very time efficient manner, the nature of the exercise improves engagement in much the same manner as gamification.  In fact, when brand preference is added to a survey it is common to see self-reported survey length drop while survey satisfaction ratings rise.

As an example, we recently created for a client a first-of-its-kind behavioral in-store shelf optimization testing platform based on brand preference.  Respondents have often viewed traditional approaches to this type of research as tedious and not worthwhile.  By contrast, the results for this new approach have been outstanding.  On a ten-point scale, 98% of respondents rated the system a 5 or higher and 55% rated it a perfect 10.



But perhaps more impressive than this quantitative assessment is the open-ended survey feedback respondents chose to share.  Comments like these were common:

“LOVE that it was short and to the point, no dragging it out.”

“…the ease of instructions. They were not confusing.”

“There was not a lot of ambiguous stuff. Well prepared.  User friendly.”

“It was very different than other surveys I’ve taken, and I appreciated that variety!”

 “This survey was very different, fun, interesting, and relevant. I like the conciseness of it and that it didn’t ask the same questions over and over again. Nice survey and great topic.”

“I was actually a little disappointed when the end questions came up. I wanted to shop more.”

Simply put, when it comes to survey deployment, MSW•ARS brand preference is unlike any other metric.  It can be incorporated into a wide variety of research and can even become a standard key performance indicator in your reporting, particularly in your tracking data.

Please contact your MSW●ARS representative to learn more about how our brand preference approach has been integrated across our entire suite of solutions.

Categories: MASB

MASB’s Game Changing Brand Investment and Valuation Project – Part II

August 4th, 2015

In Part I of this blog series we discussed ten technical characteristics of brand preference that made it suitable for adoption into market research tools.  But just because something can be done doesn’t mean it should be done.  In fact, one of the issues identified early on by the Marketing Accountability Standards Board (MASB) was that the sheer number of metrics in use could lead to a type of analytical paralysis; that is, an inability of insights groups to efficiently advise other functions of the organization.  This has been colloquially referred to within the group as “swimming in data”.


Given MASB’s focus, this primarily revolved around the plethora of metrics applied to quantify the overall financial success of marketing activities.  But in our experience, addressing this “swimming in data” issue is even more urgent for tactical research applications, especially brand tracking.  It is not uncommon to see between fifty and one hundred different category and brand attributes being monitored.  Each of these attributes captures a dimension of “equity” deemed important for brand success.  But how does an analyst combine these metrics to quantify the total health of the brand?

One popular approach is to apply structural modeling of the attributes versus sales data.  The resulting model provides a means of “rolling up” attributes into one number.  However, there are several challenges with this approach.  One is that such a model often comes to be viewed as a ‘black box’ by other functional areas.  This lack of transparency and simplicity fuels distrust and slows adoption of insights.  But even worse, such a model is only applicable to the environment in which it was derived.  Technological, financial, and even style trends can dramatically change the relative importance of attributes within a category, thus uncoupling the model’s link to sales.  For example, being viewed as ‘having fuel efficient models’ is much more important for an auto brand when gas prices are high than when they are low.
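The fragility of a fixed-weight roll-up can be sketched with a toy example.  The attribute names, weights, and scores below are invented for illustration; the point is only that a score built on stale weights misstates brand strength once category importances shift.

```python
# Hypothetical sketch: "rolling up" tracked attributes into one brand-strength
# score via weights estimated from a regression against sales.
# All names and numbers are invented for illustration.

# Weights estimated in a low-gas-price period (fuel efficiency matters little).
weights_period_1 = {"trusted": 0.5, "good_value": 0.4, "fuel_efficient": 0.1}

# In a high-gas-price period the true importances shift.
weights_period_2 = {"trusted": 0.3, "good_value": 0.2, "fuel_efficient": 0.5}

def roll_up(scores, weights):
    """Combine attribute ratings into a single equity score."""
    return sum(scores[a] * weights[a] for a in scores)

# A brand that happens to be strong on fuel efficiency.
brand = {"trusted": 0.7, "good_value": 0.6, "fuel_efficient": 0.9}

# Scoring the same brand with stale period-1 weights understates its strength
# once fuel efficiency has become the category's dominant driver.
stale = roll_up(brand, weights_period_1)
current = roll_up(brand, weights_period_2)
print(f"stale model: {stale:.2f}  re-estimated model: {current:.2f}")
```

In practice the model would be re-estimated, but that requires fresh sales data and re-validation each time the environment moves.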

Brand preference offers a better approach to the “swimming in data” issue.  As a truly holistic measure it captures the influence of all these attributes.  This was confirmed in the MASB Brand Investment and Valuation project.  Several of the marketers participating in the brand preference tracking trials provided equity data from their internal tracking programs for comparison purposes.  Across the categories investigated there were seven other brand strength ‘concepts’ commonly used.


A correlation analysis was used to contrast their relationship to changes in brand share of market with that of brand preference.  The strength of their relationships to share varied by category and oftentimes fell below the correlation level deemed moderately strong by Cohen’s convention (Jacob Cohen, Statistical Power Analysis for the Behavioral Sciences, 1988).  Furthermore, all of these other metrics exhibited correlations to market share substantially below that of brand preference.


But brand preference didn’t just demonstrate stronger relationships to market share than these other measures, it also captured their individual predictive power.  This is most readily seen by contrasting each measure’s correlation with preference against its correlation with market share.  All seven measures exhibit a stronger relationship to preference than to market share.  Given that preference was gathered on a completely different sample than the other measures, this strongly suggests that the explanatory power of these other measures acts through preference in explaining market share.
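The kind of correlation comparison described in these paragraphs can be sketched as follows.  The data are invented, and Cohen's conventional cutoffs (roughly 0.1 small, 0.3 medium, 0.5 large, applied to |r|) are used for the labels.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def cohen_label(r):
    """Classify |r| per Cohen's conventional cutoffs."""
    r = abs(r)
    if r >= 0.5:
        return "large"
    if r >= 0.3:
        return "medium"
    if r >= 0.1:
        return "small"
    return "negligible"

# Invented example data: changes in market share alongside changes in
# brand preference and one conventional equity metric.
share_chg      = [1.2, -0.4, 0.8, 2.1, -1.0, 0.3]
preference_chg = [1.0, -0.3, 0.9, 1.8, -0.8, 0.2]
equity_chg     = [0.5, 0.6, -0.2, 0.3, -0.1, 0.7]

r_pref = pearson_r(preference_chg, share_chg)
r_equity = pearson_r(equity_chg, share_chg)
print(f"preference vs share: r={r_pref:.2f} ({cohen_label(r_pref)})")
print(f"equity metric vs share: r={r_equity:.2f} ({cohen_label(r_equity)})")
```

The MASB trials ran this comparison per category and per concept; the sketch above shows only the mechanics of a single pairing.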


In addition to these seven common concepts, category specific attributes were also examined.  Of the seventy metrics examined not a single one showed potential to substantially add to the predictive power of preference.

Probably the most extreme example of the advantage of brand preference as a holistic tracking measure comes during a product safety recall.  During these situations it is not unusual to see top-of-mind awareness levels peak near one hundred percent.  At the same time, brand attributes such as safety and trust, which typically show milder importance, rise to the top.  Under these conditions a structural model’s ability to explain sales may not just drop to zero but actually turn negative.  That is, it will report brand strength rising even as sales precipitously drop!  Since brand preference captures not only the changing level but also the changing importance of these other dimensions, it remains a valuable tool for navigating such times, as it will accurately monitor progress in rebuilding the brand in the hearts and minds of consumers.

The Tylenol tampering incident illustrates this.  As the nation watched several people die from the poisoning, brand preference plummeted thirty-two points.  The Tylenol brand could no longer be trusted.  Concurrent with this brand preference drop, Tylenol’s market share fell thirty-three points.  As Johnson & Johnson addressed the situation responsibly, the brand’s previous place in the minds of consumers was slowly rebuilt.  This set the stage for a rebound in brand sales as tamper protected versions of the brand’s products made their way onto store shelves.


Because of its ability to accurately monitor the total health of a brand, the MSW●ARS Brand Preference measure is quickly becoming viewed as the ‘King of Key Performance Indicators’.  But there are other very pragmatic reasons for incorporating it into your tracking and other research.  In future blog posts we will discuss these and how easy it is to do.

Please contact your MSW●ARS representative to learn more about our brand preference approach.

Categories: Advertising Tracking, MASB

MASB’s Game Changing Brand Investment and Valuation Project – Part I

July 20th, 2015

How much is my brand worth in financial terms?  How much will my marketing grow its value?

Despite their seeming simplicity, these two questions have frustrated brand practitioners for decades.  It is well accepted that there is a link between brand building activities and corporate profits.  After all, the entire field of marketing is based upon this proposition.  Yet it is equally well accepted that there is no standardized approach that companies can rely on to quantify brand value in the dollars-and-cents terms applied to other assets.  This puts marketing at a severe disadvantage in boardroom discussions of resource allocation, as its expenditures are all too often seen as pure costs rather than investments in the business.  And this is despite a growing realization that intangibles account for up to eighty percent of overall corporate value, with brands at the top of the list.

But one industry group is actively working to change this.  The Marketing Accountability Standards Board (MASB) created the Brand Investment and Valuation (BIV) project to establish the quantitative linkages between marketing and financial metrics.   The solution they have proposed is as simple as the questions themselves:  Identify a “brand strength” metric which captures the impact of all branding activities, understand how this metric translates into financial returns (ultimately cash flow), and then use this to calculate a brand value and to project the return from future marketing investments.
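The proposed chain, from brand strength metric to attributable cash flow to brand value, can be illustrated with a simple discounted cash flow sketch.  To be clear, this is not MASB's actual valuation model; every parameter and the transfer-rate assumption below are hypothetical.

```python
# Hypothetical sketch of the BIV chain described above: a brand-strength
# metric (preference share) is translated into attributable cash flow,
# which is then discounted into a brand value. All figures are invented.
def brand_value(preference_share, category_cash_flow, transfer_rate,
                growth=0.02, discount=0.08, years=10):
    """Discounted cash flow attributable to brand strength.

    preference_share: fraction of choices won under equal price/availability
    transfer_rate: assumed fraction of preference-driven cash flow that is
                   attributable to the brand itself rather than other assets
    """
    annual = category_cash_flow * preference_share * transfer_rate
    return sum(
        annual * (1 + growth) ** t / (1 + discount) ** t
        for t in range(1, years + 1)
    )

value = brand_value(preference_share=0.30,
                    category_cash_flow=500_000_000,
                    transfer_rate=0.25)
print(f"Estimated brand value: ${value:,.0f}")
```

The point of the structure, not the numbers, is what matters: once brand strength is a single calibrated input, the valuation step reduces to standard finance arithmetic.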


Of course this raises the question: does such a “brand strength” metric exist?  And if so, is it practical enough to be used?  After an exhaustive search of the research literature, MASB identified brand preference as the most likely candidate for the brand strength metric.  Brand preference (also known as brand choice) is defined within the Common Language in Marketing Dictionary as:

One of the indicators of the strength of a brand in the hearts and minds of customers, brand preference represents which brands are preferred under assumptions of equality in price and availability.

The ability of brand preference to isolate brand strength from other market factors (e.g., price and distribution) separates it from other marketing measures.  Furthermore, previous studies demonstrated that the behavioral brand preference approach pioneered by MSW•ARS met MASB’s predetermined ten criteria of an ideal metric:

  1. Relevant:  It has been proven to capture the impact of all types of marketing and PR activities.  Over the last 45 years it has been used to measure the effectiveness of all forms of media (e.g. television, print, radio, out-of-home, digital), events (e.g. celebrity and event sponsorships), and brand news (e.g. product recalls, green initiatives).  It has also been shown to capture both conscious and unconscious customer motivations and so applies equally to rational, emotional, and mixed branding strategies.
  2. Predictive:  Its ability to accurately forecast financial outcomes has been demonstrated in a number of studies.  This includes studies comparing preference to sales results calculated from store audits, in-store scanners, pharmaceutical prescription fulfillments and new car registrations.  When applied to advertising, changes in brand preference have been proven to predict changes in the above sales sources from control market tests, split media tests, pre-to-post share analysis and market mix modeling.  In fact, Quirk’s Magazine noted over a decade ago that “this measurement has been validated to actual business results more than any other advertising measurement in the business”.
  3. Objective:  It is purely an empirical measure by nature.  No subjective interpretation is needed.
  4. Calibrated:  It has been applied to the broad spectrum of brands and categories and its correlation to sales has proven consistent across geographies.  Furthermore, it self-adjusts to the marketplace where it is collected so it has the same interpretation without any need for historic benchmarks.


  5. Reliable:  It has been shown to be as reliable as the laws of random sampling allow.  This is true both for brand preference gathered at a point in time and for changes over time caused by marketing activities.  The table below summarizes this consistency in measuring changes.  Changes in brand preference caused by 49 campaigns were each measured twice among independent groups of customers.  Observed variation between the pairs was compared to what would be expected from random sampling.  The ‘not significant’ conclusion confirms this.


  6. Sensitive:  It is able to detect the impact of media even from one brand building exposure (e.g., a single television ad shown once).
  7. Simple:  It is easily applied and understood.  It can be incorporated within any type of customer research, including tracking, pre-testing, post-testing, segmentation, strategy, and product concept studies.
  8. Causal:  While it captures the effect of product experience, it is not driven solely by product experience.  In fact, it has been proven predictive of trial for new products with which consumers have no experience.
  9. Transparent:  It doesn’t rely on ‘black box’ models or norms.
  10. Quality Assured:  Its reliability and predictability are subject to continuous review.
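The test-retest logic behind the reliability criterion can be sketched with a simulation.  The sample size and true preference levels here are assumed, not taken from the MASB study; the sketch only shows how observed variation between paired measurements is compared against what sampling error alone predicts.

```python
import random

random.seed(7)

# Hypothetical re-creation of a test-retest design: each campaign's
# preference level is measured twice on independent samples. If the metric
# is reliable, the spread of the paired differences should match what
# binomial sampling error alone predicts.
N = 400        # respondents per measurement (assumed)
CAMPAIGNS = 49

def measure(true_share, n=N):
    """One preference measurement: share of n respondents choosing the brand."""
    return sum(random.random() < true_share for _ in range(n)) / n

true_shares = [random.uniform(0.2, 0.6) for _ in range(CAMPAIGNS)]
pairs = [(measure(p), measure(p)) for p in true_shares]

diffs = [a - b for a, b in pairs]
observed_var = sum(d * d for d in diffs) / len(diffs)

# Expected variance of the difference of two independent binomial proportions.
expected_var = sum(2 * p * (1 - p) / N for p in true_shares) / CAMPAIGNS

ratio = observed_var / expected_var
print(f"observed/expected variance ratio: {ratio:.2f}")
# A ratio near 1 means no variation beyond sampling error, i.e. the
# 'not significant' conclusion reported in the trials.
```

A ratio meaningfully above 1 would indicate measurement noise beyond sampling error; the MASB comparison found none.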

To verify its suitability as the brand strength metric, MASB included an aggressive trial of brand preference as part of its BIV project.  A cornerstone of this endeavor was a longitudinal tracking study sponsored by six blue chip corporations and conducted by MSW•ARS Research.  The two-year study covers one hundred twenty brands across twelve categories with a variety of market conditions.  In Part II of this article we will review several of the key findings from this project, which are already changing industry perceptions on measuring brand value and making brand building investments.

The MSW•ARS Brand Preference measure can be incorporated into a wide variety of research and can even become a standard key performance indicator in your reporting, particularly in your tracking data.  In future blog posts we will discuss this and how you can easily apply it.

If you don’t want to wait, please contact your MSW•ARS representative to learn more about our brand preference approach.