
The Persuadables℠: the consumer segment with the highest ROAI

February 23rd, 2022


Essentially, it’s a narrowing of your target from a broad-based approach to those consumers who already have a likelihood of choosing your brand. Because they already have a predisposition towards your brand, they are more responsive to advertising, easier to move and ultimately return a higher ROAI. Identifying those consumers, and then the message that will pull them towards your brand, is the key.

The concept is relatively straightforward – of course, like most other things, the devil is in the details.

Fortunately, MSW has been doing this since the early 1960s. The idea grew out of our early work on Brand Loyalty and is related to the old debate over which is the more efficient way to increase sales: a strategy of penetration (increasing your user base) or a strategy of increased usage (getting those who already use to use more). The answer in most cases is increased usage. Getting non-users who express no interest in future use to choose your brand is the least efficient strategy (they don’t use it, are not interested in using it, aren’t likely to be convinced, and convincing them would cost a lot). The next least efficient strategy is getting loyal users to increase their usage (they already use it; how much more can they buy?). The most efficient target is in the middle: repertoire users (those who rotate through your brand and other brands) and non-users who are “attracted”, whom you can move to increase their use of your brand over others (increased loyalty or increased share of requirements) or to buy for the first time. It’s this last group that some people are now calling the “Movable Middle”, but that we refer to as the “Persuadables℠”.

The genesis of the thinking was this: the nucleus of a brand’s franchise is the consumers who are loyal to it – but every brand also has a proportion of consumers who are “Persuadable”.

And to bring about favorable sales results, the advertiser must accomplish two main objectives:

1)  ATTRACT new buyers

2)  RETAIN possible defectors

Now, identifying who to ATTRACT and RETAIN is one thing – activating them is another. Some people refer to this group as the “Movable Middle”: they identify it using a traditional type of segmentation analysis, heavy on demographics, and then use that segmentation to target media buys.

While identifying and targeting is a good start, it doesn’t provide the most effective framework because it leaves out the most important piece.

At MSW, we do it differently and better. Our framework for a truly actionable and effective process has four steps.

Yes, we identify and enable targeting of media buys, but we also inform our clients “HOW” to Persuade (or Move) this target – what messaging will have a positive impact on sales.

Including the “HOW” comes from our heritage in advertising and communications research – research centered on “HOW” to persuade these “Persuadables” to buy or buy more of your brand.

How we do it: Step 1, Identify Persuadables℠.

We use two techniques in combination to identify the Persuadables℠.

  • We don’t define the Persuadables℠ as others do: consumers with a 20–80% probability of choosing your brand based on a purchase interest question. Purchase interest, by itself, is actually one of the least accurate predictors of brand choice. MSW’s CC Brand Preference is the most proven and sales-validated measure available for predicting brand choice.
  • We combine our proven and sales-validated CC Brand Preference™ and our Brand Franchise Analysis™ to more accurately identify and segment consumers’ relationships with, and likelihood to purchase, your brand; a hypothetical sketch of this kind of segmentation rule follows this list.
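
To make this concrete, here is a minimal, hypothetical sketch of a segmentation rule of this kind, in Python. It is not MSW’s actual model: the inputs, thresholds and segment labels are illustrative assumptions only.

```python
# Hypothetical sketch of a Persuadables-style segmentation rule.
# The inputs (a brand preference score and a usage flag) and the
# thresholds are illustrative assumptions, not MSW's actual model.
from dataclasses import dataclass

@dataclass
class Consumer:
    brand_preference: float  # 0.0-1.0 score from a validated preference measure
    uses_brand: bool         # brand is currently in the consumer's repertoire

def segment(c: Consumer) -> str:
    """Assign a consumer to a coarse franchise segment."""
    if c.uses_brand and c.brand_preference >= 0.8:
        return "Loyal"                 # already committed; little headroom
    if c.brand_preference >= 0.2:
        return "Persuadable"           # repertoire users and attracted non-users
    return "Unattracted non-user"      # least efficient to convert

consumers = [Consumer(0.9, True), Consumer(0.5, True),
             Consumer(0.3, False), Consumer(0.05, False)]
for c in consumers:
    print(c, "->", segment(c))
```

In practice, the preference score would come from a sales-validated measure such as CC Brand Preference™ rather than from a single purchase interest question.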

How we do it: Step 2, Identify HOW to Persuade Them.

  • Then we use our Persuasion Driver Analysis™ to inform HOW to move the Persuadables℠.

How we do it: Step 3, Test Messaging.

  • We test messaging and communications to ensure the ad will have an impact on future sales volume and market share. Messaging and creative are the single factor with the most impact on both short-term sales lift and long-term brand equity.
  • MSW’s suite of advertising assessment and creative guidance tools is used to test messaging and ensure impact on the Persuadables℠.

How we do it: Step 4, Target Media.

  • And finally, targeting the Persuadables℠ using the segmentation analysis completes the process. While MSW Research is not a media buyer, we do provide our clients’ buyers with segments, definitions, targeting and everything they need to execute and buy effectively.

Some final thoughts…

  • Roughly 1/2 of category users are Persuadables℠
  • 2/3rds of new penetration comes from Persuadables℠
  • Only 1/3 of new penetration comes from all other segments
  • Persuadables℠ are twice as important as all other segments combined, as they deliver 2/3rds of new penetration

In what MSW Research products and services is the Persuadables℠ Segmentation Analysis available?

We offer the Persuadables℠ Segmentation Analysis throughout the communication development process.

1. It is available in early-stage Foundation & Purpose research as part of our Brandscape℠ product, a category and brand equity analysis that deconstructs equity into base pillars for marketing planning. It is the first step in our communications development process and includes rational and emotional measurement that is hard-linked within our brand equity improvement paradigm.

We build the Persuadables℠ Segmentation Analysis into Brandscape℠ as an integrated analysis to understand your brand’s position relative to direct and indirect competitors, as well as your place with regard to mission, vision, core values, niche markets, strengths and weaknesses. It helps you to:

  • Identify the strongest brand claims/benefits to find those worth building communications around.
  • Determine whether you have the opportunity to leverage and/or increase existing brand equity by extending or spinning off your brand.
  • Explore the interaction between a brand and its customers over the duration of the relationship.
  • Understand your customers’ state of mind, complete experience and interaction with every brand touchpoint they encounter, from awareness and need generation to usage and loyalty.

2. It is available in Screening & Development research as part of all our Early-Stage Message Screening (Sifter™) and Copy Testing (TouchPoint™) products.

Again, we build the Persuadables℠ Segmentation Analysis into these products as a fully integrated analysis to bolster the actionability of the outputs, ensuring in-market impact.

3. And finally, the Persuadables℠ Segmentation Analysis is built into our tracking services as a fully integrated analysis to: a) confirm that historical communications strategies are on target and performing as predicted and b) serve as a leading indicator to guide future decision making with regard to messaging and targeting.

Contact us today to discuss identifying and activating your Persuadables℠ with one of our senior executives.

Categories: The Persuadables

Assessing the Utility of MSW’s Insight Rabbit Copy Testing Scores in Predictive Analytics: A Validation Case Study

February 2nd, 2022

Copy testing has been utilized by advertisers for decades to assess the quality of advertising copy. MSW’s TouchPoint™ copy testing system has been extensively validated, showing that test results on key metrics are predictive of subsequent sales results from airing the tested advertising. A partnership between MSW and the predictive marketing analytics firm Keen set out to assess the utility of test scores from MSW’s Insight Rabbit DIY copy-testing platform in improving predictions from Keen’s MIDA decision support system.

The MIDA (Marketing Investment Decision Analysis) platform is designed to help marketers decide how to invest in marketing activities. MIDA users can develop optimized investment scenarios to meet specific business objectives such as hitting revenue targets or meeting budget constraints. It does this by applying a Bayesian modeling approach to a wide range of a brand’s historical marketing and performance data.

Could the use of copy quality metrics improve forecasting of business outcomes and hence be used as an input to MIDA to improve the allocation of marketing dollars? To address this question, new advertising for a major packaged food brand was selected. This brand had developed two different campaigns with different communication objectives that tied back to the brand’s strategy. The brand intended to air both campaigns concurrently.

The television ads developed for each of the two campaigns were tested using MSW Research’s Insight Rabbit Pulse Lite copy testing solution. The key results are summarized below.

Both ads were adequate in terms of the secondary Break Through metric, which assesses the degree to which an ad leaves viewers with a memorable and branded impression. However, Copy A scored much more strongly on the CCPersuasion™ metric, which assesses the degree to which the ad positively influences preference for the advertised brand. Prior validation studies have shown CCPersuasion™ to be the strongest predictor of an ad’s selling power. Copy A scored significantly above the Fair Share benchmark with an index of 161, suggesting it is a very strong piece of copy. Copy B, on the other hand, indexed 113 versus the norm and would be considered slightly above average at best.

Historical performance of the brand’s television investment was measured in MIDA to quantify the expected returns on investment for an average (or benchmark) ad for the brand. Then an initial forecast was developed before the start of the campaign using this historical performance enhanced by the MSW copy test results along with planned media delivery levels.
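
MIDA’s actual model is a proprietary Bayesian system, but the basic idea of enhancing a historical forecast with a copy quality score can be sketched in a few lines of Python. The response curve, parameter values and the simple multiplicative adjustment below are illustrative assumptions, not Keen’s implementation.

```python
import math

def benchmark_response(grps: float, beta: float = 120.0,
                       half_sat: float = 500.0) -> float:
    """Expected incremental sales ($000s) for a benchmark-quality ad,
    as a saturating (diminishing-returns) function of media delivery.
    The functional form and parameters are illustrative assumptions."""
    return beta * (1 - math.exp(-grps / half_sat))

def forecast_with_copy_index(grps: float, copy_index: float) -> float:
    """Scale the benchmark response by the copy test index (100 = norm).
    A multiplicative adjustment is assumed here for illustration only."""
    return benchmark_response(grps) * (copy_index / 100.0)

planned_grps = 800  # hypothetical planned media delivery
for name, idx in [("Copy A", 161), ("Copy B", 113)]:
    lift = forecast_with_copy_index(planned_grps, idx)
    base = benchmark_response(planned_grps)
    print(f"{name}: forecast {lift:.0f} vs benchmark {base:.0f} "
          f"({(lift / base - 1) * 100:+.0f}%)")
```

Note that the realized ROI lifts reported below (roughly 90% and 5%) were not simply proportional to the indices of 161 and 113, so a real integration would calibrate the relationship between test scores and in-market response rather than assume a linear one.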

After the campaign had been running for six months, MIDA was updated with actual sales and television campaign delivery data. The ROI for Copy A was approximately 90% higher than would have been expected from the historical benchmark ad performance level, while Copy B’s ROI was only about 5% higher than the benchmark expectation.

This actual performance was highly consistent with the copy test results, which suggested that Copy A was a very strong ad and that Copy B was slightly above average. The result illustrates the utility of MSW copy test scores in a priori forecasting through integration with decision support systems. Integrating MSW copy test scores with a decision support system like MIDA can help steer marketing dollars toward more deserving initiatives, improve forecasts and bolster the in-market effectiveness of brands’ marketing programs.

Categories: Ad Pre-Testing, Validation

Unusual Statistical Phenomena, Part II: Stat Testing of Percentages

January 24th, 2022

Sometimes when looking at results from survey data, we see something that makes us say ‘huh?’ or ‘that doesn’t look right’. When the odd results persist after verifying that the data were processed correctly (always a good practice), there is typically still a logical answer that can be uncovered with some digging. Sometimes the answer lies with something we will call ‘unusual statistical phenomena.’ This is Part II of a series looking at some of these interesting, or confounding, effects that pop up now and then in real survey research data.

This time we will look at an unusual phenomenon that can occur when doing something typically considered fairly mundane: testing for statistical significance between percentages. An example will help illustrate this phenomenon, which periodically causes us to question stat testing results.

Let’s say we have fielded the same survey for two different brands. One part of the survey collects respondent opinions of the test brand using a battery of attribute statements with a 5-point agreement scale. The base size for each survey was 300.

Stat testing was conducted between the two brands’ Top Box percentages on each of the attribute statements. However, some of the results looked questionable. For the attribute “Is Unique and Different,” Brand B’s score was higher than Brand A’s by 4 percentage points, a difference that was statistically significant at the 90% confidence level; while for the attribute “Is a Brand I Can Trust,” Brand B’s score was higher than Brand A’s by 6 percentage points, a difference that was NOT statistically significant at the 90% confidence level. How could this be?

How can a difference of 4 points be statistically significant while a difference of 6 points is not, even with the same base sizes? To understand how this can happen, let’s first look at the basics of how a statistical test for comparing percentages works.

First, a t-value is computed according to this formula:

$$t = \frac{P_1 - P_2}{\mathrm{SE}(P_1 - P_2)}, \qquad \mathrm{SE}(P_1 - P_2) = \sqrt{\frac{P_1(1 - P_1)}{n_1} + \frac{P_2(1 - P_2)}{n_2}}$$

where P1 and P2 are the two percentages (expressed as proportions) and n1 and n2 are their base sizes.

Then this t-value is compared to a critical value. If the t-value exceeds the critical value then we say that the difference between the percentages is statistically significant.  The critical value is based on the chosen confidence level and the base sizes of the samples from which the percentages were derived.

In our example, we chose the 90% confidence level for both statistical tests and the base sizes are the same, so the critical value for both tests is the same. The numerators are simply the differences between the percentages, 4 and 6; yet the difference of 4 produced a t-value that exceeded the critical value while the difference of 6 did not. Therefore, the explanation must lie with the denominator: the Standard Error of the Difference.

Let’s next examine what a Standard Error represents. Our surveys were fielded among a sample of the overall population. If we sample among women 18 to 49 in the United States, we will infer that our results are representative of the entire population of interest, which is all women 18 to 49 in the United States. However, it is unlikely that the measures we compute from the sample (such as the percentage that say Brand A “is a brand I can trust”) will be exactly the same as the percentage would be if we could ask everyone in the entire population of interest.  There is some uncertainty in the result because we are asking it of only a subset of the population. The Standard Error is a measure of the size of this uncertainty for a given metric.

In our equation, the denominator is the Standard Error of the Difference between the percentages. While not precisely correct, the Standard Error of the Difference can be thought of as the sum of the individual Standard Errors of the two percentages being compared (the actual value will be somewhat less because of the squares and square roots involved). Critically, the Standard Error of a percentage is a function not only of the sample size, but also of the size of the percentage itself.

Specifically, for any given sample size the Standard Error is largest for values around 50% and decreases as values approach either 0% or 100%. For a base size of 100, for example, the Standard Error is close to 5 points for percentages near 50% but falls to around 2 points for very small or very large percentages. You can think of it as being harder to estimate the incidence of a characteristic in a population when around half the population has that characteristic than when almost all (or almost none) of the population has it.
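
This curve is easy to verify with a few lines of Python using the standard formula for the Standard Error of a proportion; the base sizes shown are the ones discussed in this article:

```python
import math

def standard_error(p: float, n: int) -> float:
    """Standard Error of a sample percentage, in percentage points."""
    return 100 * math.sqrt(p * (1 - p) / n)

# SE peaks at p = 50% and shrinks toward the extremes.
for n in (100, 300):
    for pct in (5, 10, 25, 50, 75, 90, 95):
        print(f"n={n:4d}  p={pct:2d}%  SE={standard_error(pct / 100, n):.2f}")
```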

In our example, the percentages for Is a Brand I Can Trust are close to 50%, so at a base size of 300 the individual Standard Errors would each be a little under 3. In contrast, the percentages for Is Unique and Different are around 10%, so at a base size of 300 the Standard Errors would each be only around 1.7. That’s a big difference!

It follows that the Standard Error of the Difference for Is a Brand I Can Trust would be much larger than for Is Unique and Different. In fact, the actual values are 4.08 for Is a Brand I Can Trust and 2.34 for Is Unique and Different. Again, a big difference. If we divide the differences in the percentages by these values for Standard Error of the Difference, we get t-values of 1.47 and 1.71, respectively. Given the critical value is approximately 1.65, we see that the t-value for the difference of 6 is below the critical value (hence not statistically significant); while the t-value for the difference of 4 is above the critical value (hence is statistically significant).
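
The full example can be reproduced in a few lines as well. The exact survey percentages were not published, so the values below were chosen to be consistent with the Standard Errors of the Difference reported above (4.08 and 2.34); they are illustrative, not the actual data:

```python
import math

def t_value(p1: float, p2: float, n1: int, n2: int) -> float:
    """t-value for the difference between two independent percentages."""
    se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return abs(p1 - p2) / se_diff

CRITICAL_90 = 1.65  # approximate two-tailed critical value, 90% confidence

# Illustrative percentages consistent with the article's reported
# Standard Errors; the actual survey values were not published.
tests = {
    "Is a Brand I Can Trust (diff = 6 pts)":  (0.47, 0.53),
    "Is Unique and Different (diff = 4 pts)": (0.07, 0.11),
}
for label, (p1, p2) in tests.items():
    t = t_value(p1, p2, 300, 300)
    verdict = "significant" if t > CRITICAL_90 else "NOT significant"
    print(f"{label}: t = {t:.2f} -> {verdict}")
```

Running this yields t-values of about 1.47 and 1.72, reproducing the counterintuitive result: the 6-point difference is not significant while the 4-point difference is.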

Hopefully this takes some of the mystery out of stat testing and helps in understanding why what can appear to be anomalous results may actually be correct.

Categories: Special Feature