Research Digest

How Brands Can Level Up Their Creative Testing For Performance

Abstract

  • Brands should test creative on a quarterly cadence, assessing creative performance every 30 days across visit rates, conversion rates, CPA targets, and ROAS.
  • The gold standard of creative testing allocates between $10,000 and $15,000 in spend against each creative set, with three different sets as a minimum.
  • Prioritize testing look and feel first before drilling down into other creative elements like the call-to-action.

We speak with MNTN Lead Customer Success Manager John Pankowski about creative testing on Connected TV and how it relates to performance.

Developing solid Connected TV creative is only half the equation; the best way to ensure it drives strong ROI for your campaign is to test and iterate. Brands that fail to adopt this mindset risk wasting precious resources producing endless creative that doesn’t move the needle. What better way to learn how to set up the ideal testing schedule than through the teams responsible for campaign performance? MNTN Research sits down with MNTN’s own Lead Customer Success Manager, John Pankowski, who details what brands need to think about when developing their creative.

How Should Brands Approach Creative Testing?

It really comes down to defining the brand’s objectives, as these influence the testing schedule. Brands need to ask themselves: “What is the overall objective of this creative?” For example, is it more awareness-focused or direct response? Are they promoting a product outside their bread and butter? The brands that align on this, and that come in with clear objectives and metrics they want their creative to achieve, are the most successful.

Describe the Ideal Creative Testing Schedule.

We’re always looking at a 30-day period where we put the creative together and test each set equally. We look at which creative performs best across visit rates, conversion rates, CPA targets, and ROAS, and provide brands with a recommendation for the next month. For optimal results, brands should operate on a quarterly creative testing cadence (a rough sketch of the monthly read-out follows this list):

  • Month One: Test creative
  • Month Two: Make iterations and optimizations
  • Month Three: Implement new creative and test it against some of the better-performing legacy creatives
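
As a rough illustration of that monthly read-out, here is a minimal Python sketch that summarizes each creative set across the four metrics above and flags a recommendation for the next month. The data, field names, and CPA target are hypothetical, not MNTN’s actual reporting.

    # Minimal sketch of a 30-day creative read-out; data, field names,
    # and the CPA target below are hypothetical.
    creative_sets = {
        "set_a": {"impressions": 500_000, "visits": 4_100, "conversions": 220,
                  "spend": 12_000.0, "revenue": 30_800.0},
        "set_b": {"impressions": 500_000, "visits": 3_600, "conversions": 180,
                  "spend": 12_000.0, "revenue": 23_400.0},
        "set_c": {"impressions": 500_000, "visits": 4_400, "conversions": 198,
                  "spend": 12_000.0, "revenue": 25_700.0},
    }

    def summarize(stats: dict) -> dict:
        """Derive visit rate, conversion rate, CPA, and ROAS for one set."""
        return {
            "visit_rate": stats["visits"] / stats["impressions"],
            "conv_rate": stats["conversions"] / stats["visits"],
            "cpa": stats["spend"] / stats["conversions"],
            "roas": stats["revenue"] / stats["spend"],
        }

    summaries = {name: summarize(s) for name, s in creative_sets.items()}

    # Recommend next month's lead creative: highest ROAS among sets that
    # hit a (hypothetical) $60 CPA target, falling back to all sets.
    eligible = {n: m for n, m in summaries.items() if m["cpa"] <= 60.0}
    pool = eligible if eligible else summaries
    winner = max(pool, key=lambda n: pool[n]["roas"])
    print(f"Recommended lead creative for next month: {winner}")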

One of our clients, an education provider, saw a 5% improvement in their CPA. Another client, in commercial real estate, saw a 3.15% improvement in their visit rate and an 11% improvement in their CPA. We see swings of anywhere from 3% to 15% on target goals when brands are testing creative effectively.

What Vertical-Specific Best Practices Do You Recommend for Moving the Needle on Performance?

Since brands and verticals each have their own nuances, I suggest instead looking at historical performance and incorporating the elements that worked into your creative. For example, one sporting goods retailer we work with has seen from past performance that featuring talent has a strong impact on their campaigns. They developed two sets of creative, one with a ‘regular’ golfer and another with a golfer who was more high-profile on social media, and the latter creative performed better.

However, testing one element isn’t enough. The best approach is to test multiple variables and iterate consistently. Brands often think it’s a one-and-done exercise, but we recommend they think long term and test a number of variables across six months to a year, such as:

  • Talent
  • Animation
  • Video

We can then refine creative performance through that lens. Brands that do this have more data points to optimize against. The more hypotheses a brand has, the better: we can test each one in the form of creative and continue to iterate on it.

Is There a Sweet Spot or Gold Standard for Creative Testing That Works for Brands?

Reaching statistical significance depends on a brand’s typical conversion volume or rate, which varies by brand. However, a good rule of thumb is allocating between $10,000 and $15,000 in spend against each creative set, with three different sets as a minimum. Each of these creatives should have its own look, feel, and sentiment. Running all of these assets within a 30-day period gives enough directional data to gauge whether one creative set has a better visit rate and conversion rate than the other two. The delta between these assets will give you a pretty good indication of your highest-performing creative.
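
To make the significance point concrete, here is a small sketch of a textbook two-proportion z-test on the conversion-rate delta between two sets. The volumes are hypothetical and this is a standard statistical check, not MNTN tooling; at modest conversion volumes, even a visible rate gap can still sit inside the noise, which is why the directional read over a full 30 days matters.

    # Two-proportion z-test: is the conversion-rate delta between two
    # creative sets more than noise? Hypothetical numbers; uses only the
    # standard normal approximation.
    from math import erf, sqrt

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Return (z statistic, two-sided p-value) for rates conv/n."""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Roughly the visit volume that $10K-$15K per set might buy; at this
    # scale a half-point rate gap is still within the noise.
    z, p = two_proportion_z(conv_a=220, n_a=4_100, conv_b=180, n_b=3_600)
    print(f"z={z:.2f}, p={p:.3f}")  # z=0.72, p=0.470 -> not significant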

If you want to get more detailed with specific A/B testing, then you need to limit the other variables in play and run a true A/B test where, across that subset, only one variable changes within an otherwise identical creative set. This eliminates the other factors: you don’t want three different looks and feels and three different calls-to-action in the same test. Keeping the same look and feel while changing only one variable ensures you get a cleaner read on what’s performing better.
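
As an illustration of that discipline, here is a minimal sketch (with illustrative field names, not a real framework) that refuses to call a comparison a true A/B test unless exactly one variable differs between two otherwise identical variants:

    # Guard for a "true A/B" test: two variants must be identical except
    # for one field. Field names are illustrative only.
    from dataclasses import dataclass, asdict

    @dataclass(frozen=True)
    class Creative:
        look_and_feel: str
        talent: str
        voiceover: str
        cta: str

    def changed_fields(a, b):
        """List the fields that differ between two creative variants."""
        da, db = asdict(a), asdict(b)
        return [k for k in da if da[k] != db[k]]

    control = Creative("bright, energetic", "pro golfer", "male", "Shop Now")
    variant = Creative("bright, energetic", "pro golfer", "male", "Sign Up Now")

    diff = changed_fields(control, variant)
    assert len(diff) == 1, f"{len(diff)} variables differ: {diff}"
    print(f"Valid A/B test: only {diff[0]!r} changes")  # -> only 'cta' changes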

Is There One Variable That You Suggest Brands Start With, and Why?

I recommend brands start by testing look and feel first, as this is where you will see the biggest changes in performance. For example, feature male talent in one ad, female talent in another, and a combination of both in a third. Once you’ve determined the highest-performing element, you can use it as a base to A/B test other elements like title cards, colors, or voiceover.

We’ve found that an A/B test between a ‘Shop Now’ and a ‘Sign Up Now’ call-to-action is probably not going to deliver as much incremental lift as a brand-new creative look and feel within a particular ad. That tracks with human behavior: people respond to the sentiment or emotion a creative evokes, and that response is more telling than the call-to-action.
