Optimizing Search and Digital Marketing With Multi-Channel Attribution Modeling
The success of any one digital marketing channel often owes much to assists from other channels. ClickZ Live San Francisco's session "Optimizing Search and Digital Marketing With Multi-Channel Attribution Modeling," prepared by Crispin Sheridan, was a good opportunity to give our SEW readers the SAP viewpoint on how the company built its attribution models and the impact they had on its business. Unfortunately, Sheridan could not attend the session due to illness; he was replaced by a well-known advocate in the search industry, Bill Hunt of Back Azimuth Consulting.
SAP made the switch to attribution modeling based on the needs of its business and a close look at its customer journey. Long sales cycles, multiple individuals interacting with every media type across multiple platforms, and significant spend on both pull and push marketing were the driving factors. The team knew that every assist from their channels, from first touch to last, mattered, but how much? Where does the customer journey start, where does it end, and what investment mix drives maximum revenue?
By looking at their tactics, they found that last-click attribution unfairly assigns credit to direct or pull channels (for example, SEM), while first-click attribution unfairly assigns credit to awareness or push channels (for example, display ads). After compiling a report from each siloed marketing tactic, the results were stunning: each division of digital marketing was claiming credit for leads that weren't necessarily attributable to it, and, even worse, the sum of the claimed leads was very different from the actual number.
Once the SAP team saw how much the lack of attribution modeling was distorting their reporting, they had to review which types of attribution models would work for them. Laying out the model types helped them decide how they would assign credit and why. The three attribution methods to consider are:
Linear: splits credit evenly across all touches
Weighted: assigns credit on a curve, requiring judgment calls
Recency: uses time stamps to assign credit based on how much time has passed since each touch
Each one, they found, has its benefits as well as its challenges.
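To make the three models concrete, here is a minimal sketch of how each might assign conversion credit across a touch path. The position weights and the seven-day half-life are illustrative assumptions, not SAP's actual parameters, and the journey data is hypothetical.

```python
from collections import defaultdict
from datetime import datetime

def linear(touches):
    # Linear: every touch gets an equal share of the conversion credit.
    credit = defaultdict(float)
    for t in touches:
        credit[t["channel"]] += 1.0 / len(touches)
    return dict(credit)

def weighted(touches, first=0.4, last=0.4):
    # Weighted: credit on a curve; the first/last weights are the
    # judgment calls (0.4 each here is an illustrative assumption).
    n = len(touches)
    middle = (1.0 - first - last) / (n - 2) if n > 2 else 0.0
    weights = [first if i == 0 else last if i == n - 1 else middle
               for i in range(n)]
    total = sum(weights)  # normalize so short paths still sum to 1
    credit = defaultdict(float)
    for t, w in zip(touches, weights):
        credit[t["channel"]] += w / total
    return dict(credit)

def recency(touches, converted_at, half_life_days=7.0):
    # Recency: weight each touch by how recently it occurred before the
    # conversion (exponential time decay), then normalize to sum to 1.
    weights = [0.5 ** ((converted_at - t["when"]).days / half_life_days)
               for t in touches]
    total = sum(weights)
    credit = defaultdict(float)
    for t, w in zip(touches, weights):
        credit[t["channel"]] += w / total
    return dict(credit)

# A hypothetical three-touch journey ending in a conversion.
path = [
    {"channel": "display", "when": datetime(2014, 4, 1)},
    {"channel": "email",   "when": datetime(2014, 4, 5)},
    {"channel": "sem",     "when": datetime(2014, 4, 8)},
]
converted_at = datetime(2014, 4, 8)
```

On this path, the last-touch SEM click earns one-third of the credit under the linear model but close to half under recency, which is exactly the kind of spread that makes choosing a model a judgment call.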
It was not only macro media-mix modeling that came into play here. Beyond determining which attribution models would work for the organization, they also had to look at the micro attribution of each digital marketing tactic, since one change can greatly influence the measured impact of another.
By testing and learning on the micro level, they were able to understand how channel overlap impacts conversion rates, the true reach and frequency across touch points, and the contribution of each action, channel, and message, as well as the optimal sequence of channel exposure to drive conversions.
The first requirement of attribution modeling is good tagging governance and processes. It's critical that the master tag spans all digital touch points and that the tag management system you use lets you tag each individual component correctly, so you can run multivariate tests when optimizing the micro attributes.
Next, every touch's contribution needs to be indexed against the last click.
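The presentation didn't spell out the exact formula, but one way to read this indexing step is to compare each channel's modeled credit against the credit last-click attribution would give it, with 100 meaning parity. The figures below are hypothetical, used only to show the arithmetic.

```python
def contribution_index(model_credit, last_click_credit):
    # Index each channel's modeled conversion credit against its
    # last-click credit: 100 = parity, above 100 = the channel is
    # undervalued by last click, below 100 = overvalued.
    index = {}
    for channel, credit in model_credit.items():
        baseline = last_click_credit.get(channel, 0.0)
        index[channel] = round(100 * credit / baseline) if baseline else None
    return index

# Hypothetical credited conversions per channel under each scheme.
modeled    = {"sem": 570, "display": 200, "email": 230}
last_click = {"sem": 500, "display": 250, "email": 250}
print(contribution_index(modeled, last_click))
# {'sem': 114, 'display': 80, 'email': 92}
```

Read this way, a channel indexing well above 100 is one that last-click reporting was shortchanging, which is the pattern SAP later saw with SEM as an introducer.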
Initially, SAP found that they might be buying more exposure than necessary, there was a lot of channel overlap, and that optimizing the volume would net better results.
Once this was done, they were able to determine the frequency of exposure and conversion rate of each specific tactic. This finally produced the report they'd been waiting for, which answered the question of what impacts conversion, and to what degree.
This led to reallocating budget from banners to SEM. Surprisingly, they found that banners act more as closers, while SEM acts more as an opener. SEM gained 14 percent additional conversion credit because it indexes very high as an introducer and has a high overlap with other vehicles, and introducers get the majority of the credit from the models.
The Test Lab
With the attribution modeling systems working, the next step in SAP's success was to put together a globally mandated test lab over a period of two years. From the first 80 tests launched, they generated more than 25,000 incremental inquiries, increased their lead value, and gained more than 100 important insights.
5 Key Optimization Discoveries From the Test Lab
1. Turn heroes into action heroes
This was a multivariate test: in addition to the button in the hero, SAP tested body copy with images versus without, and then tested a more human-oriented image for the hero background. The big win was getting an offer (CTA) into the hero and injecting clear action into that key area. The more human touch on the hero image didn't help, and whether or not the body content had thumbnail images made no meaningful difference to response or conversion. Getting the CTA into the hero led to a 19 percent increase in conversions.
2. Photographic imagery sharpens response
As SAP started to leverage more pictograms, questions poured in about what works best: pictograms or photographic images. Par for the data-driven course, SAP put the question to the test. Photographic images drove more response consistently across tests from various geographies, leading to a 46 percent increase in conversions.
3. Great results don’t have to be complex
The ultimate test is one that yields great gains with very little cost. SAP is always searching out quick wins and low-hanging fruit. In the example that was provided, a simple copy change that leverages the word "download" – which reinforces a "tangible" resource that will always be accessible – helped drive 47 percent gains in the related lead form submission rate.
4. Make those CTAs easy to spot (and click)
CTAs are often the keys to conversion, so in its testing efforts SAP measured the impact of making them easier to find. The result was 7 percent more clicks, which, factoring in the snowball effect of syndicating the result across thousands of pages and dozens of country websites, has a profound impact on overall business value.
5. Test surprises happen, which is why we test
SAP has a mix of content offers that are gated (behind registration) and not gated (freely available). The organization thought it would be useful to give site visitors a visual cue as to what requires registration. When they tested the impact on their lead generation efforts, they found the visual cues actually suppressed registration rates; the theory in hindsight was that the distinctions fueled more effort to "seek out" the free content. Removing the cues led to a 17 percent increase in conversions.
In the end, the key takeaways from Sheridan's presentation were these:
Distinguish between macro and micro optimization – Correctly form a macro attribution model, then look at each tactic to create micro optimization and evaluate the impact.
Drive to your most relevant attribution model(s) – You may have been putting too many resources into the wrong marketing channel. Learn from the data and attribute your budgets correctly.
Ensure proper tagging structure (master tag) – Make sure your tagging is correctly implemented so that the data you are seeing is correct. You may also need a full-time "tag manager" role.
Apply multivariate modeling and testing to optimize the impact of each tactic – You will certainly find that by doing so, your conversions may dramatically increase.
All in all, once you’ve "wired up" your digital landscape and are collecting the data, the correct model or models will be easy enough for you to find.