In the first article in this two-part series, we demonstrated the importance of using an ad server for large-scale campaigns. Now we'll continue by taking a concrete look at this practice in action, in one of our media campaigns.
The goal of the campaign was to maximize subscriptions to a financial product. All our media led to a microsite landing page, but it was also possible to subscribe to the product directly from the client's main site. Due to the limitations of the website we were working with, we could only track soft conversions, defined as users opening a form. During the campaign, the conversion rate on the client's main site exploded. So we ran a correlation analysis to find out what was driving this increase in conversions, and identified total campaign impressions as the most influential factor. Here's a data comparison:
In analyzing the graphic, we immediately see a strong increase in requests on the site, despite the fact that no ads were driving traffic there. Next, we see that the variation in conversions correlates strongly with impressions: the higher the number of campaign impressions, the more conversions there were on the site. These were users (whether or not they clicked on our ads) who went directly to the client's website to subscribe to the product. The correlation coefficient was 0.82 for impressions, versus 0.65 for clicks. This clearly demonstrates that the ads a user sees have an impact on their behaviour. An analysis based solely on a campaign's post-click conversions fails to consider the influence an ad can have when it is seen but not clicked. That's why we decided to take an even closer look!
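For readers who want to run this kind of check themselves, here's a minimal sketch of how those coefficients can be computed. It assumes a hypothetical daily CSV export with impressions, clicks, and conversions columns; the file and column names are ours for illustration, not from the campaign's actual tooling.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical daily export of the campaign: impressions, clicks,
# and soft conversions (forms opened) on the client's main site.
df = pd.read_csv("campaign_daily.csv")

# Pearson correlation of each campaign metric against conversions.
r_impressions, _ = pearsonr(df["impressions"], df["conversions"])
r_clicks, _ = pearsonr(df["clicks"], df["conversions"])

print(f"impressions vs conversions: r = {r_impressions:.2f}")  # e.g. 0.82
print(f"clicks vs conversions:      r = {r_clicks:.2f}")       # e.g. 0.65
```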
The next step was to link the client's business objectives to our campaign results. There were two elements to consider: the requests the client received, and the requests that were ultimately accepted.
Since we were only able to measure soft conversions, we compared different metrics to the actual results achieved by the client.
The first set of data we analyzed allowed us to establish a correlation between the requests received by the client and our campaign.
*These are requests completed on the corporate site without clicking on an ad; the campaign itself directed clicks to the microsite.
The strongest correlation was found between the total number of forms started (on the client’s site and the microsite) and the total number of requests received. The correlation was 0.87, which is very high, but not terribly surprising. The main takeaway is that the variation in forms started is consistent with that of forms submitted. We can also conclude that the campaign impressions, clicks, and expenses were not the most influential factors.
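To make that comparison concrete, here's a sketch of how several candidate drivers can be ranked against the requests received. It assumes a hypothetical weekly dataset; the column names are illustrative.

```python
import pandas as pd

# Hypothetical weekly data: campaign metrics alongside the client's
# total requests received (main site + microsite combined).
df = pd.read_csv("weekly_results.csv")

candidates = ["forms_started_total", "impressions", "clicks", "spend"]

# Correlate each candidate metric with requests received and rank them;
# in our case, total forms started came out on top at 0.87.
correlations = df[candidates].corrwith(df["requests_received"])
print(correlations.sort_values(ascending=False))
```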
The most interesting insight came when we analyzed the accepted requests, because in the end, what matters most are the client's actual sales.
Right away we see that there is a low correlation with the forms initiated on the microsite. From this we can deduce that a higher percentage of the requests sent in through the microsite were not approved, compared with those that came in through the main site. Does this mean that the campaign attracted less qualified users? We demonstrated earlier that campaign impressions had an impact on the number of requests coming through the client's main site, which went up during the campaign period. In other words, many of the users influenced by the campaign simply converted on the main site rather than on the microsite. The answer, therefore, is no. We also see here that the greatest correlation was with campaign impressions. Here's a visual representation of that correlation:
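Beyond the visual check, the deduction about approval rates can also be tested directly, assuming a hypothetical request-level export that records each form's origin and outcome:

```python
import pandas as pd

# Hypothetical request-level export: one row per request, with the
# origin of the form ("main_site" or "microsite") and an accepted flag.
requests = pd.read_csv("requests.csv")

# Acceptance rate by origin: a lower rate on the microsite would
# confirm that proportionally more microsite requests were declined.
acceptance_rate = requests.groupby("origin")["accepted"].mean()
print(acceptance_rate)
```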
Through this analysis we were able to demonstrate the influence our ads (clicked and unclicked) had on the client's business results. By comparing the number of accepted requests each week during the campaign to the number of accepted requests in the six weeks prior to launch, we were able to attribute 282 requests to the campaign. This number allowed the client to calculate the ROI of the campaign, as well as the incremental cost per accepted request. In the case analyzed, the effect was seen within two weeks of launch. A very attractive incremental impact for the client!
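The arithmetic behind that attribution is a simple pre/post baseline comparison. Here's a sketch with illustrative weekly counts, chosen so they reproduce the 282-request uplift; they are not the client's actual figures, and neither is the spend.

```python
# Weekly accepted requests: six weeks before launch vs. the campaign period.
# These numbers are illustrative, not the client's real data.
baseline_weeks = [118, 121, 115, 119, 122, 117]
campaign_weeks = [140, 160, 172, 170, 176, 176]

# The average weekly volume before the campaign serves as the baseline.
baseline_avg = sum(baseline_weeks) / len(baseline_weeks)

# Incremental accepted requests: total uplift over the baseline.
incremental = sum(campaign_weeks) - baseline_avg * len(campaign_weeks)
print(f"Incremental accepted requests: {incremental:.0f}")  # 282

# With the media spend, the client can derive the incremental cost
# per accepted request (spend figure below is hypothetical).
media_spend = 25_000
print(f"Incremental cost per accepted request: {media_spend / incremental:.2f}")
```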
The systems currently in place don't allow us to do a precise calculation of accepted requests by type of media placement, so the detailed analysis above is based in large part on an analysis of trends. One of the difficulties in getting there is that we need to connect the data collected by our ad servers to our analytics solution. Analytics solutions are mainly concerned with tracking the source that drove the user to the site, not whether the user saw an ad beforehand; that information, however, is essential for optimizing a display campaign. In the meantime, we have started inserting a campaign conversion pixel on the version of the form that lives on the client's main site, in order to optimize against an additional conversion point.
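To illustrate why that connection matters, here's what the missing join might look like if the ad server and the analytics solution shared a common user identifier. All file and column names here are hypothetical; this is precisely the linkage today's systems don't provide.

```python
import pandas as pd

# Hypothetical exports: ad-server impression logs and analytics
# conversions, each keyed by a shared user identifier.
impressions = pd.read_csv("ad_server_impressions.csv")   # user_id, placement, timestamp
conversions = pd.read_csv("analytics_conversions.csv")   # user_id, source, timestamp

# Flag conversions from users who were exposed to the campaign,
# whether or not they ever clicked an ad.
exposed_users = set(impressions["user_id"])
conversions["saw_ad"] = conversions["user_id"].isin(exposed_users)

# Conversions split by prior ad exposure: the view-through signal
# that source-based analytics alone can't surface.
print(conversions.groupby("saw_ad").size())
```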
A buzzword that we’re hearing a lot these days is “cross-device attribution.” Remember that we’re targeting users on multiple devices (mobile, tablet, laptop, etc.). If we could reliably link different devices to a single user, it would be amazing!
In short, we are already looking forward to seeing this technology evolve, to help us become even more efficient at optimization!