Bad data misleads marketers into making poor decisions, and up to 80% of the information brands feed into analytics tools is “not fit for purpose”. That was the stark warning delivered to delegates at Econsultancy Live by Chris Liversidge, CEO and founder of QueryClick.
His presentation laid out a scenario familiar to many marketers. With the vast majority of digital advertising pounds going into the walled gardens of Facebook and Google (including YouTube), it can be very difficult to collate data so different channels can be accurately reported on. Graphs displayed for delegates showed a typical story of how real-life campaigns run through different analytics packages give very different results for each channel.
Liversidge’s point is that marketers have no way of knowing which tool is correct, and so end up intuitively assuming the truth sits somewhere between the overstated result one analytics package offers and the understated result of another. The question remains, though: what is the real picture?
Stitching streams together
Seeing that real picture, according to Liversidge, requires solving the clear problem of analytics packages using session information that is incomplete or corrupted. The way modern customers shift between devices, sometimes being logged in, sometimes not, makes it hard for tools to give accurate figures. Yet marketers still make important budget decisions based on those inaccurate results.
Because of this, campaigns that are not performing well and channels that are not delivering a return on ad spend may appear worthy of extra budget, often at the expense of those which are actually doing a much better job than the tools in use might suggest.
So, instead of relying on data-driven analytics packages, which use well-established methodologies to piece together a customer journey, Liversidge has spent the past seven years using machine learning to rebuild lost connections in the path to conversion. It was this “broken data”, he believed, that would hold the answer to a fuller picture. He described the process as “stitching together the various sessions that are representative of broken sessions” so marketers can “understand an individual behind the devices they are using to trigger the pixels in analytics packages”.
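To make the idea of session stitching concrete, here is a minimal, deterministic sketch in Python. It is not QueryClick’s method (which the talk describes as machine learning based and probabilistic); it simply shows the shape of the problem: sessions fragmented across devices and logged-in/logged-out states being grouped back into a single per-person journey. All field names (`device_id`, `user_id`, `channel`) are hypothetical.

```python
from collections import defaultdict

def stitch_sessions(sessions):
    """Group fragmented sessions into per-user journeys.

    Each session is a dict with a 'device_id', an optional 'user_id'
    (present only when the visitor was logged in), and a 'channel'.
    """
    # First pass: map each device to a known user via any
    # logged-in session seen on that device.
    device_owner = {}
    for s in sessions:
        if s.get("user_id"):
            device_owner[s["device_id"]] = s["user_id"]

    # Second pass: assign every session to a user where possible,
    # falling back to an anonymous per-device bucket.
    journeys = defaultdict(list)
    for s in sessions:
        user = s.get("user_id") or device_owner.get(s["device_id"])
        key = user if user else f"anon:{s['device_id']}"
        journeys[key].append(s)
    return dict(journeys)

sessions = [
    {"device_id": "phone-1", "user_id": None, "channel": "paid_social"},
    {"device_id": "laptop-1", "user_id": "u42", "channel": "search"},
    {"device_id": "phone-1", "user_id": "u42", "channel": "email"},
]
journeys = stitch_sessions(sessions)
# The anonymous phone-1 session is linked to u42, because phone-1
# was later seen logged in as u42 — all three sessions form one journey.
```

A real system would also have to handle shared devices, cookie churn and identity conflicts, which is where the machine-learning approach Liversidge describes comes in.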
This work has included collaborating on a deep-learning neural network which, he claims, can take a more granular dive into statistics and release them from “data silos”. The progress made was shown through charts for a real campaign for a well-known supermarket which showed which channels had been performing better than others and so informed future decisions on the reallocation of budget to where return on advertising spend (ROAS) could be improved.
Three steps to better attribution
Breaking away from the most widely used analytics packages involves a three-stage process, Liversidge explained. In the rebuild phase, a more accurate tool using machine learning needs to rebuild how it handles click streams to “see the individual behind the devices more clearly”. In the unification stage, data sets from various channels and silos need to be combined and evaluated alongside each other. Only then can the tool move into the attribution stage and ensure channels and campaigns can be given credit for their role in a conversion, allowing marketers to get a “fully featured, full-pictured view” of where budget should be prioritised.
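The attribution stage of that three-step process can be sketched with a simple multi-touch model. Linear attribution (splitting a conversion’s value evenly across every channel that touched the journey) is used here purely as an illustration; the talk does not say which weighting Liversidge’s tool applies, and the data shapes are hypothetical.

```python
from collections import defaultdict

def attribute_conversions(journeys, value_per_conversion=100.0):
    """Split each converting journey's value evenly across the
    channels that appear in it (linear multi-touch attribution).

    `journeys` maps a user key to an ordered list of touchpoints,
    each a dict with a 'channel' and a 'converted' flag.
    """
    credit = defaultdict(float)
    for touchpoints in journeys.values():
        if not any(t.get("converted") for t in touchpoints):
            continue  # no conversion, no credit to hand out
        share = value_per_conversion / len(touchpoints)
        for t in touchpoints:
            credit[t["channel"]] += share
    return dict(credit)

journeys = {
    "u42": [
        {"channel": "paid_social", "converted": False},
        {"channel": "search", "converted": False},
        {"channel": "email", "converted": True},
    ],
}
print(attribute_conversions(journeys, value_per_conversion=90.0))
# → {'paid_social': 30.0, 'search': 30.0, 'email': 30.0}
```

Under a last-click model only email would get credit here; a multi-touch model surfaces paid social’s role early in the path, which is exactly the kind of reallocation signal the unified view is meant to provide.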
While the outcomes and recommendations will vary between clients and campaigns, Liversidge pointed out his work identifies two channels delegates should tap into and seek to combine. He described paid social as “a fantastic contributing source of new customers into (early) conversion paths”. At the same time, an increase in attention for video on demand (VOD) provides a big opportunity because increased inventory levels mean campaigns can be bought for around half the price of linear television.
The post Attribution: ‘Stitching’ sessions together gives a unified view on ad spend across Facebook and Google appeared first on Econsultancy.