Every marketing campaign has similar core components. Metrics are the measure of success, and they’re usually applied against a framework for execution. Behind it all is a data attribution model: the mechanism for tracking conversions. In theory, the attribution model informs execution and gives context to the metrics that validate (or dispute) proof of concept.
Except, this isn’t always the case.
It’s possible for a flawed attribution model to inform a successful campaign. Think of it like a child’s shape-sorting game. The goal is to put the square block through the square hole… but that square block might also fit through the hexagonal slot, or other openings when forced. Just because it works doesn’t mean it’s right. Worse still, what you learn isn’t valuable because it’s fundamentally flawed.
The lesson here is that Marketers and Media Strategists can’t delve into data and take findings at face value. To validate that data and the lessons it offers, you need to make sure the attribution model fits. You might find that what your data says isn’t actually what it means.
Select an attribution model with care
As a Marketer or Media Strategist, it’s important to look beyond what data says about performance and instead analyze the models used to inform the approach responsible for generating that data. Are the results you’re measuring indicative of the right intent based on your approach? Or, put more simply: are you putting the square peg in the square hole?
Doing this means delving deeper into the data beyond the points that affirm or refute your KPIs within the context of your attribution model. For example, last-touch attribution might tell you where and how your audience decides to act, but it tells you very little about how they got to that point. Conversely, first-touch might give you a plethora of data about where you reach your audience, but it doesn’t account for the buyer journey.
More often than not, JXM works on an assumption of multi-touch attribution. We find it’s more inclusive of total campaign performance. It allows us to leverage first- and last-touch data in context—to figure out where the buyer journey starts and ends, and to better understand the various touchpoints that influence decision-making along the way.
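To make the difference concrete, here’s a minimal sketch of how these three models assign credit for a single conversion path. The channel names and journey are hypothetical, and the equal-weight (linear) rule stands in for multi-touch generally; real multi-touch models often weight touchpoints by position or time.

```python
# Sketch: crediting one conversion across an ordered list of touchpoints
# under three common attribution models. Channel names and the linear
# (equal-weight) multi-touch rule are illustrative assumptions.

def first_touch(path):
    # All credit goes to the touchpoint that started the journey.
    return {path[0]: 1.0}

def last_touch(path):
    # All credit goes to the touchpoint immediately before conversion.
    return {path[-1]: 1.0}

def linear_multi_touch(path):
    # Equal credit to every touchpoint along the journey.
    credit = {}
    for channel in path:
        credit[channel] = credit.get(channel, 0.0) + 1.0 / len(path)
    return credit

journey = ["display_ad", "organic_search", "email", "paid_search"]

print(first_touch(journey))        # display_ad gets full credit
print(last_touch(journey))         # paid_search gets full credit
print(linear_multi_touch(journey)) # each channel gets 0.25
```

Same journey, three different “truths”: the model you pick decides which channels look like winners, which is exactly why it has to fit the question you’re asking.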
Help your data tell you better truths
Looking below the surface-level story your data tells you means taking a step back to understand your attribution model in the context of your objective. Are you seeing clicks because your CTA is packed with incentive… or are you just targeting a better audience? Do your ads have impeccable timing because of when they appear or where they get views?
There are a multitude of questions worth asking in determining the effectiveness of your campaign against the attribution model you’ve ascribed to it. Moreover, it’s likely your metrics will evolve over time. Does that mean the way you track those metrics and understand that data needs to change as well? Definitely.
If your mind is already spinning, hold on to something bolted down. We’re here to tell you that validating your attribution model is about to get even more complex. The internet is, after all, an ever-changing medium.
New ways of thinking about data attribution
Google Analytics 4 recently rolled out, along with the news that Universal Analytics will sunset in 2023. While Marketing professionals have decried the loss of their beloved analytics dashboards, they’re quickly coming around to what GA4 promises to offer in terms of improved data attribution—specifically, behavioral modeling and custom channel grouping. These tools promise to fill the gaps left by a lack of cookies or IP identifiers, and they could play a vital role in contextualizing data against attribution models like never before.
GA4 is a preemptive move toward validating data attribution in an era where consumer privacy is a hot-button topic. There’s concern over a potential future where third-party cookies are rendered useless by strong privacy settings (someday, right?). With no way of attributing data, long-trusted models become antiquated, and with their demise comes a loss of understanding about what works, what doesn’t, and why.
These changes to attribution modeling, whenever they come to pass and however they reshape the digital landscape when they do, prompt a bigger question for Marketers today. That question is simply: “what are you trying to learn about the buyer’s journey?”
Put the square block in the square hole
As kids, it doesn’t take us long to realize that the square block goes in the square hole. As Marketers and Media Strategists, it often takes quite a bit longer to apply this thinking to data attribution—especially if we’re seeing generally positive results. It takes a critical mind to ask, “what does this mean?” in the context of how you measure data—not just in the sense of validating intent.
At JXM, our focus on data attribution stems from an incessant need to question what we’re doing. Why are we doing it? How are we doing it? How are we measuring it? Is it working? And, most importantly: why is it working? The more we know about what we’re doing, the better we can do it.