For over a decade, digital publishers have wrestled with an existential strategic question: Should we pursue consumer or advertising revenue as our primary revenue stream? In 2017, that question, and the tradeoff it implies, will be rendered obsolete by the widespread adoption of machine learning and of predictive and anticipatory analytics. By creating a dynamic meter among publishers, their readers, and their advertisers, these algorithms have the potential to transform how the publishing industry generates revenue.
One exciting side effect of building a dynamic meter is that it puts the entire organization’s emphasis on each individual story. If a story isn’t of high journalistic or engagement value, it becomes much harder to build a business model around it. This moves publishers away from the all-or-nothing pursuit of scale at the expense of depth for advertising models, or loyalty at the expense of reach for consumer revenue models. Each article has to stand on its own. This dramatically changes the calculation that reporters and editors make in determining whether to cover a story.
All of it makes the notion of having binary on-or-off paywalls and press releases touting “10 free articles a month” seem antiquated.
Most publishers still identify their articles through the traditional tagging mechanisms of People, Places, and Topics. While these tags are helpful for categorizing the “what” of our journalism, they are unsophisticated tools for categorizing the “why” of our journalism. For example, when we tag a story as People: Donald Trump, Places: Washington, D.C., and Topic: Politics, we now know how we can present that article to our reader — but we don’t know why that story exists.
By mining the text of an article using natural language processing and seeking out complex patterns, machine-learning tools like IBM’s AlchemyAPI can go deeper in describing the emotional drivers of that story. Perhaps the article is really about a reader feeling outrage, or about feeling like the underdog. So while it may seem at first glance to be just another story about the Trump transition team, machine learning may reveal that the story has more in common with a sports underdog story about Cleveland’s baseball team.
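To make the idea concrete, here is a minimal sketch of scoring a story’s emotional drivers. It uses a tiny, hypothetical keyword lexicon purely for illustration; a real service such as AlchemyAPI would apply trained NLP models rather than keyword matching, and the emotion names and words below are assumptions, not part of any actual product.

```python
import re

# Hypothetical, tiny emotion lexicon -- illustrative only. A production
# system would use trained classifiers, not keyword lists.
EMOTION_LEXICON = {
    "outrage": {"scandal", "corrupt", "betrayed", "fury"},
    "underdog": {"comeback", "longshot", "defied", "odds"},
    "hope": {"renewal", "promise", "breakthrough", "rebuild"},
}

def emotional_drivers(text):
    """Return a score per emotion: the fraction of its lexicon found in the text."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {emotion: len(words & keywords) / len(keywords)
            for emotion, keywords in EMOTION_LEXICON.items()}

def dominant_emotion(text):
    """Return the emotion with the highest score for this article."""
    scores = emotional_drivers(text)
    return max(scores, key=scores.get)
```

Under this sketch, a transition-team story written in the language of longshots and comebacks would score as "underdog" rather than as generic politics, which is exactly the reclassification described above.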
It’s Kurt Vonnegut’s theory of storytelling — that every story follows a consistent emotional pattern — brought to life on every article.
Once publishers can identify the emotional response an individual article elicits in a reader, they can begin to understand reader patterns and how eliciting specific emotions could create measurable value for each unique visitor.
Predictive analytics have the potential to increase revenue for publishers on an article basis while reducing overall cost-per-acquisition. Analytics can also help better convert visitors to subscribers, and most importantly, increase readers’ satisfaction.
Imagine a reader browsing the web on her smartphone while on a train heading into work. She clicks on a link through Reddit and arrives on your news site, where she is served a paywall. Using predictive analytics, you can be quite certain that this Reddit mobile reader will not subscribe to your website. In fact, she may even post on Reddit just how much she despises your paywall. So, instead of wasting time trying to get that reader to subscribe, what other kinds of value could you exchange with her that would be of mutual benefit? Perhaps it’s an email newsletter signup form that could begin an inbound marketing relationship? Perhaps it’s a video preroll ad with a high CPM to generate maximum ad revenue? Perhaps it’s a prompt for the reader to “like” you on Facebook so that she can help expand your reach?
By looking at the data provided by past readers, publishers can predict what the ideal value exchange and conversion rate would be for any visitor arriving on any individual article from any referrer, any platform, at any time of day and then serve them a dynamic meter accordingly. Yes, achieving this will require an investment in data analysts, but there are already third-party tools on the market that could reduce the cost of implementation.
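The decision described above can be sketched as an expected-value comparison. Everything in this snippet is a stand-in: the conversion rates, dollar values, and the two-feature key (referrer, platform) are hypothetical, and a real predictive model would be trained over far more features of the visit.

```python
# Hypothetical historical subscription-conversion rates by referrer and
# platform, standing in for a trained predictive model.
HISTORICAL_SUBSCRIBE_RATE = {
    ("reddit", "mobile"): 0.0005,
    ("google", "desktop"): 0.02,
    ("homepage", "desktop"): 0.06,
}

SUBSCRIPTION_VALUE = 120.0     # assumed lifetime value of a subscriber ($)
NEWSLETTER_VALUE = 2.0         # assumed value of one email signup ($)
NEWSLETTER_SIGNUP_RATE = 0.05  # assumed signup rate for the prompt

def best_offer(referrer, platform):
    """Serve the value exchange with the highest expected revenue."""
    p_sub = HISTORICAL_SUBSCRIBE_RATE.get((referrer, platform), 0.01)
    expected = {
        "paywall": p_sub * SUBSCRIPTION_VALUE,
        "newsletter": NEWSLETTER_SIGNUP_RATE * NEWSLETTER_VALUE,
    }
    return max(expected, key=expected.get)
```

Under these assumed numbers, the Reddit mobile reader gets the newsletter prompt while a loyal homepage desktop visitor gets the paywall — the "dynamic meter" in miniature.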
Predictive analytics have one fundamental flaw: They’re based only on historical data. Anticipatory analytics, by learning on the fly, can adapt in real time to the conditions surrounding an article.
Remember that Reddit reader whose historical data suggest she is unlikely to subscribe? Well, what if the article she is clicking on were an exclusive investigation directly related to Reddit itself? What if the article were just beginning to gain traction in the digital platform ecosystem but had not yet been picked up by other news outlets? Would that Reddit reader be more likely to subscribe then?
Anticipatory analytics allow publishers to make value-exchange decisions in real time on each article. If they wished, publishers could wall off an article for subscribers only for precisely the window when it is of greatest value, and then “open it back up” once the value of an ad impression surpasses the value of potential subscription revenue.
Now that we understand how machine learning can identify emotional drivers within stories, how predictive analytics can identify the value exchange between publishers and their individual readers, and how anticipatory analytics can adjust the maximum value exchange on the fly, we can begin to envision how combining all three could become the holy grail for publishers.
For advertisers, a publisher can identify that an article eliciting hope generates higher interest from a Facebook visitor than one eliciting fear. Knowing that hope results in greater advertiser satisfaction when a brand is placed next to those stories, the publisher can raise the price of placing an advertisement next to that piece of journalism, or insert a relevant piece of sponsored content into that article, while the story is being amplified and accelerating in interest.
From a subscription standpoint, a publisher can adjust paywalls according to a visitor’s likelihood of subscribing. For example, if articles eliciting anger generate higher interest from a desktop homepage visitor than those eliciting sympathy, and that anger results in a higher subscription conversion rate while the story is gaining interest, the publisher could put a hard wall around that article for that unique moment in time.
None of these scenarios are mutually exclusive. We can combine machine learning, predictive, and anticipatory analytics to optimize the value exchanged from this reader, on this device, coming from this platform, on this article, at this exact moment in time. In other words, a dynamic meter.
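Pulling the three threads together, a dynamic meter might look like the sketch below: the article’s dominant emotion, the visitor’s predicted conversion rate, and a real-time trending signal feed one expected-value comparison. The emotion lift factors, dollar values, and offer set are all hypothetical.

```python
# Hypothetical lift each emotion gives to subscription conversion.
EMOTION_LIFT = {"anger": 1.5, "hope": 1.0, "sympathy": 0.8}

def dynamic_meter(emotion, base_subscribe_rate, trending_score, ad_cpm):
    """Choose 'hard_wall', 'ad', or 'newsletter' for a single pageview."""
    SUBSCRIPTION_VALUE = 120.0  # assumed subscriber lifetime value ($)
    NEWSLETTER_EV = 0.02        # assumed expected value of a signup prompt ($)
    # Combine the predictive signal (base rate), the machine-learned
    # emotion label, and the anticipatory trending signal.
    p_sub = base_subscribe_rate * EMOTION_LIFT.get(emotion, 1.0) * (1 + trending_score)
    options = {
        "hard_wall": p_sub * SUBSCRIPTION_VALUE,
        "ad": ad_cpm / 1000.0,
        "newsletter": NEWSLETTER_EV,
    }
    return max(options, key=options.get)
```

An anger-driven story surging with a loyal homepage visitor draws a hard wall; the same meter shows a low-propensity drive-by reader an ad instead.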
What could prevent this prediction from coming true is not technology, but organizational culture. The technology needed to pursue the dynamic meter already exists. The challenges of implementation are less technical than cultural, as they would require publishers to collaborate across departments. To maximize overall revenue, publishers may have to accept lower revenue on one department’s P&L or another. It also requires a serious investment in analytics, product, and technology at a time when budgets continue to shrink in newsrooms across North America. Finally, it requires publishers to unify around the common goal of putting their readers and their stories, not their own self-interest, first.
If collaboration, investment, and philosophy can align, then 2017 may be the year when a sophisticated, data-driven approach to revenue comes to fruition, and a dynamic value exchange among readers, advertisers, and publishers can be achieved.
David Skok is associate editor and head of digital editorial strategy at The Toronto Star.