A great wave of disruption — anchored in artificial intelligence, robotics, cognitive computing, and big data — is underway. As these technologies move from the fringe to the mainstream, they promise to forever change how news organizations work and what we think of as journalism.
In 2017, a critical mass of emerging technologies will start to converge, finding advanced uses beyond initial testing and applied research. That’s a signal worth paying attention to. Rather than focusing on digital media, technology, and journalism alone, I would encourage you to think like a futurist and to look far afield to gain perspective on your year ahead.
Our annual report lists 159 tech trends for the coming year, and a few dozen of them are devoted to news, media and publishing. Here are eight highlights:
Researchers at MIT’s CSAIL have trained computers not only to recognize what’s in a video, but also to predict what humans will do next. Trained on YouTube videos and TV shows such as The Office and Desperate Housewives, a computer system can now predict whether two people are likely to hug, kiss, shake hands, or exchange a high five. This research will someday enable robots to navigate human environments more easily, and to interact with us by taking cues from our body language. Soon, this kind of technology will enable news organizations to automatically compile video news stories without the direct involvement of human journalists.
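The underlying idea, predicting a likely next action from sequences of observed actions, can be illustrated with a toy frequency model. This is only a sketch: CSAIL’s actual system learns from raw video with deep neural networks, and the action sequences below are invented for illustration.

```python
from collections import Counter, defaultdict

# Invented training data: sequences of observed human actions, standing
# in for labeled video. Each list is one observed interaction.
sequences = [
    ["approach", "wave", "handshake"],
    ["approach", "open_arms", "hug"],
    ["approach", "wave", "handshake"],
    ["approach", "open_arms", "hug"],
    ["approach", "raise_hand", "high_five"],
]

# Count which action tends to follow which.
transitions = defaultdict(Counter)
for seq in sequences:
    for current, nxt in zip(seq, seq[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Return the most frequently observed follow-up action, or None."""
    if action not in transitions:
        return None
    return transitions[action].most_common(1)[0][0]

print(predict_next("open_arms"))   # hug
print(predict_next("raise_hand"))  # high_five
```

A real video-prediction model replaces these hand-labeled action tokens with features learned directly from pixels, but the prediction step, picking the most probable continuation of what it has seen so far, is conceptually the same.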
In short, an adversarial image is a photo with a tiny modification, usually one imperceptible to humans, created to probe and adjust machine learning models. For machine learning systems to learn, they must recognize subtle differences. For example, a computer scientist might slightly alter an image of a llama, changing something as tiny as a few scattered pixels, and fool the system into miscategorizing the image as something completely different, such as a shoe or a cup of coffee. When that happens, the system is adjusted and training continues. But adversarial images can also be used knowingly and purposely to trick a machine learning system. If an attacker crafts very slightly altered images against one model, those adversarial examples can then be deployed against other models. There are implications for any service that automatically tags our photos, such as Google and Facebook, and for every news organization that distributes content through them.
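To make the mechanism concrete, here is a toy sketch of the attack against a hypothetical linear classifier. Everything here is invented: the weights, the four-pixel “image,” and the perturbation size, which is exaggerated so the effect is visible; real attacks on image models nudge thousands of pixels far more subtly.

```python
# A toy linear "llama detector": score = w·x + b, label is the sign.
weights = [0.9, -0.4, 0.7, 0.2]   # hypothetical trained weights
bias = -0.5

def classify(pixels):
    """Return True ("llama") if the model's score is positive."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return score > 0

image = [0.8, 0.1, 0.6, 0.5]      # correctly classified as "llama"

# Fast-gradient-sign-style attack: nudge every pixel a small, fixed
# amount in the direction that lowers the model's score.
epsilon = 0.35                     # exaggerated for this toy example
sign = lambda w: 1 if w > 0 else -1
adversarial = [p - epsilon * sign(w) for w, p in zip(weights, image)]

print(classify(image))        # True  ("llama")
print(classify(adversarial))  # False (fooled)
```

The attack works because each pixel is pushed exactly the way the model is most sensitive to, so many tiny changes add up to a flipped decision, even though each individual change is small.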
Some organizations have begun to experiment with temporary products: limited-run newsletters, podcasts that last only a set number of episodes, live SMS offerings that happen only during events. In 2017, expect to see more temporary podcasts, newsletters, and chatbots deployed for a single event. This is a revenue and outreach opportunity, as they are vehicles for targeted, short-run advertising.
“Software as a service” is a licensing and delivery model in which users pay for on-demand access. It’s a trend I’m seeing in other industry sectors (health care, retail), and it’s a model that I believe could work for news. In fact, in the near future, it might just be an inevitability.
The central challenge within news organizations is that there are immediate, acute problems, but reasonable solutions will require long-term investments of energy and capital. The tension between the two always results in short-term fixes, like swapping out micro-paywalls for site-wide paywalls. In a sense, this is analogous to making interest-only payments on a loan without paying down the principal. Failing to pay down the principal means the debt, and the problem, sticks around longer. It doesn’t ever go away.
Transitioning to “journalism as a service” would enable news organizations to fully realize their value to everyone working in the knowledge economy: universities, legal startups, data science companies, businesses, hospitals, and even the big tech giants. News organizations that archive their content are sitting on an enormous corpus, data that can be structured, cleaned, and used by numerous other groups. How could you rethink news deployed as a service, offered in different kinds of parcels? Think of news stories; vetted, fact-checked mini-biographies for other sites and digital services (to replace Wikipedia); verified, searchable databases of people and organizations; an AI-powered service that automatically generates a short report of the opinions on a particular subject, along with a list of quoted experts; or a calendar plugin that summarizes the most important news events to pay attention to during the week. All of these services could work outside of the social media landscape, which means that news organizations would not have to share revenue or give away their content for free, but could charge for access.
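As a rough illustration of what “structured, cleaned” news data could look like on the wire, here is one hypothetical parcel serialized as JSON. Every field name and value below is invented; a real service would settle on its own schema.

```python
import json

# A hypothetical "parcel" from a journalism-as-a-service API: a vetted
# mini-biography, packaged as structured data rather than an article page.
parcel = {
    "type": "mini_biography",
    "subject": "Jane Doe",                 # invented subject
    "summary": "Hypothetical city council member, elected 2014.",
    "fact_checked": True,
    "sources": ["interview:2016-11-02", "council-records"],
    "license": "paid-api-access",          # access is charged, not free
}

payload = json.dumps(parcel, indent=2)
print(payload)
```

Packaging content this way is what would let universities, hospitals, or startups consume it programmatically, and what would let the publisher meter and charge for access instead of giving the material away on social platforms.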
We are entering an era of conversational interfaces. You can expect to talk to machines for the rest of your life. We’re already surrounded by conversational interfaces: Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, and Google’s watch and phone interface (“OK Google”), among others. Conversational interfaces can simulate the conversations that a reporter might have with her editor as she talks through the facts of a story. Bottable interfaces and platforms, such as Pandorabots and Chatfuel, will start to replace standard search and FAQs. Meanwhile, journalists will engage in conversations with machines to assist in their reporting. IBM Watson’s various APIs, including Visual Recognition, AlchemyLanguage, Conversation, and Tone Analyzer, can all be used to assist reporters with their work.
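A bare-bones sketch of how a bot might stand in for an FAQ page, using naive keyword matching. The intents and answers below are invented, and real platforms such as Pandorabots and Chatfuel use far richer pattern languages and NLP than this.

```python
# Invented FAQ entries: keyword tuples mapped to canned answers.
faq = {
    ("subscribe", "subscription"): "You can subscribe on our pricing page.",
    ("password", "reset"): "Use the 'Forgot password' link to reset it.",
    ("contact", "email"): "Reach the newsroom at tips@example.com.",
}

def reply(message):
    """Return the first answer whose keywords appear in the message."""
    # Naive tokenization: lowercase and split on whitespace, so
    # punctuation stays attached to words.
    words = set(message.lower().split())
    for keywords, answer in faq.items():
        if words & set(keywords):
            return answer
    return "Sorry, I don't know. Try rephrasing your question."

print(reply("How do I reset my password?"))
print(reply("I want to subscribe"))
```

Even this crude matcher shows the shape of the interaction: the user asks in natural language and gets an answer, instead of scanning a static FAQ page or search results.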
What happens when a government leaks a cache of sensitive information on WikiLeaks with the intent of destabilizing another nation? WikiLeaks becomes weaponized. In July 2016, WikiLeaks published 20,000 emails from the Democratic National Committee. By fall, the Obama administration had named Russia as the source of the hacked data, citing Russian President Vladimir Putin’s desire to influence the U.S. presidential election. Given the rising political and social tensions within the U.S., Europe, Russia, and the Middle East, we’re forecasting more leaks in the coming year. This presents new challenges for news organizations. To start, we’ve only ever had to contend with one major leak at a time. What happens when leaking starts to scale? Are news organizations prepared to investigate multiple leaks at the same time? What ethical questions will need to be weighed, given the current political climate? Better to make plans and decisions now, rather than under duress.
“Doxing” is mining and publishing personal information about a person; “organizational doxing,” a term introduced by security expert Bruce Schneier, is when this happens to an entire company. In the wake of the Edward Snowden leaks, we’ve seen a number of data dumps. WikiLeaks has published troves of data. Hackers broke into Hacking Team and published a massive amount of internal data. Sony has been breached, and so have various branches of the U.S. government. This isn’t about stealing credit card information, but rather about making public the personal details of individuals, whether to protest policies, to embarrass companies, or to blackmail them into paying big ransoms. Because of the success hackers had in 2016, we can expect more organizational doxing in the year ahead. Every single news organization ought to shore up its security and develop a risk management plan should it find itself doxed. I strongly recommend reading the “Organizational Doxing and Disinformation” blog post by Bruce Schneier.
Let’s be clear: it’s not because of the recent U.S. election that people suddenly developed this idea of fake news. And it isn’t just election-related fake news that’s being created. Humans have been spreading misinformation since we were first grunting at each other in caves. Fake news is a bigger and more complicated problem than most of us realize. One of the challenges has to do with data: what’s fake to one person may seem very real to someone else. As every research scientist knows, even empirical data is subject to outside interpretation once a project is reported in the media or talked about by non-scientists. And that’s compounded in the age of social media. We have machine learning algorithms that are simply performing their prescribed function: deliver us content that we’re likely to click on. Six years ago, we at FTI forecasted that this would be an emerging problem. I recommended to a consortium of newspapers that they develop a verification system, a simple line of code that would travel digitally wherever the news story did. At the time, there wasn’t yet a critical mass of problematic stories like we’re seeing today, and without an immediate need they didn’t feel a sense of urgency. I hope they feel it now.
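One way that verification idea could be sketched, assuming the publisher holds a secret signing key, is an HMAC token computed over the story text that travels with the article; any copy can then be checked against the publisher. The key, story, and scheme below are an invented illustration, not the system I proposed to the consortium.

```python
import hashlib
import hmac

SECRET_KEY = b"example-publisher-signing-key"  # hypothetical key

def sign_story(text):
    """Return a hex token binding this exact text to the publisher."""
    return hmac.new(SECRET_KEY, text.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def verify_story(text, token):
    """True only if the text is unmodified since it was signed."""
    return hmac.compare_digest(sign_story(text), token)

story = "City council approves new transit budget."
token = sign_story(story)

print(verify_story(story, token))                # True: genuine copy
print(verify_story(story + " (edited)", token))  # False: altered copy
```

The point is that the token invalidates itself the moment a single character of the story changes, so a syndicated or screenshotted copy can be checked against the original without trusting the site that republished it.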
Amy Webb is founder of the Future Today Institute and author of The Signals Are Talking: Why Today’s Future Is Tomorrow’s Mainstream.