April 7, 2009, 11 a.m.

The content cascade: How content will flow in digital news enterprises


Rather than trying to redefine “the basic unit of news” — it used to be the story; is it now the fact, the topic, the issue, or something else? — and what that implies for the work of journalists, going forward it will be most useful to think about content as a cascade, as in a stream running down a rocky glen, always moving, dividing, uniting, filling pools here and there, constantly finding new niches to fill.

The metaphor of content as a cascading stream means there is no unit — a stream is a stream; it has no discernible building blocks. And it means that content doesn’t sit still. It is never static, but always changing.

That’s not really the way things work right now, especially at newspapers. Today, a reporter typically goes out, covers an event, comes back, bangs out 20 inches, moves on to the next assignment and never looks back. The story’s brief online incarnation on the live news site is devoid of hyperlinks; no context is created; and then it disappears into a for-pay archive where few will ever pay the too-high fee to retrieve it. Yes, many newsrooms have moved in the direction of online-first, but even there, it’s mostly publish and move on.

But let me lay out a different vision of how content will flow in a fully digital news enterprise, whether it’s the Associated Press or your friendly neighborhood blog, with benefits at all levels of content creation and consumption: the content cascade.

The content cascade starts with raw information. It can be anything: reporter-gathered data, citizen journo input, crowd-sourced information, audio, video, press releases, government data and reports, industry data.

In a traditional newsroom, this stuff comes in, serves as story fodder, or not, gets piled up under and around the desks of reporters, and stays there — static — until the next general purge.

But in a digital newsroom, it can be digitally archived and organized, and much of it can be made available online to readers interested in digging into it. And of course, the digital newsroom further regards the entire Web as raw data and actively exploits that sourcing avenue.
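
To make that archiving step concrete, here is a minimal sketch of how a digital newsroom might file and tag incoming raw material so it can be retrieved later and, where appropriate, exposed to readers. The record fields and helper names are assumptions made for illustration, not features of any existing newsroom system.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical record for one piece of raw material entering the cascade.
# Field names are illustrative, not taken from any particular CMS.
@dataclass
class RawItem:
    source: str                # e.g. "reporter notes", "press release", "county budget PDF"
    kind: str                  # "text", "audio", "video", "dataset", ...
    body: str                  # the material itself, or a pointer to where it lives
    topics: list = field(default_factory=list)   # topic tags for later retrieval
    received: datetime = field(default_factory=datetime.utcnow)
    public: bool = False       # whether readers may dig into it online

# A toy in-memory archive; a real system would use a database or search index.
archive = []

def ingest(item):
    """File an item instead of letting it pile up under a reporter's desk."""
    archive.append(item)

def items_on(topic):
    """Everything on file about a topic, for reporters or, if public, for readers."""
    return [i for i in archive if topic in i.topics]
```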

At the next level in the content cascade model, significance is extracted from content — facts, background, comments and opinions are pulled into traditional “stories” as well as being analyzed, compared, questioned, evaluated, refuted, corrected, updated and otherwise spun and massaged. This happens in editorials, columns, blog posts, blog comments, Tweets, social network interactions, collaborative work by newsroom teams, and, not least of all, in actual conversations at the proverbial dinner tables, water coolers and bus stops, and even in old-fashioned letters to the editor.  There are no walls around this process — it crosses all boundaries including those between rival news enterprises.

In the past, pre-Web, the process, cascade, or stream kind of petered out at this point, or sooner. Feedback from “out there” to editors, reporters and sources was minimal compared to the level of interaction possible today, and the “story” and a few formalized reactions to it were the final products or units of content delivered.

But now, or soon, the cascade will continue. All that massaging in blogs, comments, Tweets, social networks and conversations can now advance content into another stage, in which facts and opinions which have become generally accepted in the process are codified into wisdom — into generally accepted facts, into the fairly still, cool waters of collectively derived truth, into a collaboratively created, edited and augmented wiki.
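
As a rough sketch of that codification step, the snippet below shows one way a vetted fact might be promoted into a living wiki entry, with pointers back to the discussion that produced it. The entry name, structure, and example fact are all hypothetical.

```python
from datetime import date

# A wiki entry as a living record: a summary plus accepted facts, each carrying
# links back to the blog posts and comment threads in which it was vetted.
wiki = {
    "main-street-bridge": {
        "summary": "Background on the Main Street bridge replacement.",
        "facts": [],   # (date codified, fact, sources) tuples
    }
}

def codify(entry, fact, sources):
    """Promote a fact that survived the blogging/commenting process into the wiki."""
    wiki[entry]["facts"].append((date.today().isoformat(), fact, list(sources)))

# Example: a cost figure vetted in comments becomes part of the collective record.
codify(
    "main-street-bridge",
    "The revised cost estimate is $4.2 million.",
    ["https://example.com/blog/bridge-cost-update"],
)
```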

Not many news enterprises are doing this yet, but we’re going to see more of it, with transformative effects on content management both upstream and down. Why should a reporter write a 20-inch story containing just one or two new facts embroidered with a rehash of background, quotes from interested parties and ancient history, when all that’s really needed is a brief report presenting the new information and referring readers to the wiki for the rest?

And why should a reader, interested in deeper information about a subject in the news, have to search and sift through prior “stories” to figure out the background? Why doesn’t every news site have a wiki, updated constantly with the new facts and views that are gathered in the field and vetted in a reporting/blogging/commenting process?

Why should a reporter have a quota measured in “stories,” whether it’s two courts-and-cops reports a day or one in-depth investigative masterpiece a week? In the cascade model of content management, every reporter follows a portfolio of issues, topics, trends, trials, personalities, businesses, governmental entities, towns, streets, buildings, non-profits — and a day’s work may consist of finishing a major investigative piece on one of these, while blogging about new developments touching on a handful of others, and adding new facts to the wiki entries for a bunch more. And the process of augmenting or correcting the wiki never really ends.

What about us readers? How do we keep up with all this? In the traditional model, whether in print or online, we only scan the latest “stories” to glean what’s of interest. But in a functional content cascade environment, we just watch the stream. We fish, if you’ll pardon the extended metaphor — today perhaps with a set of RSS feed specs, but soon, one hopes, with more sophisticated tools that can deliver to us what we really want and need on whatever device we want it. Whatever we read, a simple hyperlink will always deliver us to the wiki for all the background we might need about a person, place or topic we’re reading about; the wiki can refer us efficiently to a variety of related topics as well as to the raw source material; bookmarking or search can always deliver us back to something we’re interested in; and at any point in the process, we can jump into the fray with a comment, blog post or wiki edit of our own. (This last one makes editors particularly nervous, but there are ways to watch for and prevent mischief.)
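
The “fishing” tools already exist in rough form. Below is a minimal sketch, assuming the third-party feedparser library and a couple of placeholder feed URLs, of how a reader might filter a handful of RSS streams down to items that touch topics they are watching.

```python
import feedparser  # third-party library: pip install feedparser

# The feeds followed and the topics watched are the reader's own choices;
# these URLs and terms are placeholders for illustration.
FEEDS = [
    "https://example.com/citydesk/rss",
    "https://example.com/courts-and-cops/rss",
]
WATCHED = {"school board", "main street bridge"}

def catch_of_the_day():
    """Scan the followed streams and keep only items mentioning a watched topic."""
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if any(topic in text for topic in WATCHED):
                yield entry.get("title", ""), entry.get("link", "")

for title, link in catch_of_the_day():
    print(title, link)
```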

The content cascade reduces the frustrating need for active search on the part of readers.  Today, readers looking for background on a topic can attempt “site search” to access a list of prior stories that might shed light on their questions, usually with mixed results and likely frustration; in a content cascade system, they’d be presented, without needing to search, with links to wiki entries related to the topic they’re reading about.
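
One simple way to deliver those links without a search step is to resolve a story’s topic tags against the wiki when the page is rendered. A minimal sketch, with a hypothetical tag-to-URL convention:

```python
# Hypothetical convention: every story carries topic tags, and each tag maps
# to a wiki entry URL, so the page template can attach background links itself.
WIKI_BASE = "https://example.com/wiki/"

def wiki_links_for(story_tags):
    """Background links to show alongside a story; no reader search required."""
    return [WIKI_BASE + tag.replace(" ", "-") for tag in story_tags]

# A story tagged with two topics arrives with two ready-made background links.
print(wiki_links_for(["main street bridge", "school board"]))
```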

Beyond the wiki, content can continue to have life. The cascade can move on in various forms, including long-form reportage, books, and niche publication content, and it becomes raw material, itself, for all future content creation.

What’s clear is that the flow of content, going forward, will no longer be linear, but convoluted. Facebook refers to its own content presentation as a “stream”; Twitter’s is clearly streamlike; but both are random and chaotic, as well, and both must be seen as elements of the raw material level of the content cascade — bits and pieces that will be rejoined downstream in blog and wiki formats.

Obviously, the thorny and perennial question of “monetization” is absent from this discussion.  But the advantage of the content cascade is its efficiency and its multiplying effect on page views: reporters don’t labor over 20-inch yarns when a 10-word blog update will do; content from all over the web can create page views on a local site; readers contribute content; each drop in the stream (OK, there’s a basic unit for you) can be repurposed almost indefinitely into new content niches.  The resulting vibrancy of the news site draws maximum traffic; let the marketers monetize the value of that.

As it happens, most of what I’ve presented as the content cascade is incorporated in the Intellipedia system*, wherein US intelligence agencies collect and evaluate information gathered in their global networks. Where information once was kept within agency walls, locked in a room with some analysts charged with producing a white paper (this is the kind of thing that got us into Iraq), today it is vetted in a collaborative blogging process that operates across agency lines and pushes the collective wisdom into wikis (with the proviso that Intellipedia does not enforce Wikipedia’s “neutral point of view” policy, but rather expects, or hopes, that consensus will emerge). The same process operates at many, if not most, large corporations in the form of Enterprise 2.0 systems.  Largely unknown in the newspaper field, Enterprise 2.0 is a  billion-dollar business that facilitates internal corporate collaboration and productivity through social networks, blogs and wikis.

What works for U.S. intelligence agencies and for Fortune 500 corporations can work for news enterprises and their readership. While software systems specifically designed to manage the content cascade may emerge and prove useful, it should be possible to begin operating and managing the content cascade by linking and adapting existing systems.

In any event, finding and implementing the right software will not be nearly as difficult as moving people and organizations through the needed organizational and cultural changes.  The content cascade is not intuitive, nor is it automatic once begun; it must be actively taught, managed, encouraged, facilitated, conducted.

*UPDATE, 6:10 p.m.:  This post originally stated that portions of the Intellipedia project were contracted to Google.  But comments from Andrea Baker, “Intellipedia Evangelist”  (check her credentials, she ought to know), lead me to conclude the Chron story (footnoted at the Wikipedia article on Intellipedia) on which I based the Google connection is incorrect.  So I’ve removed that detail.

Photo by satosphere, used under Creative Commons license.
