Aug. 21, 2018, 11:48 a.m.

Line is another chat app rife with spam, scams, and bad information. The volunteer-supported Cofacts is fact-checking them in the open

Users forward dubious messages to a chatbot; volunteer editors evaluate their credibility; the bot replies to the user (and to anyone who asks about the same message in the future).

There are already numerous fact-checking initiatives across Asia that comb through news articles and public social media posts, but messaging poses its own unique challenge in the fight against misinformation. It’s difficult to deal with fake news circulating in private chat groups, where a message is invisible to anyone who wasn’t sent it.

It’s a problem that Taiwan is currently grappling with. Much as on the more prominently covered WhatsApp, messages on Line, an app that first launched in Japan in 2011 and is popular in places like Taiwan and Indonesia, are easy to forward but hard to rebut. This is especially so in family chat groups, where people can find it difficult or awkward to confront relatives who forward false or misleading messages, spam, or scams. (Around 17 million people use Line in Taiwan, where its user base is distributed relatively evenly across all ages.)

Enter Cofacts (collaborative fact-checking), an open, collaborative platform created by the Taiwanese civic tech community g0v (pronounced “gov-zero”). The basic system is simple: users who receive dubious messages on Line can forward them to the Cofacts (or Cofacts 真的假的, roughly “real or fake”) chatbot. The messages are added to a publicly viewable database, where volunteer editors fact-check them before a response is sent back to the Line user. If editors disagree with one another’s conclusions, multiple replies go back to the user, who can then independently evaluate the information provided. (Who are its volunteers? Cofacts made a little video.)
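A minimal sketch of that record structure, in Python with hypothetical names (the article doesn’t describe Cofacts’ actual schema): each submitted message accumulates editor replies, and all of them are forwarded so the user can weigh conflicting verdicts.

    # A sketch, not Cofacts' actual code: one submitted message can carry
    # several editor replies, and all of them go back to the user.
    from dataclasses import dataclass, field

    @dataclass
    class EditorReply:
        editor: str
        verdict: str        # e.g. "contains misinformation" or "contains true information"
        explanation: str
        sources: list[str] = field(default_factory=list)

    @dataclass
    class SubmittedMessage:
        text: str
        replies: list[EditorReply] = field(default_factory=list)  # publicly viewable

    def responses_for_user(msg: SubmittedMessage) -> list[str]:
        """Forward every editor reply, letting the user weigh conflicting verdicts."""
        if not msg.replies:
            return ["No fact-check yet; this message is in the public queue."]
        return [f"[{r.verdict}] {r.explanation} (sources: {', '.join(r.sources)})"
                for r in msg.replies]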

Any interested volunteer can log into the database of submitted messages and start evaluating them using the Cofacts form. Cofacts offers step-by-step instructions for those who can’t figure out how to use the platform, as well as a set of clear editorial guidelines that help volunteers weed out messages that are uncheckable or pure “personal opinion,” and that spell out what types of reliable sources they can use to back up their fact-checking work.

Based on data the Cofacts team has collected on the messages it’s received so far, the misinformation debunked on the platform ranges from fake promotions and medical misinformation to false claims about government policies.

Johnson Liang, a creator of the Cofacts Line bot, said he became convinced that a bot was the right approach after getting a peek at the admin panel of MyGoPen, another fact-checking account on Line. MyGoPen was being flooded with requests, but most of them were identical or very similar messages from different users. The administrators were overwhelmed with having to respond to each sender individually.

MyGoPen’s struggles led Liang to build a chatbot that feeds everything it collects into a shared database. Once any message is sent to the Cofacts bot, it’s stored there. The next time someone forwards the same message, the bot checks it against the requests already in the database, retrieves any existing fact-checks, and sends them right back to the Line user.
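In code, that mechanism might look something like this minimal sketch (hypothetical names and an exact-match lookup; the real service would presumably also match near-duplicate messages):

    # A sketch of the dedup idea: the first sender of a rumor seeds the
    # database; later senders of the same text get any stored fact-checks
    # back instantly, with no human in the loop.
    fact_check_db: dict[str, list[str]] = {}   # message text -> editor replies

    def handle_forwarded_message(text: str) -> list[str]:
        key = text.strip()   # a real system would normalize or fuzzy-match messages
        if key not in fact_check_db:
            fact_check_db[key] = []   # new rumor: queue it for volunteer editors
            return ["Thanks! This message is new; editors have been notified."]
        replies = fact_check_db[key]
        if replies:
            return replies            # automatic answer drawn from past fact-checks
        return ["This message is already queued but has no fact-check yet."]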

Over the past three months, Cofacts has received more than 46,000 messages, and the chatbot was able to automatically answer 35,180 of them (roughly 76 percent).

Embracing the ethos of the tech community g0v, openness and decentralization are key to how Cofacts operates. Their datasets, analytics, and even their regular meeting notes are all open access (you can find any of them here; the notes are in traditional Chinese), so anyone can follow their discussions and design decisions. The practice even extends to the way they deal with media queries: instead of one-on-one interviews or emails, the Cofacts team requested that this interview be conducted via shared documents accessible to the entire team, allowing people with different roles and perspectives to contribute (or at least be aware of what others were saying).

This model came out of practical concerns.

“Distinguishing users or adding mechanisms like screening the editors takes time,” Liang said. “It takes time to design the rules of background checks. It also takes time to implement them to the system. It takes more time to justify (or modify) the screening rules if there are people challenging these rules. Since we don’t want to waste time on these rules, we just make it available to all people — if anyone is not happy with any reply on the platform, just directly contribute.”

Cofacts acknowledges that this system isn’t foolproof: while it’s built on openness and trust, there’s always the possibility of the platform being hijacked by rogue editors. Liang freely admits that the team may not find the best way forward until the Cofacts volunteer group grows large enough for hijacking to become a real risk.

Still, he keeps an open mind, preferring to think of tools that communities can use, rather than measures to pre-empt and block threats that may or may not yet exist. He cites other platforms, such as Wikipedia and Quora, as models to learn from.

“A more constructive perspective to look at the risk of the platform being flooded with rogue editors is to rephrase it to questions like ‘How can we improve the quality of the editors’ replies?'” he said. “This opens up much more room for discussion and possible solutions.”

Despite the support of its growing database, fighting hoaxes and misinformation is still an uphill battle for Cofacts. The team receives about 250 new messages a week but has fewer than 12 active editors combing through them, meaning that users don’t always receive quick responses to the messages they want fact-checked. The team also has no way to see what rumors are spreading, or how, among the majority of Line users who don’t use Cofacts.

But Liang points to the “bright side”: every user who does get an answer to their query can become a rumor-quasher in turn.

“Although the rumors still spread after we reply, at least we stopped these many users (or chatrooms) from further forwarding such messages every day,” he said. “It empowers [these] users to be gatekeepers on their own.”

A version of this story was first published in Splice Newsroom.

Free sticker scams on the messaging app Line — from Slide 10 of Johnson Liang’s Cofacts presentation at RightsCon 2018.
