
Exploring how artificial intelligence technologies could be leveraged to combat fake news.


Fake News Challenge Stage 1 (FNC-1):
Stance Detection

Fake news, defined by the New York Times as “a made-up story with an intention to deceive” [1], often for a secondary gain, is arguably one of the most serious challenges facing the news industry today. In a December Pew Research poll, 64% of US adults said that “made-up news” has caused a “great deal of confusion” about the facts of current events [2].

The goal of the Fake News Challenge is to explore how artificial intelligence technologies, particularly machine learning and natural language processing, might be leveraged to combat the fake news problem. We believe that these AI technologies hold promise for significantly automating parts of the procedure human fact checkers use today to determine if a story is real or a hoax.

Assessing the veracity of a news story is a complex and cumbersome task, even for trained experts [3]. Fortunately, the process can be broken down into steps or stages. A helpful first step towards identifying fake news is to understand what other news organizations are saying about the topic. We believe automating this process, called Stance Detection, could serve as a useful building block in an AI-assisted fact-checking pipeline. So stage #1 of the Fake News Challenge (FNC-1) focuses on the task of Stance Detection.

Stance Detection involves estimating the perspective (or stance) of two pieces of text relative to a topic, claim or issue. The version of Stance Detection we have selected for FNC-1 extends the work of Ferreira & Vlachos [4]. For FNC-1 we have chosen the task of estimating the stance of a body text from a news article relative to a headline. Specifically, the body text may agree with, disagree with, discuss or be unrelated to the headline.

Formal Definition

Input: A headline and a body text from a news article.

Output: The stance of the body text relative to the claim made in the headline, classified into one of four categories:

  • Agree: the body text agrees with the claim in the headline.
  • Disagree: the body text disagrees with the claim in the headline.
  • Discuss: the body text discusses the same claim as the headline, but does not take a position.
  • Unrelated: the body text discusses a different topic than the headline.


Example headline

“Robert Plant Ripped up $800M Led Zeppelin Reunion Contract”


Example snippets from body texts and correct classifications


“… Led Zeppelin’s Robert Plant turned down £500 MILLION to reform supergroup. …”

     Correct classification: Agree

“… No, Robert Plant did not rip up an $800 million deal to get Led Zeppelin back together. …”

     Correct classification: Disagree

“… Robert Plant reportedly tore up an $800 million Led Zeppelin reunion deal. …”

     Correct classification: Discuss

“… Richard Branson’s Virgin Galactic is set to launch SpaceShipTwo today. …”

     Correct classification: Unrelated
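
To make the input/output format concrete, here is a minimal Python sketch of how a headline/body pair and its gold stance could be represented, together with a deliberately naive heuristic classifier. The Example dataclass, the naive_stance function and the 0.2 overlap threshold are illustrative assumptions, not the official data format or baseline.

    # Minimal sketch (not the official baseline) of the FNC-1 input/output format.
    # The Example dataclass, naive_stance heuristic and 0.2 threshold are
    # illustrative assumptions.
    from dataclasses import dataclass

    STANCES = ["agree", "disagree", "discuss", "unrelated"]

    @dataclass
    class Example:
        headline: str
        body: str
        stance: str  # gold label; one of STANCES

    def naive_stance(headline: str, body: str) -> str:
        """Toy heuristic: call the pair 'unrelated' when headline and body share
        almost no vocabulary, otherwise fall back to 'discuss'."""
        h = set(headline.lower().split())
        b = set(body.lower().split())
        overlap = len(h & b) / max(len(h), 1)
        return "discuss" if overlap > 0.2 else "unrelated"

    ex = Example(
        headline="Robert Plant Ripped up $800M Led Zeppelin Reunion Contract",
        body="Led Zeppelin's Robert Plant turned down 500 million pounds to reform supergroup.",
        stance="agree",
    )
    print(naive_stance(ex.headline, ex.body))  # 'discuss' -- crude, but it shows the interface

A real system must of course distinguish Agree, Disagree and Discuss among the related pairs, which is where the interesting modeling work lies.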


Frequently Asked Questions



Question: Why did you choose Stance Detection, rather than labeling claims as true or false, for the first stage of the challenge?

Answer:

There are several reasons Stance Detection makes for a good first task for the Fake News Challenge:

  1. Our extensive discussions with journalists and fact checkers made it clear both how difficult “truth labeling” of claims really is, and that they would rather have a reliable semi-automated tool that helps them do their job better than a fully automated system whose performance will inevitably fall far short of 100% accuracy.

  2. Truth labeling also poses several large technical and logistical challenges for a contest like the FNC:

    • There exists very little labeled training data of fake vs. real news stories.
    • The data that does exist (e.g. fact checker website archives) is almost all copyright protected.
    • The data that does exist is extremely diverse and unstructured, making it hard to train on.
    • Any dataset containing claims with associated “truth” labels is going to be contested as biased.

    Together these make the truth labeling task virtually impossible with existing AI / NLP. In fact, even people have trouble distinguishing fake news from real news.

  3. The dataset we are using to support the Stance Detection task for FNC-1 was created by accredited journalists, making it both high quality and credible. It is also in the public domain.
  4. Variants of the FNC-1 Stance Detection task have already been explored and shown to be feasible but far from trivial by Andreas Vlachos and his students at the University of Sheffield (see Ferreira & Vlachos (2016) and Augenstein et al. (2016)).
  5. We considered targeting the truth labeling task for FNC-1, but without giving teams any labeled training data. We decided against it, both because we thought a competition with a more traditionally structured machine learning task would appeal to more teams, and because a similarly open-ended truth labeling competition, the Fast & Furious Fact Check Challenge, was recently completed.
  6. Our discussions with human fact checkers led us to believe that a solution to the stance detection problem could form the basis of a useful tool for real-life human fact checkers. See also the next question and answer.

Question: How is the Stance Detection task relevant to detecting fake news?

Answer:

There are two important ways the Stance Detection task is relevant for fake news.

  1. From our discussions with real-life fact checkers, we realized that gathering the relevant background information about a claim or news story, including all sides of the issue, is a critical initial step in a human fact checker’s job. One goal of the Fake News Challenge is to push the state-of-the-art in assisting human fact checkers, by helping them quickly gather the information they need to make their assessment.

    In particular, a good Stance Detection solution would allow a human fact checker to enter a claim or headline and instantly retrieve the top articles that agree, disagree or discuss the claim/headline in question. They could then look at the arguments for and against the claim, and use their human judgment and reasoning skills to assess the validity of the claim in question. Such a tool would enable human fact checkers to be fast and effective.

  2. It should be possible to build a prototype post-facto “truth labeling” system from a “stance detection” system. Such a system would tentatively label a claim or story as true/false based on the stances taken by various news organizations on the topic, weighted by their credibility.

    For example, if several high-credibility news outlets run stories that Disagree with a claim (e.g. “Denmark Stops Issuing Travel Visas to US Citizens”) the claim would be provisionally labeled as False. Alternatively, if a highly newsworthy claim (e.g. “British Prime Minister Resigns in Disgrace”) only appears in one very low-credibility news outlet, without any mention by high-credibility sources despite its newsworthiness, the claim would be provisionally labeled as False by such a truth labeling system.

    In this way, the various stances (or lack of a stance) news organizations take on a claim, as determined by an automatic stance detection system, could be combined to tentatively label the claim as True or False. While crude, this type of fully-automated approach to truth labeling could serve as a starting point for human fact checkers, e.g. to prioritize which claims are worth further investigation.
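As an illustration of the credibility-weighted combination described in point 2 above, here is a toy sketch in Python. The outlet names, credibility weights, scoring rules and decision threshold are all hypothetical assumptions; a real system would need far more care.

    # Toy sketch of credibility-weighted stance aggregation for provisional
    # truth labeling. Outlet names, weights, scoring and threshold are all
    # hypothetical assumptions.
    from typing import Dict, List, Tuple

    def provisional_label(stances: List[Tuple[str, str]],
                          credibility: Dict[str, float],
                          threshold: float = 0.0) -> str:
        """stances: (outlet, stance) pairs; credibility: outlet -> weight in [0, 1]."""
        score = 0.0
        for outlet, stance in stances:
            weight = credibility.get(outlet, 0.1)  # unknown outlets get little weight
            if stance == "agree":
                score += weight
            elif stance == "disagree":
                score -= weight
            # 'discuss' and 'unrelated' contribute no evidence either way
        return "provisionally True" if score > threshold else "provisionally False"

    print(provisional_label(
        stances=[("Outlet A", "disagree"), ("Outlet B", "disagree"), ("Blog C", "agree")],
        credibility={"Outlet A": 0.9, "Outlet B": 0.8, "Blog C": 0.05},
    ))  # -> provisionally False
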

Question: Is there related work on stance detection we can read up on?

Answer:

Yes! Here are two recent papers on stance detection tasks related to the one we’re using for FNC-1:

  • William Ferreira and Andreas Vlachos. “Emergent: a novel data-set for stance classification” (2016).
  • Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos and Kalina Bontcheva. “Stance Detection with Bidirectional Conditional Encoding” (2016).

There is also a very good whitepaper on the state-of-the-art in automated fact checking available from the UK fact-checking organization FullFact.org.

Question: Can we use external or unlabeled data when building our systems?

Answer:

Participants are free to use any unlabeled data (e.g., for pretrained embeddings or manifold regularization), but no direct or indirect supervision is allowed other than the labels the Fake News Challenge provides.
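
For example, a team might use publicly available pretrained word vectors (learned from unlabeled text) as fixed input features. The sketch below shows one way to do that; the file name glove.6B.50d.txt, the 50-dimensional vectors and the feature construction are illustrative assumptions rather than a prescribed or required approach.

    # Sketch of one permitted use of unlabeled data: pretrained word vectors as
    # fixed features. The file path and dimensionality are assumptions; any
    # GloVe/word2vec-style text file of "word v1 v2 ... vd" lines would work.
    import numpy as np

    def load_vectors(path: str) -> dict:
        """Read a whitespace-separated embedding file into a word -> vector dict."""
        vectors = {}
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.rstrip().split(" ")
                vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
        return vectors

    def average_embedding(text: str, vectors: dict, dim: int = 50) -> np.ndarray:
        """Mean of the vectors for tokens we have embeddings for (zeros otherwise)."""
        hits = [vectors[t] for t in text.lower().split() if t in vectors]
        return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)

    # Hypothetical usage:
    # vectors = load_vectors("glove.6B.50d.txt")          # local embedding file
    # h_vec = average_embedding(headline, vectors)
    # b_vec = average_embedding(body, vectors)
    # features = np.concatenate([h_vec, b_vec, h_vec - b_vec])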

Question: Will there be an autoscoring system or a leaderboard during the challenge?

Answer:

We will be providing an evaluation script, but other than that there will be no autoscoring system or leaderboard.

Question: Why is the testing phase so short?

Answer:

We are limiting the duration of the testing phase of FNC-1 to make it extremely difficult for teams to cheat by labeling the test set manually. Given the size of the test set, this would be very hard to do in the two-day window we are providing. We apologize to teams who cannot work with the timeline we’ve outlined.

Question: How did the Fake News Challenge come about?

Answer:

Shortly after the uproar over fake news and its potential impact on the US elections, Dean Pomerleau proposed using artificial intelligence to address the problem as a casual bet / dare to his friends and colleagues in the machine learning community on Twitter. The initial idea was inspired by the fact that AI-based filtering techniques have been quite effective at conquering email spam, a problem that seems on the surface to be quite similar to fake news. Why can’t we address fake news the same way?

Dean was certainly not the first to have this idea. He quickly learned from others who joined the effort to organize the FNC that much fundamental research in AI, ML and NLP has been happening in recent years. The convergence of this groundbreaking research and the widespread recognition that fake news is an important real-world problem resulted in an explosion of interest in our efforts from volunteers, teams and the technology press. The FNC has grown dramatically since that initial bet between friends, to the point where it now includes over 100 volunteers and 72 teams from around the world. While the details of the challenge have evolved from that initial (rather naive) wager, the goal has always remained the same: to foster the use of AI, machine learning and natural language processing to help solve the fake news problem.

Question: Is fully automated fact checking possible today?

Answer:

The answer depends on what kind of facts or statements you want to fact check. Well-defined, narrowly scoped statements like:

“US Unemployment went up during the Obama years”

could be fact checked (or debunked) automatically now, with a reasonable amount of additional research.

But a statement like:

“The Russians under Putin interfered with the US Presidential Election”

won’t be possible to fact check automatically until we’ve achieved human-level artificial intelligence capable of understanding subtle and complex human interactions, and conducting investigative journalism.

That’s why round 1 of the Fake News Challenge (FNC-1) focuses on stance detection: a task that we believe is tractable now and that could serve as a useful tool for human fact checkers today if we had it.

A great source on the state of the art in automated fact checking, and on what the future holds, is this 36-page white paper from FullFact.org.

Question: How do you define “fake news” for the purposes of the challenge?

Answer:

In the eyes of some, ‘fake news’ means “whatever I don’t agree with.” This is not the definition adopted for the FNC. We’ve extensively investigated the various ways credible media experts have defined ‘fake news’ and have boiled it down to what they virtually all share in common. For the purposes of the FNC, we are defining fake news as follows:

Fake News: “A completely fabricated claim or story created with an intention to deceive, often for a secondary gain.”

The “secondary gain” is most often monetary (i.e. to capture clicks and/or ‘eyeballs’ to generate ad revenue), but sometimes the secondary gain may be political.

However, several important distinctions need to be made when it comes to the definition of fake news.

First, claims made by newsworthy individuals, even demonstrably false claims, are by definition newsworthy and therefore not considered fake news for the FNC. This is in contrast to fabricated claims about newsworthy individuals, made by obscure sources seeking to make money and/or a political statement, which are considered fake news by our definition.

Second, our operative definition of fake news explicitly excludes humorous or satirical stories designed to entertain rather than deceive. The same goes for opinion pieces and editorials - they too are excluded from the category of fake news. To qualify for these exemptions, these types of stories must be clearly labeled as such in the story itself, and not, for example, buried somewhere else on the website where the story appears.

From a practical perspective, we guarantee none of the headlines or stories in the FNC-1 task will consist of recent controversial claims made by well-known individuals. Nor will they be humor, satire or OpEd pieces.

Question: What comes after FNC-1?

Answer:

The Fake News Challenge was conceived to inspire AI researchers and practitioners to work on fact-checking-related problems. We are in touch with our journalist and fact-checker colleagues to understand what other problems they encounter in their day-to-day work and how that can inform FNC-2. Stay tuned for the next challenge. If you have suggestions, please stop by our Slack and leave a comment. We would love to hear from you!


FNC-1 Results

Fifty of the 80 participants made submissions for FNC-1 by the end of June 2nd, using a wide array of techniques. The teams got access to the (unlabeled) test data and were scored automatically using the Codalab submission platform. The scoring system produces a raw score based on the differentially weighted scoring metric; the relative score is the raw score normalized by the maximum possible score on the test set.
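
For reference, a minimal Python sketch of the weighted scoring scheme and the relative score is shown below. It assumes the weighting described in the challenge’s evaluation materials (0.25 points for correctly separating related from unrelated pairs, plus 0.75 more for getting the exact stance of a related pair right); the function names are ours.

    # Sketch of the differentially weighted scoring metric and the relative score,
    # assuming 0.25 for the related/unrelated decision plus 0.75 for the exact
    # stance of related pairs. Function names are ours.
    RELATED = {"agree", "disagree", "discuss"}

    def raw_score(gold, predicted):
        score = 0.0
        for g, p in zip(gold, predicted):
            if g in RELATED and p in RELATED:
                score += 0.25              # related/unrelated decided correctly
                if g == p:
                    score += 0.75          # exact stance also correct
            elif g == "unrelated" and p == "unrelated":
                score += 0.25
        return score

    def relative_score(gold, predicted):
        best = raw_score(gold, gold)       # maximum achievable raw score
        return 100.0 * raw_score(gold, predicted) / best

    gold = ["agree", "unrelated", "discuss", "disagree"]
    pred = ["agree", "unrelated", "disagree", "disagree"]
    print(raw_score(gold, pred), round(relative_score(gold, pred), 2))  # 2.5 76.92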

Rank | Team name | Affiliation | Score | Relative Score (%)
1 | SOLAT in the SWEN | Talos Intelligence | 9556.50 | 82.02
2 | Athene (UKP Lab) | TU Darmstadt | 9550.75 | 81.97
3 | UCL Machine Reading | UCL | 9521.50 | 81.72

Congratulations to our top-3 teams! They will also receive cash prizes of USD 1000, USD 600, and USD 400, respectively. In addition, we would like to give a special shoutout to the 4th- and 5th-ranked teams from UIUC and the University of Arizona. For a complete leaderboard, visit the competition’s Codalab page.


About

The Fake News Challenge is a grassroots effort of over 100 volunteers and 71 teams from academia and industry around the world. Our goal is to address the problem of fake news by organizing a competition to foster the development of tools that help human fact checkers identify hoaxes and deliberate misinformation in news stories using machine learning, natural language processing and artificial intelligence.



Advisors


Mevan Babakar

Digital Products Manager
Full Fact

Nada Bakos

Senior Fellow
FPRI

Hal Daumé III

ML Researcher/Professor
UMD

Alexios Mantzarlis

Director
IFCN

Djamé Seddah

NLP Researcher/Associate Professor
INRIA, Paris

Andreas Vlachos

NLP Researcher/Lecturer
U. of Sheffield

Claire Wardle

Research Director
First Draft News

Organizers


Dean Pomerleau

Adjunct Faculty, CMU
Organizer, FakeNewsChallenge

Delip Rao

Founder, Joostware
Organizer, FakeNewsChallenge

Key Volunteers


Brian McMahan


Edward Misback


FNC Volunteer Community



Contact

Challenge participants: Please head over to the Slack team to get your questions answered in a timely manner.
Not on FNC Slack? Click here for an invite.
For media and other inquiries: info@fakenewschallenge.org


  1. New York Times. “As Fake News Spreads Lies, More Readers Shrug at the Truth” 

  2. Pew Research Center. “Many Americans Believe Fake News Is Sowing Confusion” 

  3. Dhruv Ghulati, Co-Founder, Factmata. “Introducing Factmata—Artificial intelligence for automated fact-checking” 

  4. William Ferreira and Andreas Vlachos. “Emergent: a novel data-set for stance classification”