Real-time Fake News Detection and Claim Verification Using Meta Curriculum Learning — Part I

Wipro Tech Blogs
Aug 2, 2022


By Santanu Pal, Shivam Sharma, Addepelli Sai Srinivas, Sangram Jethy, Nabarun Barua, Vinutha B N

Social media for news consumption is a double-edged sword that enables the wide spread of fake news: misinformation and disinformation. The unrestricted and unwarranted virality of fake content has been observed to be detrimental to different aspects of society, since it distorts content consumers’ points of view regarding socially sensitive topics, such as politics, health, and religion. Fake news spreads easily in online social networks, propagated by social media actors and network communities to achieve specific, mostly malevolent objectives. Ultimately, fake news has a negative effect on the very fabric of democratic societies. As a recognized social ill, it should be fought using an effective combination of human and technical means. Accordingly, fake news detection on social media and the corresponding verification of authenticity have become areas of emerging research.

Fake news campaigns are increasingly powered by advanced AI techniques, and a lot of effort is put into the detection of fake content. While important, this is only a piece of the puzzle if we want to address the phenomenon in a comprehensive manner. Whether a piece of information is considered fake or not often depends on the temporal and cultural contexts in which it is interpreted.

Online social media platforms try to limit the virality of fake news through content moderation. While these measures show some effectiveness in limiting the diffusion of fake information, one big issue still remains unsolved: Identifying disinformation and reporting its status to users is not enough to counter it.

Most fact-checking organizations rely on human validation of information, and the ever-increasing amount of new information on the Internet makes manual verification challenging, time-consuming, and costly. Our research investigates knowledge extraction and integration for deep learning architectures. Following the success of deep learning methods in NLP, the focus of our work is to explore the real-world knowledge that can be extracted from deep learning models, and to identify potentially useful sources of related information that can make the models “smarter” and better at natural language understanding for fake news claim verification tasks. Based on its understanding, the model interprets or explains in natural language all supporting evidence that led to its decision. Our precise aim is to bring together the knowledge interpretation, extraction, and integration lines of research in deep learning and cover the areas in between.

Real-time fake news detection and claim verification models rely heavily on the pertinence of specific evidence. The fact-checking, or evidence-based, approach searches for evidence for a given claim in external knowledge sources, such as Wikipedia or news articles retrieved by a web search, to verify its veracity (Thorne et al., 2018; Zhou et al., 2019; Zhong et al., 2020). Models have been trained and tested on datasets collected from fact-checking websites such as PolitiFact and Snopes. In earlier work (Vlachos et al., 2014; Hanselowski et al., 2019), PolitiFact was used to build an automated fact-checking system, while other studies apply fact-checking to short claims (Karadzhov et al., 2017; Zlatkova et al., 2019; Shu et al., 2019). Rawat and Kanojia (2021) formulated a mechanism appended to a web crawler that fetches the most relevant documents and rejects redundant ones; evidence is then selected from the retrieved documents using semantic similarity and text summarization.

The computational solutions needed to address such issues have started gaining the necessary momentum, but they have yet to consider realistic challenges such as generalizability and domain-agnosticism, which must be addressed to justify ongoing efforts towards solving online fake content detection in real time.

The first challenge in tackling those issues is the reliance on human moderation for manual adjudication of online claims. This approach poses obstacles in terms of cost and time, because the volume of verifiable content rises incessantly. Another challenge is the difficulty of cross-verifying a given claim against reliable information sources. Thirdly, despite the impressive results of state-of-the-art language models like BERT on tasks such as natural language inference and textual entailment, systems still struggle to model the fake news detection task, primarily due to the sheer diversity and evolving nature of online content and possibly due to under-explored modeling solutions. Lastly, since the authenticity of viral content or fact verification is critical to many domains (e.g., general news articles, politics, health, religion, regional conflicts), it is of utmost importance that proposed solutions fundamentally factor in the constraints associated with such diverse linguistic content. However, large-scale training data and cross-domain news datasets for fact verification are extremely scarce. As a result, most existing fake news detection techniques fail to identify fake news in real-world evaluation settings. A real news stream typically covers a wide variety of domains, and we have found empirically that existing fake news detection techniques perform poorly on cross-domain news datasets despite yielding good results for domain-specific fakes.

In this article, we explore how world knowledge is encoded by neural networks, how this knowledge is exploited by the models when performing specific tasks, and how the quality of this knowledge can be enhanced by leveraging external resources. More precisely, we systematically examine the optimal retrieval and inclusion of external knowledge-based evidence for fake claim verification tasks, and we design our own pipeline for this purpose.

Evidence Collection

In fake news claim verification tasks, evidence plays a crucial role in determining whether the provided claim is fake or not fake. Therefore, the retrieved evidence for any given claim should be relevant and contain necessary information about the queried claim.

We propose an approach that improves over the current automatic fake news detection approaches by automatically gathering evidence for each claim. Our approach retrieves top relevant web articles and then selects appropriate text to be treated as evidence sets.

To begin the process of extracting evidence, we first perform a web search for a given claim and gather all the top URLs that appear in the search results. Second, URLs are shortlisted from this collection using a similarity check between the claim and the content of each URL; a predefined confidence threshold (>0.7) ensures that only relevant URLs are considered for the next step. Next, another similarity check is conducted on each shortlisted URL’s content, selecting the paragraphs that provide valuable information about the input claim (similarity threshold >0.5). The texts with the highest similarity are treated as evidence for the input claim. Note that this step differs from traditional pipelines, where generated summaries are used as supporting evidence. The aforementioned similarity scores are calculated using a BERT-based similarity model (see filtering in Figure 1), which computes the cosine similarity between two embeddings: the claim and the corresponding candidate evidence.
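To make the two-stage filtering concrete, here is a minimal sketch of the dual similarity check, assuming a sentence-transformers BERT model as the similarity scorer; the model name, the thresholds wiring, and the helper fetch_paragraphs() are illustrative assumptions rather than our exact production components.

```python
# Sketch of the dual similarity filtering step (assumptions noted above).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("bert-base-nli-mean-tokens")  # assumed BERT-based similarity model

URL_THRESHOLD = 0.7        # claim-vs-page confidence threshold
PARAGRAPH_THRESHOLD = 0.5  # claim-vs-paragraph similarity threshold

def collect_evidence(claim, urls, fetch_paragraphs):
    """Return the paragraphs most similar to the claim from relevant URLs."""
    claim_emb = model.encode(claim, convert_to_tensor=True)
    evidence = []
    for url in urls:
        paragraphs = fetch_paragraphs(url)  # hypothetical helper: page text split into paragraphs
        if not paragraphs:
            continue
        # Step 1: keep only URLs whose overall content is close enough to the claim.
        page_emb = model.encode(" ".join(paragraphs), convert_to_tensor=True)
        if util.cos_sim(claim_emb, page_emb).item() <= URL_THRESHOLD:
            continue
        # Step 2: within a relevant page, keep the paragraph most similar to the claim.
        para_embs = model.encode(paragraphs, convert_to_tensor=True)
        scores = util.cos_sim(claim_emb, para_embs)[0]
        best_idx = int(scores.argmax())
        if scores[best_idx].item() > PARAGRAPH_THRESHOLD:
            evidence.append(paragraphs[best_idx])
    return evidence
```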

Training Pipeline for Benchmarking

We fine-tune the BERT-Base model separately on the publicly available Health and Fever datasets (see training pipeline in Figure 1). First, we preprocess and tokenize the data for the pre-trained BERT model. Next, we fine-tune with batch size = 5, number of epochs = 3, and learning rate = 5e-5; all other hyperparameters follow the default BERT-Base settings. Once training is finished, we save the best model, which is then loaded in the testing pipeline.
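A minimal fine-tuning sketch with the Hugging Face Transformers library is shown below. The hyperparameters match those listed above; the claim/evidence column names, the two-way label set, and the dataset objects are illustrative assumptions, not our exact training files.

```python
# Fine-tuning sketch (assumptions noted above).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # e.g., fake / not fake; adjust to the dataset's label scheme

def preprocess(batch):
    # Claim and evidence are encoded as a sentence pair, as in standard NLI-style setups.
    return tokenizer(batch["claim"], batch["evidence"],
                     truncation=True, padding="max_length", max_length=256)

args = TrainingArguments(
    output_dir="bert-claim-verification",
    per_device_train_batch_size=5,   # batch size = 5
    num_train_epochs=3,              # number of epochs = 3
    learning_rate=5e-5,              # learning rate = 5e-5
    save_strategy="epoch",
)

# train_dataset / eval_dataset are assumed to be tokenized splits of the
# Health or Fever training data built via preprocess().
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_dataset, eval_dataset=eval_dataset)
# trainer.train()
# trainer.save_model("bert-claim-verification/best")
```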

Figure 1: Training and testing pipelines

For comparison of our model’s performance on publicly available benchmark datasets, we make use of the Health (Sarrouti et al., 2021) and Fever (Thorne et al., 2018) datasets. The Health dataset consists of ~13k evidence-based claims pertaining to the health domain; the Fever dataset consists of ~130k fact-check-worthy claims on a diverse set of topics.

In Table 1, we show empirically that our pipeline is more robust than Rawat and Kanojia’s (2021) method. To this end, we fine-tuned BERT on the Fever and Health training datasets separately and randomly selected 500 samples from the Fever test set to test the fine-tuned models (see testing pipeline in Figure 1). Table 1 also shows that our evidence extraction pipeline performs better than the benchmark evidence in out-of-domain scenarios. Moreover, Rawat and Kanojia’s (2021) summary-based evidence extraction pipeline yields poorer results than ours. When comparing the benchmark Fever evidence with ours, note that the benchmark evidence was selected through manual, fine-grained annotation (Thorne et al., 2018), whereas our evidence is extracted fully automatically in real time, as in Rawat and Kanojia (2021).

Table 1: Comparison of evidence extraction pipelines: Rawat and Kanojia (2021), our real-time evidence collection pipeline, and the benchmark evidence.
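For reference, the testing pipeline behind these numbers can be sketched roughly as follows, assuming the fine-tuned model saved above and a test set of claim-evidence-label triples; the paths and field names are placeholders.

```python
# Testing-pipeline sketch (paths and field names are placeholders).
import random
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-claim-verification/best")
model = AutoModelForSequenceClassification.from_pretrained("bert-claim-verification/best")
model.eval()

def predict(claim, evidence):
    """Classify a claim-evidence pair with the fine-tuned model."""
    inputs = tokenizer(claim, evidence, truncation=True,
                       padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

def evaluate(test_set, n_samples=500, seed=42):
    """Accuracy on a random sample of claim-evidence-label triples."""
    random.seed(seed)
    sample = random.sample(test_set, n_samples)
    correct = sum(predict(x["claim"], x["evidence"]) == x["label"] for x in sample)
    return correct / n_samples
```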

Takeaway

We show empirically how world knowledge, extracted by searching for relevant news, is encoded by subsequent neural networks.

We show how this knowledge can be exploited by BERT models for performing our fake news verification task.

We offer ways to enhance the quality of this knowledge by leveraging external resources: using a pre-trained BERT model for dual similarity checking to extract news evidence from the web, and further fine-tuning a pre-trained BERT model on benchmark data and testing it in realistic settings.

Tested in realistic settings, our evidence extraction pipeline provides better accuracy than summarization-based evidence extraction methods.

Further Reading:

James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and verification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 809–819.

Mourad Sarrouti, Asma Ben Abacha, Yassine Mrabet, and Dina Demner-Fushman. 2021. Evidence-based Fact-checking of Health-related Claims. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3499–3512, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 892–901.

Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Reasoning over Semantic-level Graph for Fact Checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6170–6180.

Andreas Hanselowski, Christian Stab, Claudia Schulz, Zile Li, and Iryna Gurevych. 2019. A Richly Annotated Corpus for Different Tasks in Automated Fact-checking. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 493–503, Hong Kong, China. Association for Computational Linguistics.

James Thorne, Max Glockner, Gisela Vallejo, Andreas Vlachos, and Iryna Gurevych. 2021. Evidence-based Verification for Real World Information Needs.

Tal Schuster, Adam Fisch, and Regina Barzilay. 2021. Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 624–643, Online. Association for Computational Linguistics.

Georgi Karadzhov, Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño, and Ivan Koychev. 2017. Fully Automated Fact Checking Using External Sources. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 344–353.

Dimitrina Zlatkova, Preslav Nakov, and Ivan Koychev. 2019. Fact-checking Meets Fauxtography: Verifying Claims About Images. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2099–2108.

Kai Shu, Limeng Cui, Suhang Wang, Dongwon Lee, and Huan Liu. 2019. Defend: Explainable Fake News Detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 395–405.

X. He, X. Meng, Y. Wu, C. S. Chan, and T. Pang. 2020. Semantic Matching Efficiency of Supply and Demand Texts on Online Technology Trading Platforms: Taking the Electronic Information of Three Platforms as an Example. Information Processing & Management, 57(5), Article 102258.

Mrinal Rawat and Diptesh Kanojia. 2021. Automated Evidence Collection for Fake News Detection. CoRR, abs/2112.06507. https://arxiv.org/abs/2112.06507.
