Who should I trust when I seek the truth?

(Machine learning for “fake-news” detection)

For many of us these days, it is not unusual to wonder whether something we read or hear in the media (traditional or social) is true or not.

One of the main activities in the CONCORDIA project is focused on User-Centric Security, where our team’s current efforts concentrate on the detection and analysis of “fake-news” amid today’s information overload. Such news poses serious harm to politics, health and society in general.

There is no commonly accepted definition of “fake-news” [1], which is an important research question in its own right. For example, some previous research defines “fake-news” by the source where the news is published (assuming that some venues are more trustworthy than others), while other work analyses the content of the news itself. In addition, the term “fake-news” often overlaps with terms such as misinformation, propaganda and satire, amongst others.

In our group, we are tackling the issue of “fake-news” from both psychological and technical perspectives. In the former, we seek answers to questions such as “Why are ‘fake-news’ popular?”, while in the latter, we develop and analyse tools for “fake-news” detection.

Machine learning (ML) is one of the tools available for “fake-news” detection [2]. ML methods are algorithms that can, for example, automatically analyse the content of a news item and state with a certain confidence whether it is “fake” or not. Our main contribution to this already active research field is combining so-called Automated ML (AutoML) methods [3] with interpretable ML models. AutoML helps inexperienced users build ML models with little effort, while interpreting ML models gives insights into how a model decides whether a news item is “fake” or not. For example, “fake-news” might convey certain kinds of emotions [4] or contain a high number of distinguishable words (e.g., slang).
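To make this concrete, here is a minimal, illustrative Python sketch of an interpretable, content-based classifier of the general kind described above. The headlines, labels and word lists are invented placeholders, not the project’s dataset or models; the point is simply that a linear model’s weights can be read off to see which words push a prediction towards “fake”.

```python
# A minimal, illustrative sketch of content-based "fake-news" classification
# with an interpretable model. The toy data and labels below are assumptions
# made for illustration, not the project's actual dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples: 1 = "fake", 0 = "not fake".
texts = [
    "SHOCKING!!! Miracle cure they don't want you to know about",
    "Central bank raises interest rates by 0.25 percentage points",
    "You won't BELIEVE what this politician did next",
    "City council approves budget for new public library",
]
labels = [1, 0, 1, 0]

# TF-IDF features feed a linear model whose weights are directly readable.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict with a confidence score for a new, unseen headline.
proba = model.predict_proba(["Miracle diet BANNED by doctors"])[0][1]
print(f"Probability of being 'fake': {proba:.2f}")

# Interpretation: the largest positive coefficients point to the words
# (e.g., sensational terms or slang) most indicative of "fake" news.
vectorizer = model.named_steps["tfidfvectorizer"]
classifier = model.named_steps["logisticregression"]
top_words = sorted(
    zip(vectorizer.get_feature_names_out(), classifier.coef_[0]),
    key=lambda pair: pair[1],
    reverse=True,
)[:5]
print("Most 'fake'-indicative words:", top_words)
```

In a real AutoML setting, the hand-picked vectoriser and classifier above would instead be selected and tuned automatically; the interpretability step of inspecting the learned weights stays the same.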

Since trust in mainstream media has been in decline since 2015 [5], we are currently most interested in finding out how “fake-news” influences this phenomenon. To achieve this goal, we are developing a robust framework in which ML is applied to a large news dataset drawn from a wide range of news sources.
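One simple way such a framework might relate model output to news sources (the records and source names below are hypothetical, not the project’s data or design) is to aggregate per-article predictions by source:

```python
# Hypothetical per-source aggregation: given model predictions over a large
# dataset, estimate what share of each source's articles is flagged "fake".
# The records below are invented placeholders, not real data.
from collections import defaultdict

predictions = [
    {"source": "example-tabloid.com", "predicted_fake": True},
    {"source": "example-tabloid.com", "predicted_fake": False},
    {"source": "example-broadsheet.com", "predicted_fake": False},
]

counts = defaultdict(lambda: [0, 0])  # source -> [flagged fake, total]
for record in predictions:
    counts[record["source"]][0] += record["predicted_fake"]
    counts[record["source"]][1] += 1

for source, (fake, total) in counts.items():
    print(f"{source}: {fake / total:.0%} flagged as 'fake' ({total} articles)")
```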

We expect our research to give important insights into the “fake-news” ecosystem: why some news is “fake”, how to detect it, and how to make the media space a safer environment.

(By Simon Kocbek, University of Maribor)

References

  1. https://www.sciencedirect.com/science/article/abs/pii/S0148296320307852
  2. https://www.sciencedirect.com/science/article/pii/S266682702100013X
  3. https://www.automl.org/automl/
  4. https://link.springer.com/article/10.1007/s13278-018-0505-2
  5. https://www.digitalnewsreport.org/publications/2020/trust-will-get-worse-gets-better/