Educating users to distinguish reliable information may be the most promising way to fight fake news, believes Gianluca Demartini, an associate professor at the School of Information Technology and Electrical Engineering, University of Queensland.[1]

Artificial intelligence (AI) is still not smart enough to save us from fake news. That is one of the findings of information technology expert Gianluca Demartini, who was hired by Facebook to study whether computer programs can filter information on social networks without human intervention.

Demartini gives the example of the "napalm girl" image. The Pulitzer Prize-winning photograph, taken in 1972, shows children and soldiers fleeing a napalm bombing during the Vietnam War. It was posted on Facebook in 2016 but later removed because it depicts a naked nine-year-old girl, which violates the platform's official standards. A massive public protest over the historical value of the iconic image followed, forcing Facebook to allow the photo back onto the platform.

Demartini and his colleagues' approach combines the capacity of AI to process large amounts of data with the ability of people to understand digital content. Thousands of moderators already work for Facebook, the so-called "cyber porters" who "clean" social media of inappropriate posts related to violence, hate speech and the like. When it comes to fake news, however, subjectivity is a problem: people's judgements may be distorted by the origin of the information and by personal bias. The research team therefore aims to collect multiple "truth labels" for the same news item from several thousand moderators. The labels indicate how "fake" a piece of information is, and the individual judgements are recorded in detail so that ambiguities and contradictions in the answers can be tracked and explained.
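The idea of aggregating multiple "truth labels" per item can be illustrated with a minimal sketch. The 0-to-1 rating scale, the example values and the aggregation choices below are illustrative assumptions, not the research team's actual protocol:

```python
from statistics import mean, stdev

# Hypothetical "fakeness" ratings for one news item from five moderators,
# on an assumed 0 (true) to 1 (fake) scale.
labels = [0.9, 0.8, 1.0, 0.2, 0.85]

# Central tendency: how fake the item looks overall.
fakeness = mean(labels)

# Spread: how strongly the moderators disagree. A high value flags the
# item as controversial rather than clearly true or clearly fake.
disagreement = stdev(labels)

print(round(fakeness, 2), round(disagreement, 2))
```

Keeping the full list of labels, rather than collapsing it into a single verdict, is what lets researchers distinguish an item everyone agrees is fake from one that splits opinion.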

Currently, Facebook treats content as binary: it either meets the standards or it does not. The new data set can therefore be used to train AI to better identify which news items are controversial and which are fake. The data can also help assess how effective the technology is at detecting fake news.

The research team proposes a step forward: instead of leaving the decision about which news is fake to computer programs or professional moderators alone, social media users should be trained to identify such content for themselves. This requires an approach aimed at promoting information literacy, says Gianluca Demartini, summarizing: "Despite the contribution of technology to tackling false information, the involvement of both moderators and users is crucial, because people are needed to interpret the guidelines and decide on the value of digital content, especially when it is inconsistent."

The project runs until 2024. It remains to be seen whether the pace of technological development will significantly change the team's final findings.

 Author: Yoanna Nikolova-Kar

The article is based on the material: AI isn't smart enough yet to save us from fake news: Facebook users (and their bias) are key (https://techxplore.com/news/2019-09-ai-isnt-smart-fake-news.html).

 

[1] https://researchers.uq.edu.au/researcher/18932

 

The CLASS project is implemented with the financial support of Iceland, Liechtenstein and Norway through the EEA Financial Mechanism. The project aims to promote media literacy and civic education.

The European Institute Foundation bears sole responsibility for the content of this article, which under no circumstances can be taken to reflect the official opinion of the Financial Mechanism of the European Economic Area or of the Bulgarian Operator of the Active Citizens Fund.
