Why humans and AI are locked in a stalemate against fake news

Fake news is a scourge on the global community. Despite our best efforts to fight it, the problem runs deeper than mere fact-checking or suppressing the publications that specialize in misinformation. Current thinking still tends to favor an AI-powered solution, but what does that actually entail?

According to recent research, including this paper from scientists at the University of Tennessee and the Rensselaer Polytechnic Institute, we are going to need more than just clever algorithms to fix our broken discourse.

The problem is simple: AI can’t do anything a person can’t. Sure, it can do many things faster and more efficiently than humans – sometimes up to a million times faster – but at its core, artificial intelligence only scales things humans can already do. And people are really bad at identifying fake news.

According to the researchers mentioned above, the problem lies in what’s known as “confirmation bias.” If a person thinks they already know something, they’re less likely to be influenced by a “fake news” tag or a “dubious source” label.

According to the team’s paper:

In two consecutive studies, we use data collected from news consumers through Amazon Mechanical Turk (AMT) to determine whether there are differences in their ability to correctly identify fake news under two conditions: when the intervention targets novel news situations and when the intervention is tailored to specific heuristics. We find that in novel news situations, users are more receptive to advice from the AI, and, under this condition, tailored advice is more effective than generic advice.

This makes it incredibly difficult to design, develop, and train an AI system to detect fake news.

While most of us think we can spot fake news when we see it, the truth is that the bad actors who create misinformation don’t operate in a vacuum: they’re better at lying than we are at telling the truth, at least when they’re telling us something we already believe.

The researchers found that people – including independent Amazon Mechanical Turk workers – were more likely to mislabel an article as fake if it contained information they believed to be incorrect.

On the other hand, people were less likely to make the same mistake when the news was presented as part of a novel news situation. In other words, when we think we already know what’s going on, we’re more likely to accept fake news that fits our preconceived notions.

While the researchers identify various ways we can use this information to strengthen our ability to warn people when they’re presented with fake news, the bottom line is that accuracy isn’t the problem. Even when the AI gets it right, we’re still less likely to believe a real news article when its facts don’t align with our personal biases.

That’s not surprising. Why should anyone trust a machine built by Big Tech over the word of a human journalist? And if you think it’s because machines don’t lie, you’re dead wrong.

Typically, when an AI system is built to identify fake news, it must be trained on pre-existing data. To teach a machine to recognize and flag fake news in the wild, we need to feed it a mix of real and fake articles so it can learn to tell which is which. And the datasets used to train the AI are usually hand-labeled by humans.
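To make that training loop concrete, here is a minimal sketch of the idea: a tiny naive Bayes text classifier fit on a hand-labeled mix of real and fake headlines. The headlines, labels, and function names are all hypothetical illustrations, not the researchers’ actual method or data – real systems use far larger corpora and models, but the dependence on human-supplied labels is the same.

```python
from collections import Counter
import math

# Hypothetical hand-labeled training set (0 = real, 1 = fake).
# In practice these labels come from human annotators, which is
# exactly where human bias can creep into the model.
TRAIN = [
    ("scientists publish peer reviewed study on vaccines", 0),
    ("local council approves new budget after public hearing", 0),
    ("reporter confirms details with three independent sources", 0),
    ("shocking secret cure doctors do not want you to know", 1),
    ("you will not believe this one weird miracle trick", 1),
    ("anonymous insider reveals shocking hidden conspiracy", 1),
]

def train(examples, alpha=1.0):
    """Fit naive Bayes: per-class word counts, doc counts, vocabulary."""
    counts = {0: Counter(), 1: Counter()}
    docs = {0: 0, 1: 0}
    for text, label in examples:
        docs[label] += 1
        counts[label].update(text.split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, docs, vocab, alpha

def predict(model, text):
    """Return the more likely label for a headline (0 = real, 1 = fake)."""
    counts, docs, vocab, alpha = model
    total = sum(docs.values())
    scores = {}
    for label in (0, 1):
        # Log prior plus Laplace-smoothed per-word log likelihoods.
        score = math.log(docs[label] / total)
        denom = sum(counts[label].values()) + alpha * len(vocab)
        for word in text.split():
            score += math.log((counts[label][word] + alpha) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

model = train(TRAIN)
print(predict(model, "shocking miracle trick revealed"))      # → 1 (fake)
print(predict(model, "council approves budget after hearing"))  # → 0 (real)
```

The point of the sketch: the model never learns what is true, only which words co-occur with the labels humans assigned. Biased labels in, biased classifier out.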

This often means crowd-sourcing the labeling out to a cheap third-party labor pool such as Amazon Mechanical Turk, or to any number of data shops that specialize in datasets rather than news. The people deciding whether a given article is fake may have no actual experience or expertise with journalism, or with the tricks bad actors use to create compelling, hard-to-spot fake news.

And as long as people are biased, fake news will continue to flourish. Confirmation bias not only makes it difficult for us to distinguish facts we disagree with from lies, it also makes it nearly impossible to convince people otherwise once they’ve accepted outright lies and misinformation from celebrities, family members, colleagues, bosses, and the highest political offices.

While AI systems can certainly help identify egregiously false claims – especially those made by outlets that regularly publish fake news – the fact remains that whether or not a news article is true isn’t really the problem for most people.

Take, for example, the most-watched cable news network on television: Fox News. It holds that position despite the fact that Fox News’ own attorneys have repeatedly argued that numerous programs on the network – including its second highest-rated program, hosted by Tucker Carlson – are, in fact, fake news.

In a ruling on a defamation case against Carlson, U.S. District Judge Mary Kay Vyskocil – a Trump appointee – found in favor of Carlson and Fox after concluding that reasonable viewers wouldn’t take the host’s everyday rhetoric as factual:

The “general tenor” of the show should then inform a viewer that [Carlson] is not “stating actual facts” about the topics he discusses and is instead engaging in “exaggeration” and “non-literal commentary.” … Fox persuasively argues that given Mr. Carlson’s reputation, any reasonable viewer “arrive[s] with an appropriate amount of skepticism.”

Because of this, under the current news paradigm, it may be impossible to create an AI system that can definitively determine whether any given news article is true or false.

If the news agencies themselves, the general public, elected officials, Big Tech, and the so-called experts cannot decide without prejudice whether a particular news article is true or false, there’s no way we can trust an AI system to do it for us. As long as the truth remains as subjective as a given reader’s politics, we’ll be inundated with fake news.

Greetings, humanoids! Did you know we have a newsletter all about AI? You can subscribe to it right here.

Published on March 16, 2021 – 21:57 UTC
