Could An Auto Logic Checker Be The Solution To The Fake News Problem?

Fake news is not news – it is neither factual nor a recent phenomenon. But while fake news is a thorny problem that needs addressing in its own right, it is part of an even bigger issue too. Discourse – the process by which humanity collectively comes to an understanding of itself, and so shapes its own future – is fundamentally broken.

The problem begins with the school debate, a win-or-lose contest in which one party ultimately triumphs in the claim to truth. The real world is, of course, more intricate, with numerous subtleties lying between any two extremes. Yet this model persists all the way into international politics, where complex issues are reduced to soundbites. Material that arouses heated emotions within the viewer spreads faster and wider than well-considered, evidence-based argument.

For an elected leader, a U-turn is seen as the ultimate betrayal, but for a scientist, changing views in the face of better evidence is a sign of the highest integrity. An alert reader would recognise this, but many do not and are left uninformed and angry.

However, the very social and digital technology that causes and spreads these problems could instead be turned to tackling them.

Auto-check

Imagine, if you will, a sort of spellchecker application for ideas: that familiar squiggly underline appears for bad logic or conflicting evidence.

Before you object that any claim could be flagged with contradictory information, or that the choice of beliefs is a personal one, rest assured that the logic checker’s settings could allow for this. Right click, reject correction. Mind you, the checker now knows you must believe one of several alternatives instead: that the evidence was fabricated, that the interpretation was wrong, and so on.

Still, you’ve succeeded in removing the squiggly underline, so long as at least one of those alternatives is compatible with all the other beliefs you’ve previously taught the checker. If not, then you’ll get another error message. If your position is truly out of touch with the proven truth, you’ll ultimately be forced either to reject the scientific method altogether or, more productively, to confront the inconsistencies in your views.
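To make that concrete, here is a minimal sketch in Python of the kind of satisfiability check such a tool might run behind the scenes. The claim names and constraints are invented for illustration – a real checker would use a proper SAT solver over a far larger knowledge base rather than brute-force enumeration.

```python
# A minimal sketch of the consistency check described above. Beliefs are
# propositional claims; the "squiggly underline" appears whenever no
# assignment of true/false to the claims can satisfy them all.
from itertools import product

def consistent(claims, constraints):
    """Return True if some true/false assignment to the claims
    satisfies every constraint (the belief set is satisfiable)."""
    for values in product([True, False], repeat=len(claims)):
        a = dict(zip(claims, values))
        if all(rule(a) for rule in constraints):
            return True
    return False

claims = ["evidence_is_sound", "interpretation_is_right", "conclusion_holds"]

# Background knowledge: sound evidence plus a correct interpretation
# together force the conclusion.
background = [
    lambda a: a["conclusion_holds"]
              or not (a["evidence_is_sound"] and a["interpretation_is_right"]),
]

# The user right-clicks and rejects the conclusion, yet also insists the
# evidence is sound and the interpretation is right -- an inconsistent set.
beliefs = [
    lambda a: not a["conclusion_holds"],
    lambda a: a["evidence_is_sound"],
    lambda a: a["interpretation_is_right"],
]

print(consistent(claims, background + beliefs))      # False: the underline stays
print(consistent(claims, background + beliefs[:2]))  # True: dropping one belief resolves it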

Is it possible that arguing with an unemotional machine rather than another human would take the ego out of discussion? Being shown where your beliefs contradict themselves would surely be an immensely valuable tool for learning.

The aim of this fictitious checker is not to be the final arbiter of truth and falsehood – but, in a world of information overload, to track down conflicting evidence and counterarguments faster than you could ever do so yourself. In fact, this isn’t so far from today’s internet search extended into the semantic web, where knowledge is represented as structured data rather than free text. The futuristic part is the text processing, but that’s not essential to the system: the user could instead choose ideas, beliefs and claims manually from a crowdsourced database – or input their own – rather than the computer doing so automatically. And there are numerous examples of experimental systems like this that have already been built.
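As a taste of what “structured data rather than free text” means, here is a small sketch of claims stored as semantic-web style subject-predicate-object triples, with conflicting evidence found by lookup. All of the triples are invented examples, not entries from any real database.

```python
# A sketch of the structured-data idea: claims as (subject, predicate, object)
# triples, so that conflicting evidence can be found by simple lookup rather
# than by reading free text. The triples below are invented examples.
from collections import defaultdict

triples = [
    ("global_temperature", "trend_since_1900", "rising"),
    ("global_temperature", "trend_since_1900", "flat"),  # a conflicting claim
    ("moon_landing", "occurred_in", "1969"),
]

def find_conflicts(triples):
    """Group triples by (subject, predicate) and flag any pair that
    asserts different objects for the same subject and predicate."""
    objects = defaultdict(set)
    for subject, predicate, obj in triples:
        objects[(subject, predicate)].add(obj)
    return {key: vals for key, vals in objects.items() if len(vals) > 1}

print(find_conflicts(triples))
# {('global_temperature', 'trend_since_1900'): {'rising', 'flat'}}
```

A real system would also need to know which predicates legitimately admit several values at once – a country has many cities but only one capital – but the principle of finding conflicts by structured lookup is the same.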

From here to there

Why, then, are we not using automated or crowdsourced logic checking already? It turns out that building a community of people to create the supporting data is harder than building the technology. Successful online communities do exist, though each is shaped by its own agenda. Facebook must be the world’s largest repository of community-generated data, but its creation process is shaped by algorithms whose ultimate aim is to produce advertising revenue by keeping the user engaged for as long as possible.

Perhaps more interesting is Stack Exchange, where communities pose and answer questions on specific topics. Because maintaining a reputable source of information is integral to the model, user interaction is guided by votes and reputation scores. Still, Stack Exchange has made compromises to this end, most notably an effective ban on subjective questions, which are an essential part of any complete understanding of the world around us.

Most interesting of all is Wikipedia, which despite its imperfections has succeeded in building a charitably minded community dedicated to documenting knowledge. Returning to our fictitious logic checker, two projects built on Wikipedia have already taken significant steps towards the sort of structured information necessary to support it: Wikidata could one day become the crowdsourced database mentioned above, while DBpedia attempts to extract the data automatically from existing articles.
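For a flavour of what Wikidata’s structured data looks like in practice, here is a sketch that asks its public SPARQL endpoint for a few country-capital facts. The endpoint URL and the identifiers used (P31 “instance of”, Q6256 “country”, P36 “capital”) are real Wikidata conventions; the script itself is only an illustration, not part of any existing logic checker.

```python
# A sketch of pulling structured claims from Wikidata's public SPARQL
# endpoint -- the kind of crowdsourced database the checker could draw on.
import requests

ENDPOINT = "https://query.wikidata.org/sparql"
QUERY = """
SELECT ?countryLabel ?capitalLabel WHERE {
  ?country wdt:P31 wd:Q6256 .   # instance of: country
  ?country wdt:P36 ?capital .   # capital
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
LIMIT 5
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "logic-checker-sketch/0.1"},  # WDQS asks clients to identify themselves
)
for row in response.json()["results"]["bindings"]:
    print(row["countryLabel"]["value"], "->", row["capitalLabel"]["value"])
```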

Is this the answer to all of our problems? Of course not. No tool of this type will completely remove the underlying power structures – including, but not limited to, online community business models – that contribute to our present-day situation. But these tools have the potential to improve the way we communicate with one another, and that can’t be a bad thing.

About The Author

Crispin Cooper, Research associate, Cardiff University

This article was originally published on The Conversation. Read the original article.
