Misinformation & Disinformation: The New Fight for Cognitive Dominance

Oscar Okwero
11 min read · Nov 12, 2021
Photo courtesy of @Pinterest

Fake News

Many of us have been exposed to coronavirus ‘fake news’ stories over the past few months. Conspiracy theorists have falsely linked Covid-19 to the rollout of 5G networks, leading to 20 phone masts being vandalised in the UK. Others have claimed that Africans are immune to the Covid-19 pandemic. People have also drunk bleach and methanol in the mistaken belief it would protect them from coronavirus. The World Health Organization characterises the crisis as an ‘infodemic’, in which the over-abundance of information makes it hard for citizens to get the reliable guidance they need. The fake news phenomenon was first thrust into the public limelight during the 2016 US elections, when the existence of a Russian troll farm, the Internet Research Agency, was revealed; it ran a coordinated misinformation campaign to shape public opinion and hence the election results. Since then there have been numerous efforts to shape public sentiment through social media platforms such as Facebook, Twitter and YouTube. With over half of the adult population in the UK and US getting their news mainly from social media, and Africa recording a massive number of internet users, the risk that these malignant players are still at work and could mislead the public is a real possibility, especially during a global health pandemic.

The Covid-19 pandemic response, which has relied heavily on the public sharing of information about the prevention and management of the virus, has invariably been affected by this new ‘infodemic’. Europol has raised concern about the spread of fake news, listing risks such as the promotion of fake products and services (e.g. fake COVID-19 tests and vaccines), the promotion of a false sense of security (e.g. misleading information about treatments), and the promotion of suspicion towards official guidelines and sources. Brazil, which has lately become the largest epicentre of the virus, has recently categorised fake news about the virus as the biggest risk in handling the pandemic. Earlier this month, Twitter removed a post shared by the Brazilian president in which he promoted the use of hydroxychloroquine for Covid-19 treatment.

One of the most perceptive analyses of what’s going on has come from Kate Starbird of the University of Washington, who is a leading expert on “crisis informatics” — the study of how information flows in crisis situations, especially over social media. Crises always generate high levels of uncertainty, she argues, which in turn breeds anxiety. This leads people to try to resolve the uncertainty and reduce the anxiety by seeking information about the threat. They’re doing what humans always do — trying to make sense of a confusing situation.

Forms of Fake news

‘Fake news’ occurs in different formats, from basic sensational headlines and memes to highly sophisticated bot armies that generate fake social media accounts and follow, like and retweet articles, as well as news items with misleading titles, pictures or videos carefully choreographed to convey a particular untrue message. Difficult situations arise when information comes across as truthful yet is misleading. Certain phrasing can leave information open to interpretation, as often happens with data from surveys and public polls, a context that appears to carry proof. No matter the statistic, most parties involved can spin the information in their own favour, as exemplified by the ‘brave snake saving a drowning fish’ example. This leads to truthful information with deceptive intent, or the reverse, giving true meaning to the term “alternative facts.”

Fake news is at times used as a decoy for other objectives, such as clickbait designed to fool readers into downloading programs or clicking on links to compromised websites or pop-up ads. Several Covid-themed websites have been used as channels for spreading malware. Microsoft warned of a new COVID-19-related malware campaign spreading by email and using Excel 4.0 macros and NetSupport Manager to compromise systems. In its research entitled “Coronavirus-themed Threat Reports Haven’t Flattened The Curve,” Bitdefender said that the daily evolution of COVID-themed threats shows a consistent effort on the part of cybercriminals to exploit fear and misinformation about the pandemic to get victims to click on malicious links, open malicious attachments, and even download and install malware.

‘Synthetic texts’ or ‘read fakes’ are the product of deep learning and natural language processing: a model is fed a title and then generates a stream of text based on the data it has learned from. This could easily be used to generate and spread fake health news that would be dangerous if taken as truth by members of the public. The tool is one of several emerging technologies that experts believe could increasingly be deployed to spread trickery online, amid an explosion of both disinformation — the covert, intentional spread of false information — and misinformation, its more ad hoc sharing.
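As a toy illustration of the underlying idea — a model learns which words follow which in its training data and then emits a fresh stream from a seed — the sketch below trains a tiny bigram Markov chain. This is a deliberately simplified stand-in: real ‘read fakes’ rely on large neural language models, and the corpus and names here are purely illustrative.

```python
import random
from collections import defaultdict


def train_bigram_model(corpus: str) -> dict:
    """Learn which word tends to follow which in the training text."""
    words = corpus.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model


def generate(model: dict, seed: str, length: int = 20, rng=None) -> str:
    """Emit a stream of words starting from a seed, sampling learned followers."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    word = seed
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # dead end: the seed chain ran out of learned continuations
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)
```

Fed a large corpus of genuine health reporting, even this crude chain produces plausible-sounding fragments; neural models do the same thing with vastly richer context, which is what makes their output hard to distinguish from real news.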

The biggest risk of all yet is the concept of deepfakes, where the voices, videos or pictures of authoritative figures are superimposed on fake messages and spread on social media by attackers. This could include the voice or image of a public authority superimposed on misleading content. Unfortunately, the technology to create deepfakes is easily available whereas the detection technology is light years behind, exposing the world to a massive disinformation drive.

Once these news articles are generated, they are shared by social media users with their friends or family as they try to get as much information as possible on how to deal with the pandemic. The more they are shared, the more hits they receive and the higher the rating they receive from search engine algorithms, which makes them more prominently displayed and hence further enhances their spread. In the case of Brexit and the 2016 American elections, the spread was driven by an army of automated bots that pushed fake news on social media and hence affected the result of the vote.

Identification of fake news

In April 2018, Mark Zuckerberg, appearing before Congress over the mishandling of user information during the 2016 election, suggested that AI was going to be the solution to the problem of digital disinformation by providing programs that could combat the sheer volume of computational propaganda. These AI-powered analytics tools would include stance classification to determine whether a headline agrees with the article body, text processing to analyse the author’s writing style, and image forensics to detect Photoshop use. To determine the reliability of an article, algorithms could extract even relatively simple data features, like image size, readability level, and the ratio of reactions versus shares on Facebook.
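To make the “simple data features” idea concrete, the sketch below computes a crude readability proxy and a reaction-to-share ratio from a hypothetical article record. The field names and the readability weights are assumptions for illustration, not any platform’s actual pipeline; a real system would feed features like these into a trained classifier.

```python
def extract_features(article: dict) -> dict:
    """Compute simple reliability signals from a hypothetical article record
    with 'body', 'reactions', and 'shares' fields (illustrative schema)."""
    body = article["body"]
    words = body.split()
    sentences = [s for s in body.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    # Crude readability proxy: longer sentences and words => harder text.
    # The weights loosely echo Flesch-Kincaid-style formulas but are not exact.
    readability = 0.39 * avg_sentence_len + 11.8 * avg_word_len
    # Viral junk often draws many reactions relative to genuine shares.
    reaction_share_ratio = article["reactions"] / max(article["shares"], 1)
    return {
        "readability": readability,
        "reaction_share_ratio": reaction_share_ratio,
        "word_count": len(words),
    }
```

None of these signals is conclusive on its own; the point is that cheap, easily computed features can give a classifier a first-pass reliability score before any expensive human review.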

When a social media algorithm starts pushing a trending post or article to the top, AI-powered analytics tracking the sudden surge of a new topic, and correlating that data with the source site or Facebook page, would flag it as an obvious anomaly and pause it from gaining further momentum until a human at Facebook or Google could validate the specific item, rather than requiring human review of all topics. This works far faster than humans could manage, even across thousands or millions of metrics, and real-time anomaly detection can catch even the most subtle, yet important, deviations in data. Facebook was using human editors, but in 2016 the company fired them after it was reported that they routinely suppressed conservative news stories from trending topics. Now, however, Facebook has brought back human editors to curate certain news content.
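A minimal version of this kind of surge detection can be sketched as a z-score test over a topic’s historical mention counts: flag the topic for human review when today’s count sits far above its usual range. The threshold and the metric are illustrative assumptions, not any platform’s actual system.

```python
import statistics


def is_surge_anomaly(history: list, current: float, threshold: float = 3.0) -> bool:
    """Flag a topic whose current mention count deviates sharply from its history.

    Flags when `current` is more than `threshold` standard deviations
    above the historical mean (a simple one-sided z-score test).
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero on flat history
    z_score = (current - mean) / stdev
    return z_score > threshold
```

A topic that normally gets around 100 mentions an hour and suddenly gets 5,000 would be paused for review, while ordinary day-to-day fluctuation passes through untouched — which is exactly the triage behaviour the paragraph above describes.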

Eric Novotny, librarian at Penn State University Libraries, divides fake news into seven categories (Novotny, 2017):

- Fake News: sources that intentionally fabricate information, disseminate deceptive content, or grossly distort actual news reports.
- Satire: sources that use humour, irony, exaggeration, ridicule, and false information to comment on current events.
- Bias: sources that come from a particular point of view and may rely on propaganda, decontextualized information, and opinions distorted as facts.
- Rumour Mill: sources that traffic in rumours, gossip, innuendo, and unverified claims.
- State News: sources in repressive states operating under government sanction.
- Junk Science: sources that promote pseudoscience, metaphysics, naturalistic fallacies, and other scientifically dubious claims.
- Clickbait: strategically placed hyperlinks designed to drive traffic to sources that provide generally credible content but use exaggerated, misleading, or questionable headlines, social media descriptions, and/or images.

Responses to fake news

The UN in Lebanon has organized a series of national and regional webinars aimed at building the capacity of journalists to provide accurate information on COVID-19, deliver “positive” coverage of the crisis, and avoid stigma and discrimination against infected people. It is also supporting a Massive Open Online Course (MOOC) for journalists titled “Journalism in a pandemic: Covering COVID-19 now and in the future”. This online course will notably cover the history of pandemics and disasters in the 20th century and how governments responded to these outbreaks. Several awareness-raising information materials, including flyers and infographics, have been prepared for the public. They offer tips on how to spot fake news and counter its spread, and how to avoid discrimination against victims of COVID-19. A social media campaign countering fake news and its negative impact on social cohesion and stability was also launched earlier, entitled “Count till 10 before sharing unverified news”.

The European Union has recently announced plans to work with social media companies to remove ‘myths and fake news’. In the UK, a Rapid Response Unit has been created to ‘crack down’ on ‘fake experts’ providing misleading advice that might cost lives. Online platforms have joined this ‘fight’ against Covid-19 misinformation, with Twitter promising to remove content contradicting the advice of local and global public health authorities. Facebook has also banned misleading ads for test kits and face masks, and pledged to invest US$100 million to support news media and fact-checking organisations. It is difficult to fully assess the scale of Covid-19 disinformation and misinformation online. Research conducted by the Pew Research Center and Ofcom in the past month suggests that as many as one in two people in the UK and US have come across false or misleading information about the virus. We also have some evidence of the type of false information being distributed via online platforms: EU monitoring teams identified 283 examples of disinformation in circulation.

The Cyber Future Foundation was built with the idea of keeping the internet’s information pathways as clean as possible while also cracking down on identity fraud. Led by a council of representatives, the CFF tries to provide spaces where people can collaborate around truthful information, as well as education in research skills. Instagram, a subsidiary of Facebook, started working with independent, third-party fact-checkers in May 2019 to combat misinformation and expanded the operation several months later. Canada’s spy agency, the Communications Security Establishment (CSE), says it has already taken down a number of fraudulent sites that spoofed the Public Health Agency of Canada, Canada Revenue, and most recently, the Canada Border Services Agency. “We’ve taken down some COVID-related fake sites out there. We work with partners to do that type of thing. We’re taking action,” Scott Jones, head of the CSE’s Canadian Centre for Cyber Security, told CBC News.

In South Africa, Praekelt.org, a non-profit organisation, set up a coronavirus “hotline” on WhatsApp to respond to questions and concerns related to the pandemic. Soon after, UNDP, WHO and UNICEF launched an international partnership with WhatsApp to set up a WhatsApp Coronavirus Information Hub providing simple, actionable and up-to-date information. These interventions are a good example of the use of machine learning and natural language processing to sift data and provide an appropriate response. Jigsaw, the Google-based technology incubator, designed and built an AI-based tool called Perspective to combat online trolling and hate speech; the tool is an API that allows developers to automatically detect toxic language. In 2016, Facebook launched DeepText, an AI tool similar to Google’s Perspective. The company says it helped delete over 60,000 hateful posts a week. Facebook admitted, however, that the tool still relied on a large pool of human moderators to actually get rid of harmful content. A growing number of tools for detecting synthetic media such as deepfakes are under development by groups including the security firm ZeroFOX.
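As an illustrative sketch of the information-hub idea — not the actual WhatsApp hub implementation — a minimal responder can match an incoming question against a small bank of vetted answers by keyword overlap. The FAQ structure, keywords and fallback message are all assumptions for illustration; production systems use trained NLP models rather than word matching.

```python
def answer_query(question: str, faq: dict) -> str:
    """Return the vetted FAQ answer whose keywords best match the question.

    `faq` maps a space-separated keyword string to an approved answer text.
    Falls back to a safe default when nothing matches.
    """
    # Normalise: lowercase and strip basic punctuation before tokenising.
    cleaned = question.lower().replace("?", " ").replace(".", " ").replace(",", " ")
    question_words = set(cleaned.split())
    best_answer = "Sorry, please consult official WHO guidance."
    best_overlap = 0
    for keywords, answer in faq.items():
        overlap = len(question_words & set(keywords.split()))
        if overlap > best_overlap:
            best_answer, best_overlap = answer, overlap
    return best_answer
```

The design point is the same one the hubs embody: every answer a user can receive comes from a pre-approved pool, so the bot cannot amplify misinformation even when it misunderstands the question.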

Some governments have turned to censorship of information about the Covid-19 pandemic that does not come from official sources. This has often raised concerns among human rights advocates, who claim it is used to crack down on critical voices, especially in relation to the handling of the pandemic. In response, international organisations such as the World Health Organisation and Human Rights Watch have adopted guidelines and checklists regarding the protection of human rights, including freedom of expression, as COVID-19 measures are implemented. Article 19 of the International Covenant on Civil and Political Rights protects the universal freedom of expression but provides for limitations. Measures to contain fake news during COVID-19 are permissible under the protection of public health. However, these limitations do not apply when citizens critique the measures their governments have taken, as long as they do not spread fake news. The United Nations Special Rapporteur on Freedom of Opinion and Expression published a report last month on disease pandemics and freedom of opinion and expression, emphasising that freedom of expression is critical to meeting the challenges of the pandemic.

In Africa, the Special Rapporteur on Freedom of Expression and Access to Information recently issued a press statement expressing concern about internet shutdowns in African countries in the time of COVID-19. These shutdowns limit the freedom of expression of Africans, including that of the press. In this regard, on World Press Day (3 May) the UN Secretary General emphasised the role of the press as an ‘antidote’ to the ‘disinfodemic’. There are also many laws at the global and regional level that require countries to uphold freedom of expression even in times of pandemics; that freedom can only be limited with justification, for instance where news is proven to be fake. The Special Rapporteur’s report recommended that states must still apply the test of legality, necessity and proportionality before limiting freedom of expression, even in cases of public health threats. This recommendation can still be used to combat fake news as long as the impact on freedom of expression is minimal. The statement also recommended that states guarantee respect for and protection of the right to freedom of expression and access to information, including through access to the internet and social media services, and emphasised that states must not use COVID-19 as “an opportunity to establish overarching interventions”. The African Commission, for its part, recently published its Revised Declaration of Principles on Freedom of Expression and Access to Information in Africa. According to the Declaration, freedom of expression is an indispensable component of democracy, and no one should be found liable for true statements, expressions of opinion, or statements which are reasonable to make in the circumstances.

This article has illustrated that malicious actors are now after not only our data, accounts and profiles, but after the truth itself. They want to call into question the sources of information relied on for public health. It is no longer enough to see and believe: the video of the WHO director you watch could very easily be a fake with ill intent. Luckily, players around the world have realised this great danger and are working together to call out, stop and counter the spread of misinformation, which has in some instances proven fatal. In the second part of this article I will go deeper into deepfakes and how they have become the new battleground of misinformation.

