Most of us have come across conspiracy theories and anti-science rhetoric on popular platforms such as WhatsApp. Some of these theories are so unbelievable that they are dismissed outright. But why are such theories ultimately harmful?
Disinformation is harmful for many reasons, but it is not a new problem, says Harris Zainul, an analyst at the Institute of Strategic and International Studies (ISIS) Malaysia.
The problems of misinformation and disinformation, which have always in one form or another plagued society at large, have been exacerbated by the advent of social media, says Harris.
A Massachusetts Institute of Technology (MIT) study of Twitter data from 2006 to 2017 found that falsehoods diffused significantly farther, faster, deeper and more broadly than the truth in all categories of information.
The harm caused by false information depends largely on the type of false information involved, explains Harris, who studied international relations and law at university. He has spent a great deal of time looking into the impact of mis- and disinformation on society, as well as investigating various public policy options to tackle the problem.
“For example, mis- or disinformation in the political process could undermine the integrity of civic discourse. If we look at the US under former president Donald Trump, false information was so prevalent that people were questioning the legitimacy of the electoral process — and by any measure, this is not a good thing,” says Harris.
It is difficult to say which variables make a country more susceptible to false information, because many factors are in play at any one time. “However, I think trust — or the absence of it — plays a critical role in determining how likely the problem is. Trust here refers to not just trust in the government (and politicians) but also trust in experts, institutions and in one another,” he adds.
In addition, if people are constantly bombarded with falsehoods surrounding minority groups, this could also lead to a widening of society’s fault lines, says Harris.
“We saw multiple allegations on social media (in Malaysia) in April last year, falsely stating that a Rohingya man was insisting on full citizenship rights for Rohingya refugees, that the Rohingya were clashing with the police and that they were receiving monetary allowance from the UN High Commissioner for Refugees (UNHCR).
“Other false narratives alleged that the Rohingya were enjoying special privileges, flouting the law without consequences and obtaining healthcare benefits at the expense of ordinary Malaysians. All these narratives can have the effect of desensitising Malaysians to the plight of the Rohingya,” he says.
Another potential harm that is more relevant to us today is false information regarding Covid-19 vaccines. This could increase vaccine hesitancy and undermine our efforts to achieve herd immunity, Harris notes.
So, where do we go from here?
Improving our information landscape
Ultimately, digital literacy is vital.
It used to be the case that if you could read and write, you were considered literate, says Harris. “However, I’d argue that in the digital age, if you cannot ascertain for yourself the credibility and authenticity of information found online, then you are pretty much digitally illiterate,” he adds.
What is important to note is that we should not focus only on instilling digital literacy skills in the young, as they are digital natives. They grew up with these technologies, and some might not even know of a time before Facebook and Twitter, explains Harris.
This does not mean that we should neglect educating the young, but that we must take concrete steps to instil these skills in the older generation. “Focusing solely on including digital literacy modules in the education syllabus means leaving out those who have left school,” he points out.
Do big tech players, such as Facebook and Twitter, have a role in this education process?
Mark Zuckerberg’s motto used to be “move fast and break things”, and while this approach might work for some technologies, Harris says “we should not downplay how it can also be detrimental to society”.
With algorithmic technology, for example, there must be a concerted effort to understand, before full deployment, how it works in the real world and how it might affect real people, he explains.
Often, algorithms retain the same gender, racial and income-level biases as human decision makers, according to a report released last month by the Greenlining Institute, a non-profit fighting for racial and economic justice.
Algorithmic bias can stem from the choice of outcome being optimised or from the data inputs. Broadly speaking, the report adds, algorithmic bias “arises from the choices developers make in creating the algorithm, rather than an explicit discriminatory motive”.
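The report’s point — that bias can enter through the data an algorithm learns from, rather than through any discriminatory motive — can be illustrated with a toy sketch. The data below is entirely hypothetical: a “model” that simply memorises historical approval rates will faithfully reproduce whatever bias those records contain, even though both groups are equally qualified.

```python
# Hypothetical illustration: biased inputs produce biased outputs,
# with no discriminatory intent in the code itself.

historical = [
    # (group, qualified, approved) -- past approvals were biased against group B
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

def learned_approval_rate(group):
    """'Train' by memorising the historical approval rate for each group."""
    rows = [r for r in historical if r[0] == group]
    return sum(1 for r in rows if r[2]) / len(rows)

# Both groups are equally qualified (3 of 4 applicants), yet the model
# scores them differently purely because of the biased historical record.
print(learned_approval_rate("A"))  # 0.75
print(learned_approval_rate("B"))  # 0.25
```

The fix, as the report implies, is not in this scoring function but upstream: auditing the training data and the chosen outcome before deployment.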
But, awareness of potential problems with algorithmic technology has been growing over the past few years. Moving forward, hopefully, more tech companies will treat this as a critical part of their research and development process, Harris says.
“It would be easy to say that social media platforms were responsible for Trump’s election in 2016, and that a genocide in Myanmar was incited on Facebook,” he continues. But, focusing only on the bad effects ignores the benefits of these platforms, he points out. “Social media allows democratic defenders to organise themselves. An example is the influence of social media on the events of the Arab Spring.”
The effects of social media largely depend on how people use it. However, the other side of the debate argues that the tech companies themselves must do more.
Ultimately, Harris says, there needs to be greater oversight from either the government or an industry regulator to address the mismatch described as “Technology 5.0 but Regulation 1.0”.
Policymakers need to be more agile in responding to public policy challenges brought about by new technology, he notes.
“Considering how little technological homogeneity there could be underpinning certain technologies, the policies and regulations ought to be objective-based rather than technology-specific. This should allow sufficient flexibility for innovation,” says Harris.