
This article first appeared in Digital Edge, The Edge Malaysia Weekly on July 19, 2021 - July 25, 2021

Most shops in Malaysia offer at least two ways for visitors to check in during the pandemic: using the MySejahtera app or writing down their details in a notebook provided for the purpose. Some shops also allow visitors to input their details in a Google Form.

Whenever Farlina Said comes across these options, she has to decide quickly, taking into account a variety of factors. The book and the Google Form data are going to be collected by the shop, Farlina observes, and the information would be protected by the Personal Data Protection Act (PDPA). But who is to stop someone from copying down (or taking a snapshot of) the information and selling it to scammers or marketeers?

“I also have to consider how Google is going to store the data collected on the Form. Meanwhile, MySejahtera data falls under the Official Secrets Act and the Penal Code,” says Farlina, an analyst at the Institute of Strategic and International Studies Malaysia. If a government officer is found to have disseminated MySejahtera data, the officer could be charged, she explains.

“If the shop looks dodgy, I choose to go with MySejahtera. Otherwise, I might choose the book. At the end of the day, it’s a personal choice about whom you give your data to.”

Some may find these kinds of considerations about personal data pointless, but it is becoming increasingly crucial for people to pay attention to who collects their data, how it is stored and what is done with it.

That is because governments and corporations are increasingly using technology to monitor and manage citizens, a trend that has been accelerated by the pandemic. Various countries are tracking citizens’ locations through their mobile phones to conduct contact tracing and mandating phone check-ins wherever one goes.

Apart from the pandemic, the use of facial recognition technology in smart CCTVs — a prominent feature of many smart city plans — biometric verification to access services and artificial intelligence (AI)-powered surveillance or policing have been picking up globally.

The 2019 AI Global Surveillance Index (AIGS) found that at least 75 out of 178 countries use AI for surveillance purposes, mostly through facial recognition systems, smart city platforms and smart policing. Companies from China are major providers of such technologies, followed by those from the US.

Most of these technologies are deployed for ostensibly good reasons. For instance, smart CCTVs are used to catch criminals; biometric verification could prevent identity theft; and contact tracing is meant to keep citizens safe from Covid-19 or, at least, identify whoever has been in contact with those who have tested positive.

And yet, this puts a lot of data — location, travel history, personal details and facial images — in the hands of the government. If not managed properly, it gives those in power leeway to potentially limit citizens’ freedom.

The question is, how can we prevent the use of technology for supposedly good purposes from sliding down the slippery slope of restrictive surveillance leading to what is essentially a police state?

“Many of us would agree that it is legitimate for governments to collect more rigorous information about our movements for the purposes of contact tracing. But location tracking is often done without informing the public, and that is where it gets unethical,” says Anisha Nadkarni, research fellow at the Social and Economic Research Initiative.

“The slippery slope occurs when the government uses this data for other supposedly socially justified causes unrelated to Covid-19 and there is no oversight of this in Malaysia or other countries. The risk is that these ‘temporary emergency’ measures may become permanent.”

A dangerous potential

That fear came true earlier this year when the Singapore government said personal data collected in its Covid-19 tracing app could be accessed by the police for criminal investigations.

After a public backlash, a new bill was passed that allowed police to access the data only for “serious offences”, which include terrorism or the possession of dangerous weapons.

“This is a classic example of how a lack of appropriate legal framework can easily lead to function creep,” says Anisha.

“Function creep”, according to the Collins English Dictionary, is the gradual widening of the use of a technology or system beyond the purpose for which it was originally intended, especially when this leads to a potential invasion of privacy.

Of course, not every government that uses technology for surveillance or other functions necessarily infringes on citizens’ human rights. What matters is that they can do so, in the absence of laws that limit their power and protect citizens. 

As the AIGS report points out, the most important factor that determines whether surveillance technology is used for repressive purposes is the quality of governance. This means considering whether the government has a pattern of human rights violations and whether there are strong rule-of-law traditions. 

The lack of transparency and legal protection could result in a surveillance state, where citizens’ behaviours are shaped by those in power. For instance, AI surveillance automates tracking and monitoring functions and casts a wider surveillance net. The effects of this could be chilling, with citizens’ communications being monitored and movements tracked.

“It’s what we call ‘mission (function) creep’, where the capabilities of the technology and how it might be extended in the future is unknown. Privacy rights affect other human rights, whether it’s your civil freedom or your right not to get discriminated against,” says Tan Jun-E, an independent researcher who has studied digital rights, AI safety and human rights in Southeast Asia. 

In that situation, there is an imbalance of knowledge and authority between those who have the information at their disposal and the average citizen. “Ultimately, this could lead to governance becoming more fear-based and centralised rather than more democratic. Information is power,” says Anisha.

One often thinks of China as a prime example of a surveillance state, given the reported use of facial recognition technology to target minorities, censorship practices and amassing of databases to determine the social credit score of individuals and companies. Low scores prevent one from accessing services such as public transportation.

In 2017, the Chinese government demonstrated to the BBC how it could locate a BBC reporter within seven minutes using its network of CCTV cameras and facial recognition technology.

Some also point to the US when it comes to the discussion of issues related to surveillance technology and the creation of a police state. In recent years, police departments in the US have been working with tech giants to use AI and facial recognition technologies to fight crime. However, it has also resulted in wrongful arrests and, allegedly, the targeting of activists.

In a high-profile case last year, New York police stormed into the apartment of Derrick Ingram, a Black Lives Matter protestor, accusing him of assaulting a police officer. According to reports, the police used facial recognition technology that matched the activist’s face to an Instagram photo. Ingram refused to let them enter without a warrant. The charges were ultimately dismissed. 

Meanwhile, London’s Metropolitan Police began deploying real-time facial recognition technology early last year, aiming to target individuals on watch lists. Again, this has elicited concerns about privacy and the risk of discrimination.

After all, civilians cannot opt out of being watched by CCTVs placed in public areas, and most of the time it is not known how this footage will be analysed or stored.

When tech goes wrong

Tan likes to compare the scenarios posed by two classic novels: The Trial by Franz Kafka and 1984 by George Orwell. The Trial tells the story of a man who was arrested by a remote authority without ever finding out what crime he had committed. 1984 describes an authoritarian state — controlled by the elusive leader known as Big Brother — that polices individual behaviour and thoughts.

“People always say our freedom of expression and civil freedom are very important. But what’s more insidious, to my mind, are the things you can’t see and when you get lost in bureaucracy, like in The Trial. I think it’s scarier than 1984 because the system doesn’t care who you are and is not interested in controlling you. They don’t care about people who fall through the cracks. They just want to make sure the system works,” says Tan.

This reality was reflected in The Guardian’s investigative series “Automating Poverty” in 2019, which looked at how AI is used in welfare distribution in India, England and Australia. 

In India, some people were denied welfare assistance because their biometric data was not recognised, owing to system glitches. In Australia, some individuals were informed that their welfare payments were suspended via text or email. In both cases, there was no easy way to complain or seek recourse in a system run by robots. 

The use of AI in credit scoring could also trap the underprivileged. “Technology is used to predict how people will act and you could deny them services such as financing because it thinks a certain race or person in a certain place would not be able to pay back certain loans,” says Tan. 

But this is not to say that the Big Brother scenario is not threatening. 

“A certain race may get discriminated against because, historically, they have had a history of being arrested or questioned. Using machine learning to go through historical data but not having a contextual understanding of the data might marginalise, say, the Indian community in Malaysia [which already faces discrimination]. It might lead to the problem of predictive policing,” says Tan.

Predictive policing refers to the use of big data to predict where future crimes will be committed and which individuals are most likely to commit them. This has resulted in overpolicing in minority neighbourhoods in the US, according to some reports.

Facial recognition technology, at least in its current form, has been shown to be less accurate for people of colour or women, according to research from the Massachusetts Institute of Technology and the US National Institute of Standards and Technology, as reported by Nature. The accuracy levels also vary widely among different providers. This can result in innocent people being put on watch lists and wrongfully arrested.

However, various parties have taken initiatives to improve the quality of data fed into AI systems in order to rectify this error.

Another potential outcome of the misuse of technology is widespread confusion. The recent implementation of the Hotspot Identification for Dynamic Engagement (HIDE) system is an example. 

HIDE uses AI to predict locations that have the potential to become Covid-19 hotspots within seven days, drawing on data from MySejahtera and health databases. Identified premises are expected to take proactive action to prevent the spread of Covid-19.

But when the map was released, there was uncertainty about what should be done with that information. The initial order from the responsible ministry was that identified premises did not have to close unless directed by the authorities, yet many of the premises, mostly malls, were ordered to close for three days at short notice. This affected many business owners and employees.

“This is dangerous in the sense that, if what you predict will happen doesn’t happen, you risk coming down too hard on something. For things such as terrorism, there is no negotiation if the risk is there. But, in other situations, if you do have pre-emptive technologies, you have to be aware of the consequences and take steps to protect the people,” says Farlina. 

How to prevent tech from going wrong

The more pessimistic may believe that individual action would amount to nothing. After all, how can citizens protest a government’s purchase or deployment of surveillance technology?

There have been encouraging examples, though. Pressure by activists in the US pushed Amazon.com, Microsoft Corporation and IBM to either exit or temporarily stop selling facial recognition technology to police. California introduced a moratorium on the use of facial recognition technology in police body cams, whereas New York temporarily banned its use in schools. The European Union (EU) has proposed banning high-risk uses of AI. 

At the most basic level, Malaysians should change their habit of giving away personal information easily, the interviewees say. They should also protest when unnecessary personal data is collected and avoid questionable apps. 

Of course, this is not always possible, since they may be shut out of certain services if they do not comply.

Ultimately, stronger laws, oversight and enforcement are sorely needed to push for more systemic protection.

“If I were to choose one vital thing, it would be a revision of the PDPA or a similar legal provision, which includes the government in its mandate,” says Anisha. 

“Checks and balances must also be put in place to prevent the exceptional data collection practices deployed during the pandemic from creeping into other use cases beyond the time frame that is actually necessary.”

For instance, the EU’s General Data Protection Regulation (GDPR) has clauses that require companies to collect data only for specific purposes and to retain it only for as long as necessary. Consumers also have the right to ask companies how their data is being used.

The PDPA is not as comprehensive as the GDPR. It only prevents the inappropriate use of personal data for commercial purposes, is rarely enforced and does not regulate the types of data collected or used, according to a report by the Khazanah Research Institute. The law lacks provisions to address online privacy issues and personal data processed in non-commercial settings.

“The appropriate legal framework must be in place before deploying technology. For instance, the Malaysia Digital Economy Blueprint states that the PDPA will be reviewed only in 2025, the same year in which the National Digital ID is rolled out. The scope of the PDPA should be reviewed well before this is launched,” says Anisha. 

Currently, the PDPA does not apply to the government. This should be addressed, adds Foong Cheng Leong, a lawyer focusing on areas such as privacy and data protection laws.

“There should be a law governing how the government can process our information. Such a law should include the right to request the government to disclose what kind of personal data it has collected or is collecting,” says Foong.

“This request is, of course, subject to certain exemptions such as national security. The law should also make the government accountable for misuse of our information or negligent handling of our information.”

Other suggestions by the interviewees include data localisation laws, mandatory data breach notifications and laws that allow the public to request information from the government.

There should also be some scrutiny on who the country buys surveillance technology from, especially if it is from countries that have weaker privacy standards. According to the AIGS report, the key companies that sell surveillance technology to Malaysia are China’s Huawei and Yitu, and Japan’s NEC Corporation.

“In terms of surveillance technology, we seem to be quite heavily dependent on technology from a particular country. This presents real geopolitical risk. What happens if diplomatic relations sour? More immediately, it presents privacy risks,” says Anisha.

All in all, transparency is crucial. The government can claim that the technologies are used to protect citizens, but it must then be transparent about how the technology works and what the data is used for, says Tan.

“It has to be open to more scrutiny. If you say you are going to protect us, you need to earn our consent. If the government has a much better outreach programme that explains truthfully what the limitations and benefits are, the buy-in will be higher too.”

 

Findings from the AI Global Surveillance Index 2019

• 51% of advanced democracies, 41% of electoral autocratic or competitive autocratic states and 37% of closed autocratic states deploy this technology;

• There is a strong relationship between a country’s military expenditure and a government’s use of artificial intelligence surveillance systems. Forty of the top 50 military-spending countries use AI surveillance technology; and

• Governments in autocratic and semi-autocratic countries are more prone to abusing this technology for mass surveillance.


