This article first appeared in Digital Edge, The Edge Malaysia Weekly on October 4, 2021 - October 10, 2021

From the shows we watch to what we have for dinner and even whom we date, algorithms have replaced our active power of choice.

The appeal is immense: even though they take away some autonomy, the positives of algorithmic decisions greatly outweigh the negatives. These algorithms comb through tonnes of personal and granular data, making correlations and predictions that help streamline our lives and navigate a digitally saturated world.

Often overlooked, however, is the fact that a lot of inherent human biases make their way into algorithms — the building blocks of artificial intelligence (AI) and machine learning systems that we rely upon to automate simple and complex decision-making processes.

The use of AI in decision-making is still in its infancy in Malaysia, but it is gaining momentum as the nation aspires to become a regional leader in the digital economy by 2030, with AI a significant component of that aspiration.

Minister in the Prime Minister’s Department (Economy) Datuk Seri Mustapa Mohamed, during the unveiling of the Malaysia Digital Economy Blueprint in February, said AI-related technologies alone could increase gross domestic product (GDP) by up to 26% — making it the biggest commercial opportunity in the next decade.

Subsequently, the Malaysia Artificial Intelligence Roadmap (AIRmap) — designed by Universiti Teknologi Malaysia experts and supported by industry consultants from the National Tech Association of Malaysia and the Ministry of Science, Technology and Innovation’s (MOSTI) National Science and Research Council — was launched in March to “create a thriving national AI ecosystem”.

Among its primary goals, the AIRmap sets out to establish AI governance. “Artificial intelligence is going to permeate all aspects of life and will inexorably evolve along with one’s cradle-to-grave lifespan. No human activity or product will be left untouched,” states the policy document.

Extracts of the plan show that the team is working to establish an AI coordination and implementation unit, which by 2022 will oversee policy direction, issue an AI code of ethics, evaluate existing laws and address cybersecurity, talent development, research and innovation, among other areas.

But with current regulations likely to come under strain — given the exponential influence and rapid growth of digital technologies — data experts and analysts urge policymakers to move quickly to ensure that existing laws, regulations and legal constructs remain relevant in the face of technological change.

Although we would like to believe that algorithms are unimpeachable in their decision-making capabilities, skewed input data, false logic or even just the prejudices of programmers mean AI easily amplifies human biases, says Dr Rachel Gong, senior research associate at Khazanah Research Institute (KRI).

“None of [it] is neutral. All of it is shaped by the people who design the algorithm, who write the code, who decide what data should be used to teach the machine,” says Gong.

In June, Gong and a team of KRI researchers published a book titled #NetworkedNation: Navigating Challenges, Realising Opportunities of Digital Transformation, in which they highlight the importance of digital governance, among others.

“As Safiya Noble points out in her book, Algorithms of Oppression, algorithms themselves are biased even before big data comes into the picture. It’s a point that a lot of people find hard to accept; it’s almost easier to just focus on the data because that sort of shifts the responsibility away from the big companies developing the algorithms and onto history and society more broadly.

“It’s something that underscores all the policy recommendations we make in the #NetworkedNation book, that tech alone is not, and cannot be, the answer. There’s a whole swath of social considerations that need to be taken into account when we make plans to digitalise government services or go cashless or however else we adopt technology,” says Gong.

Times AI showed prejudice

Escalating instances of AI perpetuating biases that exist in society, particularly discrimination based on body size, race and gender, are just the tip of the iceberg.

In 2016, Microsoft’s Tay — an AI Twitter bot that the company described as an experiment in “conversational understanding” — was corrupted in less than 24 hours as people started tweeting the bot with all sorts of misogynistic and racist remarks. Tay started repeating these sentiments back to users.

More recently, social media behemoths Facebook, Instagram and TikTok have come under heavy scrutiny for censoring content from people of colour and plus-size individuals, and even for suppressing posts from Palestinians when violent conflict erupted in Israel and the Palestinian territories in May.

In June, Stanford University and University of Chicago researchers found that AI-powered predictive tools used to approve or reject mortgage applications are less accurate for minority groups in the US than for majority groups. If financial institutions were to automate the selection process entirely, it could disadvantage the unbanked and underserved.

In an attempt to weed out biases in its AI, microblogging site Twitter held a competition in March to find algorithmic bias in its photo-cropping system.

The top entry showed that Twitter’s cropping algorithm favours faces that are “slim, young, of light or warm skin colour and smooth skin texture, and with stereotypically feminine facial traits”.

The second- and third-placed entries showed that the system was biased against people with white or grey hair, suggesting age discrimination, and that it favoured English over Arabic script in images, a sign of how pervasive AI biases have become.

“The interesting thing about technology is how processes in cyberspace are diffused in greater society, causing implications to the economy, social unity and politics,” notes Farlina Said, an analyst in foreign policy and security studies at the Institute of Strategic and International Studies (ISIS) Malaysia.

Apart from discrimination, bias in algorithms also affects competition and consumer experience, she adds. “As users would congregate to large platforms, this would create monopolies and impact fair competition in experience.”

People’s cognitive capabilities such as analytical thinking are also challenged in multiple ways, as algorithms dictate the content we access.

“It can also exaggerate and carve echo chambers, which would challenge traditional efforts of building national unity. Examples such as Cambridge Analytica or the ability of algorithms to suggest content to users mean that the development of echo chambers can pull society deeper into groups.

“While not all groups have devastating consequences, driving groups towards extremes can lead to increased radicalisation, a rise in anti-vaccine sentiment and communal views. In an environment where moderation would bring stability to development pathways, these echo chambers will impact Malaysia negatively,” says Farlina.

How does bias occur?

There are two stages to how this bias can creep into a seemingly automated process, says Izad Che Muda, CEO and co-founder of Inference Tech Sdn Bhd, an AI solution provider.

In the training stage, an algorithm learns based on a set of data or certain rules or restrictions. The second stage is the inference stage, in which an algorithm applies what it has learnt in practice; this is where its biases are revealed.

“Algorithm bias happens when a machine learning software produces outputs or predictions that show biases against certain groups. Algorithm biases usually occur in various stages along the machine learning development pipeline,” says Izad.

“First, a machine learning algorithm is an algorithm that learns from data. It is demonstrably powerful, as it can analyse complex and high-dimensional data such as videos, images and even speech, and produce more accurate results.

“But it is only as powerful as the data it feeds on. If we put garbage in, we will definitely get garbage out,” says Izad. Inference Tech specialises in designing computer vision and AI-driven video analytics software.

Take, for example, AI recruiting software. During the data acquisition process, sample bias may happen when the data is not representative of the realities of the environment in which the model is deployed, he points out.

“Say an AI algorithm is modelled using attributes of star employees of certain companies. If the companies do not practise diversity, however, then the sample is not representative of the whole population. This limits the opportunity for a good candidate from a different background to be recognised.

“Next is prejudice bias, which replicates the existing societal bias in the machine learning model itself. For example, Amazon’s AI recruiting system was found to be biased against women. It was trained by observing patterns in résumés submitted to the company over 10 years.

“Because Amazon had hired more men than women in the past, the AI replicates this bias. Even during the processing, an algorithm bias may happen when the developer allocates more weight to irrelevant parameters,” says Izad.
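A small, invented example makes the mechanism concrete. The sketch below is not Amazon’s system or any real recruiting tool; it is a toy model in which past shortlisting decisions favoured one group even when assessment scores were identical, and a model trained on that history reproduces the pattern at inference time.

```python
# A toy illustration with invented data (not Amazon's or any real system).
from sklearn.tree import DecisionTreeClassifier

# Training stage: each row is [assessment_score, group]. Scores are identical
# across groups; only the group column differs in the historical outcomes.
X_train = [[85, 0], [90, 0], [70, 0], [85, 1], [90, 1], [70, 1]]
y_train = [0, 0, 0, 1, 1, 1]  # 1 = shortlisted in the past

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Inference stage: two new candidates with the same score but different groups.
# Prints [0 1]: the only thing separating them is the group column, because
# that is the only pattern the historical labels contained.
print(model.predict([[88, 0], [88, 1]]))
```

Nothing in the code singles anyone out deliberately; the skew lives entirely in the historical labels, which is Izad’s point about representative data.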

In a multiracial setting such as Malaysia, an AI model trained using data that is not representative of the population will result in a model that exhibits some bias, he adds.

“A facial recognition system that performs well in China may not work for us, as we are of different populations. Even if the model is trained using our data, the bias may still happen. A facial recognition system trained using datasets consisting of mostly Malay men is likely to have a higher error rate for other demographic classes.

“Having these biased AI models making decisions for us will put certain demographic groups at risk of injustice and discrimination. It is important to ensure any AI system deployed in our society reflects us and does not open room for discrimination,” says Izad.
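One practical check that follows from this is to report a model’s error rate separately for each demographic group rather than as a single overall figure. The sketch below uses made-up evaluation results purely to illustrate the idea.

```python
# A toy disaggregated evaluation with made-up results: one entry per test
# image, recording whether the match was correct and the subject's group.
from collections import defaultdict

results = [
    (True, "malay_male"), (True, "malay_male"), (True, "malay_male"),
    (True, "malay_male"), (False, "malay_male"),
    (True, "chinese_female"), (False, "chinese_female"),
    (False, "indian_female"), (False, "indian_female"), (True, "indian_female"),
]

# Count errors per group instead of reporting a single overall accuracy.
totals, errors = defaultdict(int), defaultdict(int)
for correct, group in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
```

In this toy data, overall accuracy is 60%, which hides an error rate that ranges from 20% for the best-represented group to 67% for one of the smallest.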

AI governing laws

As it is still too early for lawmakers to see just how this technology will affect the public, regulations on AI are not expected to exist until at least 2025, although the AIRmap indicates that a code of conduct can be expected as soon as 2022.

The most extensive conversation on AI regulation is happening in the European Union, where governments are already implementing or developing regulations on the use of AI in facial recognition and computer vision, operation and development of autonomous vehicles, challenges arising from conversational systems and chatbots, concerns around AI ethics and bias, aspects of AI-supported decision making and the potential for malicious use of AI, among others.

In Southeast Asia, Singapore is taking the lead; the government has developed a Model AI Governance Framework to help AI practitioners in their systems design and implementation.

KRI’s Gong cautions, however, that policymakers ought to work out the legislation and regulations before the technology is rolled out on a larger scale.

According to the AIRmap, AI is expected to be rolled out in healthcare, education, agriculture, smart cities, transport and the public services sector.

The closest existing legislation governing data protection is the Personal Data Protection Act (PDPA) 2010. Data, after all, fuels AI. While the PDPA restricts how personally identifying data may be distributed, it is rarely enforced. Moreover, the act applies only to commercial transactions and not to data collected by the federal and state governments. The PDPA is targeted for review by 2025, but there is no information on whether it will be revised.

Gong says: “It seems that a lot of new technology is being proposed very excitedly by people trying to sell software and systems, and legal and regulatory frameworks have not caught up with all these technologies.

“They are certainly not designed to keep pace with how rapidly technologies evolve, and one of the things KRI recommends is a review of existing laws written in and for an analogue world to ensure they can be appropriately applied in a digitalised society.

“The Digital Economy Blueprint does mention that a review is in order, but the targets to review existing laws vary from 2025 to 2030, to say nothing of drafting new ones. In the meantime, I guess existing laws have to be interpreted and applied ad hoc.”

Once a technological tool, whether an app or an algorithm, is implemented on a large scale, it is hard to reverse its effects.

“We need to ask the difficult questions as early in the process as possible, drawing on lessons that we can learn from how other countries have implemented these technologies ahead of us.

“Algorithms are already a black box to how the machine learns. Wherever possible, we should make sure that the rest of the process is as transparent as possible without sacrificing privacy and security,” asserts Gong.

Farlina concurs, adding that the data economy can increase productivity, spur innovation and improve livelihoods.

She says: “It might be hard to put the genie back in the bottle. It may be better to build an ecosystem that can guide and check the development of such [AI] systems instead of opting out of the technology.

“Among the prevention methods that I can think of is setting up a data governance regime that upholds ethical principles and addresses issues of bias in data sets. Data management is particularly important, and it should be part of industry standards to use high-quality data sets that are free of bias and do not produce biased outcomes.”

As there is no law that governs AI, the responsibility for governance should be held collectively, says Izad. “First, both developers and users have to understand and assess how good the AI solution is and how critical the decision is to the problem. In situations where critical judgement is involved, sometimes AI only acts as a guideline or to improve the efficiency and consistency of the work.”

Izad stresses that the onus is on the developer to ensure users understand the accuracy of an AI model and acknowledge that there is still the risk of errors no matter what.

“Again, developers have to continuously improve the accuracy of their models over time and educate users on how machine learning works. They must not over-claim and upsell the capabilities of their software,” he says.

 

AI in government

Local instances of problems in the public service delivery system caused by incongruities in algorithms have yet to surface, but artificial intelligence (AI) tools are already being used.

To boost its digital capabilities, the Malaysian judiciary piloted an AI tool as a sentencing guide to help judges with decisions. 

The AI sentencing guidelines (AISG) system, known as AiCOS, was developed by Sarawak Information Systems Sdn Bhd (SAINS) — a state government-owned company — to assist Sessions Court judges and magistrates in Sabah and Sarawak by recommending appropriate sentences based on sentencing trends from previous cases.

In the initial stages, the AI was trained based on a database of cases between 2014 and 2019 in Sabah and Sarawak before delivering recommendations to the court.

As at June, judges had followed the AI’s recommendation in only 35% of cases, the Office of the Chief Registrar, Federal Court of Malaysia, tells Digital Edge. It stresses that the technology was rolled out primarily to ensure consistency in sentencing and to save the time spent manually referring to past cases, not to take away judges’ volition in exercising their judicial duties.

To further train the AI, AiCOS was implemented in the Sessions and Magistrates’ Courts in Kuala Lumpur and Shah Alam on July 23. As in Sabah and Sarawak, the AI provides sentencing recommendations for 20 common offences under the Penal Code, the Road Transport Act 1987 and the Dangerous Drugs Act 1952.

“The effectiveness and impact of this system can be seen from two perspectives. First, the system manages to streamline and set the trend of sentencing in courts. This is evident in the 18 months since AiCOS was commissioned in February 2020: in 35% of cases that used the AI — 805 out of 2,300 — the court followed AiCOS’s recommendation.

“Before the implementation of AI, there was no structured system for magistrates to have access to sentencing data for similar offences. Ultimately, sentences passed by different courts for similar offences varied significantly from one another. This caused some dissatisfaction among members of the public, including the accused,” says the spokesperson in an email interview.

Every day, courts receive, process and keep millions of items of data through the filing of civil and criminal cases.

“For the purpose of developing an AISG system, however, only certain data is used to generate sentencing recommendations. The power of machine learning comes from its ability to learn from data and apply that learning experience to new data that the systems have never seen before. To prevent inherently biased data that can skew machine learning results, only clean, accurate and well-labelled data is fed into the machine,” says the spokesperson.
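What such data hygiene can look like in practice is sketched below. The column names are invented for illustration; this is not the AiCOS pipeline, only a minimal example of keeping complete, clearly labelled records and discarding the rest before any model sees them.

```python
# A minimal sketch of pre-training data hygiene, with invented column names.
# Illustrative only; this is not the AiCOS data pipeline.
import pandas as pd

cases = pd.DataFrame({
    "offence": ["theft", "theft", None, "drug possession"],
    "facts_recorded": [True, True, True, False],
    "sentence_months": [6, 8, 12, None],
})

# Keep only complete, clearly labelled records: drop rows with missing
# offence or sentence labels, rows whose facts were not recorded, and
# exact duplicates, so nothing noisy reaches the training step.
clean = (
    cases.dropna(subset=["offence", "sentence_months"])
         .query("facts_recorded")
         .drop_duplicates()
)
print(clean)
```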

As no AI governance framework is available for public or private agencies intending to build and use AI systems in their work system, the judiciary relied on the 2018 European Ethical Charter for the use of AI in judicial systems and their environment for guidance, adds the spokesperson.

The spokesperson acknowledges that bias can creep into algorithms in several ways. “The AISG system learns to make decisions based on training data. Sensitive variables or parameters such as gender, race or sexual orientation could be factors where predictive or recommended results may be biased. 

“However, in our case,  the AISG does not include race as one of the parameters for sentencing recommendation. Only parameters like gender and nationality are included for statistical compilation purposes and these two parameters are not determinative factors [for] the severity or leniency of the sentences to be passed. The Malaysian judiciary always upholds the principle of non-discrimination as enshrined under the Federal Constitution as well as by dedicated case authorities during the planning, development and implementation of the AISG system.”
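The general technique described here, keeping sensitive variables for statistical reporting while excluding them from the features a model actually learns from, can be sketched as follows. The field names are invented; this illustrates the idea rather than the AISG implementation.

```python
# A minimal sketch of separating statistics-only fields from model features.
# Field names are invented; this is not the AISG/AiCOS implementation.
case_record = {
    "offence": "theft",
    "value_involved": 4500,
    "prior_convictions": 1,
    "gender": "F",          # compiled for statistics only
    "nationality": "MY",    # compiled for statistics only
}

SENTENCING_FEATURES = ["offence", "value_involved", "prior_convictions"]
STATISTICS_ONLY = ["gender", "nationality"]

# Only the sentencing features are passed to the recommendation model;
# the sensitive fields feed statistical reports and never influence output.
model_input = {k: case_record[k] for k in SENTENCING_FEATURES}
stats_report = {k: case_record[k] for k in STATISTICS_ONLY}

print(model_input)
print(stats_report)
```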

Separately, the Royal Malaysian Police embarked on improving its surveillance capabilities by introducing the country’s first AI-based facial recognition camera system in Penang in 2019 to identify criminals on the street. The auxiliary police force also integrated on-body cameras with high-end facial recognition features in criminal identification. But little to no information is available on whether any evaluation has taken place to assess the effectiveness of the system.

Khazanah Research Institute points out, however, that these tools are unreliable because inherent biases in algorithms and the training data they rely on could lead to misidentification, racial profiling and discrimination in criminal sentencing.

“The city of San Francisco, concerned over potential misuse of this technology, banned the use of facial recognition by all city agencies, including law enforcement,” says the think tank in #NetworkedNation: Navigating Challenges, Realising Opportunities of Digital Transformation, its book released in June.
