Moonshot: Drone voting and robot politicians

This article first appeared in Digital Edge, The Edge Malaysia Weekly, on September 13, 2021 - September 19, 2021.

Every time Malaysia’s general elections come around, there is a scramble to include as many people as possible in the democratic process. This may not be a big issue in city centres, where polling stations are set up close to voters’ homes, but for people in rural areas and those who are physically disabled, getting to a polling station can be daunting.

So how can voting be made more accessible? One answer is to get ballot boxes to voters instead of the other way round, perhaps by using drones. Inference Tech Sdn Bhd CEO Izad Che Muda tells Digital Edge it is an interesting idea, especially since drone delivery is picking up overseas. Using drones to get ballot boxes to people’s houses is not too different from drone package delivery, he points out.

Izad says that theoretically and practically, in order to complete a delivery, a drone must identify a safe landing spot before delivering the package. This can be achieved with the use of existing computer vision technologies.

“The drone will then accept an authenticated digital receipt from the customer after unloading the package. Additionally, and perhaps most importantly, customers’ private information and any communicated data must be protected against malicious attacks such as eavesdropping, manipulation, interception and the physical capture of the drone. Standard cryptographic algorithms are not designed to address these security challenges,” he explains.

Fortunately, there have been some impressive advances in white-box cryptography and blockchain that may solve these security challenges. However, these are still experimental and nowhere near ready for prime time. In other words, it will be a few years before drones deliver your latest purchases from Shopee right to your doorstep.

White-box cryptography basically combines methods of encryption and obfuscation (or confusion) to embed secret keys within the application code. The goal is to combine code and keys in such a way that the two are indistinguishable to an attacker, and the new “white-box” programme can be safely run in an insecure environment, according to Lane Wagner in a post on Hacker Noon, an independent technology media site.
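Wagner’s description can be made concrete with a toy sketch. This is deliberately simplified compared with real table-based designs such as white-box AES, and the XOR cipher here is only illustrative, but it shows the core move: the secret key is baked into a lookup table at build time, so the shipped program contains no key variable at all.

```python
# Toy illustration of the white-box idea: combine code and key so the
# shipped artifact contains no separate key material. A real white-box
# tool would generate a far more elaborate network of tables offline
# and ship only the tables.

SECRET_KEY = 0x5A  # build-time only; a real build would erase this

# Precompute the encryption of every possible byte value.
ENCRYPT_TABLE = bytes(b ^ SECRET_KEY for b in range(256))

def whitebox_encrypt(data: bytes) -> bytes:
    """Encrypt via table lookup only; no key variable exists at runtime."""
    return bytes(ENCRYPT_TABLE[b] for b in data)

ciphertext = whitebox_encrypt(b"ballot")
```

Because XOR is its own inverse, applying the same table twice recovers the plaintext; an attacker inspecting the running program sees only the table, not the key as a distinct value.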

“Putting aside the regulatory issues, the technical challenges are pretty much identical, with an extra emphasis on security for obvious reasons. Therefore, when (not if) drone package delivery issues are solved, which can be many years from now, I don’t see why drones cannot be used to deliver ballot boxes,” Izad says.

He adds that voters could be authenticated with a combination of a MyKad slot, facial recognition and an additional biometric mechanism such as thumbprint scanning. No single factor is sufficient on its own, he believes, as authentication is just one part of the security equation.
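The layered check Izad describes amounts to requiring every factor to pass before a ballot is unlocked. A minimal sketch, with illustrative function and factor names that are not part of any real system:

```python
# Hypothetical multi-factor gate: the ballot is unlocked only when the
# MyKad check, the facial-recognition check and the thumbprint check
# all pass. Any single factor on its own is rejected.

def authenticate_voter(mykad_verified: bool,
                       face_match: bool,
                       thumbprint_match: bool) -> bool:
    """Return True only if all three authentication factors pass."""
    return mykad_verified and face_match and thumbprint_match
```

For example, a valid card plus a matching face is still rejected if the thumbprint scan fails, reflecting the point that one factor without the others is not sufficient.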

“Let’s assume that white-box cryptography and blockchain can address most of the security concerns on the software side of things. The biggest risk now for drone delivery (whether it is parcels or ballot boxes) is physical capture of the drone,” he says.

“Drone jammers, which can jam the frequency that a drone uses to communicate with its ground station, can be easily bought online. There are also drone catchers who literally catch drones using a big net. There is no obvious solution to these anti-drone mechanisms.”

Companies such as Dominion have been manufacturing electronic voting (e-voting) systems for quite some time and they are widely used in the US states that allow electronic voting. Izad says these systems are designed with extreme security measures in mind, such as strict non-reliance on internet connectivity and absence of remote access mechanisms (to eliminate the risks of remote attacks), in addition to physical paper ballots and paper records to ensure reliable audits and recounts.

Theoretically, if a smaller and lighter version of these machines could be manufactured, it would be the ideal payload for drones to carry on election day, says Izad. Voters would authenticate themselves and cast their votes electronically via a touchscreen device mounted on the drone.

“In my opinion, electronic voting is the way to go. However, I do recognise that the logistical aspects of electronic voting are non-trivial. It is even more difficult to theoretically deliver this kind of technology to the often-marginalised and forgotten people who live in remote areas,” he says.

“A gradual, slow-and-steady transition and migration from a fully manual to fully electronic voting system is perhaps the best approach. We do have to start somewhere.”

A robot politician?

For most of us, choosing the right candidate to vote for can be tough. No candidate caters to all the needs of the people, and that in itself can be a significant barrier to political inclusivity. Could a robot politician be the answer? Yes, but not as a parliamentary candidate running for office.

David Lim, CEO and co-founder of Wise AI Sdn Bhd, tells Digital Edge that a robot politician is very likely to happen, but it will be used to assist human politicians rather than contest parliamentary seats.

He adds that some will agree that it is our ability to make value or moral judgments that sets humans apart from machines. Others would argue that ethical standards can be programmed into the robots. This, however, raises the perennial question about who gets to decide what those ethical standards are.

“Typically, politicians have a manifesto, which is an algorithm describing their promises. These manifestos are generally broad so that the politicians will have the flexibility to move things around after the election and react according to circumstances,” Lim explains.

“Meanwhile, a robot is an algorithm and has a specific task to deliver and is generally very narrow in scope. Robots are now used in social networks to weed out inaccurate and fraudulent information posted online. This is even more important before the election, as fake news could impact the rakyat’s vote.”

However, it might be possible for a bot to scan the sentiments of politicians before the election. Lim says there are multiple scientific disciplines such as artificial intelligence (AI), machine learning, cognitive science, mathematics, behavioural science, psychology, psychometrics and statistics that are used for emotion analysis. 

Facial expression and speech recognition are the two most widely used methods in AI, he adds. They include facial expression synthesis (image and video) and speech emotion synthesis. The use of psychometrics is also popular, but it is an intrusive method as it requires the person to answer a questionnaire or survey.

“Most facial expression synthesis and speech emotion synthesis systems rely on machine learning approaches that require large databases for effective training. As these large datasets are not easily available, a good solution is to augment the databases with appropriate data augmentation techniques, which are typically based on either geometric transformation or oversampling augmentations,” Lim explains.
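The two augmentation families Lim mentions can be illustrated with a minimal NumPy sketch; the “face” here is just a random array standing in for a real face crop.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_geometric(image: np.ndarray) -> list:
    """Geometric augmentation: cheap new views of one image."""
    return [np.fliplr(image),   # mirror left-right
            np.flipud(image),   # mirror top-bottom
            np.rot90(image)]    # rotate 90 degrees

def oversample(images: list, target: int) -> list:
    """Oversampling augmentation: re-draw existing samples until the
    dataset reaches the target size."""
    extra = [images[rng.integers(len(images))]
             for _ in range(target - len(images))]
    return images + extra

face = rng.random((48, 48))            # stand-in for a 48x48 face crop
views = augment_geometric(face)        # three geometric variants
dataset = oversample([face] + views, target=10)
```

Both tricks stretch a small labelled set without collecting new data, which is exactly the scarcity problem Lim describes.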

“A specific type of neural network called the generative adversarial network (GAN) is used for this purpose. GANs are algorithmic architectures that use two neural networks, pitting one against the other to generate new, synthetic instances of data that can pass for real data.”
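The adversarial setup Lim describes can be sketched with a deliberately tiny example: a one-parameter “generator” that shifts Gaussian noise, and a logistic-regression “discriminator”, trained against each other with hand-derived gradients. Real GANs for face or speech synthesis use deep networks, but the pit-one-against-the-other training loop is the same.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

real_mean = 4.0          # "real" data comes from N(4, 1)
theta = 0.0              # generator: g(z) = theta + z
w, b = 0.1, 0.0          # discriminator: D(x) = sigmoid(w*x + b)
lr, batch = 0.05, 64

for _ in range(2000):
    x_real = rng.normal(real_mean, 1.0, batch)
    x_fake = theta + rng.normal(0.0, 1.0, batch)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    grad_w = -np.mean((1 - d_real) * x_real) + np.mean(d_fake * x_fake)
    grad_b = -np.mean(1 - d_real) + np.mean(d_fake)
    w, b = w - lr * grad_w, b - lr * grad_b

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * (theta + rng.normal(0.0, 1.0, batch)) + b)
    theta -= lr * (-w * np.mean(1 - d_fake))

print(f"generator mean after training: {theta:.2f} (target ~{real_mean})")
```

Each side improves only by exploiting the other’s weaknesses, which is what lets the generator’s output drift toward something the discriminator can no longer tell apart from real samples.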

Meanwhile, psychographics seek to understand the cognitive factors that drive human behaviour. This includes emotional responses and motivations; moral, ethical and political values; and inherent attitudes, biases and prejudices. Lim says the evaluation is usually done via traditional focus groups or interviews, set-top box viewing data, surveys, psycholinguistic dictionaries, website analytics, browsing data and social media.

An example is Silent Talker, a programme created at Manchester Metropolitan University to guess whether a subject was lying or telling the truth by scanning facial and eye movements. In 2019, it was used as a virtual policeman in iBorderCtrl, a project that aims to enable faster and more thorough border control for third-country nationals crossing into European Union member states, and was tested on volunteers at borders in Greece, Hungary and Latvia.

The project, Lim says, has drawn criticism from human rights activists and from a professor of criminal investigation at the University of Derby, who said it was not credible as there is no evidence that monitoring people’s micro gestures is an accurate way to detect lying.

“The belief that deception can be detected by analysing the human body via a high-tech AI version of a polygraph test has become more popular in recent years. This is partly because the polygraph machine is too slow and cumbersome to use at airports, at borders or on large groups of people, and partly because of the possibility of bringing the AI system into areas like insurance fraud claims, loan screenings and private job interviews,” he explains.

The bottom line — there is no single solution or silver bullet to perfectly identify human emotions, says Lim. Humans communicate through various verbal and non-verbal channels to show their emotional state and researchers look for physiological events thought to be correlated with a particular emotion.

“Although multiple data inputs from various channels could be used to suggest a state of emotion, the accuracy of these methods is still in question and would not be regarded as conclusive evidence.”
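Combining channels in this way is often done as a simple late fusion of per-channel scores. A toy sketch, with made-up scores and an assumed weighting, shows why the result is suggestive rather than conclusive: the fused number inherits whatever error each channel carries.

```python
# Hypothetical late fusion of two emotion scores in [0, 1], e.g. an
# "anger" estimate from a face analyser and one from a voice analyser.
# The 0.6 weight is an illustrative assumption, not a standard value.

def fuse_emotion_scores(face_score: float, voice_score: float,
                        face_weight: float = 0.6) -> float:
    """Weighted average of two per-channel scores."""
    return face_weight * face_score + (1 - face_weight) * voice_score

score = fuse_emotion_scores(0.8, 0.5)  # face suggests anger, voice less so
```

A fused score like this can rank a state of emotion as more or less likely, but it offers no threshold at which the estimate becomes proof, which is Lim’s point.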

“Intentions or biases run deep in humans on many levels, and training algorithms to be completely free of those intentions and biases is a nearly impossible task. The tools used to collect data, the factors data scientists select to analyse and the way they train their models can all cause biases to creep into algorithms,” Lim says.

“One way to build better algorithms is to use auditing tools to detect biases in the training model before deploying it in the real world. Data scientists usually build algorithms for accuracy and not fairness.”