How asking ethical questions will determine altruism in AI

This article first appeared in Forum, The Edge Malaysia Weekly, on February 17, 2020 - February 23, 2020.

The new year began with a mixed bag of news on artificial intelligence (AI) and Google. First, a top executive resigned as the company failed to implement human rights policies in a project developed for use in China. Then, in contrast, a collaboration involving, among others, DeepMind and Google Health succeeded in reducing missed cases of breast cancer in mammogram images when its AI was pitted against human radiologists. This success story closely follows one from last year, in which a deep learning model reviewing CT scans detected lung cancer more precisely than radiologists.

The use of AI in healthcare pushes the agenda for algorithms to be developed to assist medical experts, but certainly not to replace them. As with other AI tools, pervasive use of AI must be preceded by an assessment of a number of variables, including the accuracy of the diagnosis (false positives and false negatives), the healthcare risks, the security of the data used and the findings, liability in cases of a wrong diagnosis, as well as innumerable other legal and ethical considerations.

Nevertheless, current trends anticipate that many facets of our lives will be regulated by algorithms and, in a nebulous situation, we may feel increasingly helpless in our lack of understanding and control of a non-human decision-making process. We are not, however, helpless. Our stance is that all is not lost. Indeed, the power and responsibility of humans to undertake conscionable decision-making matters more than ever.

Conscience is that part of our mind that informs us whether our actions are right or wrong. Italian philosopher Thomas Aquinas spoke of synderesis, arguing that human beings have a fundamental and infallible grasp of this. Our conscience is the process that measures our actions against a prescription of morality and ethics that comes to bear in our decision-making. It is our conscience — the whole sum of our biological processes and experiences — that leads our ethical awareness, ethical considerations and ethical decision-making.

When did ethics and human conscience begin to interrelate with information and communications technology? Perhaps it was the moment we realised that beyond its functionality and utility, technology can have perilous, menacing and harmful consequences for humanity. There is a need to be aware of ethical concerns in developing any type of technology, in particular the type of AI being created today through the writing of algorithms, the use of which will have serious consequences for the way we live, the dignity of our existence and humanity as a whole.

Data analytics and AI are fuelling a transformation in which data and technology are promoted to governments, businesses and individuals by their inventors and users as necessary and revolutionary. The consideration of ethics and law helps us deliberate, evaluate and decide the course of our actions and the consequences that follow from these transformative technologies.

While law is the moral minimum, ethics sets a higher threshold and, where there is a legal vacuum, ethics becomes the sole yardstick. The view we are advancing is that this exercise is not an impediment but, rather, an opportunity to gain a competitive advantage: by integrating ethical consideration into the design of data analytics and AI, there is a reasonable prospect that the AI being developed and deployed will be used for altruistic ends.

The apprehension around AI stems from its use by corporations and governments to make decisions, policies and laws that regulate our behaviour, rights and liberties. And this decision-making, to a large extent, is left in the hands of tech companies and IT professionals who write algorithms that use data to make decisions. In short, algorithms will regulate us.

“Algorithmic regulation”, a phrase coined by technology publisher and entrepreneur Tim O’Reilly, was expounded on by Prof Karen Yeung, Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at the University of Birmingham (UK) Law School. Algorithmic regulation or decision-making refers to the use of algorithmically generated knowledge systems to execute or inform decisions, which can vary widely in simplicity and sophistication. Regulation here is an intentional attempt to manage or minimise risk, or alter or control behaviour to achieve some pre-specified goal.

Yeung identifies several fears flowing from algorithmic regulation that give rise to a new system of social ordering: increased surveillance, private-sector use of big data analytics, erosion of informational privacy, a lack of due process and of respect for fairness and equality principles in decision-making that affects individuals, the delegation of decision-making to automation and, finally, a lack of democratic accountability. These fears lead us to argue that we need to regulate algorithmic regulation, whether by ethics or by law.

The loudest voices calling for regulation in algorithm design and deployment point us to three predominant concerns — namely, the risk of bias, the lack of transparency and the question of accountability, all of which are strongly interlinked.

Instances of racial bias in algorithms are demonstrated by the research of Prof Latanya Sweeney, which highlighted discrimination in online advertisement delivery, and by Google's image recognition wrongly classifying African Americans as "gorillas". Other forms of algorithmic bias, in terms of gender, social class and political disposition, are becoming increasingly apparent and potentially more impactful. One explanation for the shortage of reported instances of bias is the failure to detect these biases at the algorithm-development stage.
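To make this concrete, here is a minimal sketch of one kind of bias check that could run at the algorithm-development stage. It computes the "demographic parity gap", the difference in favourable-outcome rates between groups; the group labels, data and decisions are all invented for illustration, not drawn from any real system.

```python
def positive_rate(decisions):
    """Fraction of decisions that are favourable (True)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in favourable-outcome rates between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Toy example: a model's loan approvals for two hypothetical groups.
approvals = {
    "group_a": [True, True, True, False],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
gap = demographic_parity_gap(approvals)
print(f"Demographic parity gap: {gap:.2f}")  # prints 0.50 - a red flag worth auditing
```

A check like this is deliberately crude — parity of outcomes is only one of several competing fairness criteria — but running even a crude audit before deployment addresses precisely the detection gap described above.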

In this vein, developers and researchers alike often describe the decision-making process of an algorithm as an indiscernible "black box": its workings lack transparency, which raises the inevitable question of accountability when a problem such as bias is detected. Although there is a movement towards explainable AI (XAI), which aims to pair the results produced by a machine with explanation techniques so as to create more interpretable models and outcomes, XAI has yet to become a mainstay of AI.
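One simple post-hoc explanation technique of the kind XAI draws on is permutation importance: treat the model as a black box and measure how much its accuracy drops when one input feature is shuffled; a large drop suggests the model leans heavily on that feature. The sketch below is illustrative only — the "model" is a toy stand-in that secretly depends on just one feature, and all names and data are invented.

```python
import random

def black_box_model(row):
    # Toy stand-in for an opaque model: secretly depends only on feature 0.
    return row[0] > 0.5

def accuracy(rows, labels):
    return sum(black_box_model(r) == y for r, y in zip(rows, labels)) / len(labels)

def permutation_importance(rows, labels, feature):
    # Shuffle one feature's column and see how far accuracy falls.
    shuffled = [list(r) for r in rows]
    column = [r[feature] for r in shuffled]
    random.shuffle(column)
    for r, value in zip(shuffled, column):
        r[feature] = value
    return accuracy(rows, labels) - accuracy(shuffled, labels)

random.seed(0)
rows = [[random.random(), random.random()] for _ in range(200)]
labels = [r[0] > 0.5 for r in rows]
for f in (0, 1):
    print(f"importance of feature {f}: {permutation_importance(rows, labels, f):.2f}")
```

Running this shows a large importance for feature 0 and essentially none for feature 1, revealing what the black box relies on without opening it — the core promise of model-agnostic explanation techniques.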

Undoubtedly, these concerns are amplified when the algorithm in question is regulatory and predictive in nature, such as when it determines recidivism, bail applications, social security eligibility or travel security screening. Existing legal remedies may act as a stopgap measure, but the civil justice process is costly and slow. In the fast-paced field of technology, early implementation of hard law has proved unnecessarily restrictive (although AI law is an eventuality), so we may need to look elsewhere for solutions.

Without reinventing the wheel, we are inclined to build on existing frameworks to be adopted, such as the Asilomar principles on beneficial AI and the interim reports of the UK Centre for Data Ethics and Innovation. It is also encouraging to see practical tools being developed in response to the emerging concerns such as a “bias filter” developed by Elisa Celis and her team at Ecole Polytechnique Fédérale de Lausanne, a research institution in Switzerland.

However, drawing from calls as early as 1991 by Don Gotterbarn, a leader in the field of professional ethics, to make ethics a fundamental part of computer technology, and more recent calls by Hannah Fry of University College London to introduce an oath for mathematicians similar to that found in the medical profession, we conclude that professional autonomy, sensibly and pragmatically advanced, has a key role to play in the field of algorithm development.

There is a degree of progressive attitude towards ensuring that the development and deployment of AI incorporates ethical considerations. Tech companies that are aware of the fears of algorithmic regulation are setting up AI ethical frameworks and AI ethics committees within their organisations. Sceptics view the adoption of these initiatives as merely paying lip service to data ethics. This may be seen in the case of Google, where there is growing dissent among employees, such as the internal revolt against its development of Project Maven for the Pentagon (training an algorithm to identify certain objects in video from surveillance drones) and Project Dragonfly for China (a search engine that would produce government-controlled results).

When historian Yuval Noah Harari described data analytics as "Dataism", he highlighted the potential of algorithmic regulation. Harari observed that the processing of data should be entrusted to computational algorithms because their capacity far exceeds that of the human brain. We need, however, to add a further dimension: ethics is required to regulate both the technology and the technologist.

We accept that technology is, or will become, superior to human intelligence, and we argue for regulating algorithmic regulation, first by soft law such as professional codes, AI ethical frameworks and ethics committees, and eventually by hard law, in the same way parliamentarians and governments regulate entities, human or artificial. AI is machine intelligence and it can undertake many of the tasks that human intelligence achieves, bar one: human conscience. We do not know precisely what creates the feeling of consciousness in humans, and it would be difficult for a data scientist to claim that AI can write an algorithm with a human conscience.

Machines will become more pervasive and invasive as their intelligence and autonomous processing increase. But we must continue to strive for AI that cooperates with human conscience and human altruism. Throughout history, machines have been built to serve their creators, and human intervention has been required in their automation. We hasten to add that human intervention may become a thing of the past with sentient machines, but that is a conversation for another day.

Discussion of ethics in AI may be seen as premature but, owing to our rapacious appetite for using AI to solve the world's problems, an appreciation of ethical considerations is vital to the inexorable transformation it is making and leading. The discussion of data ethics must begin now; it would be foolish to dismiss it.


Dr Jaspal Kaur Sadhu Singh is senior lecturer at the Faculty of Law and a member of the Centre of Analytics Research and Applications, HELP University. Darmain Segaran is founder and principal consultant at Dataraxis and an adjunct lecturer at the Faculty of Law, HELP University. They are, respectively, executive committee member and ordinary member of the International Movement for a Just World.
