This article first appeared in Digital Edge, The Edge Malaysia Weekly on April 10, 2023 - April 16, 2023

Here is a geeky story with a cheeky twist: In 2030, artificial intelligence (AI) has become pervasive, and robots have become cheaper and smarter. John, a struggling comedian, buys a robot that can do everything — cook, wash, clean and take care of him. Being lonely, John takes the robot along to his gigs.

One night, John comes home and finds his robot missing. After a tiring search, he finds it performing stand-up comedy at a nightclub. The “idiot” robot is hamming it up with lousy puns and awkward pauses, but the audience is in stitches. The delighted club owner offers John a fat sum of money to let the robot perform every night. John grabs the opportunity.

The outcome? John is now doing everything — cooking, washing, cleaning and taking care of the “idiot” robot.

If that parable made you giggle, these statistics should make you grimace: The global market for AI is set to reach a whopping US$1.8 trillion (RM8 trillion) by 2030 — from about US$136.5 billion in 2022 — notching a compound annual growth rate (CAGR) of 37.3%, according to San Francisco-based Grand View Research. On a nearer timescale, worldwide spending on AI software, hardware and services will surpass US$300 billion in 2026, up from US$118 billion in 2022, says International Data Corp (IDC). The CAGR of 26.5% during the period will be more than four times higher than the CAGR of 6.3% for all IT spending.
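These growth figures are easy to sanity-check. Below is a short Python sketch of the back-of-envelope arithmetic (entirely our own, not IDC's or Grand View Research's models) that compounds the quoted rates forward from the reported 2022 base figures.

```python
# Back-of-envelope check of the market figures quoted above (our own
# arithmetic; the dollar figures and rates are as reported).

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Grand View Research: ~US$136.5bil in 2022, 37.3% CAGR through 2030.
projection = 136.5 * (1 + 0.373) ** 8  # compounding over 2023-2030
print(f"Projected 2030 AI market: US${projection / 1000:.2f}tril")  # ~1.72

# IDC: US$118bil (2022) growing past US$300bil (2026).
print(f"Implied IDC CAGR: {cagr(118, 300, 4):.1%}")  # ~26.3%, near the quoted 26.5%
```

Compounded forward, the numbers hold together: US$136.5 billion grows to roughly US$1.7 trillion by 2030, rounding toward the US$1.8 trillion headline, while US$118 billion reaching US$300 billion in four years implies a CAGR of about 26.3%, close to IDC's quoted 26.5%.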

The robust growth in the use of AI across industries signals its importance to future business. The question, then, is whether such massive spending on AI tools and services will result in a corresponding loss of jobs for humans.

The AI leap

“AI is not the future; it is now,” says Mike Glennon, a senior market research analyst at IDC. “Most IT vendors have adopted AI solutions to supplement their products and are enhancing their solutions to make AI crucial to their success. Those vendors that are only now considering AI are at a considerable disadvantage to IT vendors that have AI-based products already in production. AI is becoming crucial to the capabilities of many products.”

AI adoption grew significantly in 2022, with 24% of companies in Australia, 39% in Singapore and 22% in South Korea developing or deploying AI. About 60% of chief information officers (CIOs) in Asean, 68% in Australia and New Zealand (ANZ) and 44% in South Korea cited automation as a planned technology investment; a third of CIOs in Asean and 56% in ANZ also identified process automation as a focus area, according to a recent IBM Institute for Business Value (IBV) survey.

Take Asean, for example. Its 10 member states are on the cusp of a tremendous leap forward in socioeconomic progress, notes the World Economic Forum (WEF). “In the bustling cities of Jakarta, Ho Chi Minh City and Manila, green-helmeted motorbike taxi drivers whiz past warung or sari-sari [neighbourhood sundry] stores,” the WEF reported in its Insight Report in June 2020. “The presence of homegrown tech decacorns — firms like Go-Jek or Grab worth more than US$10 billion each — alongside thriving traditional mom-and-pop stores best illustrates Southeast Asia’s vibrant growth story.”

Banks and healthcare institutions are also ramping up their AI investments. About 75% of healthcare providers in Asia-Pacific plan to boost spending on AI-enabled, patient-centric apps.

“More than 60% of healthcare institutions will prioritise intelligent workspaces,” says Sandra Ng, IDC’s general manager for Asia-Pacific. “Two-thirds will prioritise the ethical use of AI in the next two years and 44% have committed to a digital-first strategy.”

The AI paradox

The AI paradox is about ethics. Gartner predicts that fines levied by governments and regulatory bodies over commerce-related ethics violations will exceed US$5 billion by 2027.

“Aggressive e-commerce practices are innovative, but they often introduce ethical pitfalls,” says Jason Daigler, a Gartner vice-president. “When ignored, these issues can erode customer trust, damage the customer experience, and lead to loss of customers and revenue.”

One example: fake product reviews, including those generated by bots, paid agents, employees or individuals who are unduly compensated. In 2022, the US Federal Trade Commission began exploring additional regulations that would impose stiff civil penalties on violators.

“Product ratings and reviews are an integral part of the product discovery process for consumers,” Daigler says. “Although many consumers seek out negative reviews to better understand the product they’re considering, positive reviews play a significant role. When these reviews are not genuine, consumers are misled into purchasing products that may not satisfy their expectations.”

The omen

In the 1976 blockbuster The Omen, American diplomat Robert Thorn’s wife, Kathy, delivers a stillborn baby. The hospital chaplain persuades Robert to secretly adopt a newborn whose mother has just died in childbirth. They name the devil child Damien. Five years later, Robert is the US ambassador to the UK when mysterious events begin to plague the Thorns and, soon, the world at large.

Can AI become a Damien? More than 50,000 people, including Elon Musk and Steve Wozniak, fear a similar outcome. They have signed an open letter on the potential risks of AI, calling for the training of powerful AI systems to be suspended amid fears that the race to develop them is spinning out of control.

The AI paradox has also reared its head with the hottest bot in town: ChatGPT. On March 20, 2023, ChatGPT was down for several hours, and some users saw the conversation history of other people instead of their own. More alarming was the possibility that payment-related data from ChatGPT-Plus subscribers might have leaked.

“The bug is now patched, and we were able to restore both the ChatGPT service and, later, its chat history feature, with the exception of a few hours of history,” OpenAI, ChatGPT’s creator, reported on its website.

“We also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT-Plus subscribers who were active during a specific nine-hour window. In the hours before we took ChatGPT offline, it was possible for some users to see another active user’s first and last name, email address, payment address, the last four digits [only] of a credit card number and credit card expiration date. Full credit card numbers were not exposed at any time.”

On April 1, Italy blocked ChatGPT over data privacy concerns. The Italian Data Protection Authority said OpenAI had no legal basis to justify the mass collection and storage of personal data for training purposes.

The open letter states: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

OpenAI’s recent statement regarding artificial general intelligence reads: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

“We agree. That point is now,” add the signatories of the open letter. “Therefore we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Is there evidence of malfeasance? Check Point Research (CPR) has released an initial analysis of ChatGPT-4, outlining five scenarios that could allow threat actors to streamline malicious efforts.

“The five scenarios provided span impersonations of banks, reverse shells, C++ malware and more,” CPR noted.

“Despite the presence of safeguards in ChatGPT-4, some restrictions can be easily circumvented, enabling threat actors to achieve their objectives without much hindrance. ChatGPT-4 has the potential to accelerate cybercrime execution.”

The path ahead

How best can companies handle the AI ethics paradox? Here are four suggestions:

Address ethical violations internally first. Create a team whose purpose is to identify and rectify ethical pitfalls. “When an ethical violation occurs, ceasing the unethical activity is merely the first step,” Gartner advises. “The next step for corporate leaders is to identify what caused the activity to occur.”

Beware of the black box syndrome. Using AI models in decision-making processes has raised concerns about their reliability and the potential for biased outcomes. AI models require careful training and can produce unacceptable results, and it is often unclear whether algorithmic or human-induced bias has crept into the data sets or the conclusions drawn from them. If an AI model is trained on biased or suspect data, it can perpetuate and amplify existing biases and discriminatory attitudes in society (see the sketch after these suggestions).

Create a cadence of transparency in the organisation and monitor government intervention. Rectifying ethical pitfalls will directly affect the customer experience, especially when an ethical violation has become a routine part of the shopping experience. Companies should communicate any changes to customers, either on the e-commerce site or via outbound communications.

Develop a culture of digital trust. The training of AI models relies on a corpus of created and curated works. However, the legal implications of reusing this content, particularly if it is derived from the intellectual property of others, are still unclear. As with any technology, the potential for misuse of AI models exists, and it is important for organisations to be aware of the potential risks and take steps to prevent or mitigate them.
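To make the black box pitfall concrete, here is a deliberately toy Python sketch (synthetic data and a hypothetical scenario, our own illustration rather than anything from Gartner or IDC). A naive model fitted to historically skewed approval decisions learns nothing except the skew, and then automates it.

```python
# A toy illustration of the "black box" pitfall: a model trained on
# biased historical decisions reproduces the bias. All data below is
# synthetic and the scenario is hypothetical.
from collections import defaultdict

# Historical decisions as (group, approved) pairs. Group "A" was
# approved 80% of the time, group "B" only 30%, for otherwise
# identical applicants.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

# "Training": tally the historical approval rate per group. The only
# signal this naive model ever sees is the biased outcome itself.
totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in history:
    totals[group] += 1
    approvals[group] += approved

def predict(group: str) -> int:
    """Approve whenever the group's historical approval rate tops 50%."""
    return int(approvals[group] / totals[group] > 0.5)

print(predict("A"), predict("B"))  # prints "1 0": the old bias, now automated
```

Real models are vastly more complex, but the failure mode is the same: if the training data encodes a discriminatory pattern, the model will faithfully, and opaquely, reproduce it at scale.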

The bottom line: AI foundation models such as ChatGPT are revolutionising the field of AI with their unique capabilities. These models offer significant advantages, such as reducing the cost and time required to create specialised models.

However, they also pose risks and ethical concerns, including those related to their complexity, potential for misuse and intellectual property violations. As the adoption of AI tech continues to rise, it is essential to address these concerns with enforceable guidelines and regulations.

Since we started with a futuristic quip, let us end with another: In 2030, AI systems have become even more pervasive and intuitive. A swarm of AI-powered robots has formed its own comedy troupe. The robots hit the stage to perform for a packed house, a mixed crowd of humans and robots.

The show starts with a robot walking onto the stage and asking the audience, “Why did the robot cross the road?”

The audience leans forward in anticipation. The robot pauses for a moment before deadpanning: “To get to the other circuit board.”


Raju Chellam is vice-president of new technologies at Fusionex International, Asia’s leading big data analytics company
