Is generative technology becoming a sight for sore AI?

As the powerful technology continues to advance, perceived risks grow more pronounced, industry experts say

A robot greets people in Bangkok. Investors have put more than $4.2 billion into generative AI start-ups in 2021 and 2022. EPA

The rise of artificial intelligence has been meteoric, bringing with it a host of benefits, challenges and perceived risks.

Authorities have been scrambling to regulate the sector as innovation in AI continues to outpace existing guidelines.

"AI needs to be regulated – it’s too important not to," Joyce Baz, a spokesperson for Google, one of generative AI's main players, told The National.

"It is important to build tools and guardrails to help prevent the misuse of technology. Generative AI makes it easier than ever to create new content, but it also raises additional questions about trustworthiness of information online."

Reality, or digital hallucinations?

For starters, there seems to be a “huge dissonance” between what the general public cares about when discussing generative AI and what executives and business owners do, said Thomas Monteiro, a senior analyst at Investing.com.

The public tends to focus on the “bad”, while entrepreneurs look only at the “good”, he said.

“It is more than a purely technology-related matter. It is a broad social matter for which society still hasn’t found a common ground … and this is the main challenge for regulators at this point."

Generative AI could add as much as $4.4 trillion annually to the global economy and, with continued investment, will transform productivity across sectors, McKinsey & Company said in a study earlier this year.

The downside, however, stems from AI's “imperfections at its inception, potentially leading to instances of inaccuracies or hallucinations”, said Chiara Marcati, a partner at McKinsey & Company.

“This underscores the need for extensive awareness, continual mental filtering of AI outcomes and an emphasis on AI literacy,” she said.

AI hallucination is a phenomenon in which a large language model – often a generative AI chatbot or computer vision tool – perceives patterns or objects that are non-existent or imperceptible to human observers, creating output that is nonsensical or altogether inaccurate, according to IBM.

In art and design, AI hallucination offers a “novel approach to artistic creation, providing artists, designers and other creatives a tool for generating visually stunning and imaginative imagery”, IBM says.

“With the hallucinatory capabilities of AI, artists can produce surreal and dreamlike images that can generate new art forms and styles.”

To illustrate this, The National last week published a quiz to find out how well readers can distinguish real images from AI-generated ones.

Of the 10 pictures, readers guessed right on nine, by comfortable margins. The only image they got wrong was a close call: as of this writing, 54 per cent believed it was an AI image when, in fact, it was not.

“Critical thinking becomes essential to verify AI-generated outputs, as they shouldn't replace human cognition but rather enhance and refocus attention on significant tasks,” Ms Marcati said.

Tightening the digital screws

Earlier this month, the EU became the first major governing body to agree on sweeping AI legislation, the Artificial Intelligence Act, which stipulates what can and cannot be done and sets fines of up to €35 million ($38.4 million) for non-compliance.

When issues related to AI are tackled alongside their ethical dimensions, the technology will become much more flexible and adaptive, and benefit society even more, said Samer Mohamad, regional director for the Middle East and North Africa at mobility platform Yango.

“In terms of regulatory frameworks, given the varying regulatory landscapes across countries, advancements in AI and smart technologies might be shaped by local regulations, particularly about data privacy and security,” he said.

The field gained momentum – and jolted regulators – with the introduction of generative AI, which rose to prominence thanks to ChatGPT, the sensational platform from Microsoft-backed OpenAI.

Its sudden rise has also raised questions about how data is used in AI models and how the law applies to the output of those models, such as a paragraph of text, a computer-generated image or a video.

“To fully capitalise on the potential of AI, it is essential to address the need for robust regulatory frameworks, ensure societal acceptance and foster interdisciplinary collaborations,” said Pawel Czech, co-founder of Delaware-based AI company New Native.

“This will require collaboration between stakeholders – including policymakers, industry leaders, and researchers – to navigate ethical considerations, workforce disruptions and data quality.”

The bandwagon speeds up

Google's Bard is the other front-runner in the burgeoning generative AI field, which has drawn in other notable names. Microsoft has already made its Copilot AI assistant available across its Microsoft 365 suite of applications.

Last month, Amazon Web Services launched its own generative AI tool, Amazon Q. Meta Platforms, the parent company of Facebook, Instagram and WhatsApp, has also launched a series of generative AI tools.

Elon Musk, the owner of social media platform X, formerly Twitter, and chief executive of Tesla, launched xAI “to understand reality” and “the true nature of the universe”.

Samsung Electronics, the world's biggest mobile phone manufacturer, in November joined the race with its own ChatGPT-style Gauss platform.

Even Apple chief executive Tim Cook, during the company's fourth-quarter conference call, confirmed that the company had been working on its own generative AI technology. Earlier this month, the iPhone maker was reported to have quietly released MLX, a machine learning framework that can be used to build foundation models.

The breakneck speed at which companies are developing their respective AI models raises risks and questions about transparency, said Arun Chandrasekaran, a vice president and analyst at Gartner.

“Given the high odds at stake, this also creates an environment where technology vendors are rushing generative AI capabilities to market."

As a result, they are “becoming more secretive about their architectures and aren’t taking adequate steps to mitigate the risks or the potential misuse of these highly powerful services”, he said.

AI needs to be developed in a way that maximises its benefits to society while addressing the challenges, Google's Ms Baz said.

"While there is natural tension between the two, we believe it’s possible – and in fact critical – to embrace that tension productively. The only way to do it is to be responsible from the start."

Pumping the brakes

Investors put more than $4.2 billion into generative AI start-ups across 215 deals in 2021 and 2022, after interest surged in 2019, recent data from CB Insights showed.

Globally, AI investment is projected to reach $200 billion by 2025 and could eventually have an even bigger impact on gross domestic product, Goldman Sachs Economic Research said in a report in August.

Despite current investment trends, a “more realistic outlook” beyond the hype is anticipated, given the increasing scrutiny of the technology, said Balaji Ganesan, co-founder and chief executive of California-based generative AI and data security company Privacera.

“This expansion will prompt the creation of architectural blueprints for adapting data structures to support generative AI,” he said.

“Privacy and security will take centre stage, driving innovation in managing and safeguarding private data using foundational models.”

“In 2024 … more concrete regulations will be introduced to curb AI’s risks and take advantage of its benefits,” Yango's Mr Mohamad said.

The past 12 months have underscored the “pressing need” to bridge the widening gap in AI knowledge and to foster inclusivity between AI experts and the broader community, said Preslav Nakov, department chairman of natural language processing at Abu Dhabi's Mohamed bin Zayed University of Artificial Intelligence.

“Investing in AI education and promoting literacy across diverse demographics are pivotal steps towards enabling everyone to comprehend, engage and contribute meaningfully to the evolving AI landscape,” he said.

“Looking forward, as generative AI becomes more integrated in different industries, organisations are getting a better grasp on how to best leverage it. The next generation of AI tools is likely to go far beyond chatbots and image generators, unlocking AI's full potential.”

Updated: December 27, 2023, 3:00 AM