Seven takeaways from Stanford's 2023 AI index report

The latest AI models are increasingly capable of developing innovative concepts, accelerating scientific progress and reaching new research milestones

Generative AI could drive a 7 per cent increase in the global economy by 2030. Getty

The global artificial intelligence market value is projected to pass $1.7 trillion in 2030, up from $93.5 billion in 2021, expanding at a compound annual growth rate of more than 38 per cent, data from Grand View Research indicates.
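
As a rough check, the growth rate implied by those two figures can be worked out directly. The short Python sketch below uses only the numbers quoted above and assumes a nine-year horizon from 2021 to 2030.

```python
# Rough check on the Grand View Research projection quoted above:
# growth from $93.5bn (2021) to $1.7tn (2030) over an assumed nine-year horizon.
start_value = 93.5       # 2021 market value, in $bn
end_value = 1_700.0      # 2030 projected value, in $bn
years = 2030 - 2021      # nine-year horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 38%
```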

Generative AI, such as ChatGPT from Microsoft-backed OpenAI or Google's Bard, also holds immense potential, a recent report by Goldman Sachs found.

It could drive a 7 per cent (or almost $7 trillion) increase in the global economy and lift productivity growth by 1.5 percentage points over 10 years, the US investment bank said.

In its 2023 AI index report, the latest edition of its annual study, the Stanford Institute for Human-Centred Artificial Intelligence explores the AI market and examines the trends shaping the industry.

Here, The National looks at seven key takeaways from the Stanford report.

Industry tops academia in AI race

Last year, industry released 32 major machine learning models, whereas academia produced only three.

Until 2014, most significant machine learning models were released by academia, but industry has taken over since then, the report found.

Developing advanced AI systems requires huge capital, scientific resources, top talent, vast amounts of data and computing power — resources that “industry actors inherently possess in greater amounts compared to non-profits and academia”, the report said.

“In 2011, roughly the same proportion of new AI PhD graduates took jobs in industry (40.9 per cent) as opposed to academia (41.6 per cent)," it said.

"Since then, however, a majority of AI PhDs have headed to industry. In 2021, 65.4 per cent of AI PhDs took jobs in industry, more than double the 28.2 per cent who took jobs in academia."

AI becomes more flexible and expansive

Conventional AI systems performed well on narrow, clearly defined tasks but struggled when asked to handle broader ones.

However, the latest AI models challenge that pattern: they can follow many kinds of commands and perform different tasks within a single system.

For example, generative AI can produce text, images, video and audio, generating novel content in context rather than merely analysing or acting on existing data.
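
As a minimal illustration of that flexibility, the sketch below asks a single, openly available instruction-following model to translate, summarise and answer a question using nothing but differently worded prompts. The choice of the Hugging Face transformers library and the flan-t5-small checkpoint is an assumption made for illustration; the Stanford report does not single out a particular model.

```python
# Illustrative only: one instruction-following model handling several task types
# via plain-text prompts (the model choice is an assumption, not taken from the report).
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-small")

prompts = [
    "Translate to German: The weather is nice today.",
    "Summarize: Private investment in AI fell in 2022 after a decade of growth.",
    "Answer the question: What is the capital of France?",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=40)
    print(prompt, "->", result[0]["generated_text"])
```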

Although language models continued to improve their generative capabilities, Stanford research suggested that they still struggle with complex planning tasks.

Is AI the new scientist?

The latest AI models are developing innovative concepts, boosting scientific progress and reaching new research milestones.

Last year, AI models were used to accelerate research in areas such as hydrogen fusion, improving the efficiency of matrix manipulation and developing new antibodies.

Technology company Nvidia used an “AI reinforcement learning agent to improve the design of the chips that power AI systems", the report said.

"Google recently used one of its language models … to suggest ways to improve the very same model. Self-improving AI learning will accelerate AI progress."

Rising abuse of AI

The number of AI-related controversies has increased 26 times since 2012, according to the Aiaaic database that tracks incidents related to the ethical misuse of AI.

Aiaaic is an independent, non-partisan, public interest initiative that examines and makes the case for real AI, algorithmic and automation transparency and openness.

Incidents last year included a deepfake video of Ukrainian President Volodymyr Zelenskyy surrendering and US prisons using call-monitoring technology on inmates.

Last year, generative models such as ChatGPT gained a huge following, but they come with many ethical challenges.

For example, text-to-image generators are sometimes biased along gender dimensions, and chatbots can be tricked into serving nefarious aims, the report said.

Decrease in private investment in AI

For the first time in the past decade, annual private investment in AI fell. Global private AI investment dropped 26.7 per cent year on year to $91.9 billion last year.
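
For context, the 2021 baseline implied by those two figures can be backed out with a one-line calculation; the result below is derived from the numbers quoted above rather than taken directly from the report.

```python
# Back out the 2021 investment level implied by a 26.7% year-on-year fall
# to $91.9bn in 2022 (derived from the figures above, not quoted from the report).
investment_2022 = 91.9   # $bn
decline = 0.267          # 26.7% year-on-year decrease

investment_2021 = investment_2022 / (1 - decline)
print(f"Implied 2021 private AI investment: ${investment_2021:.1f}bn")  # about $125bn
```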

In 2022, the AI focus areas with the most investment were medical and health care ($6.1 billion), data management, processing and cloud ($5.9 billion), and FinTech ($5.5 billion).

But the demand for AI-related professional skills is increasing across almost every American industrial sector, the report said.

In the US, across every sector for which there is data — except agriculture, forestry, fishing and hunting — the share of job postings that were AI-related increased on average from 1.7 per cent in 2021 to 1.9 per cent last year.

AI is being used in multi-faceted ways

The AI skills most likely to have been embedded in businesses include robotic process automation (39 per cent), computer vision (34 per cent), natural language text understanding (33 per cent) and virtual agents (33 per cent), according to the report.

The most commonly adopted AI use cases in 2022 were service operations optimisation (24 per cent), creation of new AI-based products (20 per cent), customer segmentation (19 per cent), customer service analytics (19 per cent), and new AI-based enhancement of products (19 per cent).

Policymakers’ rising interest in AI

Stanford’s AI index analysis of the legislative records of 127 countries showed that the number of bills containing “artificial intelligence” that were passed into law grew from only one in 2016 to 37 last year.

“An analysis of the parliamentary records on AI in 81 countries likewise shows that mentions of AI in global legislative proceedings have increased nearly 6.5 times since 2016,” the report said.
