People are losing the ability to maintain deep attention due to systematic efforts by social media companies. Getty


We should be fighting artificial distraction – not artificial intelligence



February 12, 2026

Parents, teachers and employers are rightly worried about the potential threat that large language models, or LLMs, could pose to traditional pedagogy: are we building a generation of young adults who have to ask ChatGPT for guidance on every aspect of routine daily life? However, artificial intelligence is not the real culprit – it is social media that has collapsed our attention spans, and that is where regulators should be focusing their efforts.

Given how easy it is to get decent answers from your favourite AI chatbot, it is tempting to see the November 2022 release of ChatGPT as the turning point that began transforming humans into brainless zombies who can barely tie their shoelaces without consulting their smartphones. Many educators will reinforce this view, pointing to the torrent of generic, flavourless AI-generated essays they have to grade, submitted by students who are either incapable of or uninterested in answering the most basic questions on the course material when placed in a traditional exam setting.

However, the reality is that many of these trends existed well before OpenAI turned the world upside down three years ago. When the internet was still an emergent technology in the late 1990s and early 2000s, Google and other search engines faced many of the criticisms we hear today in the LLM era: that they undermined learning and encouraged copy-pasting in place of critical thinking. Similar arguments were made about GPS, calculators, the printing press and other nascent technologies.

For a small minority of students, the internet-era fears were realised, spawning the need for anti-plagiarism software such as Turnitin. But for most people, the internet has greatly expanded productivity and enhanced, rather than undermined, the ability to acquire and retain knowledge. This is most clearly seen in the explosion of research activity facilitated by the World Wide Web over the past 30 years, driven by proliferating international collaboration and the instantaneous dissemination of new discoveries.

Accordingly, we might be more optimistic about how AI can supercharge our brains. A particularly salient benefit is tailored learning: LLMs are proficient at teaching students one-to-one, identifying their weaknesses and determining how best to address them, aided by near-infinite patience that even the most tolerant human teacher cannot match. Put simply, when people are able to maintain deep attention, LLMs are a complement to human cognition.

The problem is that people are losing the ability to maintain deep attention due to systematic efforts by social media companies. Their profit model is based on micro-duration engagement and dopamine reinforcement loops that decrease tolerance for cognitive friction and reduce sustained attention capacity. Over time, this rewarding of shallow processing has contributed to people reading and learning less than before – trends that clearly preceded the current LLM boom.

Under fragmented attention, LLMs switch from a complement to human cognition to a substitute for it. Instead of using LLMs to clarify arguments and stress-test ideas, we use them to produce without thinking and to avoid cognitive strain. One manifestation of this mechanism is the AI-induced productivity gap documented in numerous studies: elite workers are using AI to pull further ahead of their less capable colleagues, some of whom are outsourcing their thinking to LLMs and using the time they save to scroll their Facebook feed rather than to grapple with harder problems.

For this reason, governments must resist the temptation to constrain LLM use, and instead direct their efforts towards the phenomena that are causing attention fragmentation.

Comparing short-form social media to an addictive drug – along with all the adverse psychological and physiological effects – is not hyperbole. We do not rely on individuals to be responsible enough to avoid narcotics as a matter of personal choice; we use a mixture of legal and educational instruments to protect them and society at large. The same argument applies to short-form social media.

For those who frequent airports, a good analogy is the moving walkway: if we keep walking, it carries us forward faster without extra effort; if we stand still, we glide along more slowly than we would on foot. If too many people are choosing to stand still on moving walkways, the ideal solution is not to ban the belts; it is to make sure that when people get on them, they choose to keep walking.

In the age of LLMs, if we care about preserving human intelligence, we should fight artificial distraction, not artificial intelligence.

Updated: February 12, 2026, 4:00 AM