February 2017 began in Mosul with fear and trepidation. The Iraqi army had already taken back the eastern half of the city from ISIL. By mid-February, the army was preparing to cross the Tigris and retake the rest.
With Mosul, Iraq's second largest city, being prised free from the grip of ISIL, the Sawab Centre in Abu Dhabi started a social media campaign. The centre is a joint UAE-US venture that aims to counter ISIL's ideology online; it has hundreds of thousands of followers across several social media platforms.
The campaign was called #AfterDaesh, using the Arabic acronym for the militant group, and featured everyday images of life without ISIL.
Here were heartwarming photos of two young Iraqi newlyweds “among the thousands of young people moving forward with their lives #AfterDaesh”; here was a young Iraqi boy, declaring, “For a long time, no-one took us to school” and looking forward to a return to education. Other posts, on Twitter and Instagram, featured photos and videos of young men and women, dancing and enjoying freedom.
Absent are the images of loss and pain so commonly associated with reports of ISIL. There are no graphic images of ISIL atrocities. The focus of #AfterDaesh is on how Iraqis have overcome the militant group, on the triumph of the human spirit.
Tackling the ideas and ideologies of extremist groups is a continuing theme of public policy, in both the west and the Islamic world.
In the Middle East, the focus is ISIL, but there are other dangers, too. In the US, there is deep concern about the possibility of Russian influence exerted through social media; in Europe, far-right extremists use many of the same methods of dissemination as Islamist extremists. The threat is not in doubt; the only question is how to manage it.
One method is to remove the material. The American tech giants YouTube, Twitter and Facebook (which also owns Instagram) faced the US Senate last month to explain what steps they were taking to filter and remove extremist content. Recognising the possibility of public outrage or legislation, the tech companies have moved to invest resources in "machine learning" in particular, which allows computers to identify extremist material and remove it.
The focus on Facebook, YouTube and other major platforms is not misplaced, but incomplete. Governments and large platforms have the resources to analyse, target and remove the worst of extremist content. Battlefield footage, killings and the most gruesome videos can be removed by these large providers.
But the internet is vast and broadly unregulated. These companies are facing down a threat from a group of ideologues determined to distribute this content, and from would-be recruits eager to access it. There are simply too many websites and apps for all to be regulated. Removing the worst content means many people will not simply stumble across it, which matters. But they may be led to it by other means.
Just last week, a high-ranking UK government adviser on terrorism warned of “remote radicalisation”, where terrorists are radicalised and given plans via websites and mobile applications, without ever meeting recruiters.
Governments, not merely the UK's, have long operated on the assumption of “nodes” of known radicals influencing others, usually in gathering places such as mosques. By watching these nodes, they were often able to discover those at the periphery, who might be radicalised into committing violent acts. But the rise of the online space as a discussion and recruiting ground has made it much harder to spot radicals.
That is particularly concerning because radicalisation can occur relatively quickly, and because the online space is governed by the same norms of celebrity and status as the offline world.
It is known, for example, that some online jihadis become stars in extremist circles, especially on more immediate and interactive media like Twitter, where the curious can ask questions and have them answered in real time. Many ISIL recruits have tweeted and posted videos live from the battlefield. It must not be underestimated how glamorous these scenes can appear to young men and women in grey bedrooms on safe streets. The celebrity – notoriety is the better word – of these militants has drawn young men and women to Syria and Iraq, and will continue to draw them wherever groups like ISIL re-emerge.
That is why merely removing the material is an incomplete approach. The Sawab approach is unusual in the world of counter-extremism, but it is worth considering. What it seeks to do is provide a form of inoculation at an earlier stage to those who may fall prey to these ideologies.
Extremist material does not occur in isolation; the same sources propagate a broader worldview, one that pushes separation and difference, one that focuses on the harm caused by outsiders rather than on what unites us as countries, cultures and people. Most importantly, the Sawab campaign contests this worldview online, in the very spaces where extremism thrives.
There is still a great deal to do. But by trying to change the narrative of grievance and difference that allows extremist ideas to flourish, the Sawab approach seeks to interrupt these ideas before they can grow. Extremism thrives on difference; by focusing on what connects us, the #AfterDaesh campaign is trying to stop Daesh's ideas before they can even take root.