Countering terrorism via social media: an example of countering Al Qaeda and Islamic State’s social media propaganda
By Suvojit Bandopadhyaya, PhD
This research paper discusses the counter-narrative strategies that counterterrorism agencies can potentially adopt in light of the media content of Al Qaeda’s and the Islamic State’s e-magazines Inspire, Dabiq and Rumiyah and their other social media units. The central argument of this paper is that to formulate counter-narrative strategies against terrorist groups such as Al Qaeda and the Islamic State, one needs to look at the crucial aspects from which their propaganda messaging arises. The key to formulating counter-narrative strategies is to understand and combat the ideological messaging first. Second, after the rise of the Islamic State, new terrorist and extremist groups are increasingly likely to target the millennial generation; one therefore needs to understand where the millennial audience resides and what the nature and characteristics are of an audience that accesses information through screens and mobile devices (Prensky, 2001).
For what Prensky (2001, p.2) calls digital natives (those who are “used to receiving information really fast. They like to parallel process and multi-task. They prefer their graphics before their texts rather than the opposite. They prefer random access like hyper-text. They function best when they’re networked. They thrive on instant gratification and frequent rewards and they prefer games to ‘serious work'”), it is through visual, graphic and aural modes of communication that the millennial generation consumes information.
To counter visual propaganda, counterterrorism agencies need to come up with counter-narratives that are visually rich, which requires video messages that match the quality and sophistication of terrorist video messages (HD cameras and high-quality sound). Terrorist organisations such as the Islamic State have a vast media network spread across the globe and promote their messaging through e-magazines, social media channels, websites, audio messages (hadiths and nasheeds), video messages (in multiple languages), newsletters and satellite radio. Thus, to counter messaging from terrorist organisations such as the Islamic State, counterterrorism agencies have to formulate counter-narratives suited to all the channels through which these terrorist groups broadcast their propaganda. The counterterrorism agencies have to match the ambitions of terrorist propaganda and the reach of terrorists’ sporadic media units to disrupt their ideological offensive. More than this, cybersecurity and counter-narrative strategy need to focus on the ideological manifestation through social media channels, which are more visually attractive than the mere textual presentation of the ideology itself. The counterterrorism agencies’ social media arm should specialise in medium-specific messaging strategies. To counter social media propaganda by terrorist organisations, we need social-media-equipped counter-narratives. One of the ways to counter terrorist social media propaganda is through trolling. “Online trolling is a practice of behaving in a deceptive, or destructive manner in a social setting on the Internet with no apparent instrumental purpose” (Buckels, Trapnell, & Paulhus, 2014, p. 1). In the recent past, the Russians have used social media strategies similar to the Islamic State’s to polarise American society and interfere in the 2016 elections. The Kremlin has adopted social media strategies similar to those of the Islamic State, in which trolls have been a useful tool.
Russia’s information warfare began around the same time as the Islamic State’s; however, very few in the national security community had any inkling that it was happening. The first news story on Russia’s manufacturing of trolls came from The New York Times, in an article titled ‘The Agency’ by Adrian Chen, which reported that a Kremlin-linked troll farm specialising in creating fake personas was conducting influence operations and manipulating online conversations within the US.
The Kremlin’s strategy has been simple: to create internal violence by whipping up hostility around a nation’s Muslim community and then to capitalise on the resulting violence. To advance Russia’s foreign policy objective in Syria, the Kremlin has capitalised on Islamophobia in the West and created a “new hybrid form of information warfare”. It has exploited this “new hybrid form of information warfare” to deflect attention from its war crimes in Aleppo, Syria in 2016. Knowing the Western tendency to misread “any bearded Muslim or Arab of being a ‘terrorist'”, the Kremlin used this stereotype to portray first responders as members of Al Qaeda. Further, the Kremlin created several fake social media accounts, including the United Muslims of America, to gain the attention “of typical Muslim American attitudes and beliefs”. It went to the extent of disseminating memes saying that Hillary Clinton had claimed Al Qaeda was created, funded and armed by the US, and that Clinton bought advertisements focusing mainly on political rallies for Muslims only.
The Kremlin has effectively exploited trolls to manipulate online discourse; rather than countering Islamic State and Al Qaeda propaganda, it has used that propaganda to malign and mislead online discourse. The Kremlin’s exploitation of the Islamic State’s propaganda for its own ends indicates the level of expertise and technological sophistication that the Islamic State’s media units have achieved as a terrorist organisation. It is indeed a challenge for counterterrorism agencies to match the social media efficacy that the Islamic State has managed to achieve within a span of four to five years.
Counter-terrorist agencies must equip themselves to decipher and deconstruct the extremist beliefs projected by terrorist organisations and to break down the social media messaging strategies devised by terrorist media units. In formulating a counter-messaging strategy for terrorist organisations such as Al Qaeda and the Islamic State, it matters that these organisations build their propaganda on the premise of religious differences and present religious oppression as the source of all their struggles. To counter terrorist propaganda is to counter both the narrow interpretation of extreme religious belief and a fabricated notion of oppression by other communities and sects. Counter-narrative strategies should focus on educating and informing vulnerable youth about practical realities. It is also essential to counter terrorist propaganda within the same mediums in which the terrorist cells are active. Terrorist propaganda is a social media problem, and one needs to apply social media strategies to counter social media propaganda. Social media counter-strategies can include trolls, counter-commenting, counter-memes and Graphics Interchange Format (GIF) images to counter terrorist messaging. Counter-messaging strategies need to be visual and graphic, registering visual evidence and visual contradictions of what has been stated by extremist groups. Such alternative narratives, which focus not only on the content itself but on the medium as well, will be significant in weakening terrorist propaganda.
In the recent past, some counter-narrative strategies have been put forth by counterterrorism agencies to control social media propaganda by the Islamic State. For instance, Cilluffo (2013) proposes a four D’s structure to counter the Islamic State’s online narratives. These four D’s are “(a) Dissect and de-legitimise (b) Disaggregate and de-globalise (c) De-glamorise and (d) Deny and destroy” (pp.7-8). Applying the four D’s to the Islamic State narratives would mean first (dissect and de-legitimise) breaking down and exposing the drawbacks of the Islamic State narratives in order to understand them correctly and offer rebuttals. Counter-narrative strategy should aim at exposing the hypocrisy of terrorist propaganda versus terrorists’ actions, which would destabilise the extremist beliefs. Human, capital and technological resources are the three essentials in which counterterrorism agencies have not yet matched terrorist media units in order to de-glamorise terrorist propaganda. The last counter-narrative response proposed by Cilluffo (2013) is to invoke computer network attack methods and tools to up-end the Islamic State’s efforts to use the Internet to further their ends (p.9). Social media being a “user-friendly, reliable and free” medium has been among the critical factors attracting modern terrorists to Facebook and Twitter as effective social media channels (Wu, 2015). Another reason, which Wu (2015) offers citing Weimann (2014), is that social media platforms “are by far the most popular with their intended audience, which allows terrorist organisations to be part of the mainstream” (p.288). It is pertinent to take into consideration the Islamic State’s social media efficacy, which has left a road map for future extremist groups to follow, leading me to discuss future regulatory frameworks in the social media sphere.
Social media corporations such as Facebook and Twitter have taken stern actions to curb terrorism emanating from social media platforms, which has resulted in growing cooperation between counterterrorism agencies, inter-governmental departments and social media companies. The United Nations has been an overarching institution bringing different actors together to formulate internet governance strategies. “The UN organised the World Conference of International Telecommunications (WCIT) in Dubai in 2012 to discuss issues related to internet governance” (Wu, 2015, p. 284). However, the member countries did not agree on a global regulatory framework for internet governance. One of the key obstacles to regulating the internet and the social media landscape has been that the Internet is a global commons. “On the international level, commons are areas that ‘do not fall within the jurisdiction of any one country’; these areas ‘are termed as international commons or global commons’ as they do not fall under any jurisdictions, commons are challenging to regulate” (Buck, 1998 cited in Wu, 2015, p.292). For social media regulation, there are specific practical difficulties: “filtering, and zoning internet information is extremely tedious for private corporations to monitor the online sphere. Facebook, YouTube, Twitter are large corporations, but the resources required to monitor the Internet is enormous” (Wu, 2015, p. 300). Besides employing researchers specialising in counterterrorism who review social media content round the clock all week, the social media corporations rely heavily on community policing, according to Monika Bickert, Facebook’s head of counterterrorism efforts and global product policy (Hackernoon News Report, 2017). The US Department of State, in collaboration with Facebook and the marketing firm EdVenture Partners, is working to develop a Peer to Peer program.
The Peer to Peer program is “an attempt to crowdsource and aggregate insights from university students across the world and empower positive voices at a local level” (Sandre, 2017, p.5).
The second obstacle to social media governance “for overcoming collective action problems is consistency between the distribution of costs and benefits. The benefits of a coordinated effort to regulate social media would be to have less terrorist propaganda and recruiting. The costs would be those associated with monitoring and participating in the governance of social media” (Wu, 2015, p. 302). Social media governance can be further strengthened through collaborations of social media “corporations such as Google, Facebook and Twitter with international non-profit corporations, organisations like the Internet Security, the Internet Engineering Task Force and the Internet Corporations for Assigned Names and Numbers (ICANN), which develops the Internet’s protocols and standards; and computer security experts” (Lotrionte, 2012 cited in Wu, 2015, pp.308-309). To address the regulation of social media, we need to treat it as a collective action problem in order to tackle terrorism emanating from social media platforms.
Several counter-narrative efforts have been put in place by the US Department of State, one of them being the Think Again and Turn Away (TATA) campaign launched in English in December 2013 (Katz, 2014). This outreach program was not only ineffective but also provided a stage for Islamic State and Al Qaeda followers to voice their arguments, regularly engaging in petty disputes with US State Department officials. There were three essential failures of the TATA campaign. The first was that the campaign offered a stage to terror affiliates to voice their opinions, which meant greater publicity for Islamic State or Al Qaeda followers and, in turn, more attention from their own followers. For example, in one instance, a Twitter handle under the name @de_BlackRose posted images of Iraqi prisoners being tortured by US soldiers in Abu Ghraib prison in 2003-2004 with the message, “REMEMBER HOW YOU AMERICA ARRESTED AND HUMILIATED OUR BROTHERS IN IRAQ AND HUMILIATED THEM IN THEIR OWN COUNTRY”, to which the TATA campaign responded, “US troops are punished for misconduct, #ISIS fighters are rewarded”, with an image of US soldiers happily interacting with Middle Eastern children. Seeing the opportunity of being under the attention of the Department of State’s TATA campaign, @de_BlackRose, along with other like-minded Islamic State affiliates, rebutted TATA’s response with, “loool in spilling their bloods only a misconduct? Well, that’s not enough,” “poor children where Americans fooling them with their smiles,” and “well only in June did ISIS crucify one of its fighters for robbing civilians at a checkpoint.” To counter @de_BlackRose’s message, the TATA campaign replied, “this is what children see under #ISIS rule, this brand of honour and respect”, and included a picture of children standing around a crucified soldier; after this response, dozens of anti-American tweets were directed at the TATA account, with @de_BlackRose stating, “loool, you don’t know about Shariah… better think again and turn away…; and, I rather see my children see this so they know what their fate is when they are against shariah of Allah, than democrazy.”
Another example is from a Twitter account named Amreeki Witness, which tweeted, “IS has flaws, but the moment you claim they cut off the heads of every non-Muslim they see, the discussion is over”, to which TATA replied, “#ISIS tortures, crucifies and shoots some – ISIS also gives ultimatums to Christians: convert, pay or die – some flaws you say?”. On seeing the US Department of State’s TATA account countering its social media posts, Amreeki Witness offered a long series of rebuttals against the TATA campaign. From the two examples, it is evident that the TATA campaign intended to draw the Twitter audience away from Islamic State and Al Qaeda follower accounts and to gain the attention of the moderate Muslim population who are still undecided about whether to join the group, TATA’s main target audience (Katz, 2014). However, the responses from TATA have repeatedly backfired, providing jihadists with validation and a platform to amplify their messages. The State Department’s campaign also got dragged into a “whack-a-mole” (Ozeren, Gunes and Al-Badayneh, 2007, p.276) game, in which the counterterrorism machinery is pulled into tweeting against and countering the tweets and messages of Islamic State and Al Qaeda followers.
The second drawback of the State Department’s campaign has been its failure to know the right opponent and to match what is said with who is being addressed. For example, an Al Qaeda official, Abu Sulayman, tweeted on September 11, “on this day, in 2001, the USA’s largest economic shrine, the idol of capitalism was brought to the ground… the toll of injustice is hefty”, to which the TATA campaign replied, “nobody’s a bigger fan of the fruits of capitalism than the so-called #ISIS Caliph”, providing an image of the Islamic State leader, al-Baghdadi, wearing a Rolex watch. Any competent terrorism analyst will be well aware that Al Qaeda and the Islamic State are at loggerheads; hence, the State Department’s replying to an Al Qaeda official, and to the rest of Sulayman’s followers, with an example of al-Baghdadi only indicates that the security agencies are clueless about the contemporary jihadi landscape. This shows that security agencies are inept at addressing the right opponent with the right targeted messaging. While the US Department of Homeland Security and the UK Home Office have launched the US Countering Violent Extremism (CVE) Task Force, which is commendable, what is still lacking is a grasp of the ideological underpinnings that drive these propaganda narratives. The CVE task force offers “a new online training educational course, ‘Countering terrorist exploiting the social media and the internet’ to educate start-up companies and social media companies about how terrorists may seek to exploit their platforms”. This course should also equip social media corporations to understand that social media narratives are interactive: they are no longer dealing with a passive audience but an active one, which keeps an eye on content promotion and counters such content.
The State Department’s counter-narratives backfiring against the Islamic State’s or Al Qaeda’s online propaganda only strengthens the latter’s narratives, leading to greater attachment to the terror group among followers. Thus, it is not only about learning and understanding social media techniques but also about comprehending jihad and its ideological manifestations in visual form.
The third drawback that the US State Department and other counterterrorism agencies have to be cautious about is countering the right propaganda at the right time. In the social media world, speed and brevity matter most; countering the information spikes that follow a terror attack, and countering their social media manifestations, could lead to better counter-narrative outcomes. For example, the Islamic State disseminated a PDF poster over Twitter and Facebook after the London terror attacks in March 2017, which was widely circulated in social media circles (Figure 1). Counterterrorism agencies need to be ready to counter terrorist propaganda immediately once an event (a terror attack) has taken place, which would strengthen counter-narratives and make them more visible to Islamic State and Al Qaeda followers in online discourse.
Figure 1 – Islamic State’s propaganda poster circulating in social media post-London terror attacks in 2017
Online video platforms have been crucial recruitment and propaganda tools for terrorist organisations. The videos are distributed over YouTube, Twitter, Facebook and other social networks and cost little for the Islamic State media departments to produce. “IS members and supporters uploaded 1,348 YouTube videos garnering 163,391 views between March and June 2018, according to the Counter Extremism Project (CEP). CEP has for years criticised social media companies for not doing more to keep extremist content – including videos of beheadings, bombings and calls to violence – off their platforms” (Greenemeier, 2018).
One of the hindrances has been social media corporations applying their “own policies and definitions of terrorist material when deciding whether to remove content when it finds a match to a shared hash, according to a Facebook spokesperson. Facebook claims that it finds and removes 99 per cent of the Islamic State and Al Qaeda related content before users report it through a combination of photo and video matching software and human monitors” (Greenemeier, 2018). On the other hand, Google asserts that the company’s Artificial Intelligence (AI) software identifies 98% of the YouTube videos that contain violent extremist content. Nevertheless, terrorist video content is still uploaded through small social media networks and further circulated to more prominent social media channels, with a greater volume of content uploads. More recently, the content has been disguised in social media posts and spread using hyperlinks.
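To make the photo- and video-matching approach concrete, the sketch below is a toy illustration of perceptual hashing, the family of techniques behind matching uploads against a shared industry database of known extremist content. It is not Facebook’s actual system: the function names, the tiny 4×4 “image” and the bit threshold are all illustrative assumptions.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set when the pixel is at or above the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p >= mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def matches_known_content(candidate, shared_db, threshold=3):
    """Flag content whose hash is within `threshold` bits of any hash in the shared database."""
    return any(hamming(candidate, known) <= threshold for known in shared_db)

# A known image and a re-encoded near-duplicate (one pixel changed) hash almost identically,
# so the near-duplicate is flagged; an unrelated, uniform image is not.
known = average_hash([[10, 200, 30, 220], [5, 190, 40, 210],
                      [15, 205, 25, 215], [0, 195, 35, 225]])
near_duplicate = average_hash([[10, 200, 120, 220], [5, 190, 40, 210],
                               [15, 205, 25, 215], [0, 195, 35, 225]])
print(matches_known_content(near_duplicate, {known}))            # True: flagged as a match
print(matches_known_content(average_hash([[50] * 4] * 4), {known}))  # False: not flagged
```

Production systems use far more robust hashes over video frames and audio and exchange them industry-wide, but the principle, near-match lookup against a shared database, is the same.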
“Despite the emergence of new apps and sites for sharing video content, YouTube, Facebook, Twitter and other large social media platforms are still the most important to monitor as these are the platforms where aspiring terrorists get their ‘first taste of IS propaganda, ideology and narrative'”, according to Seamus Hughes, deputy director of the George Washington University’s Program on Extremism (Greenemeier, 2018). One popular hashtag has been #AllEyesonISIS, which the Islamic State amplified through bots and encouraged its followers to spread; once this online propaganda went viral, it generated enough fear among the Iraqi forces, trained and equipped by the US army, that they forsook their positions rather than defend their own country. The Islamic State leveraged the fear circulating through #AllEyesonISIS, which showcased brutal beheadings, the Islamic State’s military might and its capability to carry out attacks while facing its opponents. This terrorist group’s “ability to leverage technology, cyber capability, information operations… is one of the things that we anticipate and be out in front of is something new”, according to Gen Joseph Dunford, Chairman of the Joint Chiefs of Staff.
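Bot-driven hashtag surges of the #AllEyesonISIS kind leave a crude statistical fingerprint: posting volume jumps far above its recent baseline. A minimal monitoring sketch follows; the window size, threshold and function name are illustrative assumptions, not a description of any deployed system.

```python
from statistics import mean, stdev

def spike_hours(hourly_counts, window=24, k=3.0):
    """Return indices of hours whose posting volume exceeds the trailing
    `window`-hour mean by more than `k` standard deviations."""
    flagged = []
    for i in range(window, len(hourly_counts)):
        baseline = hourly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Floor sigma so near-flat baselines don't flag ordinary noise.
        if hourly_counts[i] > mu + k * max(sigma, 1.0):
            flagged.append(i)
    return flagged

# 24 quiet hours of roughly 10 posts each, then a coordinated surge in hour 24.
counts = [10, 11, 9, 10, 12, 10, 9, 11] * 3 + [500, 10]
print(spike_hours(counts))  # the surge hour (index 24) is flagged
```

A real pipeline would combine such volume anomalies with account-level signals (creation dates, posting intervals, retweet graphs) to distinguish bot amplification from organic virality.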
Terrorists using real-time social media applications to broadcast terror attacks have been a real concern, “as it happens in an environment resembling our own because the shocking images are on our phones, laptops and television screens which erase the distance between us and the source of danger” (Burke, 2019). The act of broadcasting a terror attack while the attack is actually taking place has added a new dimension to terrorism. For example, during the Christchurch terror attacks on March 15, 2019, in New Zealand, the attacker, before attacking the mosque, addressed his audience directly in the contemporary social media manner of a ‘selfie’, acknowledging himself as the one attacking the Muslim community and saying, “let’s get this party started”. This online self-acknowledgement of oneself as the doer of an act, leaving a social media self-witness, is unprecedented. “The point of the attack is not just to kill Muslims, but to make a video of someone killing Muslims” (Burke, 2019). And this is what gains attention and gathers accolades from supporters who favour such extremism. In the Christchurch attacker’s world, “on his live stream, in his mind and those of his followers, he is a warrior, a racial hero, a leader but also, in a wider contemporary sense, a celebrity, if only for a moment” (Burke, 2019). “They vie for eyeballs by attempting to outdo the previous posts, constantly pushing the line of acceptable behaviour to new extremes” (Warzel, 2019). The gunman who opened fire in a synagogue in Poway, California, near San Diego, had received comments like “Get the high score”; as Bogost (2019) points out, today’s terrorists “bask in the uncertain wink-and-nod of their threats, their comments, or their tributes in the hope of shrouding the very idea of a threat, or a comment, or a tribute in uncertainty”.
This practice of live-streaming terror attacks has had real repercussions beyond the online world. It has also given rise to terror attacks as acts of vengeance by lone-wolf attackers. Whether the terrorist is a white nationalist perpetrating right-wing terrorism or a regional terror group such as National Thowheeth Jama’ath (an Islamist terror group), which with the help of the Islamic State carried out the terrorist attacks of April 2019 in Sri Lanka, social media usage has brought renewed attention to how online terrorism manifests at present. Social media companies, essentially big tech giants such as Facebook, Twitter, Google and YouTube, are thus facing a slew of regulatory measures proposed for these platforms to counter online extremism. For example, “YouTube struggles to quash misinformation, conspiracies and incendiary content. The company had ignored warnings to change YouTube’s recommendation engine and, in some cases, discouraged employees from seeking out videos that might violate YouTube’s rules in order to preserve a sense of plausible deniability” (Warzel, 2019). Governments and regulatory bodies are coming together with more stringent measures to mitigate the social media terrorism emerging from these platforms.
New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron initiated the ‘Christchurch Call’, “to try and eliminate violent extremist content online in the wake of the March 15 terrorist attacks in Christchurch, New Zealand”. The meeting was aimed at getting “world leaders and CEOs of tech companies [to] agree to a pledge, called the Christchurch call, to eliminate terrorist and violent extremist content online” (Matamoros & Morgan, 2019). One potential regulatory measure proposed by Prime Minister Ardern is delaying live streams containing gore and violence before broadcasting them to users. To this, however, Facebook has argued that the company’s overall goal is to connect people, and that if a live stream or Facebook Live is delayed then the moment to ‘connect’ with other users is gone and the immediacy of sharing that ‘moment’ live loses its impact. Ardern has raised the concern that with social media live streaming of attacks we are, in a way, “sleepwalking towards a future in which all social media posts are filtered prior to being posted” (Ingber, 2019).
On the other hand, Facebook has, in many ways, become embroiled in several issues concerning the regulation of online content. To start with, the Federal Trade Commission (FTC) in the United States has weighed in on the sort of constraints required to restrict the spread of online extremism through Facebook. Regulatory restrictions would mandate how “Facebook handles data, strengthens security and monitors its privacy practises” (Kang and Satariano, 2019). The regulatory constraints would also include “stronger monitoring of Facebook’s privacy practises and greater restraints on how the company shares data with third parties” (Kang and Satariano, 2019). Facebook estimates that the FTC is likely to impose a fine of USD 3-5 billion for violations of a privacy settlement dating from 2011.
Apart from the United States, other countries have also made sweeping changes to social media policing after the horrific attacks of 2019. Australia made sweeping legislative changes imposing huge fines (approximately AUD 7.5 million) “for social media companies and jail terms for their executives if they fail to rapidly remove ‘abhorrent violent material’ from their platforms” (Cave, 2019). In the European region, the UK government has come up with stringent laws to make the “UK safest place in the world to be online” (Doffman, 2019). The UK government, along with the European Parliament, is forcing social media companies to remove terrorist-related content within an hour or face substantial fines if it circulates further through their platforms. The UK has also considered a new internet regulatory agency tasked with holding social media corporations legally accountable and issuing fines for failure to remove extremist content disseminated through their respective platforms. Facebook, at its European headquarters in Ireland, has been facing investigations for not complying with European data protection laws. The Irish Data Protection Commission has initiated a new case inquiring into Facebook’s exposure of user passwords. In France, officials are looking at “Facebook’s content moderation policies” (Kang and Satariano, 2019). In Germany, “antitrust authorities forced Facebook to adjust its data-collection policies after determining the company was exploiting its market dominance to profile its users and sell advertising” (Kang and Satariano, 2019). Apart from Facebook itself, several of Facebook’s recently acquired companies have also come into question in several South Asian countries. For example, after the terror attacks of 2019, Sri Lanka shut down social media across the island nation to avert disinformation and the spread of rumours following the Easter bombings.
It was not the first time the island nation had done so: according to the Sri Lankan government, there was a previous shutdown in response to violence that emerged from rumours spread on social media platforms. Similarly, in India, social media-inspired violence has taken place due to the spread of disinformation and rumours of illegal beef exports, which have led to lynchings targeting the Indian Muslim community.
In the recent counter-narrative framework, questions have also been raised as to what constitutes gore and gruesomeness, and to what extent the algorithms and artificial intelligence of social media companies can pick up such extremist content before it goes online. For example, in a recent hearing, Facebook’s policy director for counterterrorism, Brian Fishman, stated concerning the Christchurch terror attack that the video posted by the attacker was not “particularly gruesome”, adding that there was “not enough gore” in the video for the algorithm to catch the content. The issue here is that social media companies are almost entirely reliant on Artificial Intelligence (AI), and AI has its own limitations in identifying extremist content online. After the Christchurch terror attacks, Facebook did put up a blog post in which it pointed out the limitations of AI. “AI systems are based on ‘training data’, which means you need many thousands of examples of content to train a system that can detect certain types of text, imagery or video,” according to Facebook’s blog post. It further stated, “this approach has worked very well for areas of nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems. However, in the event of Christchurch shootings, this particular live stream did not trigger our automatic detection systems. To achieve that we need to provide our systems with larger volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare” (Woodruff, 2019). This reliance on AI brings back the question of digital natives and of what type of content gives rise to what kind of narratives surrounding similar content emerging from such live streaming. One of the crude examples emerging from such live streaming has been the appearance of memes. What makes memes impactful is that they are “sticky”, i.e. they are “easy to remember, and they can be shared across platforms and some sub-cultures. What makes memes seem magical is that they are…. easy to build over time”, states Joan Donovan, director of the Technology and Social Change Research Project at Harvard. Donovan further states that memes, similar to those in the Christchurch and Poway shooters’ posts, “call out to particular online cultures, who share them to maximise the negative social impact of horrifying events”. Today’s terror attacks are tailored for the Internet and follow an eerie pattern. For example, in both cases, with the Poway gunman and the Christchurch shooter, “the shooter begins with a crude genealogy, formatted as a self-interview introducing the viewer to his ethnic roots, followed by an aggrandising Q. and A. with himself, then a litany of toxic in-jokes meant to confuse the media and those less savvy in far-right cultures (for instance, in both cases the Christchurch shooter and the Poway gunman facetiously mentioned the YouTube star PewDiePie as an influencer)” (Warzel, 2019). Social media platforms have in a way become “an accelerant for terrorist behaviour”. Modern terrorism and hate crimes imprint themselves through the internet and social media platforms, which have given “a theatre for unspeakable acts – and an amplification system for an ideology” (Warzel, 2019), be it white nationalism or terrorism in general.
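Facebook’s point about “training data” can be made concrete with a deliberately crude sketch: a detector that scores a post purely by how much of its vocabulary it has seen in labelled training examples. Content resembling the training set scores high; a novel style of content, like a first-of-its-kind live stream, scores near zero and slips through. The training strings, function names and scoring rule are placeholders for illustration, not real data or a real classifier.

```python
from collections import Counter

def train(labelled_examples):
    """Build a vocabulary of token frequencies from known extremist examples."""
    vocab = Counter()
    for text in labelled_examples:
        vocab.update(text.lower().split())
    return vocab

def score(vocab, text):
    """Fraction of the text's tokens already seen in training data."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return sum(1 for t in tokens if t in vocab) / len(tokens)

# Placeholder training set standing in for thousands of labelled propaganda examples.
vocab = train(["known propaganda slogan alpha",
               "known propaganda slogan beta"])
print(score(vocab, "known propaganda slogan gamma"))    # high score: resembles training data
print(score(vocab, "completely novel style of content"))  # zero: unseen content slips through
```

Real classifiers are vastly more sophisticated, but they share this dependence on having seen “many thousands of examples” of a content type before detecting it, which is exactly the gap Facebook says the Christchurch live stream fell into.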
The video footage of al-Baghdadi that last appeared in June 2019 pointed towards a shift in the Islamic State’s focus. According to the Islamic State counterterrorism expert Charlie Winter, the Islamic State gave up on the territorial reality of its proto-state long ago and has achieved its intended aim through creating the Caliphate. In its present propaganda telling, the Islamic State claims that “its proto-state was a way to build a global platform that would ensure the movement’s future by mobilising tens of thousands of supporters, imbuing them and their kin with its creed and its mission” (Winter & al-Tamimi, 2019). The Islamic State’s mastery over social media in its initial years of 2014-2015 created a dangerous template that has given rise to a new form of terrorism, social media terrorism, and to the weaponisation of social media platforms (Singer & Brooking, 2018). Given all the recent progress made in countering online extremism, counterterrorism agencies and think tanks need to devise counter-narrative strategies that keep in mind the different characteristics of terrorist groups. In the case of Al Qaeda (an old terrorist organisation), counter-narratives need to weaken the ideological beliefs that the organisation has developed on the basis of historical US aggression and intervention in the Middle East. In the case of the Islamic State (a new terrorist organisation), on the other hand, counter-narratives need to focus on countering the radical version of Islam created and promoted/disseminated through the Islamic State’s media units using social media channels. Thus, counter-narrative strategies need to be formulated accordingly, keeping in mind the ideological basis on which terrorist organisations produce their propaganda media content. 
It is also vital for social media experts not to get dragged into unnecessary social media rebuttals, as Islamic State and Al Qaeda followers are always looking for a social media stage for further legitimation.
Al Qaeda’s e-magazine Inspire offers a sense of rational and planned action. Inspire primarily targets the US and expects its audience to understand why the US, with its various forms of oppression globally, is the main enemy. Inspire takes serious account of existing practicalities in carrying out jihad rather than living in the utopian dream of an international Caliphate. Inspire has acted as a standard mouthpiece for senior Al Qaeda leaders, guiding followers towards future courses of action, whereas the Islamic State’s Dabiq and Rumiyah have reminisced about the dream of a long-gone Caliphate and fuelled the present course of action with past animosities. For Al Qaeda, 9/11 was the media spectacle of its time, one that gave the organisation global prominence as a worldwide threat and an international terrorist organisation. Al Qaeda achieved this ‘media spectacle’ with the help of mainstream media in 2001. 9/11 was a media spectacle that circulated not only in mainstream media but was also broadcast on the Internet, and still circulates today via social media. The 9/11 media spectacle has further enabled the Islamic State to develop its media strategy.
Since 2014, the Islamic State has created its own self-managed ‘media spectacle’ through an elaborate media system comprising several media units in each wilayat (province) and social media channels that turn even the smallest of attacks into a social media event. The Islamic State’s media units have been able to do so thanks to technological advances and the emergence of social media channels that were unavailable to Al Qaeda in 2001. Al Qaeda and the Islamic State have achieved global prominence along different trajectories. It is crucial to keep in mind that the 9/11 terror attack was the landmark event for Al Qaeda, the one that created the organisation’s media spectacle. The magnitude of the attack itself was such that Al Qaeda rose to global prominence through the mass media. In today’s terms, 2001 was Al Qaeda’s social media moment, when the attack went ‘viral’ through mass media. 9/11 was a mass-mediated event; in the case of the Islamic State, however, with its social media units and the changing media landscape, atrocities have become personal in nature. Terrorism via social media has become personal and mobile, and social media jihad is now portrayed as lifestyle terrorism. The Islamic State preaches a jihadi lifestyle built on social media usage, one that is visually enticing and offers jihadism as a sustainable lifestyle for the reader. The Islamic State rose to prominence with its own self-styled media units capturing barbaric acts in high-definition videos and circulating them over the Internet through social media platforms, giving terrorism a new brutal face. In other words, the Islamic State deploys the terror spectacle in more varied ways and in greater numbers through its vast array of media units, in a more direct fashion made possible by the advances in online communication technologies since 9/11. What the Islamic State has done through this new manifestation of terrorism, i.e. 
terrorism via social media, is to weaponise smartphones, tablets and laptops, along with the online channels and social media platforms through which media content now circulates. Anyone with a smartphone is a potential influencer of the Islamic State’s content. This extraordinary reach would have been unimaginable without the Internet and social media, which Al Qaeda could only dream of in its time.
⦁ Azami, D. (2016). The Islamic State in South and Central Asia. Survival, 58(4), 131-158.
⦁ Beck, G. (2015). It is about Islam: Exposing the truth about ISIS, Al Qaeda, Iran, and the Caliphate (Vol. 3): Simon and Schuster.
⦁ Bennett, W. L. (2016). News: The politics of illusion: University of Chicago Press.
⦁ Berger, J. M., & Morgan, J. (2015). The ISIS Twitter Census: Defining and describing the population of ISIS supporters on Twitter. The Brookings Project on US Relations with the Islamic World, 3(20), 4.1.
⦁ Blaker, L. (2015). The Islamic State’s use of online social media. Military Cyber Affairs, 1(1), 4.
⦁ Buckels, E. E., Trapnell, P. D., & Paulhus, D. L. (2014). Trolls just want to have fun. Personality and Individual Differences, 67, 97-102.
⦁ Chaliand, G., & Blin, A. (2016). The history of terrorism: From antiquity to ISIS: Univ of California Press.
⦁ Cilluffo, F., (2013). Countering Use of the Internet for Terrorist Purposes – A Statement before the United Nations Security Council Counter Terrorism Committee.
⦁ Cruickshank, P., & Ali, M. H. (2007). Abu Musab Al Suri: Architect of the New Al Qaeda. Studies in Conflict & Terrorism, 30(1), 1-14.
⦁ Earnhardt, R. L. (2014). Al-Qaeda’s Media Strategy: Internet self-radicalization and counter-radicalization policies. Digital America, 4(3).
⦁ El-Hibri, T. (2010). Parable and politics in early Islamic history: the Rashidun caliphs: Columbia University Press.
⦁ Enzensberger, H. M. (2005). The radical loser. signandsight.com, 1.
⦁ Gambhir, H. (2015). ISIS in Afghanistan. Backgrounder, Institute for the Study of War, 2.
⦁ Gartenstein-Ross, D. (2015). Jihad 2.0: social media in the next evolution of terrorist recruitment. Full Committee Hearing, Homeland Security and Governmental Affairs, 7.
⦁ Grimshaw, M. (2011). Encountering Religion: Encounter, Religion, and the Cultural Cold War, 1953–1967. History of Religions, 51(1), 31-58.
⦁ Gunaratna, R. (2016). The Emerging Wilayat in the Philippines. Counter Terrorist Trends and Analyses, 8(5), 22-27.
⦁ Gunaratna, R., & Oreg, A. (2010). Al Qaeda’s organizational structure and its evolution. Studies in Conflict & Terrorism, 33(12), 1043-1078.
⦁ Hall, B. (2015). Inside ISIS: The brutal rise of a terrorist army: Center Street.
⦁ Hashim, A. S. (2017). The Caliphate at War: Operational Realities and Innovations of the Islamic State: Oxford University Press.
⦁ Hoffman, B. (2004). The changing face of Al Qaeda and the global war on terrorism. Studies in Conflict and Terrorism, 27(6), 549-560.
⦁ Hoffman, B. (2007). The global terrorist threat: is Al-Qaeda on the run or on the march? Middle East Policy, 14(2), 44.
⦁ Hoffman, B. (2018a). Al-Qaeda’s Resurrection. Council on Foreign Relations Expert Brief.
⦁ Hoffman, B. (2018b). Al-Qaeda’s Resurrection. Council on Foreign Relations, 6.
⦁ Holbrook, D. (2015). Al-Qaeda and the Rise of ISIS. Survival, 57(2), 93-104.
⦁ Johnston, P. B., & Clarke, C. P. (2017). Is the Philippines the Next Caliphate? Foreign Policy.
⦁ Kean, T. H., & Hamilton, L. (2004). The 9/11 Commission Report: Executive Summary: National Commission on Terrorist Attacks upon the United States.
⦁ Kepel, G., Rothschild, J., & Ghazaleh, P. (2005). The roots of radical Islam: Saqi London.
⦁ Khatab, S. (2006). The political thought of Sayyid Qutb: The theory of jahiliyyah: Routledge.
⦁ Leadbeater, C. (2008). We-think: The power of mass creativity: Profile Books Limited.
⦁ Lia, B. (2007). Architect of global jihad: The life of al-Qaeda strategist Abu Mus’ab al-Suri. London: Hurst.
⦁ Manne, R. (2016). The Mind of the Islamic State: Milestones Along the Road to Hell (Vol. 11): Black Inc.
⦁ Mantel, B. (2009). Terrorism and the internet. Should web sites that promote terrorism be shut down? CQ Global Researcher, 3(11), 129-152.
⦁ McCants, W. (2015). The ISIS Apocalypse: The history, strategy, and doomsday vision of the Islamic State: Macmillan.
⦁ Meikle, G. (2016). Social media: Communication, sharing and visibility: Routledge.
⦁ Mohamedou, M.-M. O. (2006). Understanding Al Qaeda: The Transformation of War: Pluto Press.
⦁ Myren, O. K. Al Qaeda and Its Franchising Strategy: A Success Story? ISD Glo, 164.
⦁ O’Shea, J. (2018). ISIS: The Role of Ideology and Eschatology in the Islamic State. The Pardee Periodical Journal of Global Affairs.
⦁ Ozeren, S., Gunes, I. D., & Al-Badayneh, D. M. (2007). Understanding terrorism: Analysis of sociological and psychological aspects (Vol. 22): IOS Press.
⦁ Patrikarakos, D. (2017). War in 140 characters: how social media is reshaping conflict in the twenty-first century: Hachette UK.
⦁ Pearce, J. (1982). Under the eagle: Latin America Bureau London.
⦁ Prensky, M. (2001). Digital natives, digital immigrants part 1. On the horizon, 9(5), 1-6.
⦁ Prucha, N. (2016). IS and the Jihadist Information Highway–Projecting Influence and Religious Identity via Telegram. Perspectives on Terrorism, 10(6), 48-58.
⦁ Rogan, H. (2007). Al-Qaeda’s online media strategies: from Abu Reuter to Irhabi 007. FFI/Report, Norwegian Defence Research Establishment, 2729.
⦁ Roy, O. (2017). Jihad and death: The global appeal of Islamic State: Oxford University Press.
⦁ Sageman, M. (2008). The next generation of terror. Foreign Policy(165), 37.
⦁ Sedgwick, M. (2004). Establishments and sects in the Islamic world. New religious movements in the 21st century. New York and London: Routledge, 283-313.
⦁ Shehabat, A., Mitew, T., & Alzoubi, Y. (2017). Encrypted Jihad: Investigating the Role of Telegram App in Lone Wolf Attacks in the West. Journal of Strategic Security, 10(3), 3.
⦁ Singer, P. W., & Brooking, E. T. (2018). LikeWar: The Weaponization of Social Media: Eamon Dolan Books.
⦁ Sivek, S. C. (2013). Packaging Inspiration: Al Qaeda’s Digital Magazine Inspire and Self-Radicalization.
⦁ Stern, J., & Berger, J. M. (2015). ISIS: The state of terror: HarperCollins.
⦁ Torok, R. (2013). Developing an explanatory model for the process of online radicalisation and terrorism. Security Informatics, 2(1), 6.
⦁ Wasserstein, D. J. (2017). Black Banners of ISIS: The Roots of the New Caliphate: Yale University Press.
⦁ Weimann, G. (2014). New terrorism and new media: Commons Lab of the Woodrow Wilson International Center for Scholars Washington, DC.
⦁ Weiss, M., & Hassan, H. (2016). ISIS: Inside the Army of Terror (updated edition): Simon and Schuster.
⦁ Winter, C. (2015). The virtual ‘caliphate’: Understanding Islamic State’s propaganda strategy.
⦁ Wood, G. (2015). What ISIS really wants. The Atlantic, 315(2), 78-94.
⦁ Wright, L. (2016). The Terror Years: From Al-Qaeda to the Islamic State: Vintage.
⦁ Wu, P. (2015). Impossible to regulate: Social media, terrorists, and the role for the UN. Chi. J. Int’l L., 16, 281.
⦁ Yayla, A. S., & Speckhard, A. (2017). Telegram: the Mighty Application that ISIS Loves. ICSVE Brief Reports.
⦁ Zelin, A. Y. (2015). Picture or it didn’t happen: A snapshot of the Islamic State’s official media output. Perspectives on Terrorism, 9(4).
⦁ Žižek, S. (2004). Iraq’s false promises. Foreign Policy, 43-49.
⦁ Žižek, S. (2002). Welcome to the desert of the real!: five essays on September 11 and related dates: Verso.