Listing 1 - 4 of 4
The speed and diffusion of online recruitment for such violent extremist organizations (VEOs) as the Islamic State of Iraq and the Levant (ISIL) have challenged existing efforts to effectively intervene and engage in counter-radicalization in the digital space. This problem contributes to global instability and violence. ISIL and other groups identify susceptible individuals through open social media (SM) dialogue and eventually seek private conversations online and offline for recruiting. This shift from open and discoverable online dialogue to private and discreet recruitment can occur quickly and offers a short window for intervention before the conversation and the targeted individuals disappear. The counter-radicalization messaging enterprise of the U.S. government may benefit from a sophisticated capability to rapidly detect targets of VEO recruitment efforts and deliver counter-radicalization content to them. In this report, researchers examine the applicability of promising emerging technology tools, particularly automated SM accounts known as bots, to this problem. Their work has implications for efforts to counter the growing threat of state-sponsored propagandists conducting disinformation campaigns or radicalizing U.S. domestic extremists online and assesses the feasibility and advisability of the U.S. government employing social bot technology for counter-radicalization and related purposes. The analysis draws on interviews with a range of subject-matter experts from industry, government, and academia as well as reviews of legal and ethical considerations of using bots, the literature on the development and application of bot technology, and case studies on past uses of social bots to influence individuals, gather information, and conduct messaging campaigns.
Subjects: Radicalism; Online social networks; Social media -- Social aspects; Government policy; Technological innovations; Political aspects; United States.
The United States has a capability gap in detecting malign or subversive information campaigns before these campaigns substantially influence the attitudes and behaviors of large audiences. Although there is ongoing research into detecting parts of such campaigns (e.g., compromised accounts and "fake news" stories), this report addresses a novel method to detect whole efforts. The authors adapted an existing social media analysis method, combining network analysis and text analysis to map, visualize, and understand the communities interacting on social media. As a case study, they examined whether Russia and its agents might have used Russia's hosting of the 2018 World Cup as a launching point for malign and subversive information efforts. The authors analyzed approximately 69 million tweets, in three languages, about the World Cup in the month before and the month after the event, and they identified what appear to be two distinct Russian information efforts, one aimed at Russian-speaking and one at French-speaking audiences. Notably, the latter specifically targeted the populist gilets jaunes (yellow vests) movement; detecting this effort months before it made headlines illustrates the value of this method. To help others use and develop the method, the authors detail the specifics of their analysis and share lessons learned. Outside entities should be able to replicate the analysis in new contexts with new data sets. Given the importance of detecting malign information efforts on social media, it is hoped that the U.S. government can efficiently and quickly implement this or a similar method.
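The combination of network analysis and text analysis described above can be illustrated with a minimal sketch: build a retweet graph, group accounts into communities with a simple label-propagation step, then summarize each community by its most common terms. The toy tweet records, field layout, and the bare-bones propagation routine below are illustrative assumptions, not the authors' actual pipeline.

```python
from collections import Counter, defaultdict
import random

# Toy "tweets": (author, retweeted_user, text) -- invented for illustration.
tweets = [
    ("a1", "a2", "great match today"),
    ("a2", "a1", "great goal what a match"),
    ("a3", "a4", "the referee decision was unfair"),
    ("a4", "a3", "unfair referee again"),
    ("a1", "a2", "match of the tournament"),
]

# 1. Network analysis: build an undirected retweet graph.
graph = defaultdict(set)
for author, retweeted, _ in tweets:
    graph[author].add(retweeted)
    graph[retweeted].add(author)

# 2. Community detection via label propagation: each node repeatedly
# adopts the majority label among its neighbours until labels stabilise.
labels = {node: node for node in graph}
random.seed(0)
for _ in range(10):
    nodes = list(graph)
    random.shuffle(nodes)
    changed = False
    for node in nodes:
        counts = Counter(labels[n] for n in graph[node])
        best = counts.most_common(1)[0][0]
        if labels[node] != best:
            labels[node] = best
            changed = True
    if not changed:
        break

# 3. Text analysis: most common terms per detected community.
community_terms = defaultdict(Counter)
for author, _, text in tweets:
    community_terms[labels[author]].update(text.split())

for community, terms in sorted(community_terms.items()):
    print(community, terms.most_common(2))
```

On this toy graph the routine separates the two retweet clusters and surfaces each cluster's dominant vocabulary, which is the intuition behind mapping and characterizing communities at scale.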
Given past threats to U.S. elections, it is possible that foreign actors will again try to influence the U.S. political campaign season of 2020 via social media. This report, the second in a series on information efforts by foreign actors, lays out the advocacy communities on Twitter that researchers identified as arguing about the election. It goes on to describe what appears to be an instance of election interference in these communities using trolls (fake personas spreading a variety of hyperpartisan themes) and superconnectors (highly networked accounts that can spread messages effectively and quickly). Although the origin of the accounts could not be identified definitively, this interference serves Russia's interests and matches Russia's interference playbook. The report describes the methods used to identify the questionable accounts and offers recommendations for response.
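The "superconnector" idea above, highly networked accounts that can spread messages quickly, can be approximated with simple degree centrality: flag accounts connected to an unusually large share of other accounts. The edge list and the 0.75 threshold below are illustrative assumptions; the report's actual network statistics are more involved.

```python
from collections import defaultdict

# Toy follower/mention edges: (account, other_account) -- invented data.
edges = [
    ("hub", "u1"), ("hub", "u2"), ("hub", "u3"), ("hub", "u4"),
    ("u1", "u2"), ("u3", "u4"),
]

# Count each account's connections (undirected degree).
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

n = len(degree)  # number of accounts in the graph
# Degree centrality = degree / (n - 1); flag accounts above a threshold.
centrality = {acct: d / (n - 1) for acct, d in degree.items()}
superconnectors = [a for a, c in centrality.items() if c >= 0.75]
print(superconnectors)  # → ['hub']
```

Here the "hub" account touches every other account (centrality 1.0) while the rest sit at 0.5, so only the hub is flagged as a superconnector candidate.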
As social media increasingly becomes people's primary source of news online, there is a rising threat from the spread of malign and false information. With an absence of human editors in news feeds and a growth of artificial online activity, it has become easier for various actors to manipulate the news that people consume. Finding an effective way to detect malign information online is an important part of addressing this issue. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability. Our study found that online communities are increasingly being exposed to junk news, cyberbullying activity, terrorist propaganda, and political reputation boosting or smearing campaigns. These activities are undertaken by synthetic accounts and human users, including online trolls, political leaders, far-left or far-right individuals, national adversaries and extremist groups. In support of government efforts to detect and counter these activities, the research team successfully developed and applied a machine learning model to a Russian troll database to identify differences between authentic political supporters and Russian trolls shaping online debates regarding the 2016 US presidential election. To trial the model's portability, a promising next step could be to test the model in a new context such as the online Brexit debate.
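The kind of text classifier the study describes, separating authentic supporters from troll accounts, can be sketched with an add-one-smoothed naive Bayes bag-of-words model. The training examples and labels below are invented for illustration; the study's model and the Russian-troll data set it was trained on are far richer.

```python
from collections import Counter, defaultdict
import math

# Invented training texts labelled by account type.
train = [
    ("vote for our candidate on election day", "supporter"),
    ("proud to support the campaign rally today", "supporter"),
    ("the election is rigged do not trust the media", "troll"),
    ("rigged media lies do not trust the system", "troll"),
]

# Count words per class, class frequencies, and the shared vocabulary.
word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    class_counts[label] += 1
    vocab.update(words)

def predict(text):
    """Return the most likely class under add-one-smoothed naive Bayes."""
    best_label, best_score = None, -math.inf
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing the score.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("do not trust the rigged election"))  # → troll
```

A real system would replace the toy features with richer signals (posting cadence, account metadata, network position), but the core supervised-classification step is the same.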