The speed and diffusion of online recruitment for such violent extremist organizations (VEOs) as the Islamic State of Iraq and the Levant (ISIL) have challenged existing efforts to effectively intervene and engage in counter-radicalization in the digital space. This problem contributes to global instability and violence. ISIL and other groups identify susceptible individuals through open social media (SM) dialogue and eventually seek private conversations online and offline for recruiting. This shift from open and discoverable online dialogue to private and discreet recruitment can occur quickly and offers a short window for intervention before the conversation and the targeted individuals disappear. The counter-radicalization messaging enterprise of the U.S. government may benefit from a sophisticated capability to rapidly detect targets of VEO recruitment efforts and deliver counter-radicalization content to them. In this report, researchers examine the applicability of promising emerging technology tools, particularly automated SM accounts known as bots, to this problem. Their work has implications for efforts to counter the growing threat of state-sponsored propagandists conducting disinformation campaigns or radicalizing U.S. domestic extremists online; the report also assesses the feasibility and advisability of the U.S. government employing social bot technology for counter-radicalization and related purposes. The analysis draws on interviews with a range of subject-matter experts from industry, government, and academia, as well as reviews of legal and ethical considerations of using bots, the literature on the development and application of bot technology, and case studies on past uses of social bots to influence individuals, gather information, and conduct messaging campaigns.
Radicalism --- Online social networks --- Social media --- Social aspects. --- Government policy --- Technological innovations --- Political aspects. --- United States.
COVID-19 offered authoritarian regimes, such as China and Russia, an opportunity to manipulate news media to serve state ends. Researchers conducted a scalable proof-of-concept study for detecting state-level news manipulation. Using a scalable infrastructure for harvesting global news media, together with machine-learning and data-analysis workflows, the research team found that both Russia and China appear to have employed information manipulation during the COVID-19 pandemic in service to their respective global agendas. This report, the second in a series, describes these efforts, as well as the analytic workflows employed for detecting and documenting state-actor malign and subversive information efforts. This work offers a potential blueprint for a detection capability against state-level information manipulation at global scale, using existing, off-the-shelf technologies and methods. This report is part of RAND's Countering Truth Decay Initiative, which considers the diminishing role of facts and analysis in political and civil discourse and the policymaking process.
"More than a century after its release, The Defence of Duffer's Drift by Major General Sir Ernest Swinton has become an enduring military classic. This piece of instructional fiction, in which the narrator learns from his operational mistakes over a series of dreams, has earned a place in military classrooms and has inspired military leaders, analysts, and historians. Indeed, the narrative form can be a powerful teaching and learning tool. To support U.S. Army efforts to better integrate information operations into operational planning, RAND has adapted the premise of General Swinton's work for a modern-day audience and a different problem set. The fictitious narrator, Captain I. N. Hindsight, takes readers repeatedly through the same mission over the course of six dreams in which she makes shortsighted decisions, critical miscalculations, and smaller mistakes that contribute to spectacular failures until the accumulated lessons ultimately allow her and the command she supports to succeed. The fabricated instructional scenario draws on actual historical operations, alternative directions that these operations could have taken, and realistic challenges that an Army information operations planner might face. The 26 concise lessons in this volume offer insight that, ideally, the practitioner will not need to acquire through hindsight."
Information warfare --- Strategy. --- Leadership --- Decision making.
This report presents a proof-of-concept to assess the online behavior of U.S. Air Force (USAF)-affiliated users over social media, specifically in regard to USAF diversity, equity, and inclusion (DEI) policies. The authors found that USAF-affiliated users rarely used language that showed disrespect for others based on their identity or social group and generally reflected service values of respect and professionalism. Additionally, they suggest that their proof-of-concept could be easily adopted by the Air Force Public Affairs Agency. The approach is an aggregate one that does not highlight individuals and respects the privacy and free speech rights of members, veterans, and other affiliated social media users. Building such a social media analysis capability could provide the Department of the Air Force situational awareness of how the force is being represented online.
The United States has a capability gap in detecting malign or subversive information campaigns before these campaigns substantially influence the attitudes and behaviors of large audiences. Although there is ongoing research into detecting parts of such campaigns (e.g., compromised accounts and "fake news" stories), this report addresses a novel method to detect whole efforts. The authors adapted an existing social media analysis method, combining network analysis and text analysis to map, visualize, and understand the communities interacting on social media. As a case study, they examined whether Russia and its agents might have used Russia's hosting of the 2018 World Cup as a launching point for malign and subversive information efforts. The authors analyzed approximately 69 million tweets, in three languages, about the World Cup in the month before and the month after the event, and they identified what appear to be two distinct Russian information efforts, one aimed at Russian-speaking and one at French-speaking audiences. Notably, the latter specifically targeted the populist gilets jaunes (yellow vests) movement; detecting this effort months before it made headlines illustrates the value of this method. To help others use and develop the method, the authors detail the specifics of their analysis and share lessons learned. Outside entities should be able to replicate the analysis in new contexts with new data sets. Given the importance of detecting malign information efforts on social media, it is hoped that the U.S. government can efficiently and quickly implement this or a similar method.
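The method's two halves, network analysis to map communities and text analysis to characterize them, can be sketched in miniature. The following is an illustrative example, not the report's actual pipeline: the accounts, retweet edges, and tweet texts are invented, and a real analysis would run over millions of tweets with richer lexical statistics.

```python
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical retweet edges (retweeter, retweeted); all accounts invented.
edges = [
    ("ru_1", "ru_hub"), ("ru_2", "ru_hub"), ("ru_3", "ru_hub"),
    ("fr_1", "fr_hub"), ("fr_2", "fr_hub"), ("fr_3", "fr_hub"),
]

# Hypothetical tweet text per account, standing in for real tweet corpora.
texts = {
    "ru_hub": "worldcup russia moscow", "ru_1": "worldcup moscow",
    "ru_2": "russia worldcup", "ru_3": "moscow russia",
    "fr_hub": "worldcup gilets jaunes", "fr_1": "gilets jaunes",
    "fr_2": "jaunes worldcup", "fr_3": "gilets worldcup",
}

# Network-analysis half: build the retweet graph and detect communities.
G = nx.Graph(edges)
communities = greedy_modularity_communities(G)

# Text-analysis half: characterize each community by its most common terms.
for members in communities:
    terms = Counter(w for a in members for w in texts[a].split())
    print(sorted(members), terms.most_common(2))
```

The design point is the pairing: community detection alone shows *who* clusters together, and per-community term counts show *what* each cluster talks about, which is what lets an analyst distinguish, say, a Russian-language audience from a French-language one.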
Given past threats to U.S. elections, it is possible that foreign actors will again try to influence the U.S. political campaign season of 2020 via social media. This report, the second in a series on information efforts by foreign actors, lays out the advocacy communities on Twitter that researchers identified as arguing about the election. It goes on to describe what appears to be an instance of election interference in these communities using trolls (fake personas spreading a variety of hyperpartisan themes) and superconnectors (highly networked accounts that can spread messages effectively and quickly). Although the origin of the accounts could not be identified definitively, this interference serves Russia's interests and matches Russia's interference playbook. The report describes the methods used to identify the questionable accounts and offers recommendations for response.
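One way to operationalize the idea of a superconnector is to flag accounts whose connectivity is far above the network average. This is a minimal sketch under invented data, not the report's detection method: the accounts are hypothetical and the 2x-mean threshold is an illustrative choice.

```python
import networkx as nx

# Hypothetical mention/retweet edges among accounts in an election debate.
edges = [
    ("acct_a", "hub1"), ("acct_b", "hub1"), ("acct_c", "hub1"),
    ("acct_d", "hub1"), ("acct_e", "hub1"),
    ("acct_a", "acct_b"), ("acct_c", "acct_d"),
]

G = nx.Graph(edges)
centrality = nx.degree_centrality(G)

# Flag accounts whose degree centrality is well above the network mean as
# candidate "superconnectors" (the 2x threshold is an illustrative choice).
mean_c = sum(centrality.values()) / len(centrality)
superconnectors = [n for n, c in centrality.items() if c > 2 * mean_c]
print(superconnectors)
```

In practice an analyst would combine a centrality screen like this with behavioral signals (posting cadence, account age, content) before treating any account as part of an interference effort.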
As social media increasingly becomes people's primary source of news online, there is a rising threat from the spread of malign and false information. With an absence of human editors in news feeds and a growth of artificial online activity, it has become easier for various actors to manipulate the news that people consume. Finding an effective way to detect malign information online is an important part of addressing this issue. RAND Europe was commissioned by the UK Ministry of Defence's (MOD) Defence and Security Accelerator (DASA) to develop a method for detecting the malign use of information online. The study was contracted as part of DASA's efforts to help the UK MOD develop its behavioural analytics capability. Our study found that online communities are increasingly being exposed to junk news, cyber bullying activity, terrorist propaganda, and political reputation boosting or smearing campaigns. These activities are undertaken by synthetic accounts and human users, including online trolls, political leaders, far-left or far-right individuals, national adversaries and extremist groups. In support of government efforts to detect and counter these activities, the research team successfully developed and applied a machine learning model to a Russian troll database to identify differences between authentic political supporters and Russian trolls shaping online debates regarding the 2016 US presidential election. To trial the model's portability, a promising next step could be to test the model in a new context such as the online Brexit debate.
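A text classifier of this kind can be sketched with standard tooling. The example below is a toy illustration, not the RAND Europe model: the tweets and labels are invented stand-ins for the labeled troll/authentic corpora, and a real model would be trained on far more data with careful feature engineering and validation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tweets (invented stand-ins for real corpora).
tweets = [
    "crooked media lies again wake up sheeple",
    "deep state rigging everything trust no one",
    "globalists destroying our country wake up",
    "just voted today proud to take part",
    "watching the debate tonight with friends",
    "long lines at my polling place but worth it",
]
labels = ["troll", "troll", "troll", "authentic", "authentic", "authentic"]

# TF-IDF features feeding a logistic-regression classifier: a common
# baseline for separating two styles of political text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["wake up sheeple the deep state lies"]))
```

Portability, the report's suggested next step, would mean applying a trained model of this shape to a new debate (e.g., Brexit) and measuring how well the learned distinctions carry over.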
Researchers conducting this exploratory study used news and blog data to understand efforts by foreign countries to spread malign information regarding coronavirus disease 2019 (COVID-19) in the Indo-Pacific region. Malign information is content that is provocative, inflammatory, possibly deceptive, or even untrue, and is disseminated or boosted for the purpose of advancing foreign countries' strategic goals. The authors used RAND's proprietary lexical analysis platform, RAND-Lex, to explore whether news and blog data could provide some initial insights on the reach of, content of, and tactical strategies used in foreign malign information about COVID-19 in the Indo-Pacific. Using NewsAPI, a database of global news and blogs, the authors developed a data pipeline and analytic flow that allows analysts to (1) identify key features of foreign malign information efforts and (2) track down specific foreign malign information campaigns. The researchers discovered several key features of foreign malign information on COVID-19 in the Indo-Pacific and identified an example malign information campaign.
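The core analytic move, surfacing terms that suspect outlets use far more often than a baseline press, can be sketched with a crude frequency comparison. This is an illustrative example only: the article snippets are invented, RAND-Lex is proprietary and not modeled here, and a real pipeline would harvest live articles (e.g., via NewsAPI) and apply far more robust lexical statistics.

```python
from collections import Counter

# Hypothetical article snippets keyed by outlet group (invented data).
articles = {
    "state_outlet": [
        "virus originated abroad foreign army brought virus",
        "foreign army covered up virus origin",
    ],
    "baseline": [
        "virus spreads as cases rise in the region",
        "health officials report new virus cases",
    ],
}

def term_freqs(docs):
    """Relative term frequencies across a set of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

state = term_freqs(articles["state_outlet"])
base = term_freqs(articles["baseline"])

# Terms much more frequent in suspect outlets than in the baseline are
# candidate markers of a pushed narrative (3x is an illustrative cutoff).
overrepresented = sorted(
    t for t, f in state.items() if f > 3 * base.get(t, 0.0)
)
print(overrepresented)
```

The output isolates narrative-specific vocabulary ("foreign", "army", "origin") while shared pandemic vocabulary ("virus") drops out, which is the basic signal an analyst would then trace back to specific campaigns.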
The U.S. Army faces two analytical and management challenges: its data are locked away in siloed, proprietary databases, and it lacks access to modern, commonplace analytical tools. To address these challenges, the authors developed a case study with Army Contracting Command (ACC) to determine whether there is a simple and effective way to overcome them, and they found an effective, efficient, and quick path forward. The authors conducted a proof of concept for data sharing and analytics with ACC, which annually awards contracts of high volume and value. They migrated large contracting data sets from ACC, built a robust querying and analytics platform for exploring those data, piloted a method for accessing heretofore inaccessible unstructured text data from contracts, and conducted a pilot machine-learning analysis highlighting how a cloud-based contract analysis system for ACC could lead to cost savings. The team found that the Army can achieve immediate cost savings and efficiencies through advanced data analytics and the use of currently available commercial off-the-shelf technology. The Army should immediately conduct multiple similar proofs of concept that move siloed, inaccessible data to the cloud for analysis with modern tools, validating this report's methodology across multiple commands.
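The payoff of un-siloing data is that routine questions become trivial queries. As a minimal sketch, assuming invented contract records standing in for migrated ACC award data, once the data sit in one queryable store a spend-by-category rollup is a one-liner:

```python
import pandas as pd

# Hypothetical contract awards (invented stand-ins for migrated ACC data).
contracts = pd.DataFrame({
    "contract_id": ["C1", "C2", "C3", "C4", "C5"],
    "category": ["IT services", "IT services", "Construction",
                 "IT services", "Construction"],
    "award_value": [1_200_000, 800_000, 5_000_000, 400_000, 2_500_000],
})

# Total award value and award count per category: a basic query that is
# impractical when the same records are scattered across siloed systems.
spend = contracts.groupby("category")["award_value"].agg(["sum", "count"])
print(spend)
```

Aggregations like this, run at the scale of a command's full contract portfolio, are what surface consolidation opportunities and candidate cost savings.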
U.S. Department of Defense (DoD) efforts to plan and conduct influence operations in an ethical manner face several challenges, including concerns regarding the appropriateness of any influence activity, a lack of explicit consideration of ethics in the influence-planning process, and decoupling the ethics of force from the ethics of influence in military operations. Currently, DoD lacks a framework to explicitly consider the ethics of an influence activity outside legal review. Ethics scholarship reveals that the principal ethical objection to influence is its threat to autonomy. Although influence is a threat to autonomy and is thus morally fraught, this scholarship points to several situations in which influence activities might be justified. This report includes (1) clear ethical principles that should govern the planning and conduct of influence operations; (2) clear procedures for assessing ethics and the ethical risk associated with a proposed influence operation; and (3) guidelines for creating a justification statement for a proposed influence operation based on a preliminary ethical determination so that reviewers and approvers are presented with a consistent, coherent, and nonarbitrary ethical evaluation with which they can engage and agree or disagree. The authors offer a principles-based framework for military practitioners to determine whether a proposed influence effort is ethically permissible and guidance for preparing a justification statement that allows approvers to follow the ethical logic behind a proposed influence effort.