Listing 1 - 6 of 6
The authors of this report examine military applications of artificial intelligence (AI) and consider the ethical implications. The authors survey the kinds of technologies broadly classified as AI, consider their potential benefits in military applications, and assess the ethical, operational, and strategic risks that these technologies entail. After comparing military AI development efforts in the United States, China, and Russia, the authors examine those states' policy positions regarding proposals to ban or regulate the development and employment of autonomous weapons, a military application of AI that arms control advocates find particularly troubling. Finding that potential adversaries are increasingly integrating AI into a range of military applications in pursuit of warfighting advantages, they recommend that the U.S. Air Force organize, train, and equip to prevail in a world in which military systems empowered by AI are prominent in all domains. Although efforts to ban autonomous weapons are unlikely to succeed, there is growing recognition among states that risks associated with military AI will require human operators to maintain positive control in its employment. Thus, the authors recommend that Air Force, Joint Staff, and other Department of Defense leaders work with the State Department to seek greater technical cooperation and policy alignment with allies and partners, while also exploring confidence-building and risk-reduction measures with China, Russia, and other states attempting to develop military AI. The research in this report was conducted in 2017 and 2018. The report was delivered to the sponsor in October 2018 and was approved for distribution in March 2020.
Past research has placed little emphasis on how to value the experience of U.S. Army noncommissioned officers (NCOs). The authors of this report examine the relationships between the tenure, experience, and productivity of key NCO leaders and the performance of the junior soldiers they lead, with a focus on maintaining or improving leadership quality and soldier performance, as well as reducing personnel costs. The authors find that the characteristics and experience of senior leaders are related to differences in the outcomes of junior soldiers; junior personnel have lower early-term attrition in cases in which senior leaders possess key types of experience. Having a leader with the right mix of experience can potentially generate substantial savings, but more experience is not always desirable. The authors note a concern that the Army promotion process captures only a limited amount of experience, since it considers deployment experience solely when promoting to E-5 and E-6. Recommendations to improve the promotion process are also presented.
Subjects: Leadership --- Soldiers, Training of --- Soldiers, Rating of --- United States, Non-commissioned officers --- Promotions.
The Air Force Research Laboratory (AFRL) asked RAND Project AIR FORCE (PAF) for assistance in understanding how cyber-related risks compare with other risks to its defense-industrial supply chains—a scope that included supply chains for hardware, not supply chains for software—and in exploring implications for risk assessment, mitigation, and future research. AFRL was interested in how attackers might use supply chains to wage attacks, such as through malicious code, and how supply chains might themselves be targets of attack, such as through disruption. To conduct the analysis, PAF drew insights from the literatures on cybersecurity, supply chain risk management (SCRM), game theory, and network analysis and worked with sets of stylized supply chains and fundamental principles of risk management. The report uses the phrase cyber SCRM broadly to refer to the cybersecurity of supply chains, encompassing both attacks through supply chains to reach a target and attacks in which the supply chain is itself the target.
A large body of academic literature describes myriad attack vectors and suggests that most of the U.S. Department of Defense's (DoD's) artificial intelligence (AI) systems are in constant peril. However, RAND researchers investigated adversarial attacks designed to hide objects (causing algorithmic false negatives) and found that many attacks are operationally infeasible to design and deploy because of high knowledge requirements and impractical attack vectors. As the researchers discuss in this report, there are tried-and-true nonadversarial techniques that can be less expensive, more practical, and often more effective. Thus, adversarial attacks against AI pose less risk to DoD applications than academic research currently implies. Nevertheless, well-designed AI systems and appropriate mitigation strategies can further reduce the risks of such attacks.
U.S. air superiority, a cornerstone of U.S. deterrence efforts, is being challenged by competitors—most notably, China. The spread of machine learning (ML) is only compounding that threat. One potential approach to countering this challenge is to use automation more effectively to enable new approaches to mission planning. The authors of this report demonstrate a prototype of a proof-of-concept artificial intelligence (AI) system to help develop and evaluate new concepts of operations for the air domain. The prototype platform integrates open-source deep learning frameworks, contemporary algorithms, and the Advanced Framework for Simulation, Integration, and Modeling—a U.S. Department of Defense–standard combat simulation tool. The goal is to exploit AI systems' ability to learn through replay at scale, generalize from experience, and improve over repetitions to accelerate and enrich operational concept development. In this report, the authors discuss collaborative behavior orchestrated by AI agents in highly simplified versions of suppression of enemy air defenses missions. The initial findings highlight both the potential of reinforcement learning (RL) to tackle complex, collaborative air mission planning problems and some significant challenges facing this approach.
U.S. election systems are diverse in terms of governance and technology. This reflects the constitutional roles reserved for the states in administering and running elections but makes it challenging to develop a national picture of cybersecurity risk in election systems. Moreover, it requires each state and jurisdiction to evaluate and prioritize risk in the systems it oversees. With funding from the Cybersecurity and Infrastructure Security Agency, researchers from the Homeland Security Operational Analysis Center have developed a methodology for understanding and prioritizing cybersecurity risk in election infrastructure to assist state and local election officials.