Listing 1 - 4 of 4 |
Automated target recognition (ATR) is one of the most important potential military applications of the many recent advances in artificial intelligence and machine learning. A key obstacle to creating a successful ATR system with machine learning is the collection of high-quality labeled data sets. The authors investigated whether this obstacle could be sidestepped by training object-detection algorithms on data sets made up of high-resolution, realistic artificial images. The authors generated large quantities of artificial images of a high-mobility multipurpose wheeled vehicle (HMMWV) and investigated whether models trained on these images could then be used to successfully identify real images of HMMWVs. The authors obtained a clear negative result: Models trained on the artificial images performed very poorly on real images. However, they found that using the artificial images to supplement an existing data set of real images consistently resulted in a performance boost. Interestingly, the improvement was greatest when only a small number of real images was available. The authors suggest a novel method for boosting the performance of ATR systems in contexts where training data are scarce. Many organizations, including the U.S. government and military, are now interested in using synthetic or simulated data to improve machine learning models for a wide variety of tasks. One of the main motivations is that, in times of conflict, there may be a need to quickly create labeled data sets of adversaries' military assets in previously unencountered environments or contexts.
Artificial intelligence --- Machine learning --- Target acquisition --- Military applications --- Development --- Computer simulation
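The supplementation result described above amounts to pooling a small set of real labeled images with a large rendered set before training, then shuffling so each batch mixes both sources. A minimal sketch of that mixing step, assuming NumPy; the random arrays stand in for images, and every name, shape, and count here is hypothetical rather than taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: real labeled images are scarce,
# rendered (synthetic) images are plentiful.
real_images = rng.random((20, 64, 64, 3))       # 20 real images
synthetic_images = rng.random((500, 64, 64, 3))  # 500 rendered images

# Labels mark the target class (e.g., "HMMWV present") for both sources.
real_labels = np.ones(len(real_images), dtype=int)
synthetic_labels = np.ones(len(synthetic_images), dtype=int)

# Supplement, don't replace: pool both sources into one training set.
train_x = np.concatenate([real_images, synthetic_images])
train_y = np.concatenate([real_labels, synthetic_labels])

# Shuffle so each mini-batch mixes real and synthetic examples.
order = rng.permutation(len(train_x))
train_x, train_y = train_x[order], train_y[order]

print(train_x.shape)  # (520, 64, 64, 3)
```

The study's finding suggests the payoff from this pooling is largest precisely when the real portion (here, 20 images) is small.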
The Air Force Research Laboratory (AFRL) asked RAND Project AIR FORCE (PAF) for assistance in understanding how cyber-related risks compare with other risks to its defense-industrial supply chains (a scope that included supply chains for hardware, not supply chains for software) and in exploring implications for directions in risk assessment and mitigation and for research. AFRL was interested in how attackers might use supply chains to wage attacks, such as through malicious code, and how supply chains might, themselves, be targets of attack, such as through disruption. To conduct the analysis, PAF drew insights from the literatures on cybersecurity, supply chain risk management (SCRM), game theory, and network analysis and worked with sets of stylized supply chains and fundamental principles of risk management. The report uses the phrase cyber SCRM broadly to refer to the cybersecurity of supply chains, including attacks through supply chains to reach a target and attacks on supply chains in which the target of the attack is the supply chain itself.
A large body of academic literature describes myriad attack vectors and suggests that most of the U.S. Department of Defense's (DoD's) artificial intelligence (AI) systems are in constant peril. However, RAND researchers investigated adversarial attacks designed to hide objects (causing algorithmic false negatives) and found that many attacks are operationally infeasible to design and deploy because of high knowledge requirements and impractical attack vectors. As the researchers discuss in this report, there are tried-and-true nonadversarial techniques that can be less expensive, more practical, and often more effective. Thus, adversarial attacks against AI pose less risk to DoD applications than academic research currently implies. Nevertheless, well-designed AI systems, along with mitigation strategies, can further reduce the risks of such attacks.
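The false-negative ("hiding") attacks the researchers studied can be illustrated with a toy linear detector. A minimal sketch, assuming NumPy; the weights, features, and perturbation budget are hypothetical, and the gradient step requires white-box knowledge of the model's weights, which is exactly the kind of knowledge requirement the report flags as operationally demanding:

```python
import numpy as np

# Toy stand-in for a detector: linear score, positive => "object present".
# Real attacks target deep object-detection models, not this sketch.
w = np.array([1.0, -0.5, 2.0])   # detector weights (attacker must know these)
b = -0.5

def detect(x):
    """Return True if the detector reports an object in feature vector x."""
    return bool(w @ x + b > 0)

x = np.array([0.5, 0.2, 0.4])    # features of a genuine object
assert detect(x)                  # baseline: the object is found

# FGSM-style evasion: the gradient of the score with respect to x is just w,
# so stepping against sign(w) lowers the score within an L-infinity budget.
eps = 0.3
x_adv = x - eps * np.sign(w)

assert not detect(x_adv)          # the perturbed object is now missed
```

For a deep detector the gradient must be obtained by backpropagation, and deploying the perturbation physically (e.g., as a patch) adds further constraints, which is one reason the report judges many such attacks operationally infeasible.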
Artificial intelligence (AI) technologies hold the potential to become critical force multipliers in future armed conflicts. The People's Republic of China has identified AI as key to its goal of enhancing its national competitiveness and protecting its national security. If its current AI plan is successful, China will achieve a substantial military advantage over the United States and its allies. That outcome would have significant negative strategic implications for the United States. How much of a lead does the United States have, and what do the United States and the U.S. Air Force (USAF) need to do to maintain that lead? To address this question, the authors conducted a comparative analysis of U.S. and Chinese AI strategies, cultural and structural factors, and military capability development, examining the relevant literature in both English and Chinese. They looked at literature on trends and breakthroughs, business concerns, comparative cultural analysis, and military science and operational concepts. The authors found that the critical dimensions for the U.S. Department of Defense (DoD) involve development and engineering for transitioning AI to the military; advances in validation, verification, testing, and evaluation; and operational concepts for AI. Significantly, each of these dimensions is under direct DoD control.