Military readiness is a perennial priority for the United States and a cornerstone of national security. Key to managing and improving readiness is the ability to measure it. Measurement gives leaders situational awareness and tools for exploring trade-offs with other priorities, such as modernization, force structure, and use of national resources. There are likely many ways in which artificial intelligence (AI) can improve measurement and management of military readiness. In this report, the authors discuss work that advances the capability of computers to "understand" human language describing factors that promote or impede readiness. The U.S. military reports monthly on overall readiness. These quantified reports are accompanied by narratives explaining what is occurring in military units that affects current or future readiness. The authors' goal in this report is to use these assessments to calculate overall readiness and to enable senior leaders to estimate how readiness could be affected by personnel, equipment, or training factors. An additional benefit would be automated, real-time interaction with unit commanders as they write their assessments, helping them refine the information they provide and better align their narratives with reported readiness levels.
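As a rough illustration of the kind of language-understanding capability the report describes, the sketch below pairs a few hypothetical unit narratives with hypothetical reported readiness levels and fits a simple bag-of-words classifier. The data, labels, and model choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch, assuming hypothetical narratives and readiness labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical unit narratives paired with the readiness level each unit reported.
narratives = [
    "Two aircraft awaiting depot-level maintenance; parts on back order.",
    "All crews completed live-fire qualification ahead of schedule.",
    "Manning shortfalls in key maintenance specialties limit sortie generation.",
    "Equipment fully mission capable; training events executed as planned.",
]
reported_levels = ["degraded", "ready", "degraded", "ready"]

# Bag-of-words classifier relating narrative text to reported readiness.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(narratives, reported_levels)

# Score a new narrative; a mismatch with the level a commander reports could
# prompt a request to refine or clarify the write-up.
print(model.predict(["Spare parts shortfalls are delaying scheduled maintenance."]))
```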
The authors describe an approach for leveraging machine learning to support assessment of military operations. They demonstrate how machine learning can be used to rapidly and systematically extract assessment-relevant insights from unstructured text available in intelligence reporting, operational reporting, and traditional and social media. These data, already collected by operational-level headquarters, are often the best available source of information about the local population and about enemy and partner forces, but they are rarely included in assessment because they are not structured in a way that is easily amenable to analysis. The machine learning approach described in this report helps overcome this challenge. The approach, which the authors illustrate using the recently concluded campaign against the Lord's Resistance Army, enables assessment teams to provide commanders with near-real-time insights about a campaign that are objective and statistically relevant. The approach may be particularly beneficial when assessment-specific data are limited or unavailable, as is common in campaigns with limited resources or in denied areas. This application of machine learning should be feasible for most assessment teams and can be implemented with publicly and freely available machine learning tools pre-authorized for use on U.S. Department of Defense systems.
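The following is a minimal sketch of how unstructured reporting might be tagged with assessment-relevant categories using only freely available tooling (scikit-learn). The snippets, labels, and category scheme are hypothetical and stand in for the report's actual data and method.

```python
# Minimal sketch, assuming hypothetical report snippets and category labels.
from collections import Counter

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

training_snippets = [
    "Local leaders report improved security in the district.",
    "A small group of fighters defected and surrendered their weapons.",
    "An armed group raided a village and looted food stores.",
    "Partner forces conducted a joint patrol with local defense groups.",
]
labels = ["population", "enemy", "enemy", "partner"]

# Simple text classifier built only from freely available tooling.
tagger = make_pipeline(CountVectorizer(), MultinomialNB())
tagger.fit(training_snippets, labels)

# Tag a new batch of reporting and summarize it as counts an assessment
# team could track over time.
new_reports = [
    "Villagers say movement on the main road is safer this month.",
    "Fighters abandoned a camp near the river after partner operations.",
]
print(Counter(tagger.predict(new_reports)))
```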
The United States has been the international leader in science and technology of importance to national security for three-quarters of a century. However, the development by other nations of their own science and technology capabilities, fueled by increasing globalization and the growing connectivity of economic and technological development, has increased competition for technological leadership. The authors use patent filings to analyze the current relative positions of the United States and other countries in selected technology areas of interest to the Department of the Air Force: additive manufacturing, artificial intelligence, ceramics, quantum, sensors, and space. The authors identified areas of technological emergence by detecting rapid growth in cumulative patent applications in specific technology areas and determined whether that growth occurred in the United States or China. They also describe and analyze the patent portfolios of U.S. companies that were early filers in these areas, focusing on small or medium-size companies that were not already owned or controlled by foreign entities; this, in turn, enabled identification of companies with specific leading technological capabilities that could make them attractive for possible foreign acquisition. The authors propose a method to simultaneously identify connected areas of technological emergence and the companies with leading capabilities in those areas.
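The emergence-detection idea, flagging rapid growth in cumulative patent applications by technology area and filing country, can be pictured with the short sketch below. The counts and the 50 percent growth threshold are assumed for illustration and are not taken from the report.

```python
# Minimal sketch, assuming hypothetical annual filing counts and growth threshold.
annual_filings = {
    ("additive manufacturing", "US"): [3, 5, 9, 20, 44, 80],
    ("additive manufacturing", "CN"): [1, 2, 4, 7, 12, 60],
}
GROWTH_THRESHOLD = 0.5  # assumed: >50% year-over-year growth in the cumulative total

for (area, country), counts in annual_filings.items():
    cumulative, total = [], 0
    for c in counts:
        total += c
        cumulative.append(total)
    # Flag years in which the cumulative total grew faster than the threshold.
    emergent_years = [
        i for i in range(1, len(cumulative))
        if (cumulative[i] - cumulative[i - 1]) / cumulative[i - 1] > GROWTH_THRESHOLD
    ]
    print(area, country, "emergence flagged at year indices:", emergent_years)
```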
This volume serves as the technical analysis accompanying a report on the potential for artificial intelligence (AI) systems to assist in Air Force command and control (C2). The authors detail a taxonomy of ten C2 problem characteristics. They present the results of a structured interview protocol that enabled scoring of problem characteristics for C2 processes with subject-matter experts (SMEs). Using the problem taxonomy and the structured interview protocol, they analyzed ten games and ten C2 processes. To demonstrate the problem taxonomy and the structured interview protocol on a C2 problem, they then applied both to sensor management as performed by an air battle manager. The authors then turn to eight AI system solution capabilities. As with the C2 problem characteristics, they created a structured protocol to enable valid and reliable scoring of solution capabilities for a given AI system. Using the solution taxonomy and the structured interview protocol, they analyzed ten AI systems. The authors present additional details about the design, implementation, and results of the expert panel that was used to determine which of the eight solution capabilities are needed to address each of the ten problem characteristics. Finally, they present three technical case studies that demonstrate a wide range of computational, AI, and human solutions to various C2 problems.
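A simple way to picture the problem-to-solution mapping the expert panel produced is a coverage check between the capabilities a problem characteristic requires and the capabilities a scored AI system provides. The taxonomy entries below are hypothetical placeholders, not the report's actual ten characteristics and eight capabilities.

```python
# Minimal sketch, assuming hypothetical characteristic and capability names.
needed_by_characteristic = {
    "dynamic environment": {"online learning", "uncertainty handling"},
    "incomplete information": {"uncertainty handling", "information fusion"},
}
# Capabilities a candidate AI system was scored as having via the structured protocol.
system_capabilities = {"uncertainty handling", "information fusion", "planning"}

for characteristic, needed in needed_by_characteristic.items():
    gaps = needed - system_capabilities
    status = "covered" if not gaps else "gaps: " + ", ".join(sorted(gaps))
    print(f"{characteristic}: {status}")
```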
This report examines, from a technical perspective, the potential for artificial intelligence (AI) systems to assist in Air Force command and control (C2). The authors present an analytical framework for assessing the suitability of a given AI system for a given C2 problem. The purpose of the framework is to identify AI systems that address the distinct needs of different C2 problems and to identify the technical gaps that remain. Although the authors focus on C2, the analytical framework applies to other warfighting functions and services as well. The goal of C2 is to enable what is operationally possible by planning, synchronizing, and integrating forces in time and purpose. The authors first present a taxonomy of problem characteristics and apply it to numerous games and C2 processes. Recent commercial applications of AI systems underscore that AI offers real-world value and that AI systems can function successfully as components of larger human-machine teams. The authors then outline a taxonomy of solution capabilities and apply it to numerous AI systems. Although the report focuses primarily on determining alignment between AI systems and C2 processes, its analysis of C2 processes also points to pervasive technological capabilities that will be required of Department of Defense (DoD) AI systems. Finally, the authors develop metrics, based on measures of performance, effectiveness, and suitability, that can be used to evaluate AI systems once implemented and to demonstrate and socialize their utility.
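As a hedged illustration of how metrics based on measures of performance, effectiveness, and suitability might be rolled up when evaluating an implemented AI system, the sketch below computes a single weighted score; the scores and weights are assumptions, not values from the report.

```python
# Minimal sketch, assuming hypothetical scores and weights for one AI system.
scores = {"performance": 0.8, "effectiveness": 0.6, "suitability": 0.9}
weights = {"performance": 0.4, "effectiveness": 0.4, "suitability": 0.2}  # assumed weights

# Weighted roll-up of measures of performance, effectiveness, and suitability.
overall = sum(scores[measure] * weights[measure] for measure in scores)
print(f"weighted evaluation score: {overall:.2f}")
```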
The U.S. Department of Defense (DoD) requires more efficient and timely methods to acquire, integrate, and interoperate systems, and, perhaps more crucially, systems-of-systems (SoSs), to deter near-peer adversaries in a rapidly evolving threat environment and to prevail in combat should deterrence fail. Current practice for integration across systems generally relies on the development of interface control documents that describe in detail how the different systems and subsystems connect and interact. In 2019, RAND researchers were asked to participate in a multiyear effort to help DoD understand the challenges of creating a universal command and control language (UCCL) to facilitate the evolution of systems and the interoperability of SoSs. In this report, the authors establish a conceptual framework for analyzing the SoS performance of different sensor-to-shooter connections, combinations, and associated command and control constructs. The analysis shows that the implementation details of a standard interface may contribute to interface overhead that changes technical performance by orders of magnitude. Overall, the authors found that there are cases in which mission performance is driven mainly by operational parameters rather than by interface design, but also cases in which implementing a standard interface has the potential to adversely affect mission outcomes if designers do not apply in-depth engineering analysis and careful design practice. This research should be viewed not as a study of a specific standard interface but as an early systems engineering study of how such an interface could and should be designed.
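The sensitivity of mission timelines to interface overhead can be illustrated with a toy latency model of a sensor-to-shooter chain. All timing values below are assumptions chosen for illustration, not results from the report's analysis; running the sketch with and without the assumed overhead shows how per-link translation cost can come to dominate the end-to-end timeline.

```python
# Minimal sketch, assuming hypothetical processing, transit, and overhead times.
processing_s = {"sensor": 0.5, "c2_node": 2.0, "shooter": 1.0}  # per-system processing (s)
links = [("sensor", "c2_node"), ("c2_node", "shooter")]
LINK_TRANSIT_S = 0.1        # assumed network transit per link (s)
INTERFACE_OVERHEAD_S = 5.0  # assumed per-link translation cost of a standard interface (s)

def end_to_end_latency(overhead_per_link: float) -> float:
    """Total sensor-to-shooter latency for a given per-link interface overhead."""
    transit = len(links) * (LINK_TRANSIT_S + overhead_per_link)
    return sum(processing_s.values()) + transit

print("native interfaces:  ", end_to_end_latency(0.0), "seconds")
print("standard interface: ", end_to_end_latency(INTERFACE_OVERHEAD_S), "seconds")
```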
The 2019 National Defense Authorization Act mandated a study on artificial intelligence (AI) topics. In this report, RAND Corporation researchers assess the state of AI relevant to the U.S. Department of Defense (DoD) and address misconceptions about AI; conduct an independent and introspective assessment of DoD's posture for AI; and offer a set of recommendations for internal actions, external engagements, and potential legislative or regulatory actions to enhance DoD's posture in AI.