AI and Its Unintended Consequences in Intelligence Work
May 10, 2021


Robert Miller

The U.S. government has been slow to grasp that software development is iterative; or so goes the gripe heard among many AI companies. Modern practice, of course, is to follow an "agile methodology," a phrase of little meaning or value to people outside the know. At the macro level, however, we must give credit where it's due. Flagship AI programs at the Joint AI Center (JAIC), the National Geospatial-Intelligence Agency (NGA), and the U.S. Air Force have all followed iterative approaches to onboarding AI and computer vision. These efforts, while asynchronous, have followed similar pathways that, by and large, reflect the state of the technology. When AI was the exclusive domain of computer scientists, the government hired experts, albeit in limited numbers. When AI was commercialized as a service in recent years, the government bought off-the-shelf models. Neither approach found widespread success, however, because neither could scale with the mission. AI's next evolution, enablement tools, will attempt to overcome those limitations while presenting new challenges that, for now, remain largely hidden. As those flagship AI programs and others march forward, understanding these potential pitfalls and incorporating them into planning would go far toward delivering AI solutions today, while codifying a sustainable, long-term approach that gives our decision makers and warfighters the tools they need for decision advantage.

Imagery analysts have long been skeptical of computer vision and other automation. The poor performance of early AI reinforced that skepticism, yet analysts simultaneously feared being made irrelevant in the career field they love. AI visionaries talk about creative destruction and the analytic burden AI will alleviate, chalking those fears up to unfounded paranoia, but computer vision's storied past earned the doubt. In its earliest days, when counting cars in parking lots and measuring the depth of oil tanks were the only use cases being discussed, the tools fell short as soon as weather conditions left models unusable. Rain, for example, made pavement darker, rendering previously identifiable cars invisible. Analysts reasoned, rightly, "If you can't find cars in a parking lot, then you can't help me track field-deployed ground units." The creators of these early tools responded by narrowing their applications until a model succeeded: it detects a specific helicopter at a specific airfield on a specific apron, and nothing else. There was value in such a tool, but it helped few people, and it certainly didn't endear computer vision to onlookers and skeptics.

After successful entries by Orbital Insight, Skybox, and other early computer vision providers came the rise, and then explosion, of machine learning startups. Together they sold the dream: functional computer vision models on demand. But the state of the technology was such that model building required working in Python across a multitude of software tools: Jupyter Notebook, PyTorch, Keras, TensorFlow, and TensorBoard, not to mention a data labeling platform and, most importantly, the AI architectures, such as convolutional neural networks, needed to train computer vision models. While this work could be outsourced, a shortage of trained engineers in industry who could manage these workflows and build high-performing ML pipelines meant a lot of failure.
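To make that tooling burden concrete, here is a minimal sketch, in PyTorch, of the kind of training pipeline such a shop had to assemble and babysit. The data path, class count, and hyperparameters are illustrative placeholders, not details from any real program:

```python
# Minimal sketch of the workflow described above: even a "simple"
# computer vision model stitches together several libraries.
# All paths, label counts, and hyperparameters are hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# 1. Labeled image chips exported from some labeling platform
#    (hypothetical directory layout: one subfolder per class).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("chips/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# 2. A stock convolutional architecture, its head swapped out for the
#    mission's own classes (say, two: "helicopter" vs. "background").
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

# 3. The training loop itself -- the part a team had to run, monitor
#    (e.g., in TensorBoard), and debug when performance fell apart.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

Every step above sits on a different tool, and any of them can quietly break a model; that, in practice, is the workflow the startups were selling as "on demand."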

Make no mistake: artificial intelligence is hard. Putting those tools into practice and generating a viable, repeatable process takes significant investment, patience, and a willingness to fail fast and adapt. Anyone can download a ResNet or U-Net architecture from GitHub and, rightfully, claim an in-house AI/ML capability. But the ubiquity of computer vision has not translated into universally good computer vision. Today, too few companies are making still too few computer vision models that are warfighter-ready.
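As a hedged illustration of just how low that bar is, the snippet below pulls a published ResNet straight off GitHub via torch.hub. It is one line of real work, and nothing about mission readiness comes with it:

```python
import torch

# Download a pretrained ResNet-50 from the pytorch/vision GitHub repo.
# This alone is the "in-house capability" in question: a stock
# architecture trained on everyday photos, not on GEOINT.
model = torch.hub.load("pytorch/vision", "resnet50", pretrained=True)
model.eval()  # ready for inference, but only on generic ImageNet classes
```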

Take NGA, for example. It acknowledges some 3,500 analysts in its employ. Setting aside those serving overseas or on loan to other organizations, let's assume 2,000 work from its Springfield, VA location. Those analysts are divided among regional and functional accounts; think Russia and counterproliferation. Those accounts are further broken into branches and teams, each uniquely focused on a particular subregion and mission. Accepting that teams are non-duplicative, their AI requirements are unique as well. That is, a computer vision model designed to classify Chinese tanks simply won't do for Russia (unless there's been an invasion, of course). It doesn't take a large stretch of the imagination, then, to see that 2,000 analysts equate to as many or more unique computer vision models needed to satisfy their individual missions.

That vast quantity of models will be checked by budgetary realities and by the aforementioned shortage of companies competent to perform such work. Whether it's decided by the National Intelligence Priorities Framework (NIPF) or some other rubric, the harsh reality is that some missions will always get the resources they need, while the rest will not.

In the short run, this division may suit analysts who wish to squint at imagery the old-fashioned way. But over time, as AI/ML increases in capability and therefore relevance, analysts' performance objectives will assuredly become tied to their successful application of automation (in many places they already have). This creates a potential scenario inside the DOD/IC where priority accounts, those with AI/ML tools, become the only offices where analysts can receive the requisite experience for promotion. The dark side is that traditionally low-priority accounts, those without AI/ML tools, become developmental wastelands where no competent analyst will want to work, recognizing that the lack of opportunity will hinder their professional advancement. This has profound implications for U.S. national security interests, and not only in covering down on those accounts: should these highly trained professionals find themselves in a career path not of their own making, known colloquially as a rut, they may elect to find employment elsewhere.

Instead, as JAIC Director Lt. Gen. Groen has noted, the Department of Defense should move toward "enablement" tools. Going further, the DOD and IC should push computer vision development and maintenance to the analytic edge, especially now that code-free machine learning software is available. To borrow from former NGA Director Tish Long, they should strive to put the power of AI into users' hands. Not only does the distribution of enablement tools ensure "account parity" from the analytic, professional development, and desirability standpoints, but putting AI tools into the hands of subject matter experts ensures that the training data and models generated are optimized for those very accounts. What's more, these experts are empowered to update models continuously to reflect changing conditions in the real world, keeping them mission relevant rather than beholden to generalized models of mixed performance sourced from a static library.

Importantly, this vision is implementable in the near term on unclassified systems, as DOD/IC classified infrastructure may, in some instances, struggle to run the hardware and software that machine learning requires. Between the JAIC's Joint Common Foundation, NGA's SAFFIRE, and the U.S. Air Force's Advanced Battle Management System, investments are being made to modernize those higher-security domains. In the meantime, much can be accomplished on unclassified systems, especially now that so many analysts are teleworking from home using web-based enablement tools. Empowered analysts working with self-service tools can, today, begin curating data, labeling ML training sets, and even generating models from unclassified GEOINT data, so that AI adoption takes root early and matures in a manner that is iterative and optimized to the missions for which those models are intended.
