Centcom CTO sees generative AI models as a potential drudge for combatant commands
U.S. Central Command has deployed high-tech artificial intelligence capabilities such as computer vision, pattern detection and decision aids that could help with missions like intelligence, surveillance, reconnaissance and targeting. But for generative AI models, Centcom’s chief technology officer sees a more mundane use case: drudge work.
Generative AI has gone viral in recent months with the emergence of ChatGPT and other tools that can generate text, audio, code, images, video and other media based on prompts and the data they’re trained on.
Schuyler Moore, the CTO for Central Command — which is responsible for U.S. military operations in the Middle East — sees some upsides and limitations of this type of technology for defense applications.
“This is in an earlier stage for us and we’re trying to be a bit more careful with this because the opportunity space for time savings is immense — and also the risks, given the variance of performance and current lack of explainability where errors occur, make the models carry more risk, I think, than might be appropriate for combatant commands to push forward very hard,” Moore said Thursday during a panel discussion on tech integration hosted by the Potomac Officers Club.
“What’s interesting for generative models in our mind … is that they really can carve out all of these smaller or menial tasks. And I think sometimes we get super excited about generative models because we see ChatGPT and you can imagine … the future that it could be in and the types of media applications. But the reality is that it has such potential for menial tasks that you can pull off someone’s plate and that everyday workflow,” she said.
Drafting emails is one example of the drudge work the technology could assist with.
“If you have a model that can generate the first draft of your email and get you to a 40% place — it doesn’t have to write a full email, [and] God forbid it sends the email — but if it got you to a 40% mark and then you can adjust and edit from there, think about how much time that would save you. And all of that time savings can then direct the human brain to the nuanced semantic decisions that our human brains are really good at. I think sometimes we flip that over when we say generative models are going to replace those meaning [and] nuance-based decisions that need to be made. No, no. They can free up your time, it can free up your day so that you can make better decisions based on that,” she explained.
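What that could look like in practice: below is a minimal sketch of the human-in-the-loop drafting workflow Moore describes, written in Python with the open-source Hugging Face transformers library. The model (“gpt2” as a small placeholder), the prompt format and the draft_email helper are illustrative assumptions, not tools Centcom has said it uses.

```python
# A minimal sketch of the "first draft" workflow Moore describes: the model
# writes a rough starting point and a human edits it before anything is sent.
# Model choice, prompt format and function name are illustrative assumptions.
from transformers import pipeline

# "gpt2" is just a small, freely available placeholder model.
generator = pipeline("text-generation", model="gpt2")

def draft_email(notes: str) -> str:
    """Turn rough bullet notes into a first-draft email for a human to revise."""
    prompt = f"Write a short professional email covering:\n{notes}\n\nEmail:\n"
    result = generator(prompt, max_new_tokens=150, do_sample=True)
    # The pipeline returns the prompt plus the continuation; keep the continuation.
    return result[0]["generated_text"][len(prompt):]

draft = draft_email("- schedule design review\n- confirm Thursday 1400\n- attach agenda")
print(draft)  # the "40% draft": a person still edits, approves and sends
```

The design point mirrors Moore’s caveat: the model only produces a rough draft, and a person decides what, if anything, gets sent.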
Moore continued: “And so for us, generative models require applications where you have the patience and the risk acceptance to work with them and to see where … they might reflect well, where they might not. But you have to be able to accept the risk associated with them. And in our mind, the more pedestrian menial tasks are the ones where there is actually huge potential. So, we’re working through that right now.”
Combatant commands and other Defense Department components recently put several generative models through their paces during a Global Information Dominance Experiment (GIDE) led by the Chief Digital and AI Office (CDAO).
“We have about five different models really just to test out, you know, how do they work? Can we train them on DOD data or tune them on DOD data? How do our users interact with them? And then what metrics do we want to come up with based off of what we were seeing to facilitate evaluation of these tools? Because there aren’t really great evaluation metrics for generative AI yet, and … we want to make sure that we have the ability to understand its capability and know when it’s going to be effective,” Margie Palmieri, the Pentagon’s deputy chief digital and AI officer, said last month at a RAND Corp. event.
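Palmieri’s point about missing metrics is concrete: there is no agreed scoreboard for generative output. Below is a hedged sketch of the kind of lightweight harness such an experiment might use, assuming each model is exposed as a callable and judged on latency plus a crude reference-overlap score. The evaluate and token_overlap functions are hypothetical stand-ins, not CDAO’s actual method.

```python
# A hypothetical harness for comparing several generative models on a shared
# prompt set, recording latency and a crude reference-overlap score. The
# metric is a stand-in; CDAO has not published the metrics it is developing.
import time

def token_overlap(candidate: str, reference: str) -> float:
    """Crude proxy metric: fraction of reference tokens present in the output."""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / len(ref) if ref else 0.0

def evaluate(models: dict, prompts: list) -> dict:
    """models maps a name to a callable (prompt -> text);
    prompts is a list of (prompt, reference_answer) pairs."""
    results = {}
    for name, model in models.items():
        rows = []
        for prompt, reference in prompts:
            start = time.perf_counter()
            output = model(prompt)
            rows.append({"latency_s": time.perf_counter() - start,
                         "overlap": token_overlap(output, reference)})
        results[name] = rows
    return results
```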
Palmieri didn’t identify the specific models that were tested, but she suggested an aim is to explore potential use cases and see how well the technology might meet the department’s needs.
“The way we think [about] AI is it really is use-case based. So even in these generative AI models, they’re not the solution for every use case … In DOD, we have computer vision to look for object detection and tracking. We have natural language processing to look through policy documents and, you know, find correlations. We have other types of [machine learning] algorithms for predictive maintenance. So generative AI is no different. There are going to be use cases that it’s really, really good for and there are going to [be] use cases it’s not good for,” she said.
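Her use-case framing maps onto how practitioners actually select models. As a rough illustration, assuming the open-source Hugging Face transformers library: the task names below are real pipeline tasks, but tying them to the specific DOD workflows Palmieri lists is only a sketch.

```python
# Task names below are real Hugging Face pipeline tasks; mapping them to the
# workflows Palmieri lists is only an illustration.
from transformers import pipeline

# Computer vision: object detection (tracking would add logic on top).
detector = pipeline("object-detection")

# NLP: query policy documents for specific answers.
qa = pipeline("question-answering")

# Generative AI: one more tool in the kit, suited to drafting and summarizing.
writer = pipeline("text-generation")

answer = qa(question="Who approves the request?",
            context="Per section 3, the request is approved by the J3.")
print(answer["answer"])
```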
Meanwhile, some Pentagon leaders are keen on pursuing generative AI solutions that have been specifically developed for U.S. military missions and trained on DOD data, because officials don’t fully trust the commercial products that are currently on the market, which have been known to “hallucinate” and provide inaccurate information.
“We are not going to use ChatGPT in its present instantiation. However, large language models have a lot of utility. And we will use these large language models, these generative AI models, based on our data. So they will be tailored with Defense Department data, trained on our data, and then also on our … compute in the cloud and/or on-prem, so that it’s encrypted and we’re able to essentially … analyze its, you know, feedback,” Maynard Holliday, the Pentagon’s deputy CTO for critical technologies, said in June at Defense One’s annual Tech Summit.
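In code, the approach Holliday describes, an open model tuned on your own data and your own compute, might look roughly like the following Hugging Face sketch. The base model, the internal_corpus.txt file path and the hyperparameters are all assumptions for illustration; nothing here reflects an actual DOD pipeline.

```python
# A hypothetical sketch of "tailored with Defense Department data, trained on
# our data ... on-prem": fine-tune an open language model on a local corpus,
# so data and weights never leave your own infrastructure.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # small placeholder; a real effort would pick a larger open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# "our data": a local text file (hypothetical path) that stays on-prem.
dataset = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("tuned-model")  # weights stay on local disk / on-prem compute
```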
The department recently hosted a conference in McLean, Virginia, focused on generative AI. About 250 people from government, industry and academia were expected to attend, Holliday told DefenseScoop before the confab.
“We’ve got to level-set everybody” regarding DOD’s potential use cases for the technology and the things that industry and academia are doing in this field, he explained, noting that stakeholders need to figure out what technical gaps can be closed “to get us to those foundational models.”