NASA knew about AI long before CEOs started parroting it during their 2024 earnings calls.
As early as 1990, NASA began using AI schedulers like its SPIKE system to determine telescope observation times; the agency later used AI to autonomously guide Mars rovers to rock targets and to predict lightning strikes.
Still, today’s NASA professionals have plenty of other ideas for how to use technologies like generative AI. David Salvagnini, named NASA’s new chief AI officer in May, is bringing generative AI ideas to the agency, even serving up a “Summer of AI” campaign, which aims to tout the tools available to NASA’s workforce.
Maybe someone wants to use generative AI to prepare an annual financial report, Salvagnini said, or deploy a front-end chatbot to help employees with retirement prep.
“A lot of my day is spent around the workforce. It’s around understanding the existing use cases and addressing some of the boundaries that might be related to some of those use cases,” Salvagnini told IT Brew.
Salvagnini spoke with IT Brew about the kind of AI enthusiasm currently being generated within the organization.
This interview has been edited for length and clarity.
How would you characterize the agency’s enthusiasm around AI?
There is more enthusiasm than skepticism, but there are elements of skepticism…I think people are concerned that it’s a black-box technology, and that people are going to ask it questions and just accept responses blindly without thinking about them critically…I like to use the term “augmented intelligence,” not artificial intelligence. Because the reality is: you, the human, have the accountability for the work product, whether AI was involved or not.
What are the concerns that some of the NASA creatives have about generative AI?
Largely, the creatives are worried about overreliance on generative AI tools for the development of images, and the quality of those images. So, we have seen examples of, let’s say, using DALL-E to “give me an image of an astronaut on the moon.” DALL-E will kick back an image, but we’ve seen defects in the image. For example, the American flag on the astronaut spacesuit may be malformed, it may not have enough stripes, it may not have enough stars…that is not consistent with NASA’s brand.
What can you do to assure someone who has that concern?
Part of the CAIO role is really dealing with a lot of the policy implications of generative AI…to allow the workforce to understand what is in and out of bounds as it relates to the use of generative AI capabilities, image creation being one example. We have some draft language we’re working on right now to that end, so we can get that published and make sure that the workforce understands what I often like to call the “guardrails.”
What does a test look like for NASA, with generative AI ideas?
Think about a launch vehicle going through the atmosphere and all the variables on any given day that the vehicle, or the systems on it, have to account for, and testing all of that to that level of accuracy. I find that to be quite impressive. Validation in generative AI is quite different.
When you put a large language model search front end on top of a corpus of data, you’re going to get an answer. Generative AI is very good at giving you an answer. And it’s going to give you a very grammatically well-formed answer. But is it a complete answer? I’ve seen examples of generative AI capabilities piloted where I have not been given complete answers. I think part of the testing is making sure that the outcomes are complete, consistent, and can be relied upon.
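That kind of completeness testing can be made concrete. The Python sketch below shows one way a pilot might score an LLM front end’s answers against facts a complete response must contain; the `answer_question` stub, the test case, and the keyword-matching heuristic are all hypothetical illustrations, not NASA’s actual validation harness.

```python
# Minimal sketch of a completeness check for an LLM search front end.
# answer_question is a hypothetical stand-in for whatever generative AI
# capability sits on top of the document corpus being piloted.

def answer_question(question: str) -> str:
    """Hypothetical placeholder for the generative AI front end."""
    return "Apollo 11 landed on the moon in 1969."

# Each test case pairs a question with facts a complete answer must mention.
TEST_CASES = [
    {
        "question": "When and where did Apollo 11 land?",
        "required_facts": ["1969", "moon", "Tranquility"],
    },
]

def completeness_score(answer: str, required_facts: list[str]) -> float:
    """Return the fraction of required facts that appear in the answer."""
    answer_lower = answer.lower()
    hits = sum(1 for fact in required_facts if fact.lower() in answer_lower)
    return hits / len(required_facts)

for case in TEST_CASES:
    answer = answer_question(case["question"])
    score = completeness_score(answer, case["required_facts"])
    # A grammatically well-formed answer can still be incomplete;
    # anything under 100% gets flagged for human review.
    flag = "OK" if score == 1.0 else "FLAG FOR REVIEW"
    print(f"{case['question']}: completeness {score:.0%} -> {flag}")
```

A real harness would use fuller fact lists and likely semantic matching rather than substring checks, but even this crude gate captures Salvagnini’s point: a fluent answer is not necessarily a complete one.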