
The Clock Is Ticking. Is Your Data Ready for the AI Era?

March 31, 2026 By Sean Applegate

Post-GIST 360 Webinar Recap

Federal data has never been more valuable or more consequential. As AI and advanced analytics move from experimentation to operational reality, data is increasingly recognized not as an IT resource but as a strategic national asset, one that shapes mission outcomes, national security, and public trust. The opportunity is significant, but so is the challenge.

Many agencies are navigating legacy systems, fragmented data environments, and rapidly changing mission objectives, all while being asked to translate broad mandates into measurable impact. Resilient foundations, purposeful governance, and scalable integration are not optional prerequisites. They are mission imperatives.

The Trump administration has reinforced that mindset through a decisive series of directives, including America’s AI Action Plan and OMB Memorandum M-25-21, as well as the recent National Cyber Strategy and National AI Legislative Framework. These mandates are not ambiguous. What remains unclear for many organizations is where to start.

Our GIST 360 platform recently convened a conversation that cut straight to that question. In our webinar, Transforming Federal Data Into a Strategic National Asset for the AI Era, we sat down with Kevin Murphy, Acting CDO and CAIO at NASA, and Kyle Jourdan, Head of Qlik’s AI Practice, to talk plainly about what it takes to make this work, not in theory, but in practice, under real constraints, with real stakes.

What 150 Petabytes Teaches You About Data Governance

NASA manages over 150 petabytes of public scientific data across research centers, cloud platforms, classified systems, and a sprawling contractor network, each with different sensitivity levels, stewardship models, and formats. The lesson is clear: agencies that try to consolidate everything into a single repository lose the very thing that makes data valuable. Fidelity, provenance, and lineage collapse when you force disparate data into a single mold. The smarter path is federation. Make data discoverable by the right people through governed data catalogs or marketplaces that make trustworthy data API-accessible. If your agency has siloed data, it is almost certainly a governance problem before it is a technology problem.
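To make the federation idea concrete, here is a minimal sketch of what a governed catalog entry might look like: the data stays where it lives, and only discoverable metadata (steward, sensitivity, native format, API endpoint) is shared. All class and field names here are illustrative assumptions, not any actual NASA or Qlik schema.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    """One federated catalog record: data stays in place; metadata is shared."""
    name: str
    steward: str            # owning center or contractor team (illustrative)
    sensitivity: str        # e.g. "public", "controlled", "classified"
    fmt: str                # native format, preserved rather than converted
    api_endpoint: str       # where governed consumers fetch the data
    tags: list = field(default_factory=list)

class FederatedCatalog:
    """Hypothetical catalog: discovery is filtered by the caller's clearance."""
    _ORDER = {"public": 0, "controlled": 1, "classified": 2}

    def __init__(self):
        self._entries = []

    def register(self, entry: DatasetEntry):
        self._entries.append(entry)

    def discover(self, tag: str, max_sensitivity: str = "public"):
        """Return entries matching a tag that the caller is cleared to see."""
        ceiling = self._ORDER[max_sensitivity]
        return [e for e in self._entries
                if tag in e.tags and self._ORDER[e.sensitivity] <= ceiling]
```

The point of the sketch: consolidation is replaced by registration, and access control lives in the discovery layer rather than in a single monolithic store.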

Trust Is Not a Feature. It Is the Foundation.

What separates agencies that successfully deploy AI from those that don’t is whether their people trust the outputs. That trust comes from knowing where the data came from, who curated it, how recent it is, and whether it was built for the use case at hand. Explainability and data lineage are the activation mechanism for AI adoption at scale. When a workforce can trace an AI output to a vetted, governed source, resistance drops and utilization climbs. Before your agency asks what AI tools to buy, ask whether your data environment is one your workforce would trust with a mission-critical decision. If the answer is uncertain, that is where the investment needs to go.

The 80 Percent Nobody Talks About

Every serious AI deployment is preceded by unglamorous data preparation: cleaning, aggregating, reconciling formats, and validating outputs. In large-scale federal AI projects, that work can consume 50 to 80 percent of total effort. NASA experienced this firsthand when, in partnership with IBM, it built Surya, the first foundation model for heliophysics, trained on nine years of solar imagery from the Solar Dynamics Observatory. The result is a model that predicts solar flares two hours in advance with 16 percent greater accuracy than existing benchmarks. But the path ran directly through years of data preparation and rigorous scientific validation. AI readiness is a precondition, not a parallel track. Agencies that treat data preparation as a downstream task will find themselves perpetually unable to move pilots into production.
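The preparation work described above (cleaning, reconciling formats, validating outputs) can be sketched in miniature. This is a hypothetical pipeline stage, not NASA's Surya preprocessing; the field names, timestamp formats, and plausibility bounds are all assumptions chosen for illustration.

```python
from datetime import datetime

def prepare_records(raw):
    """Clean, reconcile, and validate raw records before any model sees them."""
    cleaned = []
    for rec in raw:
        # Reconcile two incoming timestamp formats into one canonical form.
        ts = rec.get("timestamp")
        if ts is None:
            continue  # drop incomplete records rather than impute silently
        for fmt in ("%Y-%m-%d %H:%M", "%d/%m/%Y %H:%M"):
            try:
                parsed = datetime.strptime(ts, fmt)
                break
            except ValueError:
                parsed = None
        if parsed is None:
            continue  # unparseable: reject rather than guess
        # Validate physical plausibility before accepting the measurement.
        value = rec.get("value")
        if value is None or not (0.0 <= value <= 1e6):
            continue
        cleaned.append({"timestamp": parsed.isoformat(), "value": value})
    return cleaned
```

Even in this toy form, the reject-don't-guess rules are where most of the effort and most of the judgment calls live, which is why preparation dominates real project timelines.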

The Governance Advantage

There is a persistent misconception that governance slows things down. Thoughtful governance, combined with workforce data literacy efforts, is what makes acceleration possible. NASA applies more flexibility for research proofs of concept and substantially more rigor for human spaceflight systems, allowing innovation to move fast where risk is low while maintaining quality standards where it matters most. Agencies that invest in governance frameworks now, including data trust scoring, model evaluation, and AI guardrails, are building the runway their AI programs will need in 18 months. Agencies that defer will spend that time rebuilding trust they should never have broken.
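Data trust scoring, mentioned above, can be as simple as a weighted rollup of the trust signals discussed earlier: lineage, curation, and freshness. The weights and the one-year freshness window below are pure assumptions for illustration, not a published scoring standard.

```python
from datetime import date

def trust_score(has_lineage: bool, curated: bool,
                last_updated: date, max_age_days: int = 365) -> float:
    """Illustrative trust score in [0, 1]; weights are assumed, not prescribed."""
    age_days = (date.today() - last_updated).days
    freshness = max(0.0, 1.0 - age_days / max_age_days)  # decays linearly to 0
    # Weight lineage most heavily: it is the hardest signal to retrofit.
    return round(0.4 * has_lineage + 0.3 * curated + 0.3 * freshness, 2)
```

A score like this becomes useful when it is published alongside each catalog entry, so consumers can see at a glance whether a dataset is fit for a mission-critical decision.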

The Most Expensive Decision You Can Make Right Now

Analysis paralysis has a cost, and in the current environment that cost is compounding. The agencies leading in AI-enabled mission delivery over the next three to five years are not the ones with the most sophisticated strategies on paper. They are the ones that started building, discovered their gaps through real deployments, and iterated quickly. You will not find every data quality problem in a planning session. The choice is whether you find those gaps in controlled use cases or experiments with room to improve, or in an operational failure with consequences attached.

Watch It. Share It. Then Act on It.

Watch the webinar on demand on our GIST 360 Platform and bring your data, AI, and architecture principals into the discussion. Join the GIST 360 community to stay connected with the practitioners and federal leaders actively solving these problems. Our upcoming events are where policy meets execution and where the real conversations happen. Ready to talk about your agency’s data readiness? Reach out to our team and let’s map the gap between where your data environment is today and where your AI mission requires it to be. The window for early mover advantage is open. It will not stay open forever.