A new AI model replaces months of simulation with near-instant predictions, changing how spacecraft operations are prepared
Updated
April 24, 2026 10:53 AM

Northrop Grumman's Stargazer serves as the mother ship for the Pegasus, an air-launched orbital rocket. PHOTO: UNSPLASH
Flexcompute, a startup that builds software to simulate real-world physics, is working with Northrop Grumman to change how space missions are prepared. Together, they have developed an AI-based system that can predict how spacecraft respond during critical manoeuvres such as docking—when one spacecraft moves in and connects with another in orbit. These steps have traditionally taken months of preparation.
At the centre of this work is a long-standing problem in space operations. When a spacecraft fires its thrusters, the exhaust plume interacts with nearby surfaces. These interactions can affect movement, temperature and stability. Because these effects are difficult to test in real conditions, engineers have relied on large volumes of computer simulations to estimate outcomes before a mission. That process is slow and resource-intensive.
The new system replaces much of that workflow with a trained AI model. Instead of running millions of simulations, the model learns patterns from physics-based data and can make predictions in seconds. It also provides a measure of uncertainty, which helps engineers understand how reliable those predictions are when making decisions.
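The article does not detail how the model is built, but the idea of a fast surrogate that also reports its own uncertainty can be sketched with a toy example. Below, an ensemble of small linear fits stands in for the learned physics model; the spread across ensemble members serves as a rough uncertainty estimate. All names and the linear form are illustrative assumptions, not Flexcompute's actual method.

```python
import random
import statistics

def make_surrogate(train, noise=0.05, n_models=8, seed=0):
    """Fit an ensemble of tiny linear surrogates to (x, y) pairs that
    would come from an expensive simulator. Each member sees the data
    with slightly perturbed outputs; disagreement between members is a
    crude stand-in for predictive uncertainty."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Perturb the training outputs so each member fits slightly
        # different data (a simple proxy for model uncertainty).
        sample = [(x, y + rng.gauss(0, noise)) for x, y in train]
        n = len(sample)
        mean_x = sum(x for x, _ in sample) / n
        mean_y = sum(y for _, y in sample) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in sample)
        var = sum((x - mean_x) ** 2 for x, _ in sample)
        slope = cov / var
        models.append((slope, mean_y - slope * mean_x))

    def predict(x):
        # Mean of the ensemble is the prediction; the standard
        # deviation across members is the uncertainty estimate.
        preds = [a * x + b for a, b in models]
        return statistics.mean(preds), statistics.stdev(preds)

    return predict

# Train on a handful of "simulator" outputs (here the true relation
# is simply y = 2x), then query the surrogate at a new point.
predict = make_surrogate([(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
mean, sigma = predict(1.5)
```

Once trained, each call to `predict` is a few arithmetic operations, which is the essential trade the article describes: the expensive simulations are paid for once, during training, and every subsequent prediction is near-instant.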
"At Northrop Grumman, we're pioneering physics AI to accelerate design and solve complex simulation and modelling problems like plume impingement—critical for station keeping, rendezvous and space robotics. Simply put: we're pushing the boundaries of advanced space operations", said Fahad Khan, Director of AI Foundations at Northrop Grumman. "Partnering with Flexcompute and NVIDIA, we're accelerating innovation and mission timelines to deliver superior space capabilities for customers at the speed they need".
The system is built using technology from NVIDIA, which provides the computing framework behind the model. Flexcompute has adapted it to handle the specific challenges of spaceflight, including how gases expand and interact in a vacuum. The result is a tool that can simulate complex scenarios much faster while maintaining the level of accuracy needed for mission planning.
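For context on why vacuum expansion is hard to approximate, far-field thruster plumes are often described with simple analytic profiles in which density falls off with the square of distance and with angle off the plume axis. The sketch below uses a Simons-style cosine-power profile; every parameter value here is illustrative, not calibrated to any real thruster.

```python
import math

def plume_density(r, theta, rho_ref=1.0, r_ref=1.0,
                  theta_max=math.pi / 2, s=2.0):
    """Toy far-field density of a thruster plume expanding into vacuum.

    Density decays as 1/r^2 radially and as a cosine power of the
    normalized angle off the plume axis; beyond the limiting angle
    theta_max the density is taken to be zero. rho_ref is the density
    at (r_ref, theta=0). All values are illustrative assumptions.
    """
    if theta >= theta_max:
        return 0.0
    angular = math.cos(math.pi / 2 * theta / theta_max) ** s
    return rho_ref * (r_ref / r) ** 2 * angular

# On-axis density drops fourfold when distance doubles; moving
# off-axis at fixed distance reduces it further.
on_axis = plume_density(2.0, 0.0)
off_axis = plume_density(2.0, 0.5)
```

Real plume-impingement analysis must also capture how that expanding gas loads and heats nearby surfaces, which is why engineers have historically leaned on large simulation campaigns rather than closed-form profiles like this one.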
By shortening preparation time, the model changes how engineers approach spacecraft design and operations. Faster predictions mean teams can test more scenarios and adjust plans more quickly. Better-optimized manoeuvres can also improve fuel efficiency and extend the lifespan of spacecraft.
"Northrop Grumman's confidence reflects what sets Flexcompute apart", said Vera Yang, President and Co-Founder of Flexcompute. "We are able to take the most accurate and scalable physics foundations and evolve them into highly trained, customized Physics AI solutions that engineers can rely on. This work shows how we are transforming the role of simulation, not just speeding it up, but expanding what engineers can confidently solve and how quickly they can act".
The collaboration points to a broader shift in how engineering problems are being handled. Instead of relying only on detailed simulations that take time to run, companies are beginning to use AI systems that can approximate those results quickly while still reflecting the underlying physics.
"The industry's most ambitious space missions now demand a level of speed and precision that traditional engineering cycles can no longer sustain", said Tim Costa, vice president and general manager of computational engineering at NVIDIA. "By integrating NVIDIA PhysicsNeMo, Northrop Grumman and Flexcompute are transforming complex simulations like plume impingement from days of compute into seconds of insight, drastically accelerating the path from mission concept to orbit".
What emerges from this work is a shift in how missions are prepared. When prediction cycles move from months to seconds, testing and decision-making can happen faster. For space operations, where timing and precision are closely linked, that change could reshape how systems are built and run.
Keep Reading
Why investors are backing Applied Brain Research’s on-device voice AI approach.
Updated
January 28, 2026 5:53 PM

Plastic model of a human's brain. PHOTO: UNSPLASH
Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.
The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.
ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.
ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.
Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.
This specialization is particularly relevant as voice interfaces spread across emerging products. Many edge devices, such as wearables or mobile robots, cannot support traditional voice AI systems without compromising battery life or responsiveness. The TSP1 addresses this limitation by running those workloads at far lower power: according to the company, full speech-to-text and text-to-speech can run at under 30 milliwatts, roughly 10 to 100 times less than many existing alternatives. That level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been a limiting factor.
That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.
For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.
With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.