A new AI model replaces months of simulation with near-instant predictions, changing how spacecraft operations are prepared
Updated
April 24, 2026 10:53 AM

Northrop Grumman's Stargazer serves as the mother ship for the Pegasus, an air-launched orbital rocket. PHOTO: UNSPLASH
Flexcompute, a startup that builds software to simulate real-world physics, is working with Northrop Grumman to change how space missions are prepared. Together, they have developed an AI-based system that can predict how spacecraft respond during critical manoeuvres such as docking, when one spacecraft moves in and connects with another in orbit. Preparing for such manoeuvres has traditionally taken months.
At the centre of this work is a long-standing problem in space operations. When a spacecraft fires its thrusters, the exhaust plume interacts with nearby surfaces. These interactions can affect movement, temperature and stability. Because these effects are difficult to test in real conditions, engineers have relied on large volumes of computer simulations to estimate outcomes before a mission. That process is slow and resource-intensive.
The new system replaces much of that workflow with a trained AI model. Instead of running millions of simulations, the model learns patterns from physics-based data and can make predictions in seconds. It also provides a measure of uncertainty, which helps engineers understand how reliable those predictions are when making decisions.
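Neither company has detailed the model's architecture publicly, but one common way to build such a surrogate with uncertainty estimates is a deep ensemble: several independently trained networks whose mean prediction is the answer and whose spread serves as the confidence measure. The sketch below is illustrative only; every name, shape and hyperparameter is an assumption, and the real system is trained on physics simulation data rather than the toy targets used here.

```python
# Illustrative sketch only: a deep-ensemble surrogate that maps thruster/geometry
# inputs to a predicted plume load, with prediction spread as an uncertainty proxy.
# The actual Flexcompute/Northrop Grumman model is not public; names are hypothetical.
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Small MLP mapping simulation inputs (e.g. thrust level, nozzle angle,
    surface distance) to a scalar output such as impingement pressure."""
    def __init__(self, n_in=3, n_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_ensemble(x, y, n_models=5, epochs=200):
    """Train independently initialized models on the same physics-based data."""
    models = []
    for _ in range(n_models):
        m = Surrogate(n_in=x.shape[1])
        opt = torch.optim.Adam(m.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(m(x), y)
            loss.backward()
            opt.step()
        models.append(m)
    return models

@torch.no_grad()
def predict(models, x):
    """Mean prediction plus ensemble spread (a simple uncertainty measure)."""
    preds = torch.stack([m(x) for m in models])  # (n_models, n_points, 1)
    return preds.mean(dim=0), preds.std(dim=0)

# Usage with synthetic stand-in data (a real pipeline would use CFD/DSMC outputs):
x = torch.rand(256, 3)
y = x[:, :1] * 2.0 + 0.1 * torch.randn(256, 1)  # toy target
models = train_ensemble(x, y)
mean, sigma = predict(models, torch.rand(4, 3))
```

The key property is the second return value: where the ensemble members disagree, the prediction is flagged as less reliable, which is the kind of uncertainty signal the article describes engineers using.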
"At Northrop Grumman, we're pioneering physics AI to accelerate design and solve complex simulation and modelling problems like plume impingement—critical for station keeping, rendezvous and space robotics. Simply put: we're pushing the boundaries of advanced space operations", said Fahad Khan, Director of AI Foundations at Northrop Grumman. "Partnering with Flexcompute and NVIDIA, we're accelerating innovation and mission timelines to deliver superior space capabilities for customers at the speed they need".
The system is built using technology from NVIDIA, which provides the computing framework behind the model. Flexcompute has adapted it to handle the specific challenges of spaceflight, including how gases expand and interact in a vacuum. The result is a tool that can simulate complex scenarios much faster while maintaining the level of accuracy needed for mission planning.
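For context on the physics being approximated: in vacuum, a thruster plume expands freely, and simplified engineering models describe its far-field density as falling off with the square of distance and with a cosine law in angle off the plume axis. The snippet below sketches such a textbook-style model and a crude free-molecular estimate of the pressure it exerts on a nearby panel; all constants are illustrative, and this is not the companies' solver.

```python
# Toy far-field plume model, a textbook-style simplification rather than the
# production solver. All constants are illustrative.
import numpy as np

GAMMA = 1.4                   # ratio of specific heats of the exhaust (assumed)
RHO_REF = 1e-3                # axis density at the reference distance, kg/m^3
R_REF = 1.0                   # reference distance from the nozzle exit, m
V_EXHAUST = 3000.0            # exhaust velocity, m/s (typical order of magnitude)
THETA_MAX = np.radians(120)   # limiting plume half-angle (illustrative)

def plume_density(r, theta):
    """Inverse-square decay with an angular cosine law (far-field approximation)."""
    c = np.clip(np.cos(np.pi * theta / (2 * THETA_MAX)), 0.0, None)
    return RHO_REF * (R_REF / r) ** 2 * c ** (2 / (GAMMA - 1))

def impingement_pressure(r, theta, incidence):
    """Dynamic pressure projected onto the panel normal (free-molecular estimate)."""
    return plume_density(r, theta) * V_EXHAUST**2 * np.cos(incidence) ** 2

# Pressure on a panel 5 m away, 20 degrees off the plume axis, struck at 30 degrees:
p = impingement_pressure(5.0, np.radians(20), np.radians(30))
print(f"estimated impingement pressure: {p:.1f} Pa")
```

A production plume-impingement analysis would replace this closed-form approximation with detailed kinetic or CFD simulation; it is exactly that expensive step the trained model is reported to stand in for.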
By shortening preparation time, the model changes how engineers approach spacecraft design and operations. Faster predictions mean teams can test more scenarios and adjust plans more quickly. And because plume effects influence how much propellant a manoeuvre consumes, better predictions can also help conserve fuel and extend a spacecraft's operational life.
"Northrop Grumman's confidence reflects what sets Flexcompute apart", said Vera Yang, President and Co-Founder of Flexcompute. "We are able to take the most accurate and scalable physics foundations and evolve them into highly trained, customized Physics AI solutions that engineers can rely on. This work shows how we are transforming the role of simulation, not just speeding it up, but expanding what engineers can confidently solve and how quickly they can act".
The collaboration points to a broader shift in how engineering problems are being handled. Instead of relying only on detailed simulations that take time to run, companies are beginning to use AI systems that can approximate those results quickly while still reflecting the underlying physics.
"The industry's most ambitious space missions now demand a level of speed and precision that traditional engineering cycles can no longer sustain", said Tim Costa, vice president and general manager of computational engineering at NVIDIA. "By integrating NVIDIA PhysicsNeMo, Northrop Grumman and Flexcompute are transforming complex simulations like plume impingement from days of compute into seconds of insight, drastically accelerating the path from mission concept to orbit".
What emerges from this work is a shift in how missions are prepared. When prediction cycles move from months to seconds, testing and decision-making can happen faster. For space operations, where timing and precision are closely linked, that change could reshape how systems are built and run.
At under US$1,000, Hypernova isn’t just eyewear—it’s Meta’s push to make AR feel ordinary.
Updated
January 8, 2026 6:34 PM

Closeup of the Ray-Ban logo and the built-in ultra-wide 12 MP camera on a pair of new Ray-Ban Meta Wayfarer smart glasses. PHOTO: ADOBE STOCK
Meta is preparing to launch its next big wearable: the Hypernova smart glasses. Unlike earlier experiments such as the Ray-Ban Stories, the new glasses promise more advanced features at a price point under US$1,000. With a launch set for September 17, 2025 at Meta's annual Connect conference, the Hypernova is already drawing attention for blending design, technology and accessibility.
In this article, let’s take a closer look at Hypernova’s design, features, pricing and the challenges Meta faces as it tries to bring smart glasses into everyday life.
Meta’s earlier Ray-Ban glasses offered cameras and audio but no display. Hypernova changes that: The glasses will ship with a built-in micro-display, giving wearers quick access to maps, messages, notifications and even Meta’s AI assistant. It’s a step toward everyday AR that feels useful and natural, not experimental.
Perhaps most importantly, the price makes them attainable. While early estimates placed the cost above US$1,000, Meta has committed to a launch price of around US$800. That’s still premium, but it moves AR smart glasses into reach for more consumers.
Hypernova weighs about 70 grams, roughly 20 grams heavier than the Ray-Ban Meta models. The extra weight likely comes from new components such as the micro-display and additional sensors.
To keep the glasses stylish, Meta continues its partnership with EssilorLuxottica, the company behind Ray-Ban and Prada eyewear. Thicker frames—especially Prada’s designs—help hide hardware such as chips, microphones and batteries without making the glasses look oversized.
The glasses stick close to the classic Ray-Ban silhouette but feature slightly bulkier arms. On the left side, a touch-sensitive bar lets users control functions with taps and swipes. For example, a two-finger tap can trigger a photo or start video recording.
Hypernova introduces something the earlier Ray-Ban glasses never had: a display built right into the lens. In the bottom-right corner of the right lens, a small micro-screen uses waveguide optics to project a digital overlay with about a 20° field of view. This means you can glance at turn-by-turn directions, check a notification or quickly consult Meta’s AI assistant without pulling out your phone. It’s discreet, practical and a major step up from the older models, which were limited to capturing photos and videos, handling calls and playing music via speakers.
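For a sense of scale, basic trigonometry translates that 20° field of view into an apparent overlay size at a given focal distance. Meta has not stated the virtual image distance, so the 2 m used below is purely an assumption.

```python
# Quick geometry: the apparent width of a 20-degree field of view at a given
# virtual image distance. Meta has not stated the focal distance; 2 m is assumed.
import math

def apparent_width(fov_deg: float, distance_m: float) -> float:
    """Width subtended by `fov_deg` at `distance_m`, from basic trigonometry."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

print(f"{apparent_width(20, 2.0):.2f} m")  # about 0.71 m wide at 2 m
```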
Alongside the glasses comes the Ceres wristband, a companion device powered by electromyography (EMG). The band picks up the tiny electrical signals in your wrist and fingers, translating them into commands. A pinch might let you select something, a wrist flick could scroll a page, and a swipe could move between screens. The idea is to avoid clunky buttons or having to talk to your glasses in public. Meta has also been experimenting with handwriting recognition through the band, though it’s not clear if that feature will be ready in time for launch.
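Meta has not published how Ceres decodes these signals, but a typical surface-EMG pipeline filters the raw signal, extracts simple features over short windows, classifies each window, and maps the label to a UI command. The sketch below follows that pattern; the gestures, sample rate and classifier are all illustrative stand-ins rather than anything confirmed about the product.

```python
# Illustrative EMG gesture pipeline, not Meta's implementation: band-pass filter
# the raw wrist signal, compute a root-mean-square feature per channel over a
# short window, classify the window, and map the label to a UI command.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000  # sample rate in Hz (typical for surface EMG; assumed here)
SOS = butter(4, [20, 450], btype="bandpass", fs=FS, output="sos")

COMMANDS = {"pinch": "select", "wrist_flick": "scroll", "swipe": "next_screen"}

def features(window):
    """RMS per channel over a ~200 ms window of shape (channels, samples)."""
    filtered = sosfiltfilt(SOS, window, axis=1)
    return np.sqrt(np.mean(filtered**2, axis=1))

def classify(feats, centroids):
    """Nearest-centroid classifier; centroids come from a calibration session."""
    labels = list(centroids)
    dists = [np.linalg.norm(feats - centroids[label]) for label in labels]
    return labels[int(np.argmin(dists))]

def handle(window, centroids):
    gesture = classify(features(window), centroids)
    return COMMANDS.get(gesture, "ignore")

# Toy usage: fake calibration centroids and a random 200 ms, 8-channel window.
rng = np.random.default_rng(0)
centroids = {g: rng.random(8) for g in COMMANDS}
window = rng.standard_normal((8, 200))
print(handle(window, centroids))
```

A shipping system would rely on per-user calibration and a far more robust model, but the mapping from muscle signal to command follows this general shape, which is also why fit issues like a loose band matter so much.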
Meta doesn’t just want Hypernova to be useful—it wants it to be fun. Code found in leaked firmware revealed a small game called Hypertrail. It looks to borrow ideas from the 1981 arcade shooter Galaga, letting wearers play a simple, retro-inspired game right through their glasses. It’s not the main attraction, but it shows Meta is trying to make Hypernova feel more like a playful everyday gadget rather than just a piece of serious tech.
Hypernova runs on a customized version of Android and pairs with smartphones through the Meta View app. Out of the box, it should support the basics: calls, music and message notifications. Leaks suggest several apps will come preinstalled, including Camera, Gallery, Maps, WhatsApp, Messenger and Meta AI. A Qualcomm processor powers the whole setup, helping it run smoothly while keeping energy demands reasonable.
Meta is also trying to bring in outside developers. In August 2025, CNBC reported that the company invited third-party developers—especially in generative AI—to build experimental apps for Hypernova and the Ceres wristband. The Meta Connect 2025 agenda even highlights sessions on a new smart glasses SDK and toolkit. The push shows Meta’s interest in making Hypernova more than just a device; it wants a broader platform with apps that go beyond its own first-party software.
During development, Hypernova was rumored to cost as much as US$1,400. By pricing it around US$800, Meta signals that it wants adoption more than profit. The company is keeping production limited (around 150,000 units), showing it sees this as a market test rather than a mass rollout. Still, the sub-US$1,000 price tag makes advanced AR far more accessible than before.
Despite its promise, Hypernova may still face hurdles. The Ceres wristband can struggle if worn loosely, and some testers have reported issues based on which arm it’s worn on or even when wearing long sleeves. In short, getting EMG input right for everyone will be critical.
Privacy is another major concern. In past experiments, researchers hacked Ray-Ban Meta glasses to run facial recognition, instantly identifying strangers and pulling personal info. Meta has added guidelines, like a recording indicator light, but critics argue these measures are too easy to ignore. Moreover, data captured by smart glasses can feed into AI training, raising questions about consent and surveillance.
The Meta Hypernova smart glasses mark a turning point in wearable tech. They’re lighter and more stylish than bulky AR headsets, while offering real-world features like navigation, messaging and hands-free control. At under US$1,000, they aim to make AR glasses more than a luxury gadget—they’re a step toward everyday use.
Whether Hypernova succeeds will depend on how well it balances style, usability and privacy. But one thing is clear: Meta is betting that always-on, glanceable AR can move from science fiction to daily life.