Sonilo and Shutterstock are betting that licensed training data could define the future of AI music.
Updated
May 13, 2026 3:39 PM

A human operating a digital turntable. PHOTO: UNSPLASH
As copyright disputes around AI-generated music continue to grow, Sonilo, which describes itself as the world’s first professionally licensed video-to-music AI platform, has partnered with Shutterstock to train its models on licensed music catalogs.
The agreement gives Sonilo access to Shutterstock’s music library for AI model training. According to the companies, it is Shutterstock’s first partnership with a video-to-music AI platform, and the timing is significant: AI music companies are facing growing pressure over how their systems are trained. Artists and record labels have increasingly challenged the use of copyrighted music in AI datasets, especially when licensing agreements or compensation structures are unclear.
That tension has created a divide across the industry. Some companies have continued building models around scraped or disputed data. Others are trying to position licensing as part of the product itself.
Sonilo falls into the second group. The company says its models are trained only on licensed material where artists and rights holders have agreed to participate and receive compensation. The Shutterstock partnership strengthens that position while giving Sonilo access to a larger pool of commercially cleared music.
The collaboration also points to a broader change happening inside generative AI. As AI tools move into commercial production, companies are being pushed to show not just what their models can generate, but also where their training data comes from.
Sonilo’s platform is built around video rather than text prompts. The system analyses footage directly, studies pacing and emotional tone, then generates an original soundtrack to match the content. The company says this removes the need for manual music searches, syncing or editing workflows. The generated tracks are cleared for commercial use across social media, branded content and broadcast production.
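The workflow described above, analyse footage, infer pacing and tone, then derive a musical direction, can be sketched in miniature. The code below is a hypothetical illustration of that idea, not Sonilo's actual system: the cut-detection heuristic, the thresholds, and the `MusicBrief` structure are all assumptions invented for this example.

```python
from dataclasses import dataclass

@dataclass
class MusicBrief:
    """A minimal musical direction derived from a video (hypothetical)."""
    tempo_bpm: int
    mood: str

def detect_cuts(frame_luma, threshold=40):
    """Indices where per-frame brightness jumps sharply: a crude proxy for scene cuts."""
    return [i for i in range(1, len(frame_luma))
            if abs(frame_luma[i] - frame_luma[i - 1]) > threshold]

def brief_from_video(frame_luma, fps=24):
    """Map pacing (cut rate) to a tempo and a coarse mood label."""
    duration_s = len(frame_luma) / fps
    cuts = detect_cuts(frame_luma)
    cuts_per_min = 60 * len(cuts) / duration_s if duration_s else 0
    # Faster cutting suggests a faster, more energetic soundtrack.
    tempo = int(min(160, 80 + cuts_per_min))
    mood = "energetic" if cuts_per_min > 20 else "calm"
    return MusicBrief(tempo_bpm=tempo, mood=mood)
```

A real video-to-music model would of course learn these mappings from data rather than hard-code them; the sketch only shows the shape of the pipeline, from raw frames to a conditioning signal for music generation.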
Shawn Song, CEO of Sonilo, said: "Music has always been the last unsolved layer of video creation, and video has always carried its own soundtrack. We built Sonilo to hear it and compose from it, without a single text prompt. But how we build matters as much as what we build. While others have chosen to take artists' work without permission and charge creators for the privilege, we've chosen a different path—one where artists are compensated from day one. Partnering with Shutterstock reflects that standard. Every model we train meets a bar the music industry can stand behind, because the most innovative AI platforms don't have to come at the expense of the artists who make all of these possible."
For Shutterstock, the deal extends the company’s growing role in generative AI infrastructure. The company has increasingly focused on licensing content for AI systems across images, video and music.
Jessica April, Vice President of Data Licensing & AI Services at Shutterstock, said: "AI innovation depends on access to high-quality, rights-cleared content and trusted licensing partnerships. Sonilo's approach reflects the growing demand for responsibly sourced training data and commercially safe AI workflows. We're pleased to support companies building generative AI products with licensed content and scalable data solutions that help accelerate innovation while respecting creators and rights holders."
The partnership also comes as Sonilo expands into creator and developer ecosystems. Earlier this month, the company launched as a native node inside ComfyUI, an open-source AI workflow platform used by millions of creators. Sonilo also offers API access for integration into creator tools, video platforms, game engines and other AI systems.
As AI-generated music becomes more common across advertising, creator platforms and digital media, the industry’s focus is shifting beyond generation alone. Questions around licensing, ownership and compensation are increasingly shaping how AI music companies position themselves and build trust with creators.
A new AI model replaces months of simulation with near-instant predictions, changing how spacecraft operations are prepared
Updated
April 24, 2026 10:53 AM

Northrop Grumman’s Stargazer serves as the mother ship for the Pegasus, an air-launched orbital rocket. PHOTO: UNSPLASH
Flexcompute, a startup that builds software to simulate real-world physics, is working with Northrop Grumman to change how space missions are prepared. Together, they have developed an AI-based system that can predict how spacecraft respond during critical manoeuvres such as docking—when one spacecraft moves in and connects with another in orbit. These steps have traditionally taken months of preparation.
At the centre of this work is a long-standing problem in space operations. When a spacecraft fires its thrusters, the exhaust plume interacts with nearby surfaces. These interactions can affect movement, temperature and stability. Because these effects are difficult to test in real conditions, engineers have relied on large volumes of computer simulations to estimate outcomes before a mission. That process is slow and resource-intensive.
The new system replaces much of that workflow with a trained AI model. Instead of running millions of simulations, the model learns patterns from physics-based data and can make predictions in seconds. It also provides a measure of uncertainty, which helps engineers understand how reliable those predictions are when making decisions.
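The general technique described here, replacing expensive simulation runs with a fast learned surrogate that also reports how confident it is, can be illustrated with a toy bootstrap ensemble: several models are fit on resampled data, and the spread of their predictions serves as the uncertainty estimate. This is a generic sketch of ensemble-based uncertainty quantification, not Flexcompute's or NVIDIA's implementation; every name and parameter below is invented for illustration.

```python
import random
import statistics

def fit_linear(xs, ys):
    """Least-squares slope and intercept for one ensemble member."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def ensemble_predict(xs, ys, x_new, members=20, seed=0):
    """Bootstrap ensemble: mean prediction plus spread as an uncertainty estimate."""
    rng = random.Random(seed)
    preds = []
    for _ in range(members):
        # Resample the training data with replacement for each member.
        idx = [rng.randrange(len(xs)) for _ in xs]
        a, b = fit_linear([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x_new)
    return statistics.mean(preds), statistics.stdev(preds)
```

On clean data the members agree and the reported uncertainty is near zero; on noisy or sparse data the spread widens, which is exactly the signal engineers would use to decide whether a surrogate's prediction can be trusted for mission planning.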
"At Northrop Grumman, we're pioneering physics AI to accelerate design and solve complex simulation and modelling problems like plume impingement—critical for station keeping, rendezvous and space robotics. Simply put: we're pushing the boundaries of advanced space operations", said Fahad Khan, Director of AI Foundations at Northrop Grumman. "Partnering with Flexcompute and NVIDIA, we're accelerating innovation and mission timelines to deliver superior space capabilities for customers at the speed they need".
The system is built using technology from NVIDIA, which provides the computing framework behind the model. Flexcompute has adapted it to handle the specific challenges of spaceflight, including how gases expand and interact in a vacuum. The result is a tool that can simulate complex scenarios much faster while maintaining the level of accuracy needed for mission planning.
By shortening preparation time, the model changes how engineers approach spacecraft design and operations. Faster predictions mean teams can test more scenarios and adjust plans more quickly. It also helps improve fuel use and extend the lifespan of spacecraft.
"Northrop Grumman's confidence reflects what sets Flexcompute apart", said Vera Yang, President and Co-Founder of Flexcompute. "We are able to take the most accurate and scalable physics foundations and evolve them into highly trained, customized Physics AI solutions that engineers can rely on. This work shows how we are transforming the role of simulation, not just speeding it up, but expanding what engineers can confidently solve and how quickly they can act".
The collaboration points to a broader shift in how engineering problems are being handled. Instead of relying only on detailed simulations that take time to run, companies are beginning to use AI systems that can approximate those results quickly while still reflecting the underlying physics.
"The industry's most ambitious space missions now demand a level of speed and precision that traditional engineering cycles can no longer sustain", said Tim Costa, vice president and general manager of computational engineering at NVIDIA. "By integrating NVIDIA PhysicsNeMo, Northrop Grumman and Flexcompute are transforming complex simulations like plume impingement from days of compute into seconds of insight, drastically accelerating the path from mission concept to orbit".
What emerges from this work is a shift in how missions are prepared. When prediction cycles move from months to seconds, testing and decision-making can happen faster. For space operations, where timing and precision are closely linked, that change could reshape how systems are built and run.