Sonilo and Shutterstock are betting that licensed training data could define the future of AI music.
Updated
May 13, 2026 3:39 PM

A human operating a digital turntable. PHOTO: UNSPLASH
As copyright disputes continue to grow around AI-generated music, Sonilo, the world’s first professionally licensed video-to-music AI platform, has partnered with Shutterstock to train its models on licensed music catalogs.
The agreement gives Sonilo access to Shutterstock’s music library for AI model training. According to the companies, it is Shutterstock’s first partnership with a video-to-music AI platform, and the timing is significant. AI music companies are facing growing pressure over how their systems are trained. Artists and record labels have increasingly challenged the use of copyrighted music in AI datasets, especially when licensing agreements or compensation structures are unclear.
That tension has created a divide across the industry. Some companies have continued building models around scraped or disputed data. Others are trying to position licensing as part of the product itself.
Sonilo falls into the second group. The company says its models are trained only on licensed material where artists and rights holders have agreed to participate and receive compensation. The Shutterstock partnership strengthens that position while giving Sonilo access to a larger pool of commercially cleared music.
The collaboration also points to a broader change happening inside generative AI. As AI tools move into commercial production, companies are being pushed to show not just what their models can generate, but also where their training data comes from.
Sonilo’s platform is built around video rather than text prompts. The system analyzes footage directly, studies its pacing and emotional tone, then generates an original soundtrack to match the content. The company says this removes the need for manual music searches, syncing, or editing workflows. The generated tracks are cleared for commercial use across social media, branded content, and broadcast production.
Shawn Song, CEO of Sonilo, said: "Music has always been the last unsolved layer of video creation, and video has always carried its own soundtrack. We built Sonilo to hear it and compose from it, without a single text prompt. But how we build matters as much as what we build. While others have chosen to take artists' work without permission and charge creators for the privilege, we've chosen a different path—one where artists are compensated from day one. Partnering with Shutterstock reflects that standard. Every model we train meets a bar the music industry can stand behind, because the most innovative AI platforms don't have to come at the expense of the artists who make all of these possible."
For Shutterstock, the deal expands the company’s growing role in generative AI infrastructure. The company has increasingly focused on licensing content for AI systems across images, video and music.
Jessica April, Vice President of Data Licensing & AI Services at Shutterstock, said: "AI innovation depends on access to high-quality, rights-cleared content and trusted licensing partnerships. Sonilo's approach reflects the growing demand for responsibly sourced training data and commercially safe AI workflows. We're pleased to support companies building generative AI products with licensed content and scalable data solutions that help accelerate innovation while respecting creators and rights holders."
The partnership also comes as Sonilo expands into creator and developer ecosystems. Earlier this month, the company launched as a native node inside ComfyUI, an open-source AI workflow platform used by millions of creators. Sonilo also offers API access for integration into creator tools, video platforms, game engines and other AI systems.
As AI-generated music becomes more common across advertising, creator platforms and digital media, the industry’s focus is shifting beyond generation alone. Questions around licensing, ownership and compensation are increasingly shaping how AI music companies position themselves and build trust with creators.
A rare policy consensus emerges as AI’s impact moves beyond innovation into governance and societal risk
Updated
May 5, 2026 5:42 PM

A mechanical hand reaching for a human hand. PHOTO: UNSPLASH
A new survey from Povaddo, a policy research firm, suggests that concern about artificial intelligence is no longer limited to industry or academia. It is now firmly present within the policy community.
The survey draws on responses from 301 public policy professionals across the United States and Europe, including lawmakers, staffers and analysts involved in shaping and evaluating public policy. A majority of respondents—61%—say governments are falling short in addressing the negative impacts of AI.
There is also broad agreement that regulation needs to increase. In the United States, 92% of respondents support stronger AI regulation, compared to 70% in Europe. At a time when consensus is often difficult, the findings point to a shared view across policy circles that current frameworks are not keeping pace with technological development.
Differences emerge when looking at how AI is affecting national contexts. In the U.S., 57% of policy experts believe AI is already harming the labor market. In Europe, 34% say the same. U.S. respondents are also more likely to see AI as a greater threat to jobs than immigration, with 63% holding that view compared to 47% in Europe.
On misinformation, responses are closely aligned. A large majority of policy experts in both regions expect an AI-driven misinformation crisis within the next one to two years: 87% in the U.S. and 82% in Europe. Many also believe that AI-generated or AI-amplified misinformation could affect elections and public health information.
Some respondents frame the risks in more fundamental terms. In the United States, 41% of policy experts say AI poses an existential threat to humanity. In Europe, 29% share that view. U.S. respondents are also more likely to believe that advances in AI could harm global security and stability.
The findings come as policymakers begin to respond more actively. In the U.S., Senators Josh Hawley, Richard Blumenthal and Mark Warner have introduced bipartisan legislation focused on AI accountability, including measures aimed at protecting workers and children.
In Europe, the introduction of the EU AI Act marks a more advanced regulatory approach. The framework sets out rules based on levels of risk and is widely seen as the first comprehensive attempt to govern AI at scale.
William Stewart, President and Founder of Povaddo, said: "What makes these findings so significant is who is saying it. These are the practitioners who work inside the policy process every day, spanning every corner of the policy world from defense to healthcare to finance, not activists or everyday citizens. These findings foreshadow real action. The current path of governments accelerating AI deployment while falling short on governance is not sustainable, and the people who know that best are the ones in this survey. You cannot have nine-in-ten policy insiders demanding more regulation and four-in-ten calling AI an existential threat without that eventually moving the needle in Washington and Brussels in terms of legislative or regulatory action."
Taken together, the survey reflects a shift in how AI is being discussed within policymaking circles. Concern is no longer limited to future risks. It is increasingly tied to current gaps in governance and the pace of deployment.