Sonilo and Shutterstock are betting that licensed training data could define the future of AI music.
Updated
May 13, 2026 3:39 PM

A human operating a digital turntable. PHOTO: UNSPLASH
As copyright disputes continue to grow around AI-generated music, Sonilo, the world’s first professionally licensed video-to-music AI platform, has partnered with Shutterstock to train its models on licensed music catalogs.
The agreement gives Sonilo access to Shutterstock’s music library for AI model training. According to the companies, it is Shutterstock’s first partnership with a video-to-music AI platform, and the timing is significant. AI music companies are facing growing pressure over how their systems are trained. Artists and record labels have increasingly challenged the use of copyrighted music in AI datasets, especially when licensing agreements or compensation structures are unclear.
That tension has created a divide across the industry. Some companies have continued building models around scraped or disputed data. Others are trying to position licensing as part of the product itself.
Sonilo falls into the second group. The company says its models are trained only on licensed material where artists and rights holders have agreed to participate and receive compensation. The Shutterstock partnership strengthens that position while giving Sonilo access to a larger pool of commercially cleared music.
The collaboration also points to a broader change happening inside generative AI. As AI tools move into commercial production, companies are being pushed to show not just what their models can generate, but also where their training data comes from.
Sonilo’s platform is built around video rather than text prompts. The system analyzes footage directly, studies pacing and emotional tone, then generates an original soundtrack to match the content. The company says this removes the need for manual music searches, syncing or editing workflows. The generated tracks are cleared for commercial use across social media, branded content and broadcast production.
Shawn Song, CEO of Sonilo, said: "Music has always been the last unsolved layer of video creation, and video has always carried its own soundtrack. We built Sonilo to hear it and compose from it, without a single text prompt. But how we build matters as much as what we build. While others have chosen to take artists' work without permission and charge creators for the privilege, we've chosen a different path—one where artists are compensated from day one. Partnering with Shutterstock reflects that standard. Every model we train meets a bar the music industry can stand behind, because the most innovative AI platforms don't have to come at the expense of the artists who make all of these possible."
For Shutterstock, the deal expands the company’s growing role in generative AI infrastructure. The company has increasingly focused on licensing content for AI systems across images, video and music.
Jessica April, Vice President of Data Licensing & AI Services at Shutterstock, said: "AI innovation depends on access to high-quality, rights-cleared content and trusted licensing partnerships. Sonilo's approach reflects the growing demand for responsibly sourced training data and commercially safe AI workflows. We're pleased to support companies building generative AI products with licensed content and scalable data solutions that help accelerate innovation while respecting creators and rights holders."
The partnership also comes as Sonilo expands into creator and developer ecosystems. Earlier this month, the company launched as a native node inside ComfyUI, an open-source AI workflow platform used by millions of creators. Sonilo also offers API access for integration into creator tools, video platforms, game engines and other AI systems.
As AI-generated music becomes more common across advertising, creator platforms and digital media, the industry’s focus is shifting beyond generation alone. Questions around licensing, ownership and compensation are increasingly shaping how AI music companies position themselves and build trust with creators.
Keep Reading
A wearable ring, conversational AI and US$23M in funding. Sandbar wants to rethink how we interact with technology
Updated
April 1, 2026 8:55 AM

Sandbar's Stream ring. PHOTO: SANDBAR
Sandbar, a New York–based interface startup, has raised US$23 million in Series A funding to develop a wearable device that lets people interact with artificial intelligence via voice rather than screens.
Adjacent and Kindred Ventures, both venture firms focused on early-stage technology startups, led the round. The investment brings Sandbar’s total funding to US$36 million. Earlier backing included a US$10 million seed round led by True Ventures, a venture capital firm, as well as a US$3 million pre-seed round supported by Upfront Ventures, a venture firm, and Betaworks, a startup studio and investment firm.
Sandbar was founded by Mina Fahmi and Kirak Hong, who previously worked together at CTRL-labs, a neural interface startup acquired by Meta in 2019. Their earlier work explored how computers could respond more directly to human intent — an idea that continues to shape Sandbar’s approach to AI interfaces.
The new funding will help the company expand its team across machine learning, interaction design and software engineering as it prepares to launch its first product. That product, called Stream, combines a wearable ring with a conversational AI interface. The system allows users to speak to an AI assistant without unlocking a phone or opening an app.
The concept is simple. Instead of typing into a screen, users press a button on the ring and talk. The system can capture notes, organize ideas, retrieve information from the web or trigger actions through connected applications.
The ring includes a microphone, a touchpad and subtle haptic feedback. These elements allow the device to respond through gentle vibrations rather than visual alerts. According to the company, the ring only listens when the user presses the button — a design meant to address common concerns around always-on microphones.
That design reflects a larger shift Sandbar believes is underway. As AI assistants become more capable, many startups are experimenting with new ways to interact with them. The focus is moving away from screens and keyboards toward interfaces that feel more natural and immediate.
Stream uses multiple AI models working together to process requests, search the web and structure information in real time. The company says users remain in control of their data and can choose whether to share information with other apps.
Sandbar is also developing a feature called Inner Voice, which responds using a voice customized to the user. The feature will debut during a closed beta planned for this spring, giving the company time to refine how the software behaves in everyday use.
The startup currently employs a team of 15 people. Many have worked on well-known consumer devices including the iPhone, Fitbit, Kindle and Vision Pro. Recent hires include Sam Bowen, formerly of Amazon and Fitbit, who joined as vice president of hardware, and Brooke Travis, previously at Equinox, Dior and Gap, who now leads marketing.
Sandbar plans to begin shipping Stream in summer 2026 after completing early testing. As artificial intelligence tools become more integrated into daily life, the company is betting that the next shift in computing will not come from another app — but from new ways for people to interact with AI itself.