Sonilo and Shutterstock are betting that licensed training data could define the future of AI music.
As copyright disputes around AI-generated music continue to grow, Sonilo, which describes itself as the first professionally licensed video-to-music AI platform, has partnered with Shutterstock to train its models on licensed music catalogs.
The agreement gives Sonilo access to Shutterstock’s music library for AI model training. According to the companies, it is Shutterstock’s first partnership with a video-to-music AI platform, and the timing is significant. AI music companies are facing growing pressure over how their systems are trained. Artists and record labels have increasingly challenged the use of copyrighted music in AI datasets, especially when licensing agreements or compensation structures are unclear.
That tension has created a divide across the industry. Some companies have continued building models around scraped or disputed data. Others are trying to position licensing as part of the product itself.
Sonilo falls into the second group. The company says its models are trained only on licensed material where artists and rights holders have agreed to participate and receive compensation. The Shutterstock partnership strengthens that position while giving Sonilo access to a larger pool of commercially cleared music.
The collaboration also points to a broader change happening inside generative AI. As AI tools move into commercial production, companies are being pushed to show not just what their models can generate, but also where their training data comes from.
Sonilo’s platform is built around video rather than text prompts. The system analyses footage directly, studies pacing and emotional tone, then generates an original soundtrack to match the content. The company says this removes the need for manual music searches, syncing or editing workflows. The generated tracks are cleared for commercial use across social media, branded content and broadcast production.
Shawn Song, CEO of Sonilo, said: "Music has always been the last unsolved layer of video creation, and video has always carried its own soundtrack. We built Sonilo to hear it and compose from it, without a single text prompt. But how we build matters as much as what we build. While others have chosen to take artists' work without permission and charge creators for the privilege, we've chosen a different path—one where artists are compensated from day one. Partnering with Shutterstock reflects that standard. Every model we train meets a bar the music industry can stand behind, because the most innovative AI platforms don't have to come at the expense of the artists who make all of these possible."
For Shutterstock, the deal expands the company’s growing role in generative AI infrastructure. The company has increasingly focused on licensing content for AI systems across images, video and music.
Jessica April, Vice President of Data Licensing & AI Services at Shutterstock, said: "AI innovation depends on access to high-quality, rights-cleared content and trusted licensing partnerships. Sonilo's approach reflects the growing demand for responsibly sourced training data and commercially safe AI workflows. We're pleased to support companies building generative AI products with licensed content and scalable data solutions that help accelerate innovation while respecting creators and rights holders."
The partnership also comes as Sonilo expands into creator and developer ecosystems. Earlier this month, the company launched as a native node inside ComfyUI, an open-source AI workflow platform used by millions of creators. Sonilo also offers API access for integration into creator tools, video platforms, game engines and other AI systems.
As AI-generated music becomes more common across advertising, creator platforms and digital media, the industry’s focus is shifting beyond generation alone. Questions around licensing, ownership and compensation are increasingly shaping how AI music companies position themselves and build trust with creators.
As workplace knowledge spreads across chats, AI firms are building systems that can structure, retrieve and preserve it over time.
Votee AI, an enterprise AI company headquartered in Hong Kong, has partnered with its Toronto-based research lab Beever AI to launch Beever Atlas. The new platform is designed to turn workplace chats into searchable knowledge that AI systems can retrieve and understand.
The release focuses on a growing issue inside organisations. Much of today’s workplace knowledge now exists inside chat platforms such as Slack, Microsoft Teams, Discord and Telegram. Important discussions, project decisions and technical information often disappear into long message histories that are difficult to search later.
Beever AI developed the platform to organise those conversations into a structured system for AI assistants. The software connects with Telegram, Discord, Mattermost, Microsoft Teams and Slack, then converts conversations into linked records of people, projects, files and decisions.
The collaboration combines Votee AI’s enterprise infrastructure work with Beever AI’s research around AI memory systems. The companies are releasing two versions of the product. The open-source edition is aimed at individual developers, researchers and creators. The enterprise edition is designed for banks, government agencies and larger organisations with stricter security requirements.
The release also reflects a broader shift happening across the AI industry. Companies are increasingly looking at how AI systems store and retrieve long-term knowledge, rather than relying solely on large context windows or search-based retrieval.
Earlier this year, OpenAI founding member and former director of AI at Tesla Andrej Karpathy discussed the growing need for what he described as “LLM Knowledge Bases.” He argued that AI systems need structured and evolving memory rather than depending only on context windows and vector search.
Beever Atlas approaches that problem through workplace communication. Instead of focusing mainly on uploaded files, the system is designed around conversations that happen daily across team chat platforms. It can also process images, PDFs, voice notes and video files within the same searchable system.
The companies say the software is designed to work directly with AI assistants and coding tools such as Cursor, AWS Kiro and Qwen Code. Integrations for OpenClaw and Hermes Agent are expected later in 2026.
Pak-Sun Ting, Co-Founder and CEO of Votee AI, said: "Hong Kong has always been known for property and finance. Beever Atlas is proof that world-class AI infrastructure can emerge from an HK-headquartered company and be shared openly with the world. Every growing organization faces the same silent liability: conversational knowledge loss. Beever Atlas turns this perishable resource into a compounding organizational asset."
A large part of the enterprise version focuses on privacy and access control. The system mirrors permissions from Slack and Microsoft Teams so users can only retrieve information they are already authorised to access. Permission updates are reflected automatically when access changes inside company systems.
The enterprise edition also includes audit logs, encryption controls and data retention settings for organisations handling sensitive internal data. Companies can run the software entirely inside their own infrastructure using Docker and connect it to their preferred AI models through LiteLLM.
The companies argue that organising information is more useful than simply storing chat archives. Jacky Chan, Co-Founder and CTO of Votee AI, said: "The key technical decision was to treat agent memory as a knowledge engineering problem, not a retrieval problem. Structure beats similarity — a typed graph of who works on what is more useful to an AI than vector search over a Slack archive."
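The distinction Chan draws can be sketched in a few lines: a typed graph answers "who works on what" by following labelled edges, rather than ranking chat messages by textual similarity. The node types, relation names and data below are invented for illustration and are not Beever Atlas's actual schema.

```python
from collections import defaultdict

class TypedGraph:
    """Minimal typed knowledge graph: nodes carry a type, edges carry a relation."""
    def __init__(self):
        self.node_type = {}               # node id -> type label
        self.edges = defaultdict(set)     # (src, relation) -> set of dst ids

    def add_node(self, node_id, node_type):
        self.node_type[node_id] = node_type

    def add_edge(self, src, relation, dst):
        self.edges[(src, relation)].add(dst)
        # Store the inverse too, so reverse questions are also one lookup
        self.edges[(dst, "inv_" + relation)].add(src)

    def query(self, node_id, relation):
        return sorted(self.edges[(node_id, relation)])

# Toy facts of the kind that might be extracted from team chat
g = TypedGraph()
g.add_node("alice", "Person")
g.add_node("atlas-ui", "Project")
g.add_node("use-postgres", "Decision")
g.add_edge("alice", "works_on", "atlas-ui")
g.add_edge("use-postgres", "decided_in", "atlas-ui")

# "Who works on atlas-ui?" answered by traversal, not similarity search
print(g.query("atlas-ui", "inv_works_on"))    # ['alice']
print(g.query("atlas-ui", "inv_decided_in"))  # ['use-postgres']
```

The point of the sketch is that the answer is exact and typed: the query returns people, not message snippets that happen to mention the project name.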
The software also includes protections against prompt injection attacks and systems designed to reduce hallucinated responses. According to the companies, the AI is designed to return “I don't know” with citations when confidence is low instead of generating unsupported answers.
As workplace communication becomes increasingly fragmented across chat platforms, companies are beginning to treat internal conversations as information that AI systems can organise, retrieve and build on. Beever Atlas reflects a broader push to turn everyday workplace communication into long-term organisational memory.
A rare policy consensus emerges as AI’s impact moves beyond innovation into governance and societal risk
A new survey from Povaddo, a policy research firm, suggests that concern about artificial intelligence is no longer limited to industry or academia. It is now firmly present within the policy community.
The survey draws on responses from 301 public policy professionals across the United States and Europe, including lawmakers, staffers and analysts involved in shaping and evaluating public policy. A majority of respondents—61%—say governments are falling short in addressing the negative impacts of AI.
There is also broad agreement that regulation needs to increase. In the United States, 92% of respondents support stronger AI regulation, compared to 70% in Europe. At a time when consensus is often difficult, the findings point to a shared view across policy circles that current frameworks are not keeping pace with technological development.
Differences emerge when looking at how AI is affecting national contexts. In the U.S., 57% of policy experts believe AI is already harming the labor market. In Europe, 34% say the same. U.S. respondents are also more likely to see AI as a greater threat to jobs than immigration, with 63% holding that view compared to 47% in Europe.
On misinformation, responses are closely aligned. A large majority of policy experts in both regions expect an AI-driven misinformation crisis within the next one to two years—87% in the U.S. and 82% in Europe. Many also believe that AI-generated or AI-amplified misinformation could affect elections and public health information.
Some respondents frame the risks in more fundamental terms. In the United States, 41% of policy experts say AI poses an existential threat to humanity. In Europe, 29% share that view. U.S. respondents are also more likely to believe that advances in AI could harm global security and stability.
The findings come as policymakers begin to respond more actively. In the U.S., Senators Josh Hawley, Richard Blumenthal and Mark Warner have introduced bipartisan legislation focused on AI accountability, including measures aimed at protecting workers and children.
In Europe, the introduction of the EU AI Act marks a more advanced regulatory approach. The framework sets out rules based on levels of risk and is widely seen as the first comprehensive attempt to govern AI at scale.
William Stewart, President and Founder of Povaddo, said: "What makes these findings so significant is who is saying it. These are the practitioners who work inside the policy process every day, spanning every corner of the policy world from defense to healthcare to finance, not activists or everyday citizens. These findings foreshadow real action. The current path of governments accelerating AI deployment while falling short on governance is not sustainable, and the people who know that best are the ones in this survey. You cannot have nine-in-ten policy insiders demanding more regulation and four-in-ten calling AI an existential threat without that eventually moving the needle in Washington and Brussels in terms of legislative or regulatory action".
Taken together, the survey reflects a shift in how AI is being discussed within policymaking circles. Concern is no longer limited to future risks. It is increasingly tied to current gaps in governance and the pace of deployment.
With operations across 50 countries, MagicLab is pairing new robot systems with a platform strategy aimed at wider commercial adoption
MagicLab Robotics is a Chinese startup that describes itself as an embodied AI company. At an event in Silicon Valley this week, it outlined its global ambitions and introduced new products designed for real-world use. The company said its international business now spans more than 50 countries and regions, with overseas markets accounting for 60% of total sales in 2025. That gives some indication of how quickly Chinese robotics firms are expanding beyond their home market.
At the centre of the announcement was MagicLab’s latest product line-up. It included Magic-Mix, described as a foundational world model for robots, the H01 dexterous robotic hand and its humanoid robot, MagicBot X1. In practical terms, the company is trying to build robots that can better understand their surroundings and perform physical tasks with greater precision. That is the core idea behind embodied AI, where intelligence is combined with movement and interaction in the real world rather than limited to software alone.
MagicLab says it develops both hardware and software internally. Its product range includes humanoid robots and four-legged machines, with systems designed for factories, commercial services and home use. The company also outlined where it sees demand emerging. It listed sectors such as healthcare, manufacturing, logistics, security, public safety, education and household assistance.
That wide spread of target markets reflects a broader challenge in robotics. Building capable machines is only one part of the equation. The harder task is finding enough practical uses where customers are willing to pay for them.
MagicLab also used the summit to set out a long-term commercial goal. It projected a path toward US$14 billion in annual revenue by 2036 through wider adoption of embodied AI systems. It also announced what it calls the “Co-Create 1000 Initiative”, a plan to work with external developers and partner companies.
As part of that effort, the startup said it plans to invest US$1 billion over the next five years to build a developer ecosystem that would allow third parties to create new applications for its robots. The strategy mirrors what happened in smartphones and cloud software, where ecosystems often mattered as much as the original hardware. If robotics follows a similar path, companies that attract developers could gain an advantage over those selling machines alone.
For now, MagicLab’s announcement is less about immediate breakthroughs and more about positioning. The company is presenting itself not simply as a robot maker, but as a platform business seeking a role in the next phase of intelligent machines.
A new AI model replaces months of simulation with near-instant predictions, changing how spacecraft operations are prepared
Flexcompute, a startup that builds software to simulate real-world physics, is working with Northrop Grumman to change how space missions are prepared. Together, they have developed an AI-based system that can predict how spacecraft respond during critical manoeuvres such as docking—when one spacecraft moves in and connects with another in orbit. These steps have traditionally taken months of preparation.
At the centre of this work is a long-standing problem in space operations. When a spacecraft fires its thrusters, the exhaust plume interacts with nearby surfaces. These interactions can affect movement, temperature and stability. Because these effects are difficult to test in real conditions, engineers have relied on large volumes of computer simulations to estimate outcomes before a mission. That process is slow and resource-intensive.
The new system replaces much of that workflow with a trained AI model. Instead of running millions of simulations, the model learns patterns from physics-based data and can make predictions in seconds. It also provides a measure of uncertainty, which helps engineers understand how reliable those predictions are when making decisions.
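The general pattern, a surrogate model trained on simulation data that reports both a prediction and an uncertainty, can be illustrated with a toy example. This is not Flexcompute's actual model: the "physics" function, the polynomial ensemble and all parameters below are invented, but the deep-ensemble idea of using the spread of several bootstrap-trained models as the uncertainty estimate is a standard technique.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive simulator: input setting -> scalar response (toy physics)
def expensive_sim(x):
    return 0.5 * x**2 + 0.1 * np.sin(5 * x)

# Pretend we ran 200 simulations once, offline
X = rng.uniform(0, 2, 200)
y = expensive_sim(X) + rng.normal(0, 0.02, X.size)

def fit_ensemble(X, y, n_models=10, degree=3):
    """Fit several small models on bootstrap resamples of the simulation data."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, X.size, X.size)          # bootstrap resample
        models.append(np.polyfit(X[idx], y[idx], degree))
    return models

def predict(models, x):
    """Ensemble mean is the prediction; ensemble spread is a cheap uncertainty."""
    preds = np.array([np.polyval(m, x) for m in models])
    return preds.mean(axis=0), preds.std(axis=0)

models = fit_ensemble(X, y)
mean, sigma = predict(models, np.array([0.5, 1.9]))
print(mean, sigma)   # predictions in milliseconds instead of fresh simulation runs
```

Once trained, each prediction is a handful of polynomial evaluations, which is what makes the months-to-seconds speed-up plausible; the sigma value tells engineers when the surrogate is being asked about conditions it has little data for.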
"At Northrop Grumman, we're pioneering physics AI to accelerate design and solve complex simulation and modelling problems like plume impingement—critical for station keeping, rendezvous and space robotics. Simply put: we're pushing the boundaries of advanced space operations", said Fahad Khan, Director of AI Foundations at Northrop Grumman. "Partnering with Flexcompute and NVIDIA, we're accelerating innovation and mission timelines to deliver superior space capabilities for customers at the speed they need".
The system is built using technology from NVIDIA, which provides the computing framework behind the model. Flexcompute has adapted it to handle the specific challenges of spaceflight, including how gases expand and interact in a vacuum. The result is a tool that can simulate complex scenarios much faster while maintaining the level of accuracy needed for mission planning.
By shortening preparation time, the model changes how engineers approach spacecraft design and operations. Faster predictions mean teams can test more scenarios and adjust plans more quickly. It can also improve fuel efficiency and extend the lifespan of spacecraft.
"Northrop Grumman's confidence reflects what sets Flexcompute apart", said Vera Yang, President and Co-Founder of Flexcompute. "We are able to take the most accurate and scalable physics foundations and evolve them into highly trained, customized Physics AI solutions that engineers can rely on. This work shows how we are transforming the role of simulation, not just speeding it up, but expanding what engineers can confidently solve and how quickly they can act".
The collaboration points to a broader shift in how engineering problems are being handled. Instead of relying only on detailed simulations that take time to run, companies are beginning to use AI systems that can approximate those results quickly while still reflecting the underlying physics.
"The industry's most ambitious space missions now demand a level of speed and precision that traditional engineering cycles can no longer sustain", said Tim Costa, vice president and general manager of computational engineering at NVIDIA. "By integrating NVIDIA PhysicsNeMo, Northrop Grumman and Flexcompute are transforming complex simulations like plume impingement from days of compute into seconds of insight, drastically accelerating the path from mission concept to orbit".
What emerges from this work is a shift in how missions are prepared. When prediction cycles move from months to seconds, testing and decision-making can happen faster. For space operations, where timing and precision are closely linked, that change could reshape how systems are built and run.
Vizrt shows how live video can be produced anywhere, without complex studio setups
Vizrt, a media technology company, has introduced a new AI-powered tool to simplify the creation of virtual scenes in live production. Its latest release, the AI Keyer, is built around a simple idea: remove the need for green screens and make virtual production possible in almost any environment.
Traditionally, creating virtual backgrounds or augmented reality (AR) scenes requires controlled studio setups, green screens, precise lighting and skilled operators. That makes high-end visual production expensive and difficult to scale, especially for smaller teams or live, on-the-ground reporting.
The AI Keyer is designed to address that gap. It uses AI trained on real-world footage to identify people in a frame and separate them from the background in real time. This allows production teams to replace backgrounds, insert AR graphics or place presenters into virtual environments—whether they are indoors, outdoors or on location.
"Creating XR environments typically demands large infrastructure investments and requires specialized skills for daily operations. The Vizrt AI Keyer removes all these constraints, so high-quality virtual scenes and AR graphics become a reality for live productions of every size", says Edouard Griveaud, Senior Product Manager at Vizrt.
In practical terms, this means a presenter can appear in a different location without moving, a remote speaker can be placed inside a virtual event space or branded graphics can be added to live interviews without a complex setup. The system works without chroma keying, reducing both preparation time and production overhead.
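The core operation behind chroma-free keying reduces to a per-pixel blend: a segmentation model produces a soft matte marking where the person is, and each output pixel mixes the camera frame with the new background according to that matte. The sketch below is a generic illustration of that compositing step, not Vizrt's implementation.

```python
import numpy as np

def composite(frame, matte, background):
    """Alpha-composite a camera frame onto a new background using a soft matte.

    frame, background: HxWx3 float arrays in [0, 1]
    matte: HxW float array in [0, 1], 1.0 where the person is
           (in practice, the output of a segmentation model)
    """
    alpha = matte[..., None]                   # broadcast over colour channels
    return alpha * frame + (1.0 - alpha) * background

# Toy 2x2 example: left column is "person" (matte=1), right column is not
frame = np.ones((2, 2, 3)) * 0.8               # camera pixels
background = np.zeros((2, 2, 3))               # virtual set
matte = np.array([[1.0, 0.0], [1.0, 0.0]])

out = composite(frame, matte, background)
print(out[0, 0], out[0, 1])                    # person pixel kept, background replaced
```

What an AI keyer replaces is only the source of the matte: instead of deriving alpha from a green-screen colour key, it comes from a model trained to find people in arbitrary scenes, so the blend works indoors, outdoors or on location.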
This shift also reflects how the company is approaching AI more broadly. Instead of treating it as a background feature, Vizrt is positioning AI as a core part of the content creation and delivery process.
"AI is transforming the world, and the creative industries are no exception. At Vizrt, we have been on this journey for years, embedding intelligence into our solutions, empowering storytellers and delivering real, measurable impact for our customers", says Rohit Nagarajan, CEO of Vizrt. "That is not a vision for tomorrow. That is happening today. The Vizrt AI Keyer is the latest proof point of our relentless commitment to innovation. Putting breakthrough technology in the hands of every creative, at every level, everywhere in the world".
Beyond the product itself, the direction is clear. By removing the need for green screens and complex setups, tools like the AI Keyer make it easier to produce high-quality visual content in more flexible settings. The result is a production model that is less tied to physical studios and more adaptable to real-world environments, where content can be created and adjusted in real time.
A new approach examines how individual cells respond to drugs, aiming to identify risks earlier in development.
DeepCyte, a startup in the drug development space, is focusing on a long-standing problem: why drugs that appear safe in early testing still fail in clinical trials or are withdrawn later due to toxicity. The company has launched with US$1.5 million in seed funding to build tools that detect and explain the harmful effects of drugs at much earlier stages.
The startup’s approach focuses on how individual cells respond to a drug. Instead of analysing cells in bulk, it studies them one by one. This helps capture differences in how cells react, which are often missed in traditional testing methods.
Drug toxicity remains one of the main reasons for failure in drug development. Methods such as animal testing and bulk cell analysis do not always reflect how human cells behave. This gap has pushed the industry to look for more reliable and human-relevant ways to test drug safety.
DeepCyte combines cell-level data with artificial intelligence. Its platform, MetaCore, studies what is happening inside individual cells by capturing detailed molecular information. This data is used to build large datasets that can train AI models.
Additionally, the company has developed an AI system called DeeImmuno. It is designed to predict whether a drug could be toxic and identify the biological reasons behind it. In internal testing on 100 drugs, the system identified different types of toxicity and their underlying mechanisms with a reported accuracy of 94 percent.
The focus on explaining why a drug is toxic, not just whether it is, reflects a broader shift in the industry. Regulators such as the U.S. Food and Drug Administration and the European Medicines Agency have been encouraging methods that rely more on human cell data and clearer biological evidence.

The seed funding will be used to develop and scale these tools. The company aims to help drug developers make earlier decisions, which could reduce costly failures in later stages. Whether tools like this become widely used will depend on how they perform in real-world settings. For now, DeepCyte’s approach highlights a growing effort to make drug testing more precise by focusing on how drugs affect cells at the most detailed level.
A planned city explores how real-time data and automation can shape everyday urban systems
A newly built district in northern China is being used to test how cities function when infrastructure, data and automation are integrated from the ground up. In Xiong'an New Area, traffic systems, public monitoring and urban services are designed to respond in real time rather than operate on fixed rules.
At the centre of this is a traffic management system powered by more than 20,000 roadside sensors. These track traffic flow, vehicle types and congestion levels, feeding data into an AI system that adjusts signals in milliseconds. Official figures show this has reduced the average number of stops per vehicle by half. The system also detects equipment faults, sends alerts and generates maintenance requests without manual input.
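As a generic illustration of signal timing that responds to measured demand rather than a fixed plan (this is not Xiong'an's actual controller, and all numbers are invented), a simple adaptive rule might split each signal cycle between two approaches in proportion to observed queue lengths:

```python
def green_seconds(queue_ns, queue_ew, cycle=90, min_green=15):
    """Split one signal cycle between north-south and east-west approaches
    in proportion to queue length, with a floor so no approach is starved."""
    total = queue_ns + queue_ew
    if total == 0:
        return cycle // 2, cycle - cycle // 2        # fixed-time fallback
    ns = max(min_green, round(cycle * queue_ns / total))
    ns = min(ns, cycle - min_green)                  # keep the floor for the other side
    return ns, cycle - ns

# A heavy north-south queue gets most of the cycle
print(green_seconds(40, 10))   # (72, 18)
```

A deployed system layers much more on top (coordination between junctions, vehicle classes, pedestrian phases), but the principle is the same: timing is recomputed from live sensor data each cycle instead of following a fixed schedule.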
Automation extends beyond roads. Drones are deployed across the city for routine monitoring. In the Rongdong district, roadside units release drones that follow fixed patrol routes of around 1.27 kilometres, completing each run in about five minutes. They are used to monitor traffic, detect illegal parking and inspect public spaces. Similar systems operate in parks to track water levels and issue flood alerts, while in some work zones, drones transport packages of up to five kilograms between buildings.
These applications reflect a broader approach: integrating multiple systems into a single, connected urban framework. Unlike older cities where infrastructure evolves in layers, Xiong’an has been built with coordinated digital systems from the outset. This allows transport, maintenance and public services to operate through shared data systems rather than in isolation.
Alongside this, the area is being developed as a technology and innovation hub. Since its establishment in 2017, it has attracted more than 400 branches of state-owned enterprises and over 200 companies working in sectors such as artificial intelligence, aerospace information and digital technology.
This ecosystem supports projects like the “Xiong’an-1” satellite, which completed research, design, production and testing within eight months of regulatory approval in 2025. The satellite is currently undergoing testing, with a planned launch expected in the second quarter of 2026. It forms part of a broader push to build an aerospace information industry in the region.
The area is also structured to bring companies, research and production closer together. At the Zhongguancun Science Park in Xiong’an, which spans 207,000 square metres, 269 technology companies operate across sectors including AI, robotics and biotechnology. The park hosts more than 2,700 researchers and industry professionals, with companies organised into sector-specific clusters.
Policy support continues to shape this development. In early 2026, the State Council approved the upgrade of Xiong’an’s high-tech industrial development zone to national level status, with a focus on attracting high-end research and strengthening links between scientific development and industrial output.
Xiong’an is positioned as a testing ground for how smart city systems can be deployed at scale. The model depends on coordinated planning, integrated infrastructure and sustained policy support. Whether these systems can be adapted to existing cities, where infrastructure and governance are more fragmented, remains an open question.
Backed by Menlo Ventures, BrainGrid tackles planning gaps as AI makes software building accessible to more founders.
As artificial intelligence makes it easier to write code, a different problem is starting to surface. Building software is no longer limited by technical skill alone. Increasingly, the challenge lies in deciding what to build, how to structure it, and how to turn an idea into something that actually works.
That shift sits at the centre of BrainGrid, a startup that has raised US$1 million in pre-seed funding led by Menlo Ventures, with participation from Next Tier Ventures and Brainstorm Ventures. The company is building what it describes as an AI-powered planning layer for people who want to create software but may not have a technical background.
The timing reflects a broader change in how products are being built. Tools like Claude Code and Cursor have made it possible to generate working code through simple prompts. For many first-time founders, this has lowered the barrier to entry. But writing code is only one part of the process. Turning that code into a reliable product requires structure, sequencing and clarity—areas where many projects begin to fall apart.
In traditional teams, this responsibility sits with product managers who define what needs to be built and in what order. Without that layer, even well-written code can lead to products that feel disjointed or incomplete. Features may not work together, integrations can break and the final product often does not match the original idea.
BrainGrid is designed to address that gap. Instead of focusing on generating code, it helps users map out the structure of a product before development begins. The aim is to give builders a clearer starting point so that the tools they use—whether human or AI—can produce more consistent results.
The company says more than 500 builders have already used it to create software products across areas like fitness, healthcare and productivity. These range from first-time founders experimenting with new ideas to experienced developers working independently. In many cases, the products are already live and generating revenue, suggesting that the demand is not just for experimentation but for building something that can scale.
For investors, the appeal lies in the evolving role of software development. As AI takes on more of the technical work, the value shifts toward defining the problem and structuring the solution. In that sense, planning becomes less of a background task and more of a core capability.
The US$1 million raise is relatively modest, but it points to a larger trend. As more people gain access to AI tools, the number of potential builders expands. What remains limited is the ability to organise ideas into products that work in the real world. If that shift continues, the next wave of software may not be defined by who can code, but by who can plan.
MTR Lab and ZGC Science City are partnering to connect Chinese startups with global capital, real-world deployment and international markets
As global tech ecosystems become more interconnected, the ability to move innovation across borders is becoming just as important as building it. A new partnership between MTR Lab, the investment arm of MTR Corporation, and ZGC Science City Ltd, a government-backed technology ecosystem based in Beijing’s Haidian district, reflects this shift.
At its core, the collaboration is designed to connect high-potential Chinese startups with global capital, real-world deployment opportunities and international markets. It focuses on sectors like AI, robotics, smart mobility and sustainable urban development—areas where China already has strong technical depth but where scaling beyond domestic markets can be more complex.
This is where the partnership begins to matter. ZGC Science City sits at the centre of one of China’s most concentrated innovation clusters, with thousands of AI companies and a growing base of specialised and high-growth firms. MTR Lab, on the other hand, brings access to international markets, industry networks and practical deployment environments tied to infrastructure, transport and urban systems. Together, they are attempting to bridge a familiar gap: turning local innovation into globally relevant products.
In practice, the model is straightforward. ZGC Science City will introduce MTR Lab to startups working in priority sectors, creating a pipeline for potential investment and collaboration. From there, MTR Lab can support these companies through funding, pilot projects and access to overseas markets. The idea is not just to invest, but to help startups test and apply their technologies in real-world settings, particularly in complex urban environments.
The timing is notable. China’s AI and deep tech ecosystem has expanded rapidly, with thousands of companies contributing to advancements in automation, smart infrastructure and sustainability. At the same time, global demand for these technologies is rising, especially as cities look for more efficient and scalable solutions. Yet, moving from innovation to adoption often requires cross-border coordination—something individual startups may struggle to navigate alone.
This partnership also builds on a broader pattern. Corporate venture arms like MTR Lab are increasingly positioning themselves not just as investors, but as connectors between markets. By combining capital with access to infrastructure and deployment scenarios, they offer startups a way to move faster from development to real-world use. For ZGC Science City, the collaboration adds an international layer to its ecosystem, helping local companies extend beyond domestic growth.
What emerges is a model that goes beyond a typical investment announcement. It reflects a growing recognition that innovation today is rarely confined to one geography. Technologies may be developed in one ecosystem, refined in another and scaled globally through partnerships like this.
As cross-border collaboration becomes more central to how startups grow, partnerships like the one between MTR Lab and ZGC Science City point to a more connected innovation landscape—one where access, not just invention, defines success.