A Hong Kong pilot explores how creator-led distribution could reshape livestreaming for global competitions
Updated
April 8, 2026 5:28 PM

A dance crew performs in sync on stage at World of Dance under spotlights. PHOTO: WORLD OF DANCE HONG KONG
On January 22, 2026, World of Dance Hong Kong became the first global event to pilot Mitico’s community-based livestreaming model. The idea is simple: rethink how live competitions are shared in a digital-first world.
The event was still produced as a single centralised live feed, but instead of one official broadcast, that feed was distributed across multiple creators and influencers, each hosting the stream for their own audience.
This gave creators room to add their own commentary, adapt the language and bring in cultural context that suited their communities, while the production remained consistent behind the scenes.
“Dance is a universal language”, said David Gonzalez, President of World of Dance. “Our collaboration with Mitico to produce an international, creator-led livestream in Hong Kong allowed a regional competition to reach a global audience. With personalised commentary from hosts in different languages, we can begin to see how regional events may connect through global communities”. This approach points to a shift away from traditional broadcaster-led distribution and toward creator-led amplification.
Mitico’s approach begins with a familiar industry challenge: the high cost of production and licensing, which often makes it difficult to livestream cultural and sports events at scale.
“Many cultural and sports competitions are never livestreamed because traditional broadcasting is too costly and complex”, said Chengcheng Li, Founder of Mitico. “By distributing a centralised production feed through creators and community hosts, regional events can reach global audiences while maintaining a unified production workflow”.
World of Dance (WOD) offered a natural test environment. It started as a global dance competition platform before entering a television partnership with NBC, which produced four seasons of the World of Dance reality series. While the television programme concluded in 2021, the competition business has continued to expand through an international network of partners. Today, World of Dance competitions are represented in more than 72 countries, producing nearly 100 events each year, with a digital audience of more than 34 million followers across platforms.
Despite that scale, many competitions are not livestreamed due to the high production costs and technical demands associated with traditional broadcasting. The Hong Kong event was selected to assess whether a community-led distribution model could offer a more scalable alternative for live coverage.
While no changes to World of Dance’s broader distribution strategy have been announced, the Hong Kong pilot offers an early indication of how global competitions may rethink livestreaming in an increasingly creator-driven media environment.
Keep Reading
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling
Updated
January 8, 2026 6:31 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
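To see why air cooling becomes the limiting factor at this density, the rough back-of-envelope sketch below estimates the heat a single AI rack produces; every figure in it is an illustrative assumption for the sake of the example, not a published specification for these systems.

# Illustrative estimate of rack-level heat load (Python).
# All values are assumptions chosen for illustration, not vendor figures.
GPUS_PER_SERVER = 8        # assumed HGX-class server with 8 GPUs
SERVERS_PER_RACK = 8       # assumed dense rack configuration
WATTS_PER_GPU = 1000       # assumed power draw of a high-end AI GPU
OVERHEAD_FACTOR = 1.3      # assumed CPUs, memory, networking and power losses

rack_power_kw = (GPUS_PER_SERVER * SERVERS_PER_RACK
                 * WATTS_PER_GPU * OVERHEAD_FACTOR) / 1000
print(f"Estimated rack heat load: {rack_power_kw:.0f} kW")  # roughly 83 kW

Nearly all of that electrical power leaves the rack as heat. Air-cooled halls have historically been designed around the order of ten to twenty kilowatts per rack, so a rack in this range either has to be left largely empty or cooled with liquid, which is the gap these designs are meant to close.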
Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories, facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.