With operations across 50 countries, MagicLab is pairing new robot systems with a platform strategy aimed at wider commercial adoption
Updated
May 1, 2026 2:16 PM
A standing yellow robotic arm. PHOTO: UNSPLASH
MagicLab Robotics is a Chinese startup that describes itself as an embodied AI company. At an event in Silicon Valley this week, it outlined its global ambitions and introduced new products designed for real-world use. The company said its international business now spans more than 50 countries and regions, with overseas markets accounting for 60% of total sales in 2025. That gives some indication of how quickly Chinese robotics firms are expanding beyond their home market.
At the centre of the announcement was MagicLab’s latest product line-up. It included Magic-Mix, described as a foundational world model for robots, the H01 dexterous robotic hand and its humanoid robot, MagicBot X1. In practical terms, the company is trying to build robots that can better understand their surroundings and perform physical tasks with greater precision. That is the core idea behind embodied AI, where intelligence is combined with movement and interaction in the real world rather than limited to software alone.
MagicLab says it develops both hardware and software internally. Its product range includes humanoid robots and four-legged machines, with systems designed for factories, commercial services and home use. The company also outlined where it sees demand emerging. It listed sectors such as healthcare, manufacturing, logistics, security, public safety, education and household assistance.
That wide spread of target markets reflects a broader challenge in robotics. Building capable machines is only one part of the equation. The harder task is finding enough practical uses where customers are willing to pay for them.
MagicLab also used the summit to set out a long-term commercial goal. It projected a path toward US$14 billion in annual revenue by 2036 through wider adoption of embodied AI systems. It also announced what it calls the “Co-Create 1000 Initiative”, a plan to work with external developers and partner companies.
As part of that effort, the startup said it plans to invest US$1 billion over the next five years to build a developer ecosystem that would allow third parties to create new applications for its robots. The strategy mirrors what happened in smartphones and cloud software, where ecosystems often mattered as much as the original hardware. If robotics follows a similar path, companies that attract developers could gain an advantage over those selling machines alone.
For now, MagicLab’s announcement is less about immediate breakthroughs and more about positioning. The company is presenting itself not simply as a robot maker, but as a platform business seeking a role in the next phase of intelligent machines.
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.
Updated
January 8, 2026 6:31 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
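The efficiency gap comes down to basic thermophysics: a given volume of water can absorb far more heat than the same volume of air. The sketch below makes that concrete using textbook property values at room temperature; these figures are general physical constants, not numbers from the Supermicro or NVIDIA announcement.

```python
# Rough comparison of how much heat a given volume of coolant can carry.
# Property values are textbook approximations at room temperature, not
# figures from Supermicro or NVIDIA.

AIR_DENSITY = 1.2           # kg/m^3
AIR_SPECIFIC_HEAT = 1005    # J/(kg*K)

WATER_DENSITY = 997         # kg/m^3
WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)

def volumetric_heat_capacity(density, specific_heat):
    """Heat absorbed per cubic metre of coolant per 1 K temperature rise, in J/(m^3*K)."""
    return density * specific_heat

air = volumetric_heat_capacity(AIR_DENSITY, AIR_SPECIFIC_HEAT)
water = volumetric_heat_capacity(WATER_DENSITY, WATER_SPECIFIC_HEAT)

print(f"Air:   {air:,.0f} J/(m^3*K)")
print(f"Water: {water:,.0f} J/(m^3*K)")
print(f"Ratio: {water / air:,.0f}x")  # water carries roughly 3,500x more heat per unit volume
```

The three-orders-of-magnitude gap is why moving liquid across a cold plate beats blowing air across a heatsink: far less coolant has to move to carry the same heat away.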
Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Air cooling is increasingly the bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
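The rack-density question can be made concrete with a back-of-envelope calculation: a rack's power budget, divided by the power each accelerator (plus its share of system overhead) draws, bounds how many GPUs can fit. All numbers below are illustrative assumptions; per-GPU power and rack budgets vary widely, and none of these figures come from the announcement.

```python
# Back-of-envelope rack density estimate. All numbers are illustrative
# assumptions, not specifications from Supermicro or NVIDIA.

GPU_POWER_W = 1_000      # assumed power draw per accelerator, in watts
OVERHEAD_FACTOR = 1.3    # assumed extra power for CPUs, networking, pumps/fans

def gpus_per_rack(rack_budget_kw):
    """How many GPUs fit within a rack's power budget, including system overhead."""
    per_gpu_w = GPU_POWER_W * OVERHEAD_FACTOR
    return int(rack_budget_kw * 1_000 // per_gpu_w)

# A modest air-cooled rack budget vs. higher-density liquid-cooled ones.
for budget_kw in (15, 40, 130):
    print(f"{budget_kw:>4} kW rack -> {gpus_per_rack(budget_kw)} GPUs")
```

Under these assumptions a 15 kW air-cooled rack holds about 11 GPUs while a 130 kW liquid-cooled rack holds around 100, which is the kind of density jump that motivates redesigning cooling rather than simply adding racks.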
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.