Operations & Scale

AI Platforms and the Changing Mechanics of Cross-Border Sourcing

How ChinaMarket uses digital tools to make cross-border sourcing faster and more accessible for smaller businesses

Updated

April 23, 2026 10:00 AM

A rack of colourful scarves. PHOTO: UNSPLASH

The 5th RCEP (Shandong) Import Commodities Expo opened this week at the Linyi International Expo Center, bringing together more than 5,300 buyers and over 400 exhibitors from 48 countries. Alongside the scale of the event, a quieter shift was visible in how trade itself is being organised.

ChinaMarket, the official platform of Linyi Mall, used the expo to show how sourcing is moving from manual coordination to software-led systems. On the first day, it hosted procurement matchmaking sessions and signed agreements with buyer groups from Argentina, South Korea and Ghana. But the focus was less on the deals themselves and more on the mechanism behind them.

The platform operates as a structured network of verified manufacturers, grouped by industrial clusters. Instead of buyers searching supplier by supplier, the system uses data and AI tools to match demand with production capacity. At the expo, this process was made visible through real-time data screens and guided sourcing sessions, where procurement teams connected directly with factories across categories such as building materials, textiles and electronics.
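ChinaMarket has not published how its matching actually works, but the general idea of ranking verified factories in an industrial cluster against a buyer's stated demand can be sketched in a few lines. Everything below, from the field names to the scoring rule, is a hypothetical illustration, not the platform's real logic.

```python
from dataclasses import dataclass

@dataclass
class Factory:
    name: str
    cluster: str           # industrial cluster, e.g. "textiles"
    monthly_capacity: int  # units the factory can produce per month
    verified: bool         # passed platform verification checks
    rating: float          # historical fulfilment score, 0..1

def match_factories(category: str, quantity: int, factories: list[Factory],
                    top_n: int = 3) -> list[Factory]:
    """Rank verified factories in the requested cluster that can cover the order."""
    candidates = [
        f for f in factories
        if f.verified and f.cluster == category and f.monthly_capacity >= quantity
    ]
    # Prefer factories with the best fulfilment history, then the most spare capacity.
    candidates.sort(key=lambda f: (f.rating, f.monthly_capacity), reverse=True)
    return candidates[:top_n]

# Example: an overseas buyer needs 20,000 units from the textiles cluster.
factories = [
    Factory("Factory A", "textiles", 50_000, True, 0.92),
    Factory("Factory B", "textiles", 15_000, True, 0.97),
    Factory("Factory C", "textiles", 80_000, False, 0.88),
]
print([f.name for f in match_factories("textiles", 20_000, factories)])
# -> ['Factory A']
```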

"Sourcing suppliers separately was time-consuming and inefficient. ChinaMarket accurately matches our needs and recommends reliable factories, saving us considerable effort," commented an Argentine buyer.

The underlying problem being addressed is not new. Cross-border sourcing is often slow, fragmented and dependent on intermediaries. What is changing is how that process is being compressed. By combining supplier verification, demand matching and communication into a single system, platforms like ChinaMarket aim to shorten sourcing cycles and reduce uncertainty in procurement decisions.

Financing is another layer where the model is evolving. Even when suppliers and buyers are matched efficiently, access to capital can still slow transactions down. Small and medium-sized firms often face constraints around payment terms and access to credit in international trade.

ChinaMarket’s “data + order financing” model links transaction data with financial services, allowing funding decisions to be tied more directly to verified orders rather than external collateral. In practice, this shifts part of the risk assessment from institutions to platform-level data.
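The company has not disclosed the mechanics of "data + order financing", but the shift it describes, underwriting against verified order data rather than external collateral, can be sketched roughly as follows. The fields, thresholds and ratios are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class VerifiedOrder:
    order_value: float        # contract value of the platform-verified order
    buyer_repeat_rate: float  # share of the buyer's past orders completed, 0..1
    supplier_rating: float    # platform fulfilment score for the supplier, 0..1

def order_financing_offer(order: VerifiedOrder) -> float:
    """Return an advance amount derived from order data rather than collateral."""
    # Hypothetical rule: advance a base share of the order value,
    # scaled by how reliable the buyer and supplier look in platform data.
    base_advance_ratio = 0.6
    reliability = (order.buyer_repeat_rate + order.supplier_rating) / 2
    if reliability < 0.5:
        return 0.0  # too little track record in the data to fund the order
    return round(order.order_value * base_advance_ratio * reliability, 2)

# Example: a $120,000 verified order with strong counterparties.
print(order_financing_offer(VerifiedOrder(120_000, 0.9, 0.95)))  # 66600.0
```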

The company is also extending this structure into agricultural supply chains. At the expo, it signed an agreement with a local government in Yinan County to build a digitally managed agricultural belt. The model combines sourcing at origin with platform distribution, with an emphasis on traceability for buyers across RCEP markets. This reflects a broader attempt to standardise supply visibility in sectors that are typically less digitised.

Geographically, the platform has been expanding into Southeast Asia. It has launched a digital marketplace in Malaysia and established operations in Indonesia, including support for government-linked procurement projects. These moves suggest a focus on embedding the platform within regional trade flows rather than operating as a standalone marketplace.

"We aim to be a 'super connector' between Chinese industrial belts and global markets", said Quan Chuanxiao, Chairman of Depth Digital Technology Group and ChinaMarket. "By digitizing the cross-border trade process, we solve trust and efficiency issues, making it simpler, faster, and more reliable for overseas buyers to source from China".

What emerges from the expo is less about a single platform and more about a shift in infrastructure. Trade is gradually moving toward systems where discovery, verification, negotiation and financing are handled within integrated digital layers. The question is not whether sourcing can be digitised, but how reliably these systems can scale across industries where trust and execution still depend on physical outcomes.

Artificial Intelligence

The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated

January 8, 2026 6:31 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK

As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
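The thermodynamics behind that claim are straightforward: the heat a coolant loop can carry away scales with its flow rate and the temperature rise across the rack. The quick estimate below uses illustrative figures, not Supermicro or NVIDIA specifications.

```python
# Back-of-envelope: coolant flow needed to remove heat from a dense AI rack.
# All numbers are illustrative assumptions, not vendor specifications.
rack_heat_kw = 100.0    # hypothetical heat load of one GPU-dense rack
specific_heat = 4186.0  # J/(kg*K), specific heat of water
temp_rise_k = 10.0      # coolant temperature rise across the rack

# Q = m_dot * c_p * dT  ->  m_dot = Q / (c_p * dT)
flow_kg_per_s = rack_heat_kw * 1000 / (specific_heat * temp_rise_k)
flow_l_per_min = flow_kg_per_s * 60  # ~1 kg of water is ~1 litre

print(f"{flow_kg_per_s:.2f} kg/s  (~{flow_l_per_min:.0f} L/min)")
# -> roughly 2.39 kg/s (~143 L/min); carrying the same heat with air at the
#    same temperature rise would take on the order of 10 kg/s of airflow.
```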

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.