Operations & Scale

Singapore Startup Circles Uses OpenAI to Rethink Telecom Customer Service

Circles is using AI to turn telecom support from a cost centre into a faster, more personalised growth engine.

Updated

May 1, 2026 2:04 PM

A woman holding a phone while using a laptop. PHOTO: ADOBE STOCK

Circles, a Singapore startup that builds software for digital telecom operators, has launched an AI concierge as part of its partnership with OpenAI. The release marks a new step in the company’s effort to modernise how telecom providers serve and retain customers. The move reflects a wider shift in the telecom sector. Many operators still rely on older support systems that can be slow, fragmented and costly to run. AI is now being tested as a way to improve service while creating new revenue opportunities.

Circles said the concierge is built on OpenAI’s API platform and sits within what it calls an AI-native telecom stack. In practical terms, the system is designed to handle customer support, account changes and personalised offers through automated interactions.
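
A minimal sketch can make that idea concrete. Circles has not published its implementation, so everything below is an assumption for illustration only: the model name, the get_billing_summary tool and the billing figures are hypothetical stand-ins for operator-side systems. The pattern, though, is the standard one for building an assistant on OpenAI's API: the model decides when to call a tool, the operator's backend answers, and the model turns the result into a reply.

```python
# Illustrative sketch only -- not Circles' actual implementation.
# A support "concierge" that lets the model call a hypothetical billing lookup.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_billing_summary",
        "description": "Fetch the customer's latest bill and data usage.",
        "parameters": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}]

def get_billing_summary(customer_id: str) -> dict:
    # Placeholder for a call into the operator's billing system.
    return {"customer_id": customer_id, "amount_due": "S$28.00", "data_used_gb": 42}

messages = [
    {"role": "system", "content": "You are a telecom support concierge for customer ID C-1024. Resolve billing and plan queries."},
    {"role": "user", "content": "Why is my bill higher this month?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
reply = response.choices[0].message

if reply.tool_calls:
    call = reply.tool_calls[0]
    result = get_billing_summary(**json.loads(call.function.arguments))
    messages.append(reply)  # keep the model's tool request in the transcript
    messages.append({"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)})
    final = client.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(reply.content)
```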

One part of the platform is called CareX. According to the company, it can deal with billing issues, service requests and network-related problems. Circles said CareX currently resolves 85% of customer queries globally without human intervention and reaches a 95% resolution rate on end-to-end tasks. That matters because customer support remains one of the larger operating costs for telecom providers. Faster automated handling could lower pressure on service teams while reducing wait times for users.

The second part of the platform is Xplore IQ, which focuses on revenue growth. The tool is designed to predict what a customer may need, recommend a suitable plan or offer and complete upgrades or downgrades automatically. Circles said the early rollout has led to a 22% rise in average revenue per user for Circles.Life Singapore. It also said personalised offers helped reduce customer churn by 9%.

"AI should empower users - not force-fit into outdated journeys. OpenAI's role has been critical in enabling Circles to scale this vision globally. With the AI concierge, we are moving beyond providing simple answers to delivering real-world outcomes, along with balancing cost and latency to maximize value for operators and customers alike", said Awais Malik, Global Chief Growth Officer at Circles.

"Circles is demonstrating how advanced AI can modernize essential industries like telecommunications at scale. By combining frontier models with multi-agent systems, they are enabling telecom operators globally to deliver faster, smarter and more personalized customer experiences. This milestone is a strong example of how AI can deliver tangible value for businesses and customers they serve", Oliver Jay, Managing Director, International for OpenAI, added.

Together, the tools are intended to connect customer service, operations and sales into one system. Rather than treating support and monetisation as separate functions, the company is combining them into a single digital layer.
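
To make the "single layer" idea concrete, here is a rough sketch of one entry point routing each query to either a support-style handler or a growth-style handler, in the spirit of the multi-agent framing in the OpenAI quote above. It is an assumption-heavy illustration rather than a description of CareX or Xplore IQ: the intent labels, the model and both handlers are hypothetical.

```python
# Illustrative sketch only: one concierge entry point routing to a
# support path (CareX-like) or a growth path (Xplore IQ-like).
from openai import OpenAI

client = OpenAI()

def classify_intent(query: str) -> str:
    # Ask the model for a one-word label; hypothetical labels, not Circles' taxonomy.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Label the telecom customer query with exactly one word: 'support' or 'growth'."},
            {"role": "user", "content": query},
        ],
    )
    label = (resp.choices[0].message.content or "").strip().lower()
    return "growth" if "growth" in label else "support"

def handle_support(query: str) -> str:
    return f"[support agent] resolving: {query}"  # stand-in for a care flow

def handle_growth(query: str) -> str:
    return f"[growth agent] recommending a plan for: {query}"  # stand-in for an upsell flow

def concierge(query: str) -> str:
    return handle_growth(query) if classify_intent(query) == "growth" else handle_support(query)

print(concierge("My mobile data keeps running out before the end of the month."))
```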

Circles said the partnership will continue over the next two years as both companies work toward a more autonomous telecom model. Whether that vision is achieved remains to be seen, but the direction is clear: telecom operators are increasingly treating AI as core infrastructure rather than an optional add-on.

Keep Reading

Artificial Intelligence

The Real Cost of Scaling AI: How Supermicro and NVIDIA Are Rebuilding Data Center Infrastructure

The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.

Updated

January 8, 2026 6:31 PM

The inside of a data centre, with rows of server racks. PHOTO: FREEPIK

As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.

Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.

At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.

Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.

This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.

The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy waste.

Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.

Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.