Quantara AI launches a continuous platform designed to estimate the financial impact of cyber risk as companies move beyond periodic assessments
Cyber risk is increasingly treated as a financial issue. Boards want to know how much a cyber incident could cost the company, how it could affect earnings, and whether current security spending is justified.
Yet many organizations still measure cyber risk through periodic reviews. These assessments are often conducted once or twice a year, supported by consultants and spreadsheet models. By the time the report reaches senior leadership, the company’s systems may have changed and new threats may have emerged. The way risk is measured does not always match how quickly it evolves.
This gap is where Quantara AI is positioning its new platform. Quantara AI, a Boise-based cybersecurity startup, has introduced what it describes as the industry’s first persistent AI-powered cyber risk solution. The system is designed to run continuously rather than rely on occasional assessments.
The company’s core argument is straightforward: not every security weakness carries the same financial consequence. Instead of ranking issues only by technical severity, the platform analyzes active threats, identifies which company systems are exposed, and estimates how much money a successful attack could cost. It uses statistical models, including Value at Risk (VaR), to calculate potential losses. It also estimates how specific security improvements could reduce that projected loss.
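Quantara has not published its models, but the general shape of a Monte Carlo VaR estimate is well established. The sketch below is illustrative only; the incident probability and loss parameters are hypothetical, not figures from the platform:

```python
import random

def simulate_var(incident_prob, loss_mean, loss_sd,
                 confidence=0.95, trials=100_000, seed=42):
    """Estimate an annual cyber Value at Risk (VaR) by simulation.

    Each trial represents one year: an incident occurs with probability
    incident_prob, and its cost is drawn from a normal severity model
    (floored at zero). VaR at 95% is the loss exceeded in only 5% of
    simulated years.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        if rng.random() < incident_prob:
            losses.append(max(0.0, rng.gauss(loss_mean, loss_sd)))
        else:
            losses.append(0.0)  # no incident this year
    losses.sort()
    return losses[int(confidence * trials)]
```

A security improvement that lowers `incident_prob` or `loss_mean` can then be valued by re-running the simulation and comparing the two VaR figures, which is the kind of "projected loss reduction" calculation the platform is said to produce.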
The timing aligns with a broader market shift. International Data Corporation (IDC) projects that by 2028, 40% of enterprises will adopt AI-based cyber risk quantification platforms. These tools convert security data into financial estimates that can guide budgeting and investment decisions. The forecast reflects growing pressure on security leaders to present risk in terms that boards and regulators understand.
Traditional compliance and risk management systems often focus on meeting regulatory standards. Vulnerability management programs typically score weaknesses based on technical characteristics. Consultant-led risk studies provide detailed analysis, but they are usually performed at set intervals. In fast-changing threat environments, that model can leave decision-makers working with outdated information.
Quantara’s platform attempts to replace that periodic process with continuous measurement. It brings together threat data, internal system information and financial modeling in one system. The goal is to show, at any given time, which specific weaknesses could lead to the largest financial losses.
Cyber risk quantification as a concept is not new. What is changing is the expectation that these calculations be updated regularly and tied directly to financial decision-making. As cyber incidents carry clearer monetary consequences, companies are looking for ways to measure exposure with greater precision.
The broader question is whether enterprises will shift fully toward continuous, AI-driven risk analysis or continue relying on periodic external assessments. What is clear is that cybersecurity discussions are moving closer to financial reporting — and tools that estimate potential loss in dollar terms are becoming central to that shift.
The IT services firm strengthens its collaboration with Google Cloud to help enterprises move AI from pilot projects to production systems
Enterprise interest in AI has moved quickly from experimentation to execution. Many organizations have tested generative tools, but turning those tools into systems that can run inside daily operations remains a separate challenge. Cognizant, an IT services firm, is expanding its partnership with Google Cloud to help enterprises move from AI pilots to fully deployed, production-ready systems.
Cognizant and Google Cloud are deepening their collaboration around Google’s Gemini Enterprise and Google Workspace. Cognizant is deploying these tools across its own workforce first, using them to support internal productivity and collaboration. The idea is simple: test and refine the systems internally, then package similar capabilities for clients.
The focus of the partnership is what Cognizant calls “agentic AI.” In practical terms, this refers to AI systems that can plan, act and complete tasks with limited human input. Instead of generating isolated outputs, these systems are designed to fit into business workflows and carry out structured tasks.
To make that workable at scale, Cognizant is building delivery infrastructure around the technology. The company is setting up a dedicated Gemini Enterprise Center of Excellence and formalizing an Agent Development Lifecycle. This framework covers the full process, from early design and blueprinting to validation and production rollout. The aim is to give enterprises a clearer path from AI concept to deployed system.
Cognizant also plans to introduce a bundled productivity offering that combines Gemini Enterprise with Google Workspace. The targeted use cases are operational rather than experimental. These include collaborative content creation, supplier communications and other workflow-heavy processes that can be standardized and automated.
Beyond productivity tools, Cognizant is integrating Gemini into its broader service platforms. Through Cognizant Ignition, enabled by Gemini, the company supports early-stage discovery and prototyping while helping clients strengthen their data foundations. Its Agent Foundry platform provides pre-configured and no-code capabilities for specific use cases such as AI-powered contact centers and intelligent order management. These tools are designed to reduce the amount of custom development required for each deployment.
Scaling is another element of the strategy. Cognizant, a multi-year Google Cloud Data Partner of the Year award winner, says it will rely on a global network of Gemini-trained specialists to deliver these systems. The company is also expanding work tied to Google Distributed Cloud and showcasing capabilities through its Google Experience Zones and Gen AI Studios.
For Google Cloud, the partnership reinforces its enterprise AI ecosystem. Cloud providers can offer models and infrastructure, but enterprise adoption often depends on service partners that can integrate tools into existing systems and manage ongoing operations. By aligning closely with Cognizant, Google strengthens its ability to move Gemini from platform capability to production deployment.
The announcement does not introduce a new AI model. Instead, it reflects a shift in emphasis. The core question is no longer whether AI tools exist, but how they are implemented, governed and scaled across large organizations. Cognizant’s expanded role suggests that execution frameworks, internal deployment and structured delivery models are becoming central to how enterprises approach AI.
In that sense, the partnership is less about new technology and more about operational maturity. It highlights how AI is moving from isolated pilots to managed systems embedded in business processes — a transition that will likely define the next phase of enterprise adoption.
A closer look at the tech, AI, and open ecosystem behind Tien Kung 3.0’s real-world push
Humanoid robotics has advanced quickly in recent years. Machines can now walk, balance, and interact with their surroundings in ways that once seemed out of reach. Yet most deployments remain limited. Many robots perform well in controlled settings but struggle in real-world environments. Integration is often complex, hardware interfaces are closed, software tools are fragmented, and scaling across industries remains difficult.
Against this backdrop, X-Humanoid has introduced its latest general-purpose platform, Embodied Tien Kung 3.0. The company positions it not simply as another humanoid robot, but as a system designed to address the practical barriers that have slowed adoption, with a focus on openness and usability.
At the hardware level, Embodied Tien Kung 3.0 is built for mobility, strength, and stability. It is equipped with high-torque integrated joints that provide strong limb force for high-load applications. The company says it is the first full-size humanoid robot to achieve whole-body, high-dynamic motion control integrated with tactile interaction. In practice, this means the robot is designed to maintain balance and execute dynamic movements even in uneven or cluttered environments. It can clear one-meter obstacles, perform consecutive high-dynamic maneuvers, and carry out actions such as kneeling, bending, and turning with coordinated whole-body control.
Precision is also a focus. Through multi-degree-of-freedom limb coordination and calibrated joint linkage, the system is designed to achieve millimeter-level operational accuracy. This level of control is intended to support industrial-grade tasks that require consistent performance and minimal error across changing conditions.
But hardware is only part of the equation. The company pairs the robot with its proprietary Wise KaiWu general-purpose embodied AI platform. This system supports perception, reasoning, and real-time control through what the company describes as a coordinated “brain–cerebellum” architecture. It establishes a continuous perception–decision–execution loop, allowing the robot to operate with greater autonomy and reduced reliance on remote control.
For higher-level cognition, Wise KaiWu incorporates components such as a world model and vision-language models (VLM) to interpret visual scenes, understand language instructions, and break complex objectives into structured steps. For real-time execution, a vision-language-action (VLA) model and full autonomous navigation system manage obstacle avoidance and precise motion under variable conditions. The platform also supports multi-agent collaboration, enabling cross-platform compatibility, asynchronous task coordination, and centralized scheduling across multiple robots.
A central part of the platform is openness. The company states that the system is designed to address compatibility and adaptation challenges across both development and deployment layers. On the hardware side, Embodied Tien Kung 3.0 includes multiple expansion interfaces that support different end-effectors and tools, allowing faster adaptation to industrial manufacturing, specialized operations, and commercial service scenarios. On the software side, the Wise KaiWu ecosystem provides documentation, toolchains, and a low-code development environment. It supports widely adopted communication standards, including ROS2, MQTT, and TCP/IP, enabling partners to customize applications without rebuilding core systems.
The company also highlights its open-source approach. X-Humanoid has open-sourced key components from the Embodied Tien Kung and Wise KaiWu platforms, including the robot body architecture, motion control framework, world model, embodied VLM and cross-ontology VLA models, training toolchains, the RoboMIND dataset, and the ArtVIP simulation asset library. By opening access to these elements, the company aims to reduce development costs, lower technical barriers, and encourage broader participation from researchers, universities, and enterprises.
Embodied Tien Kung 3.0 enters a market where technical progress is visible but large-scale adoption remains uneven. The gap is not only about movement or strength. It is about integration, interoperability, and the ability to operate reliably and autonomously in everyday industrial and commercial settings. If platforms can reduce fragmentation and simplify deployment, humanoid robots may move beyond demonstrations and into sustained commercial use.
In that sense, the significance of Embodied Tien Kung 3.0 lies less in isolated technical claims and more in how its high-dynamic hardware, embodied AI system, open interfaces, and collaborative architecture are structured to work together. Whether that integrated approach can close the deployment gap will shape how quickly humanoid robotics becomes part of real-world operations.
A new safety layer aims to help robots sense people in real time without slowing production
Algorized has raised US$13 million in a Series A round to advance its AI-powered safety and sensing technology for factories and warehouses. The California- and Switzerland-based robotics startup says the funding will help expand a system designed to transform how robots interact with people. The round was led by Run Ventures, with participation from the Amazon Industrial Innovation Fund and Acrobator Ventures, alongside continued backing from existing investors.
At its core, Algorized is building what it calls an intelligence layer for “physical AI” — industrial robots and autonomous machines that function in real-world settings such as factories and warehouses. While generative AI has transformed software and digital workflows, bringing AI into physical environments presents a different challenge. In these settings, machines must not only complete tasks efficiently but also move safely around human workers.
This is where a clear gap exists. Today, most industrial robots rely on camera-based monitoring systems or predefined safety zones. For instance, when a worker steps into a marked area near a robotic arm, the system is programmed to slow down or stop the machine completely. This approach reduces the risk of accidents. However, it also means production lines can pause frequently, even when there is no immediate danger. In high-speed manufacturing environments, those repeated slowdowns can add up to significant productivity losses.
Algorized’s technology is designed to reduce that trade-off between safety and efficiency. Instead of relying solely on cameras, the company uses wireless signals — including Ultra-Wideband (UWB), mmWave, and Wi-Fi — to detect movement and human presence. By analysing small changes in these radio signals, the system can detect motion and breathing patterns in a space. This helps machines determine where people are and how they are moving, even in conditions where cameras may struggle, such as poor lighting, dust or visual obstruction.
Importantly, this data is processed locally at the facility itself — not sent to a remote cloud server for analysis. In practical terms, this means decisions are made on-site, within milliseconds. Reducing this delay, or latency, allows robots to adjust their movements immediately instead of defaulting to a full stop. The aim is to create machines that can respond smoothly and continuously, rather than reacting in a binary stop-or-go manner.
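Algorized has not disclosed its control logic, but the shift it describes, from a binary stop-or-go rule to a graded response, can be sketched as a time-to-contact speed scaler. The thresholds below are hypothetical, chosen only to make the pattern concrete:

```python
def speed_factor(distance_m, approach_speed_mps,
                 stop_dist=0.5, ramp_seconds=2.0):
    """Map a sensed human's position and motion to a robot speed multiplier.

    Instead of stopping whenever anyone enters a marked zone, the robot
    scales its speed continuously with estimated time to contact, and only
    fully stops at very close range. Illustrative sketch only.
    """
    if distance_m <= stop_dist:
        return 0.0                      # too close: full stop
    if approach_speed_mps <= 0:
        return 1.0                      # person moving away: full speed
    time_to_contact = (distance_m - stop_dist) / approach_speed_mps
    # Full speed when contact is more than ramp_seconds away,
    # linear slowdown inside that window
    return max(0.0, min(1.0, time_to_contact / ramp_seconds))
```

Because a function like this runs on-site in a tight loop, the millisecond-scale latency mentioned above is what makes a smooth ramp practical; a cloud round trip would force coarser, more conservative behavior.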
With the new funding, Algorized plans to scale commercial deployments of its platform, known as the Predictive Safety Engine. The company will also invest in refining its intent-recognition models, which are designed to anticipate how humans are likely to move within a workspace. In parallel, it intends to expand its engineering and support teams across Europe and the United States. These efforts build on earlier public demonstrations and ongoing collaborations with manufacturing partners, particularly in the automotive and industrial sectors.
For investors, the appeal goes beyond safety compliance. As factories become more automated, even small improvements in uptime and workflow continuity can translate into meaningful financial gains. Because Algorized’s system works with existing wireless infrastructure, manufacturers may be able to upgrade machine awareness without overhauling their entire hardware setup.
More broadly, the company is addressing a structural limitation in industrial automation. Robotics has advanced rapidly in precision and power, yet human-robot collaboration is still governed by rigid safety systems that prioritise stopping over adapting. By combining wireless sensing with edge-based AI models, Algorized is attempting to give machines a more continuous awareness of their surroundings from the start.
From plush figures to digital pets, a new class of AI toys is emerging — built not around screens or sensors, but around memory, language and emotional awareness
Spielwarenmesse in Nuremberg is the global meeting point for the toy industry, where brands and designers preview what will shape how children play and learn next. At this year’s fair, one message stood out clearly: toys are no longer built just to entertain, but to listen, respond and grow with children. Tuya Smart, a global AI cloud platform company, used the event to show how AI-powered toys are turning familiar formats into interactive companions that can talk, react emotionally and adapt over time.
The company’s central argument was simple but far-reaching. The next generation of artificial intelligence toys will not be defined by motors, sensors or screens alone, but by how well they understand human behavior. Instead of being single-function objects, smart toys for children are becoming systems that combine language models, emotion recognition and memory to support ongoing interaction.
One of the most talked-about examples was Tuya Smart’s Nebula Plush AI Toy. At first glance, it looks like a soft, expressive plush figure. Inside, it uses emotional recognition to change its LED facial expressions in real time. If a child sounds sad or excited, the toy’s eyes respond visually. It supports natural conversation, reacts to hugs and touch and combines storytelling, news-style updates and interactive games. Over time, it builds memory, allowing it to behave less like a gadget and more like an interactive AI toy that recalls past interactions.
Another example was Walulu, also developed using Tuya’s AI toy platform. Walulu is an AI pet built around personalization. It can detect up to 19 emotional states and speak more than 60 languages. It connects to major large language models such as ChatGPT, Gemini, DeepSeek, Qwen and Doubao. Through simple app-based controls, users choose traits like cheerful, quiet, curious or thoughtful. Those choices shape how Walulu talks and reacts. Instead of repeating scripts, it adjusts its tone and behavior over time. The result is not a novelty item, but an emotionally responsive AI toy that feels consistent in daily use.
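Tuya has not published how trait selections reach the underlying model, but a common pattern is to fold them into the system prompt sent to whichever LLM backs the toy. A minimal sketch of that pattern, with hypothetical wording:

```python
def build_persona_prompt(name, traits, language="English"):
    """Compose a system prompt that steers an LLM-backed toy's tone from
    app-selected personality traits. A hypothetical illustration of the
    pattern, not Tuya's actual prompt.
    """
    trait_text = ", ".join(traits)
    return (
        f"You are {name}, a friendly companion toy for children. "
        f"Your personality is {trait_text}. "
        f"Reply briefly, warmly, and only in {language}. "
        "Never use frightening or unsafe content."
    )
```

Swapping the traits, or the model the prompt is sent to, changes the toy's behavior without touching the hardware, which is why the same plush body can feel cheerful, quiet or curious depending on what the user picks in the app.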
Tuya also showed how educational AI toys can extend into learning and exploration. Its AI Learning Camera blends computer vision with interactive content. When it recognizes an object, it links it to cultural and learning material. If a child points it at a foreign word, it offers real-time pronunciation and translation. It can also turn drawings into digital artwork, encouraging active creativity rather than passive screen time. In this sense, AI toys for kids are becoming tools for learning as much as play.
These products point to a larger strategy. Tuya is not just making toys — it is building the AI toy development platform behind them. Through its AI Toy Solution, developers can design a toy’s personality, memory logic and behavior without training models from scratch. The system integrates with leading AI models and supports multi-turn conversation and emotional feedback, turning standard hardware into responsive AI companions.
The platform supports multiple development paths. Brands can use ready-to-market OEM solutions, add AI to existing products or build custom toys around their own characters. Plush toys, robots, educational tools and wearables can all become AI-powered toys without changing their physical design.
Because these products are made for children and families, safety is built in. Tuya’s system includes parental controls, conversation history review and content management. It supports standards such as GDPR and CCPA with encryption and data localization.
From a business standpoint, Tuya’s pitch is speed and scale. The company says its AI toy infrastructure can cut development time by more than half and reduce R&D costs by up to 50 percent. Its AIoT network spans over 200 countries and supports more than 60 languages, making global deployment of AI toys easier.
What emerged at Spielwarenmesse 2026 was not just a lineup of smart gadgets, but a clear shift in the category. AI toys are evolving into emotionally aware systems that talk, listen, remember and adapt. Their value lies not in sounding clever, but in fitting naturally into everyday life.
The fair did not present AI toys as a distant future. It showed them as products already entering the mainstream. The real question now is not whether toys will use AI, but how carefully that intelligence is designed for children.
With Phia’s AI, the new luxury is knowing what’s worth buying
AI has transformed how we shop—predicting trends, powering virtual try-ons and streamlining fashion logistics. Yet some of the biggest pain points remain: endless scrolling, too many tabs and never knowing if you’ve overpaid. That’s the gap Phia aims to close.
Co-founded by Phoebe Gates, daughter of Bill Gates, and climate activist Sophia Kianni, Phia was born in a Stanford dorm room and launched in April 2025. The app, available on mobile and as a browser extension, compares prices across over 40,000 retailers and thrift platforms to show what an item really costs. Its hallmark feature, “Should I Buy This?”, instantly flags whether something is overpriced, fair or a genuine deal.
The mission is simple: make shopping smarter, fairer and more sustainable. In just five months, Phia has attracted more than 500,000 users, indexed billions of products and built over 5,000 brand partnerships. It also secured a US$8 million seed round led by Kleiner Perkins, joined by Hailey Bieber, Kris Jenner, Sara Blakely and Sheryl Sandberg—investors who bridge tech, retail and culture. “Phia is redefining how people make purchase decisions,” said Annie Case, partner at Kleiner Perkins.
Phia’s AI engine scans real-time data from more than 250 million products across its network, including Vestiaire Collective, StockX, eBay and Poshmark. Beyond comparing prices, the app helps users discover cheaper or more sustainable options by displaying pre-owned items next to new ones—helping users see the full spectrum of choices before they buy. It also evaluates how different brands perform over time, analysing how well their products hold resale value. This insight helps shoppers judge whether a purchase is likely to last in value or if opting for a second-hand version makes more sense. The result is a platform that naturally encourages circular shopping—keeping items in use longer through resale, repair or recycling—and resonates strongly with Gen Z and millennial values of sustainability and mindful spending.
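Phia's scoring model is proprietary, but the core of a "Should I Buy This?" verdict can be illustrated as a comparison against the market median across retailers. The 10 percent fairness band below is an assumption for the sketch, not Phia's actual threshold:

```python
from statistics import median

def should_i_buy(listed_price, market_prices, fair_band=0.10):
    """Flag a listing as 'deal', 'fair' or 'overpriced' relative to the
    median price seen across other retailers. Illustrative sketch only.
    """
    if not market_prices:
        return "fair"   # no comparison data: no verdict either way
    mid = median(market_prices)
    if listed_price < mid * (1 - fair_band):
        return "deal"
    if listed_price > mid * (1 + fair_band):
        return "overpriced"
    return "fair"
```

In practice the comparison set would also include pre-owned listings from resale platforms, which is how showing second-hand options next to new ones falls naturally out of the same price index.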
By encouraging transparency and smarter choices, Phia signals a broader shift in consumer technology: one where AI doesn’t just automate decisions but empowers users to understand them. Instead of merely digitizing the act of shopping, Phia embodies data-driven accountability—using intelligent search to help consumers make informed and ethical choices in markets long clouded by complexity. Retail analysts believe this level of visibility could push brands to maintain accurate and competitive pricing. Skeptics, however, argue that Phia must evolve beyond comparison to create emotional connection and loyalty. Still, one fact stands out: algorithms are no longer just recommending what we buy—they’re rewriting how we decide.
With new funding powering GPU expansion and advanced personalization tools, Phia’s next step is to build a true AI shopping agent—one that helps people buy better, live smarter and rethink what it means to shop with purpose.
Where Hollywood magic meets AI intelligence — Hong Kong becomes the new stage for virtual humans
In an era where pixels and intelligence converge, few companies bridge art and science as seamlessly as Digital Domain. Founded three decades ago by visionary filmmaker James Cameron, the company built its name through cinematic wizardry—bringing to life the impossible worlds of Titanic, The Curious Case of Benjamin Button and the Marvel universe. But today, its focus has evolved far beyond Hollywood: Digital Domain is reimagining the future of AI-driven virtual humans—and it’s doing so from right here in Hong Kong.
“AI and visual technology are merging faster than anyone imagined,” says William Wong, Chairman and CEO of Digital Domain. “For us, the question is not whether AI will reshape entertainment—it already has. The question is how we can extend that power into everyday life.”
Though globally recognized for its work on blockbuster films and AAA games, Digital Domain’s story is also deeply connected to Asia. A Hong Kong–listed company, it operates a network of production and research centers across North America, China and India. In 2024, it announced a major milestone—setting up a new R&D hub at Hong Kong Science Park focused on advancing artificial intelligence and virtual human technologies. “Our roots are in visual storytelling, but AI is unlocking a new frontier,” Wong says. “Hong Kong has been very proactive in promoting innovation and research, and with the right partnerships, we see real potential to make this a global R&D base.”
Building on that commitment, the company plans to invest about HK$200 million over five years, assembling a team of more than 40 professionals specializing in computer vision, machine learning and digital production. The team is still growing. “Talent is everything,” says Wong. “We want to grow local expertise while bringing in global experience to accelerate the learning curve.”
Digital Domain’s latest chapter revolves around one of AI’s most fascinating frontiers: the creation of virtual humans.
These are hyperrealistic, AI-powered digital figures capable of speaking, moving and responding in real time. Using the advanced motion-capture and rendering techniques that transformed Hollywood visual effects, the company now builds digital personalities that appear on screens and in physical environments—serving in media, education, retail and even public services.
One of its most visible projects is “Aida”, the AI-powered presenter who delivers nightly weather reports for Radio Television Hong Kong (RTHK). Another initiative, now in testing, will soon feature AI-powered concierges greeting travelers at airports, able to communicate in multiple languages and provide real-time personalized services. Similar collaborations are under way in healthcare, customer service and education.
“What’s exciting,” says Wong, “is that our technologies amplify human capability, helping to deliver better experiences, greater efficiency and higher capacity. AI-powered virtual humans can interact naturally, emotionally and in any language. They can help scale creativity and service, not replace it.”
To make that possible, Digital Domain has designed its system for compatibility and flexibility. It can connect to major AI models—from OpenAI and Google to Baidu—and operate across cloud platforms like AWS, Alibaba Cloud and Microsoft Azure. “It’s about openness,” says Wong. “Our clients can choose the AI brain that best fits their business.”
Establishing a permanent R&D base in Hong Kong marks a turning point for the company—and, in a broader sense, for the city’s technology ecosystem. With the support of the Office for Attracting Strategic Enterprises (OASES) in Hong Kong, Digital Domain hopes to make the city a creative hub where AI meets visual arts. “Hong Kong is the perfect meeting point,” Wong says. “It combines international exposure with a growing innovation ecosystem. We want to make it a hub for creative AI.”
As part of this effort, the company is also collaborating with universities such as the University of Hong Kong, City University of Hong Kong and Hong Kong Baptist University to co-develop new AI solutions and nurture the next generation of engineers. “The goal,” Wong notes, “is not just R&D for the sake of research—but R&D that translates into real-world impact.”
The collaboration with OASES underscores how both the company and the city share a vision for innovation-led growth. As Peter Yan King-shun, Director-General of OASES, notes, the initiative reflects Hong Kong’s growing strength as a global innovation and technology hub. “OASES was set up to attract high-potential enterprises from around the world across key sectors such as AI, data science, and cultural and creative technology,” he says. “Digital Domain’s new R&D center is a strong example of how Hong Kong can combine world-class talent, technology and creativity to drive innovation and global competitiveness.”
Digital Domain’s story mirrors the evolution of Hong Kong’s own innovation landscape—where creativity, technology and global ambition converge. From the big screen to the next generation of intelligent avatars, the company continues to prove that imagination is not bound by borders, but powered by the courage to reinvent what’s possible.
A closer look at how reading, conversation, and AI are being combined
In the past, “educational toys” usually meant flashcards, prerecorded stories or apps that asked children to tap a screen. ChooChoo takes a different approach. It is designed not to talk at children, but to talk with them.
ChooChoo is an AI-powered interactive reading companion built for children aged three to six. Instead of playing stories passively, it engages kids in conversation while reading. It asks questions, reacts to answers, introduces new words in context and adjusts the story flow based on how the child responds. The goal is not entertainment alone, but language development through dialogue.
That idea is rooted in research, not novelty. ChooChoo is inspired by dialogic reading methods from Yale’s early childhood language development work, which show that children learn language faster when stories become two-way conversations rather than one-way narration. Used consistently, this approach has been shown to improve vocabulary, comprehension and confidence within weeks.
The project was created by Dr. Diana Zhu, who holds a PhD from Yale and focused her work on how children acquire language. Her aim with ChooChoo was to turn academic insight into something practical and warm enough to live in a child’s room. The result is a device that listens, responds and adapts instead of simply playing content on command.
What makes this possible is not just AI, but where that AI runs.
Unlike many smart toys that rely heavily on the cloud, ChooChoo is built on RiseLink’s edge AI platform. That means much of the intelligence happens directly on the device itself rather than being sent back and forth to remote servers. This design choice has three major implications.
First, it reduces delay. Conversations feel natural because the toy can respond almost instantly. Second, it lowers power consumption, allowing the device to stay “always on” without draining the battery quickly. Third, it improves privacy. Sensitive interactions are processed locally instead of being continuously streamed online.
RiseLink’s hardware, including its ultra-low-power AI system-on-chip designs, is already used at large scale in consumer electronics. The company ships hundreds of millions of connected chips every year and works with global brands like LG, Samsung, Midea and Hisense. In ChooChoo’s case, that same industrial-grade reliability is being applied to a child’s learning environment.
The result is a toy that behaves less like a gadget and more like a conversational partner. It engages children in back-and-forth discussion during stories, introduces new vocabulary in natural context, pays attention to comprehension and emotional language, and adjusts its pace and tone based on each child’s interests and progress. Parents can also view progress through an optional app that shows what words their child has learned and how the system is adjusting over time.
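ChooChoo's models are not public, but the adapt-to-response loop at the heart of dialogic reading can be sketched as a simple turn selector. The prompts and word lists below are hypothetical, meant only to show the shape of the loop:

```python
def next_prompt(child_reply, target_words, asked):
    """Pick the next dialogic-reading prompt from the child's reply.

    If the child used a target word, reinforce it and advance the story;
    otherwise introduce a not-yet-asked word in context. A simplified
    sketch, not ChooChoo's actual model.
    """
    reply = child_reply.lower()
    used = [w for w in target_words if w in reply]
    if used:
        # Reinforce the word the child produced, then move on
        return f"Great, you said '{used[0]}'! What do you think happens next?"
    remaining = [w for w in target_words if w not in asked]
    if remaining:
        # Introduce a new word in story context and invite the child to try it
        return f"The fox was very {remaining[0]}. Can you say '{remaining[0]}'?"
    return "What was your favorite part of the story?"
```

In a shipped device this selector would be backed by speech recognition and a language model rather than substring matching, but the two-way ask-respond-adapt structure is the same one the Yale dialogic reading research describes.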
What matters here is not that ChooChoo is “smart,” but that it reflects a shift in how technology enters early education. Instead of replacing teachers or parents, tools like this are designed to support human interaction by modeling it. The emphasis is on listening, responding and encouraging curiosity rather than testing or drilling.
That same philosophy is starting to shape the future of companion robots more broadly. As edge AI improves and hardware becomes smaller and more energy efficient, we are likely to see more devices that live alongside people instead of in front of them. Not just toys, but helpers, tutors and assistants that operate quietly in the background, responding when needed and staying out of the way when not.
In that sense, ChooChoo is less about novelty and more about direction. It shows what happens when AI is designed not for spectacle, but for presence. Not for control, but for conversation.
If companion robots become part of daily life in the coming years, their success may depend less on how powerful they are and more on how well they understand when to speak, when to listen and how to grow with the people who use them.
How ECOPEACE uses autonomous robots and data to monitor and maintain urban water bodies.
South Korea–based water technology company ECOPEACE is working on a practical challenge many cities face today: keeping urban water bodies clean as pollution and algae growth become more frequent. Rather than relying on periodic cleanup drives, the company focuses on systems that can monitor and manage water conditions on an ongoing basis.
At the core of ECOPEACE’s work are autonomous water-cleanup robots known as ECOBOT. These machines operate directly on lakes, reservoirs and rivers, removing algae and surface waste while also collecting information about water quality. The idea is to combine cleaning with constant observation so changes in water conditions do not go unnoticed.
Alongside the robots, ECOPEACE uses a filtration and treatment system designed to process polluted water continuously. The system traps contaminants with fine metal filters and treats the water through electrical processes. It also cleans itself automatically, which allows it to run for long periods without frequent manual maintenance.
The role of AI in this setup is largely about decision-making rather than direct control. Sensors placed across the water body collect data such as pollution levels and water quality indicators. The software then analyzes this data to spot early signs of issues such as algae growth. Based on these patterns, the system adjusts how the robots and filtration units operate, for example by changing treatment intensity or water flow. In simple terms, the technology helps the system respond sooner instead of waiting for visible problems to appear.
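The threshold-driven adjustment described here can be illustrated with a minimal sketch. The sensor name, units, threshold, and treatment levels below are illustrative assumptions for a generic water-quality control loop, not ECOPEACE's actual interfaces:

```python
# Illustrative sketch of a sensor-driven treatment control loop.
# All names and thresholds are hypothetical, not ECOPEACE's real system.

def recommend_intensity(chlorophyll_readings, algae_threshold=20.0):
    """Map recent chlorophyll-a readings (µg/L) to a treatment level.

    A reading that is both elevated and rising is treated as an early
    bloom signature, triggering stronger treatment before the problem
    becomes visible on the surface.
    """
    latest = chlorophyll_readings[-1]
    rising = (len(chlorophyll_readings) >= 2
              and chlorophyll_readings[-1] > chlorophyll_readings[-2])
    if latest > algae_threshold and rising:
        return "high"    # early bloom signature: ramp up filtration
    if latest > algae_threshold:
        return "medium"  # elevated but stable: moderate treatment
    return "low"         # normal conditions: routine operation

# Example: chlorophyll-a climbing over three sampling intervals.
print(recommend_intensity([12.0, 18.5, 24.0]))  # high
print(recommend_intensity([12.0, 14.0, 13.5]))  # low
```

A real deployment would fuse several indicators (turbidity, dissolved oxygen, temperature) and likely use a learned model rather than fixed thresholds, but the shape of the loop, sense, detect a trend, adjust intensity, is the same.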
ECOPEACE has already deployed these systems across several reservoirs, rivers and urban waterways in South Korea. Those projects have helped refine how the robots, sensors and software work together in real environments rather than controlled test sites.
Building on that experience, the company has begun expanding beyond Korea. It is currently running pilot and proof-of-concept projects in Singapore and the United Arab Emirates. These deployments are testing how the technology performs in dense urban settings where waterways are closely linked to public health, infrastructure and daily city life.
Both regions have invested heavily in smart city initiatives and water management, making them suitable test beds for automated monitoring and cleanup systems. The pilots focus on algae control, surface cleaning and real-time tracking of water quality rather than large-scale rollout.
As cities continue to grow and climate-related pressures on water systems increase, managing waterways is becoming less about occasional intervention and more about continuous oversight. ECOPEACE’s approach reflects that shift by using automation and data to address problems early and reduce the need for reactive cleanup later.
December 30, 2025
How Korea is trying to take control of its AI future.
SK Telecom, South Korea’s largest mobile operator, has unveiled A.X K1, a hyperscale artificial intelligence model with 519 billion parameters. The model sits at the center of a government-backed effort to build advanced AI systems and domestic AI infrastructure within Korea. This comes at a time when companies in the United States and China largely dominate the development of the most powerful large language models.
Rather than framing A.X K1 as just another large language model, SK Telecom is positioning it as part of a broader push to build sovereign AI capacity from the ground up. The model is being developed as part of the Korean government’s Sovereign AI Foundation Model project, which aims to ensure that core AI systems are built, trained and operated within the country. In simple terms, the initiative focuses on reducing reliance on foreign AI platforms and cloud-based AI infrastructure, while giving Korea more control over how artificial intelligence is developed and deployed at scale.
One of the gaps this approach is trying to address is how AI knowledge flows across a national ecosystem. Today, the most powerful AI foundation models are often closed, expensive and concentrated within a small number of global technology companies. A.X K1 is designed to function as a “teacher model,” meaning it can transfer its capabilities to smaller, more specialized AI systems. This allows developers, enterprises and public institutions to build tailored AI tools without starting from scratch or depending entirely on overseas AI providers.
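The "teacher model" idea is typically implemented through knowledge distillation: a smaller student model is trained to match the teacher's temperature-softened output distribution rather than only its top answer. The sketch below shows that core step in plain Python; it is a generic illustration of the technique, not SK Telecom's training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T produces softer distributions,
    exposing the teacher's relative preferences among all options."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student outputs.

    Minimizing this trains the student to mimic the teacher's full
    output distribution, transferring capability to a smaller model.
    """
    p = softmax(teacher_logits, temperature)  # soft targets
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.5]
aligned = [3.8, 1.1, 0.4]   # student close to the teacher's preferences
diverged = [0.2, 3.0, 1.0]  # student far from the teacher
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, diverged))  # True
```

In practice the loss is computed over every token position during training and usually combined with a standard next-token objective, but the principle is the same: the large model's behavior, not its weights, is what gets handed down.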
That distinction matters because most real-world applications of artificial intelligence do not require massive models operating independently. They require focused, reliable AI systems designed for specific use cases such as customer service, enterprise search, manufacturing automation or mobility. By anchoring those systems to a large, domestically developed foundation model, SK Telecom and its partners are aiming to create a more resilient and self-sustaining AI ecosystem.
The effort also reflects a shift in how AI is being positioned for everyday use. SK Telecom plans to connect A.X K1 to services that already reach millions of users, including its AI assistant platform A., which operates across phone calls, messaging, web services and mobile applications. The broader goal is to make advanced AI feel less like a distant research asset and more like an embedded digital infrastructure that supports daily interactions.
This approach extends beyond consumer-facing services. Members of the SKT consortium are testing how the hyperscale AI model can support industrial and enterprise applications, including manufacturing systems, game development, robotics and autonomous technologies. The underlying logic is that national competitiveness in artificial intelligence now depends not only on model performance, but on whether those models can be deployed, adapted and validated in real-world environments.
There is also a hardware dimension to the project. Operating an AI model at the 500-billion-parameter scale places heavy demands on computing infrastructure, particularly memory performance and communication between processors. A.X K1 is being used to test and validate Korea’s semiconductor and AI chip capabilities under real workloads, linking large-scale AI software development directly to domestic semiconductor innovation.
The initiative brings together technology companies, universities and research institutions, including Krafton, KAIST and Seoul National University. Each contributes specialized expertise ranging from data validation and multimodal AI research to system scalability. More than 20 institutions have already expressed interest in testing and deploying the model, reinforcing the idea that A.X K1 is being treated as shared national AI infrastructure rather than a closed commercial product.
Looking ahead, SK Telecom plans to release A.X K1 as open-source AI software, alongside APIs and portions of the training data. If fully implemented, the move could lower barriers for developers, startups and researchers across Korea’s AI ecosystem, enabling them to build on top of a large-scale foundation model without incurring the cost and complexity of developing one independently.