Artificial Intelligence


AI actor Tilly Norwood releases a musical video arguing that artificial intelligence can expand creativity in film

As Hollywood prepares for this weekend’s Oscars, a different kind of performer is stepping into the spotlight — one that doesn’t physically exist.

Tilly Norwood, described as the world’s first AI actor, has released her debut musical comedy video, Take the Lead. The project arrives at a moment when artificial intelligence has become one of the most contentious topics in the film industry.

The message of the song is simple. AI should not be seen as a threat to actors. Instead, it can become another creative tool. The release also offers a first look at what Norwood’s creators call the “Tillyverse”. It is envisioned as a cloud-based entertainment world where AI characters can live, interact and perform.

Behind the character is actor and producer Eline van der Velden. She is the CEO of production company Particle6 and AI talent studio Xicoia. Van der Velden created Tilly as a way to experiment with how artificial intelligence could be used in storytelling.

The timing is not accidental. The entertainment industry has spent the past few years debating the role AI should play in filmmaking and acting. Questions about digital replicas, automated performances and creative ownership continue to divide artists and studios.

Norwood’s musical video enters that debate with a different tone. Instead of warning about AI replacing actors, the project suggests that the technology could expand what performers are able to do.

The video itself also serves as a technical experiment. The song Take the Lead was generated using the AI music platform Suno. The video was then produced using a combination of widely available AI tools and Particle6’s own creative process.

One of the newer techniques used in the project is performance capture. Van der Velden physically acted out Tilly’s movements and expressions so the digital character could mirror a human performance. But the production was far from automated. According to Particle6, a team of 18 people worked on the video. The group included a director, editor, production designer, costume designer, comedy writer and creative technologist. In other words, the project still relied heavily on human creativity.

“Tilly has always been a vehicle to test the creative capabilities and boundaries of AI,” van der Velden said. “It’s not about taking anyone’s job”. She added that even with powerful tools, good AI content still takes time, taste and creative direction.

The project also reflects how quickly production technology is evolving. Tools that once required large studios are now accessible to smaller creative teams experimenting with AI-driven storytelling.

For Particle6, the character of Tilly Norwood acts as a testing ground. Each project explores how AI performers might be developed, directed and integrated into entertainment. Whether audiences embrace digital actors remains an open question. Many in the industry are still wary of how AI could reshape creative work.

But projects like Take the Lead show another possibility. Instead of replacing performers, artificial intelligence could become part of the creative process itself. In that sense, Tilly Norwood may represent something more than a virtual performer. She is also an experiment in how humans and machines might collaborate in the future of entertainment.

A wearable ring, conversational AI and US$23M in funding. Sandbar wants to rethink how we interact with technology

Sandbar, a New York–based interface startup, has raised US$23 million in Series A funding to develop a wearable device that lets people interact with artificial intelligence via voice rather than screens.

Adjacent and Kindred Ventures, both venture firms focused on early-stage technology startups, led the round. The investment brings Sandbar’s total funding to US$36 million. Earlier backing included a US$10 million seed round led by True Ventures, a venture capital firm, as well as a US$3 million pre-seed round supported by Upfront Ventures, a venture firm, and Betaworks, a startup studio and investment firm.

Sandbar was founded by Mina Fahmi and Kirak Hong, who previously worked together at CTRL-labs, a neural interface startup acquired by Meta in 2019. Their earlier work explored how computers could respond more directly to human intent — an idea that continues to shape Sandbar’s approach to AI interfaces.

The new funding will help the company expand its team across machine learning, interaction design and software engineering as it prepares to launch its first product. That product, called Stream, combines a wearable ring with a conversational AI interface. The system allows users to speak to an AI assistant without unlocking a phone or opening an app.

The concept is simple. Instead of typing into a screen, users press a button on the ring and talk. The system can capture notes, organize ideas, retrieve information from the web or trigger actions through connected applications.

The ring includes a microphone, a touchpad and subtle haptic feedback. These elements allow the device to respond through gentle vibrations rather than visual alerts. According to the company, the ring only listens when the user presses the button — a design meant to address common concerns around always-on microphones.
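The press-to-listen design described above can be illustrated with a small sketch. Everything here is hypothetical (class and method names are not Sandbar's): the point is simply that audio is discarded unless the button is held, and that feedback is haptic rather than visual.

```python
from dataclasses import dataclass, field

@dataclass
class PushToTalkRing:
    """Toy model of a press-to-listen wearable: the microphone is
    inactive unless the button is held, and feedback is haptic."""
    listening: bool = False
    transcript: list = field(default_factory=list)
    haptic_events: list = field(default_factory=list)

    def press(self):
        self.listening = True
        self.haptic_events.append("short-buzz")   # acknowledge activation

    def release(self):
        self.listening = False
        self.haptic_events.append("double-buzz")  # confirm capture ended

    def hear(self, audio: str):
        # Audio arriving while the button is not held is discarded,
        # mirroring the "only listens on press" privacy design.
        if self.listening:
            self.transcript.append(audio)

ring = PushToTalkRing()
ring.hear("background chatter")   # ignored: button not pressed
ring.press()
ring.hear("add milk to my list")  # captured
ring.release()
ring.hear("more chatter")         # ignored again
print(ring.transcript)            # ['add milk to my list']
```

The design choice worth noting is that the gate sits at capture time, not at processing time: audio that arrives outside a press never enters the system at all.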

That design reflects a larger shift Sandbar believes is underway. As AI assistants become more capable, many startups are experimenting with new ways to interact with them. The focus is moving away from screens and keyboards toward interfaces that feel more natural and immediate.

Stream uses multiple AI models working together to process requests, search the web and structure information in real time. The company says users remain in control of their data and can choose whether to share information with other apps.

Sandbar is also developing a feature called Inner Voice, which responds using a voice customized to the user. The feature will debut during a closed beta planned for this spring, giving the company time to refine how the software behaves in everyday use.

The startup currently employs a team of 15 people. Many have worked on well-known consumer devices including the iPhone, Fitbit, Kindle and Vision Pro. Recent hires include Sam Bowen, formerly of Amazon and Fitbit, who joined as vice president of hardware, and Brooke Travis, previously at Equinox, Dior and Gap, who now leads marketing.

Sandbar plans to begin shipping Stream in summer 2026 after completing early testing. As artificial intelligence tools become more integrated into daily life, the company is betting that the next shift in computing will not come from another app — but from new ways for people to interact with AI itself.

Structured AI interviews and human judgment combine to address the global talent shortage

As hiring pressures mount across global markets, ManpowerGroup is turning to technology to strengthen how it connects people to work. The workforce solutions major has announced a global partnership with Hubert, a startup focused on AI-driven structured interviews. The aim is simple: make hiring faster and fairer, without removing the human touch.

ManpowerGroup has spent decades operating at the center of the global labor market. The company works with employers across industries to fill roles, manage workforce planning and build talent pipelines. With millions of placements each year, it has a clear view of how strained hiring has become. A large share of employers today report difficulty finding skilled talent. At the same time, candidates expect more transparency, quicker feedback and flexibility in how they engage with employers.

Hubert enters this picture as a specialist in structured digital interviewing. The startup has built tools that allow candidates to complete interviews online, at any time, while being assessed against consistent criteria. Instead of relying on informal screening calls or resume filters, its system focuses on standardized questions tied directly to job requirements. The idea is to bring more consistency to early-stage hiring.

The partnership brings these capabilities into ManpowerGroup’s global operations. AI-powered interviews will now support the first stage of screening, helping recruiters identify qualified candidates earlier in the process. This does not replace recruiters. Final decisions and contextual judgment remain with experienced hiring professionals. What changes is the speed and structure of the initial assessment.
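The consistency property at the heart of structured interviewing can be sketched in a few lines. The rubric items and weights below are invented for illustration; the idea is only that every candidate is scored against the same weighted criteria, with no question skipped.

```python
# Hypothetical rubric: each question maps to a job requirement and a weight.
RUBRIC = {
    "describe_sql_experience": 0.5,
    "explain_etl_pipeline": 0.3,
    "communication": 0.2,
}

def score_candidate(answers: dict) -> float:
    """Score answers (0-5 per question) against the shared rubric.
    Every candidate is evaluated on identical criteria, which is the
    consistency that structured interviewing is after."""
    missing = set(RUBRIC) - set(answers)
    if missing:
        raise ValueError(f"unanswered rubric items: {sorted(missing)}")
    return sum(RUBRIC[q] * answers[q] for q in RUBRIC)

score = score_candidate({"describe_sql_experience": 4,
                         "explain_etl_pipeline": 3,
                         "communication": 5})
# 0.5*4 + 0.3*3 + 0.2*5 = 3.9
print(round(score, 2))  # 3.9
```

A recruiter still interprets the number; the sketch only shows why two candidates with the same answers cannot receive different scores.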

For employers, this could mean earlier visibility into job-ready talent and less time spent on manual screening. For candidates, it offers more flexibility. A significant portion of interviews on Hubert’s platform are completed outside regular office hours, allowing applicants to engage when it suits them. That flexibility can make a difference in competitive labor markets where timing matters.

The collaboration is also positioned as a step toward reducing bias. By evaluating each candidate against the same transparent standards, the process becomes more consistent. While no system can remove bias entirely, structured assessments can reduce the variability that often comes with unstructured interviews.

At its core, the partnership addresses a gap many large organizations are facing. They need scale and speed, but they cannot afford to lose the human judgment that good hiring depends on. Manual processes are too slow. Fully automated systems can feel impersonal and risky. ManpowerGroup’s approach suggests a middle path, where technology handles repetition and structure and recruiters focus on potential and fit.

The move also reflects a broader shift in the workforce industry. AI is no longer being tested on the sidelines. It is being built into the foundation of hiring operations. For established players like ManpowerGroup, the challenge is not whether to adopt AI, but how to do so responsibly and at scale.

By working with Hubert, the company is signaling that the future of recruitment will likely blend structured digital tools with human expertise. In a market defined by talent shortages and rising expectations, that balance may prove critical.

AI meets AR: How Rokid Glasses bring multilingual, real-time intelligence to smart eyewear globally

Rokid, a Chinese company specializing in AI-powered smart eyewear and human–computer interaction, has rolled out a major software update for the international version of its Rokid Glasses. This update makes it the first smart glasses manufacturer to natively support Google’s Gemini, alongside three other leading large language models: OpenAI’s ChatGPT, Alibaba’s Qwen and DeepSeek.

The integration is powered by Rokid’s device-to-cloud architecture, which enables users to switch between AI models on the fly. In practice, this means a traveler can receive a real-time translation in Japanese using one AI model, then quickly switch to ChatGPT to answer a technical query—without noticeable delay. The system also supports multi-modal inputs like voice and gestures, making interactions more intuitive for everyday use.
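Switching between models on the fly amounts to routing each request through whichever backend is currently selected. The sketch below is a minimal illustration, not Rokid's implementation; the stub lambdas stand in for real vendor SDK calls.

```python
# Hypothetical registry: a real product would wrap vendor SDKs here.
MODELS = {
    "gemini":   lambda prompt: f"[gemini] {prompt}",
    "chatgpt":  lambda prompt: f"[chatgpt] {prompt}",
    "qwen":     lambda prompt: f"[qwen] {prompt}",
    "deepseek": lambda prompt: f"[deepseek] {prompt}",
}

class ModelRouter:
    """Routes each request to the currently selected backend, so the
    active model can change between consecutive queries."""
    def __init__(self, default="gemini"):
        self.active = default

    def switch(self, name):
        if name not in MODELS:
            raise KeyError(f"unknown model: {name}")
        self.active = name

    def ask(self, prompt):
        return MODELS[self.active](prompt)

router = ModelRouter()
print(router.ask("translate 'hello' to Japanese"))  # handled by gemini
router.switch("chatgpt")
print(router.ask("why is my build failing?"))       # handled by chatgpt
```

Because the routing decision is per-request, the translation query and the technical query in the article's example can land on different models without restarting anything.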

This is more than a routine software update. By combining AI models from both U.S. and Chinese developers, Rokid is making its smart glasses relevant to global users, with features that adapt to local languages and preferences while maintaining high performance.  

These technological advancements have directly fueled Rokid’s international growth. Between November 2024 and October 2025, Shangpu Group data shows Rokid Glasses ranked No. 1 in global sales for AI glasses with display functionality. Crowdfunding milestones further reflect this momentum: the product became the fastest smart glasses to raise over 100 million Japanese Yen on Japan’s MAKUAKE platform and broke Kickstarter records for smart eyewear.

Taken together, Rokid’s update highlights a shift in the smart glasses space: success increasingly comes from openness, flexibility and localized AI experiences rather than closed, single-platform ecosystems. By giving users choice, integrating global AI capabilities and bridging cultural and linguistic gaps, Rokid is positioning itself as a serious contender in the international AR and AI wearable market.

The focus is no longer just AI-generated worlds, but how those worlds become structured digital products

As AI tools improve, creating 3D content is becoming faster and easier. However, building that content into interactive experiences still requires time, structure and technical work. That difference between generation and execution is where HTC VIVERSE and World Labs are focusing their new collaboration.

HTC VIVERSE is a 3D content platform developed by HTC. It provides creators with tools to build, refine and publish interactive virtual environments. Meanwhile, World Labs is an AI startup founded by researcher Fei-Fei Li and a team of machine learning specialists. The company recently introduced Marble, a tool that generates full 3D environments from simple text, image or video prompts.

While Marble can quickly create a digital world, that world on its own is not yet a finished experience. It still needs structure, navigation and interaction. This is where VIVERSE fits in. By combining Marble’s world generation with VIVERSE’s building tools, creators can move from an AI-generated scene to a usable, interactive product.

In practice, the workflow works in two steps. First, Marble produces the base 3D environment. Then, creators bring that environment into VIVERSE, where they add game mechanics, scenes and interactive elements. In this model, AI handles the early visual creation, while the human creator defines how users explore and interact with the world.

To demonstrate this process, the companies developed three example projects. Whiskerhill turns a Marble-generated world into a simple quest-based experience. Whiskerport connects multiple AI-generated scenes into a multi-level environment that users navigate through portals. Clockwork Conspiracy, built by VIVERSE, uses Marble’s generation system to create a more structured, multi-scene game. These projects are not just demos. They serve as proof that AI-generated worlds can evolve beyond static visuals and become interactive environments.

This matters because generative AI is often judged by how quickly it produces content. However, speed alone does not create usable products. Digital experiences still require sequencing, design decisions and user interaction. As a result, the real challenge is not generation, but integration — connecting AI output to tools that make it functional.

Seen in this context, the collaboration is less about a single product and more about workflow. VIVERSE provides a system that allows AI-generated environments to be edited and structured. World Labs provides the engine that creates those environments in the first place. Together, they are testing whether AI can fit directly into a full production pipeline rather than remain a standalone tool.

Ultimately, the collaboration reflects a broader change in creative technology. AI is no longer only producing isolated assets. It is beginning to plug into the larger process of building complete experiences. The key question is no longer how quickly a world can be generated, but how easily that world can be turned into something people can actually use and explore.

Quantara AI launches a continuous platform designed to estimate the financial impact of cyber risk as companies move beyond periodic assessments

Cyber risk is increasingly treated as a financial issue. Boards want to know how much a cyber incident could cost the company, how it could affect earnings, and whether current security spending is justified.

Yet many organizations still measure cyber risk through periodic reviews. These assessments are often conducted once or twice a year, supported by consultants and spreadsheet models. By the time the report reaches senior leadership, the company’s systems may have changed and new threats may have emerged. The way risk is measured does not always match how quickly it evolves.

This gap is where Quantara AI is positioning its new platform. Quantara AI, a Boise-based cybersecurity startup, has introduced what it describes as the industry’s first persistent AI-powered cyber risk solution. The system is designed to run continuously rather than rely on occasional assessments.

The company’s core argument is straightforward: not every security weakness carries the same financial consequence. Instead of ranking issues only by technical severity, the platform analyzes active threats, identifies which company systems are exposed, and estimates how much money a successful attack could cost. It uses statistical models, including Value at Risk (VaR), to calculate potential losses. It also estimates how specific security improvements could reduce that projected loss.
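A Value at Risk figure of the kind described can be estimated with a simple Monte Carlo model: simulate many possible years of incidents and read off the loss that is not exceeded at a chosen confidence level. The frequencies, severities and distributions below are illustrative assumptions, not Quantara's models.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's method for drawing a Poisson-distributed incident count."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def annual_loss(freq, mean_severity, rng):
    """One simulated year: random incident count, exponential severities."""
    n = poisson(freq, rng)
    return sum(rng.expovariate(1.0 / mean_severity) for _ in range(n))

def cyber_var(freq, mean_severity, confidence=0.95, trials=20000, seed=7):
    """Monte Carlo VaR: the annual loss that simulated losses stay
    below at the given confidence level."""
    rng = random.Random(seed)
    losses = sorted(annual_loss(freq, mean_severity, rng) for _ in range(trials))
    return losses[int(confidence * trials)]

# Illustrative numbers only: ~2 incidents/year, US$250k average severity.
baseline = cyber_var(freq=2.0, mean_severity=250_000)
# A control that halves incident frequency lowers the projected loss:
mitigated = cyber_var(freq=1.0, mean_severity=250_000)
print(f"95% VaR baseline:  ${baseline:,.0f}")
print(f"95% VaR mitigated: ${mitigated:,.0f}")
```

The baseline-versus-mitigated comparison is the second half of the platform's pitch: the same machinery that prices exposure can price the benefit of a specific security improvement.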

The timing aligns with a broader market shift. International Data Corporation (IDC) projects that by 2028, 40% of enterprises will adopt AI-based cyber risk quantification platforms. These tools convert security data into financial estimates that can guide budgeting and investment decisions. The forecast reflects growing pressure on security leaders to present risk in terms that boards and regulators understand.

Traditional compliance and risk management systems often focus on meeting regulatory standards. Vulnerability management programs typically score weaknesses based on technical characteristics. Consultant-led risk studies provide detailed analysis, but they are usually performed at set intervals. In fast-changing threat environments, that model can leave decision-makers working with outdated information.

Quantara’s platform attempts to replace that periodic process with continuous measurement. It brings together threat data, internal system information and financial modeling in one system. The goal is to show, at any given time, which specific weaknesses could lead to the largest financial losses.

Cyber risk quantification as a concept is not new. What is changing is the expectation that these calculations be updated regularly and tied directly to financial decision-making. As cyber incidents carry clearer monetary consequences, companies are looking for ways to measure exposure with greater precision.

The broader question is whether enterprises will shift fully toward continuous, AI-driven risk analysis or continue relying on periodic external assessments. What is clear is that cybersecurity discussions are moving closer to financial reporting — and tools that estimate potential loss in dollar terms are becoming central to that shift.

The IT services firm strengthens its collaboration with Google Cloud to help enterprises move AI from pilot projects to production systems

Enterprise interest in AI has moved quickly from experimentation to execution. Many organizations have tested generative tools, but turning those tools into systems that can run inside daily operations remains a separate challenge. Cognizant, an IT services firm, is expanding its partnership with Google Cloud to help enterprises move from AI pilots to fully deployed, production-ready systems.

Cognizant and Google Cloud are deepening their collaboration around Google’s Gemini Enterprise and Google Workspace. Cognizant is deploying these tools across its own workforce first, using them to support internal productivity and collaboration. The idea is simple: test and refine the systems internally, then package similar capabilities for clients.

The focus of the partnership is what Cognizant calls “agentic AI.” In practical terms, this refers to AI systems that can plan, act and complete tasks with limited human input. Instead of generating isolated outputs, these systems are designed to fit into business workflows and carry out structured tasks.

To make that workable at scale, Cognizant is building delivery infrastructure around the technology. The company is setting up a dedicated Gemini Enterprise Center of Excellence and formalizing an Agent Development Lifecycle. This framework covers the full process, from early design and blueprinting to validation and production rollout. The aim is to give enterprises a clearer path from the AI concept to a deployed system.

Cognizant also plans to introduce a bundled productivity offering that combines Gemini Enterprise with Google Workspace. The targeted use cases are operational rather than experimental. These include collaborative content creation, supplier communications and other workflow-heavy processes that can be standardized and automated.

Beyond productivity tools, Cognizant is integrating Gemini into its broader service platforms. Through Cognizant Ignition, enabled by Gemini, the company supports early-stage discovery and prototyping while helping clients strengthen their data foundations. Its Agent Foundry platform provides pre-configured and no-code capabilities for specific use cases such as AI-powered contact centers and intelligent order management. These tools are designed to reduce the amount of custom development required for each deployment.

Scaling is another element of the strategy. Cognizant, a multi-year Google Cloud Data Partner of the Year award winner, says it will rely on a global network of Gemini-trained specialists to deliver these systems. The company is also expanding work tied to Google Distributed Cloud and showcasing capabilities through its Google Experience Zones and Gen AI Studios.

For Google Cloud, the partnership reinforces its enterprise AI ecosystem. Cloud providers can offer models and infrastructure, but enterprise adoption often depends on service partners that can integrate tools into existing systems and manage ongoing operations. By aligning closely with Cognizant, Google strengthens its ability to move Gemini from platform capability to production deployment.

The announcement does not introduce a new AI model. Instead, it reflects a shift in emphasis. The core question is no longer whether AI tools exist, but how they are implemented, governed and scaled across large organizations. Cognizant’s expanded role suggests that execution frameworks, internal deployment and structured delivery models are becoming central to how enterprises approach AI.

In that sense, the partnership is less about new technology and more about operational maturity. It highlights how AI is moving from isolated pilots to managed systems embedded in business processes — a transition that will likely define the next phase of enterprise adoption.

A closer look at the tech, AI, and open ecosystem behind Tien Kung 3.0’s real-world push

Humanoid robotics has advanced quickly in recent years. Machines can now walk, balance, and interact with their surroundings in ways that once seemed out of reach. Yet most deployments remain limited. Many robots perform well in controlled settings but struggle in real-world environments. Integration is often complex, hardware interfaces are closed, software tools are fragmented, and scaling across industries remains difficult.

Against this backdrop, X-Humanoid has introduced its latest general-purpose platform, Embodied Tien Kung 3.0. The company positions it not simply as another humanoid robot, but as a system designed to address the practical barriers that have slowed adoption, with a focus on openness and usability.

At the hardware level, Embodied Tien Kung 3.0 is built for mobility, strength, and stability. It is equipped with high-torque integrated joints that provide strong limb force for high-load applications. The company says it is the first full-size humanoid robot to achieve whole-body, high-dynamic motion control integrated with tactile interaction. In practice, this means the robot is designed to maintain balance and execute dynamic movements even in uneven or cluttered environments. It can clear one-meter obstacles, perform consecutive high-dynamic maneuvers, and carry out actions such as kneeling, bending, and turning with coordinated whole-body control.

Precision is also a focus. Through multi-degree-of-freedom limb coordination and calibrated joint linkage, the system is designed to achieve millimeter-level operational accuracy. This level of control is intended to support industrial-grade tasks that require consistent performance and minimal error across changing conditions.

But hardware is only part of the equation. The company pairs the robot with its proprietary Wise KaiWu general-purpose embodied AI platform. This system supports perception, reasoning, and real-time control through what the company describes as a coordinated “brain–cerebellum” architecture. It establishes a continuous perception–decision–execution loop, allowing the robot to operate with greater autonomy and reduced reliance on remote control.

For higher-level cognition, Wise KaiWu incorporates components such as a world model and vision-language models (VLM) to interpret visual scenes, understand language instructions, and break complex objectives into structured steps. For real-time execution, a vision-language-action (VLA) model and full autonomous navigation system manage obstacle avoidance and precise motion under variable conditions. The platform also supports multi-agent collaboration, enabling cross-platform compatibility, asynchronous task coordination, and centralized scheduling across multiple robots.

A central part of the platform is openness. The company states that the system is designed to address compatibility and adaptation challenges across both development and deployment layers. On the hardware side, Embodied Tien Kung 3.0 includes multiple expansion interfaces that support different end-effectors and tools, allowing faster adaptation to industrial manufacturing, specialized operations, and commercial service scenarios. On the software side, the Wise KaiWu ecosystem provides documentation, toolchains, and a low-code development environment. It supports widely adopted communication standards, including ROS2, MQTT, and TCP/IP, enabling partners to customize applications without rebuilding core systems.
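Openness via standard transports usually means agreeing on a message envelope that any of them can carry. The topic layout and field names below are hypothetical (the vendor's documentation would define the real schema); the sketch only shows that the same JSON payload could travel over MQTT, a ROS2 topic, or raw TCP/IP.

```python
import json

def make_command(robot_id, action, params):
    """Serialize a robot command as a JSON envelope on a per-robot topic.
    Topic layout and field names are illustrative assumptions."""
    topic = f"robots/{robot_id}/cmd"
    payload = json.dumps({"action": action, "params": params})
    return topic, payload

def parse_command(payload):
    """Inverse of make_command, as a receiver-side partner might use."""
    msg = json.loads(payload)
    return msg["action"], msg["params"]

topic, payload = make_command("tienkung-01", "move_to", {"x": 1.2, "y": 0.0})
print(topic)                   # robots/tienkung-01/cmd
print(parse_command(payload))  # ('move_to', {'x': 1.2, 'y': 0.0})
```

Keeping the envelope transport-agnostic is what lets partners customize applications without rebuilding core systems, as the article describes.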

The company also highlights its open-source approach. X-Humanoid has open-sourced key components from the Embodied Tien Kung and Wise KaiWu platforms, including the robot body architecture, motion control framework, world model, embodied VLM and cross-ontology VLA models, training toolchains, the RoboMIND dataset, and the ArtVIP simulation asset library. By opening access to these elements, the company aims to reduce development costs, lower technical barriers, and encourage broader participation from researchers, universities, and enterprises.

Embodied Tien Kung 3.0 enters a market where technical progress is visible but large-scale adoption remains uneven. The gap is not only about movement or strength. It is about integration, interoperability, and the ability to operate reliably and autonomously in everyday industrial and commercial settings. If platforms can reduce fragmentation and simplify deployment, humanoid robots may move beyond demonstrations and into sustained commercial use.

In that sense, the significance of Embodied Tien Kung 3.0 lies less in isolated technical claims and more in how its high-dynamic hardware, embodied AI system, open interfaces, and collaborative architecture are structured to work together. Whether that integrated approach can close the deployment gap will shape how quickly humanoid robotics becomes part of real-world operations.

A new safety layer aims to help robots sense people in real time without slowing production

Algorized has raised US$13 million in a Series A round to advance its AI-powered safety and sensing technology for factories and warehouses. The California- and Switzerland-based robotics startup says the funding will help expand a system designed to transform how robots interact with people. The round was led by Run Ventures, with participation from the Amazon Industrial Innovation Fund and Acrobator Ventures, alongside continued backing from existing investors.

At its core, Algorized is building what it calls an intelligence layer for “physical AI” — industrial robots and autonomous machines that function in real-world settings such as factories and warehouses. While generative AI has transformed software and digital workflows, bringing AI into physical environments presents a different challenge. In these settings, machines must not only complete tasks efficiently but also move safely around human workers.

This is where a clear gap exists. Today, most industrial robots rely on camera-based monitoring systems or predefined safety zones. For instance, when a worker steps into a marked area near a robotic arm, the system is programmed to slow down or stop the machine completely. This approach reduces the risk of accidents. However, it also means production lines can pause frequently, even when there is no immediate danger. In high-speed manufacturing environments, those repeated slowdowns can add up to significant productivity losses.

Algorized’s technology is designed to reduce that trade-off between safety and efficiency. Instead of relying solely on cameras, the company utilizes wireless signals — including Ultra-Wideband (UWB), mmWave, and Wi-Fi — to detect movement and human presence. By analyzing small changes in these radio signals, the system can detect motion and breathing patterns in a space. This helps machines determine where people are and how they are moving, even in conditions where cameras may struggle, such as poor lighting, dust or visual obstruction.
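The underlying intuition is that a person moving through a space perturbs radio propagation, so received signal strength fluctuates more than it does in an empty room. The toy detector below is a minimal sketch of that idea under assumed numbers; the threshold, window size and readings are illustrative, not Algorized's.

```python
import statistics

def presence_detected(rssi_window, threshold=2.0):
    """Crude presence check: human motion perturbs radio propagation,
    so the spread of received signal strength (dBm) rises above its
    static baseline. Threshold and window size are illustrative."""
    return statistics.pstdev(rssi_window) > threshold

empty_room    = [-60.1, -60.0, -60.2, -59.9, -60.1, -60.0]  # stable signal
person_moving = [-60.0, -55.2, -63.8, -57.1, -66.0, -58.4]  # fluctuating

print(presence_detected(empty_room))     # False
print(presence_detected(person_moving))  # True
```

A production system would model the channel far more carefully (and infer position and breathing, not just presence), but the contrast between a flat and a fluctuating window is the signal it starts from.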

Importantly, this data is processed locally at the facility itself — not sent to a remote cloud server for analysis. In practical terms, this means decisions are made on-site, within milliseconds. Reducing this delay, or latency, allows robots to adjust their movements immediately instead of defaulting to a full stop. The aim is to create machines that can respond smoothly and continuously, rather than reacting in a binary stop-or-go manner.

With the new funding, Algorized plans to scale commercial deployments of its platform, known as the Predictive Safety Engine. The company will also invest in refining its intent-recognition models, which are designed to anticipate how humans are likely to move within a workspace. In parallel, it intends to expand its engineering and support teams across Europe and the United States. These efforts build on earlier public demonstrations and ongoing collaborations with manufacturing partners, particularly in the automotive and industrial sectors.

For investors, the appeal goes beyond safety compliance. As factories become more automated, even small improvements in uptime and workflow continuity can translate into meaningful financial gains. Because Algorized’s system works with existing wireless infrastructure, manufacturers may be able to upgrade machine awareness without overhauling their entire hardware setup.

More broadly, the company is addressing a structural limitation in industrial automation. Robotics has advanced rapidly in precision and power, yet human-robot collaboration is still governed by rigid safety systems that prioritize stopping over adapting. By combining wireless sensing with edge-based AI models, Algorized is attempting to give machines a more continuous awareness of their surroundings from the start.

From plush figures to digital pets, a new class of AI toys is emerging — built not around screens or sensors, but around memory, language and emotional awareness

Spielwarenmesse in Nuremberg is the global meeting point for the toy industry, where brands and designers preview what will shape how children play and learn next. At this year’s fair, one message stood out clearly: toys are no longer built just to entertain, but to listen, respond and grow with children. Tuya Smart, a global AI cloud platform company, used the event to show how AI-powered toys are turning familiar formats into interactive companions that can talk, react emotionally and adapt over time.

The company’s central argument was simple but far-reaching. The next generation of artificial intelligence toys will not be defined by motors, sensors or screens alone, but by how well they understand human behavior. Instead of being single-function objects, smart toys for children are becoming systems that combine language models, emotion recognition and memory to support ongoing interaction.

One of the most talked-about examples was Tuya Smart’s Nebula Plush AI Toy. At first glance, it looks like a soft, expressive plush figure. Inside, it uses emotional recognition to change its LED facial expressions in real time. If a child sounds sad or excited, the toy’s eyes respond visually. It supports natural conversation, reacts to hugs and touch, and combines storytelling, news-style updates and interactive games. Over time, it builds memory, allowing it to behave less like a gadget and more like an interactive AI toy that recalls past interactions.
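The loop described here — detect an emotion, drive an LED expression, log the interaction to memory — can be sketched in a few lines. The Nebula toy's internals are not public, so the class, expression names and memory scheme below are assumptions used only to illustrate the described behaviour.

```python
# Hypothetical emotion-to-expression mapping; the real toy's states
# and hardware interface are not documented in the article.
EXPRESSIONS = {
    "sad": "droopy_eyes",
    "excited": "wide_sparkle",
    "calm": "soft_blink",
}

class PlushCompanion:
    def __init__(self):
        self.memory = []  # persisted across sessions in the real product

    def react(self, detected_emotion: str) -> str:
        """Map a detected emotion to an LED expression and remember it."""
        expression = EXPRESSIONS.get(detected_emotion, "neutral")
        self.memory.append(detected_emotion)
        return expression
```

The point of the memory list is the article's "recalls past interactions" claim: later sessions can condition responses on what the toy has already seen.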

Another example was Walulu, also developed using Tuya’s AI toy platform. Walulu is an AI pet built around personalization. It can detect up to 19 emotional states and speak more than 60 languages. It connects to major large language models such as ChatGPT, Gemini, DeepSeek, Qwen and Doubao. Through simple app-based controls, users choose traits like cheerful, quiet, curious or thoughtful. Those choices shape how Walulu talks and reacts. Instead of repeating scripts, it adjusts its tone and behavior over time. The result is not a novelty item, but an emotionally responsive AI toy that feels consistent in daily use.
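One plausible way app-selected traits could shape an LLM-backed pet is by assembling a system prompt from them. The trait names come from the article; everything else — the function, the style strings, the prompt format — is an assumption, since Tuya has not published how Walulu conditions its models.

```python
# Illustrative trait-to-style mapping; not Walulu's actual prompts.
TRAIT_STYLES = {
    "cheerful": "Speak warmly and use upbeat language.",
    "quiet": "Keep replies short and gentle.",
    "curious": "Ask the child friendly follow-up questions.",
    "thoughtful": "Reflect briefly before answering.",
}

def build_system_prompt(traits, language="English"):
    """Assemble an LLM system prompt from app-selected personality traits."""
    lines = [f"You are a friendly AI pet. Reply in {language}."]
    lines += [TRAIT_STYLES[t] for t in traits if t in TRAIT_STYLES]
    return " ".join(lines)
```

Because the prompt, not the model weights, carries the personality, the same toy can swap between backends such as ChatGPT or Gemini while staying "consistent in daily use".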

Tuya also showed how educational AI toys can extend into learning and exploration. Its AI Learning Camera blends computer vision with interactive content. When it recognizes an object, it links it to cultural and learning material. If a child points it at a foreign word, it offers real-time pronunciation and translation. It can also turn drawings into digital artwork, encouraging active creativity rather than passive screen time. In this sense, AI toys for kids are becoming tools for learning as much as play.

These products point to a larger strategy. Tuya is not just making toys — it is building the AI toy development platform behind them. Through its AI Toy Solution, developers can design a toy’s personality, memory logic and behavior without training models from scratch. The system integrates with leading AI models and supports multi-turn conversation and emotional feedback, turning standard hardware into responsive AI companions.
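"Design a toy's personality, memory logic and behavior without training models from scratch" suggests a declarative configuration layer over off-the-shelf models. The schema below is invented for illustration; Tuya's actual AI Toy Solution API is not documented in the article.

```python
# Hypothetical toy configuration — field names are assumptions.
toy_config = {
    "character": "Nova the Fox",
    "personality": ["curious", "gentle"],
    "memory": {"recall_sessions": 10, "remember_names": True},
    "models": ["ChatGPT", "Gemini"],  # backends named in the article
    "safety": {"parental_review": True, "content_filter": "children"},
}

def validate(config: dict) -> bool:
    """Check that a toy definition covers the platform's core concerns."""
    required = {"character", "personality", "memory", "safety"}
    return required.issubset(config)
```

A developer fills in a document like this instead of training a model, which is how the platform can turn standard hardware into a responsive companion.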

The platform supports multiple development paths. Brands can use ready-to-market OEM solutions, add AI to existing products or build custom toys around their own characters. Plush toys, robots, educational tools and wearables can all become AI-powered toys without changing their physical design.

Because these products are made for children and families, safety is built in. Tuya’s system includes parental controls, conversation history review and content management. It supports compliance with privacy regulations such as GDPR and CCPA through encryption and data localization.

From a business standpoint, Tuya’s pitch is speed and scale. The company says its AI toy infrastructure can cut development time by more than half and reduce R&D costs by up to 50 percent. Its AIoT network spans over 200 countries and supports more than 60 languages, making global deployment of AI toys easier.

What emerged at Spielwarenmesse 2026 was not just a lineup of smart gadgets, but a clear shift in the category. AI toys are evolving into emotionally aware systems that talk, listen, remember and adapt. Their value lies not in sounding clever, but in fitting naturally into everyday life.

The fair did not present AI toys as a distant future. It showed them as products already entering the mainstream. The real question now is not whether toys will use AI, but how carefully that intelligence is designed for children.