Corporate Innovation

HONOR Robot Phone: A Moving AI Camera or Just Another Smartphone Gimmick?

A smartphone that moves, tracks and responds in real time—but is it real utility or just a marketing gimmick?

Updated

April 15, 2026 6:00 PM

HONOR Robot Phone, with its camera arm extended. PHOTO: HONOR

Smartphones today feel more familiar than new. Each year brings better performance and better cameras, but fewer real surprises. So when a company unveils something called a “Robot Phone”, it’s bound to get attention.  

HONOR did exactly that at the Mobile World Congress (MWC) in Barcelona this year. While most smartphone brands are focused on software upgrades, HONOR is trying something different with hardware. Its Robot Phone is built to move and adjust on its own. The camera sits on a motorized system that can tilt, track motion and shift angles automatically. It almost looks like a small robotic head, following whatever is happening in front of it. It can pick up sound, recognize motion and stay visually aware of its surroundings. The result feels less like using a regular phone and more like interacting with something responsive.

So what makes HONOR’s Robot Phone different from the smartphones we already use? Here’s a closer look at its camera system, AI features and design, and whether it is truly something new or simply smart marketing.

What does the HONOR Robot Phone do?

At its core, the Robot Phone still works like a regular smartphone. What makes it different is the camera system. It has a 200MP camera that sits on a motorized arm with a three-axis gimbal, which extends when in use and folds back into the phone when not needed. A compact motor drives the camera's physical movement, while motion-control software lets it sense, track and follow a person or object in real time. That means it can keep a subject in frame without constant manual adjustment.

The camera also adds a more playful side to the experience. It can respond with simple gestures, such as nodding or shaking its head, and it can even move in sync with music.

This setup could be particularly useful for content creators. As CNET tech journalist and YouTuber Andrew Lanxon pointed out, it removes the need to carry a separate gimbal. Since the robotic camera module can easily fold into the body of the phone, it is easier to carry around and more convenient for filming or taking photos on the go.  

The Robot Phone also has the practical advantage of a smartphone display. It gives users a bigger screen than a standalone camera for framing, monitoring and reviewing footage. Since it runs on Android, the process of recording, editing and sharing content is also more direct.  

The Robot Phone’s Design: How the moving camera fits inside

The most impressive part of the HONOR Robot Phone design is how it fits a moving camera system into the body of a smartphone without needing external attachments.  

To make this possible, HONOR uses a custom micro motor that is 70% smaller than those used by mainstream competitors. The company also says it is the industry’s smallest four-degrees-of-freedom (4DoF) gimbal system. To support the stable movement of the camera module, the internal structure uses high-strength materials such as steel and titanium alloy. These materials help the mechanism stay durable as it shifts and repositions over time.

Battery life is another obvious question. HONOR has not revealed the battery capacity of the Robot Phone itself, but it did showcase its Silicon-Carbon Blade Battery technology at MWC 2026. The company says this battery is designed to increase energy density while keeping devices slim, and that it could support capacities of 7,000 mAh and beyond in future foldable devices.  

That is not specific to the Robot Phone, but it does hint at the kind of battery improvements that may be needed for smartphones with moving parts and more advanced camera systems.

The AI features of the Robot Phone

The AI features in HONOR’s Robot Phone are focused on how the device sees and responds to its surroundings in real time. At the most basic level, the phone can track what is happening in a scene and adjust itself without constant user input.

On the functional side, the system keeps subjects framed and in focus automatically. Its AI Object Tracking ensures subjects stay centred, while AI SpinShot enables controlled 90° and 180° rotations for smoother transitions, even when the phone is used one-handed. It can also detect motion and recognize sound, which lets it respond to activity as it happens instead of reacting frame by frame.
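HONOR has not published how its tracking works, but the basic idea behind keeping a subject centred with a motorized camera can be sketched with a simple proportional controller: the subject's pixel offset from the frame centre becomes a pan/tilt correction. The frame size, gains and bounding-box input below are illustrative assumptions, not HONOR's implementation.

```python
# Hypothetical sketch of subject-centering for a motorized camera:
# a proportional controller turns the subject's pixel offset from the
# frame center into pan/tilt corrections. Gains and frame size are
# invented for illustration.

FRAME_W, FRAME_H = 1920, 1080
PAN_GAIN, TILT_GAIN = 0.05, 0.05  # degrees of correction per pixel of error

def centering_correction(bbox):
    """Return (pan, tilt) adjustments in degrees for a subject bounding box.

    bbox is (x, y, w, h) in pixels; positive pan turns right and positive
    tilt looks down, so the camera always moves toward the subject.
    """
    x, y, w, h = bbox
    subject_cx = x + w / 2
    subject_cy = y + h / 2
    error_x = subject_cx - FRAME_W / 2   # pixels right of center
    error_y = subject_cy - FRAME_H / 2   # pixels below center
    return PAN_GAIN * error_x, TILT_GAIN * error_y

# A subject sitting left of center produces a negative (leftward) pan.
pan, tilt = centering_correction((200, 400, 300, 500))
```

Run in a loop against a detector's bounding boxes, corrections like these would keep a moving subject in frame without the user touching the phone.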

The AI becomes more noticeable in the way the device behaves. When activated, the camera module unfolds and the screen displays a pair of animated eyes that track the user’s face and gaze. HONOR calls this “embodied AI”, meaning the assistant expresses itself through movement rather than only voice or text. The camera module can adjust its angle during video calls, which makes it feel a little more physically present.

According to Thomas Bai, AI product expert at HONOR, the goal is to move beyond passive assistance. By combining sensing, movement and real-time processing, the device is designed to interact with its environment in a more continuous way. In practice, that could mean interpreting its surroundings and responding as situations change, such as when someone is moving through an unfamiliar space.

The gaps beneath the hype

The Robot Phone has sparked curiosity, but there is still a lot we do not know. For one thing, it is still a prototype, with a release expected later this year. Early signs also suggest it may be expensive, partly because of rising memory chip costs. Some of its more playful features also feel uncertain. In demos, the phone can move along to music, but with only a handful of pre-set tracks, it is hard to tell whether that feature will be genuinely useful or remain more of a showcase moment.

Then there are the practical questions. A motorized camera system could make the phone heavier and more top-heavy, which may affect comfort during daily use. Running a motor alongside continuous AI tracking will also likely put pressure on battery life. These are not dealbreakers, but they are trade-offs that will matter outside of a demo.

Privacy is another concern that is hard to overlook. Some of the AI features rely on cloud processing, which means certain data is sent to external servers instead of being processed fully on the device. That is common in many AI systems today, but it feels more significant here because the phone is built to actively track movement and reposition its camera in real time. For some people, that level of autonomy may feel intrusive rather than helpful. It also raises bigger questions about what sensors are built into the device and how much data they collect during everyday use.  

Final verdict: Is the HONOR Robot Phone worth paying attention to?

So, is the HONOR Robot Phone a real step forward, or just a clever idea packaged well?

The answer depends on who it is for.  

For content creators, the appeal is obvious. Early indications suggest it could make video capture easier by reducing the need for extra gear. HONOR’s collaboration with cinema camera company ARRI also suggests a serious push toward more cinematic smartphone footage.

For everyone else, the value is less clear. Outside of content creation, it is still hard to see how these features would translate into everyday use in a meaningful way.

For now, the Robot Phone sits somewhere between promise and experiment. Whether it turns into a genuinely useful new kind of smartphone or fades away as a novelty will only become clear once it moves beyond controlled demos and into real life.


Major Forums & Conferences

How CES 2026 Reframed the Role of Robots

Examining how robots are moving from demonstrations to daily use.

Updated

January 28, 2026 5:53 PM

An industrial robotic arm capable of autonomous welding. PHOTO: ADOBE STOCK

CES 2026 did not frame robotics as a distant future or a technological spectacle. Instead, it highlighted machines designed for the slow, practical work of fitting into human systems. Across the show floor, robots were no longer performing for attention but being shaped by real-world constraints—space, safety, fatigue and repetition.

They appeared in factories, homes, emergency settings and industrial sites, each responding to a specific kind of human limitation. Together, these four robots reveal how robotics is being redefined: not as a replacement for people, but as infrastructure that quietly takes on the work humans are least suited to carrying alone.

1. Hyundai’s Atlas: From lab to factory

Hyundai Motor unveiled its electric humanoid robot, Atlas, during a media day on January 5, 2026, at the Mandalay Bay Convention Center in Las Vegas as part of CES 2026. Developed with Boston Dynamics, Hyundai’s U.S.-based robotics subsidiary, Atlas was presented in two forms: a research prototype and a commercial model designed for real factory environments.

Shown under the theme “AI Robotics, Beyond the Lab to Life: Partnering Human Progress,” Atlas is designed to work alongside humans rather than replace them. The premise is straightforward—robots take on physically demanding and repetitive tasks such as sorting and assembly, while people focus on work requiring judgment, creativity and decision-making.

Built for industrial use, the commercial version of Atlas is designed to adapt quickly, with Hyundai stating it can learn new tasks within a day. Its adult-sized humanoid form features 56 degrees of freedom, enabling flexible, human-like movement. Tactile sensors in its hands and a 360-degree vision system support spatial awareness and precise operation.

Atlas is also engineered for demanding conditions. It can lift up to 50 kilograms, operate in temperatures ranging from –20°C to 40°C and is waterproof, making it suitable for challenging factory settings.

Looking ahead, Hyundai expects Atlas to begin with parts sorting and sequencing by 2028, move into assembly by 2030 and later take on precision tasks that require sustained physical effort and focus.

2. Widemount’s Smart Firefighting Robot: Built for hazard zones

Widemount’s Smart Firefighting Robot is designed to operate in environments that are difficult and dangerous for humans to enter. Developed by Widemount Dynamics, a spinout from the Hong Kong Polytechnic University, the robot is built to support emergency teams during fires, particularly in enclosed and smoke-filled spaces.

The robot can move through buildings and industrial facilities even when visibility is near zero. Rather than relying on cameras or GPS, it uses radar-based mapping to understand its surroundings and determine a safe path forward. This allows it to continue operating when smoke, heat or debris would normally restrict access.
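The article does not detail Widemount's navigation software, but the "map, then find a safe path" idea it describes is commonly built on an occupancy grid searched with a shortest-path algorithm. The sketch below uses breadth-first search over a toy grid; the grid, cell layout and function names are assumptions for illustration only.

```python
# Illustrative sketch (not Widemount's actual software) of grid-based
# navigation when cameras and GPS are unusable: a 2D occupancy grid
# built from range returns, searched with breadth-first search.
from collections import deque

def safe_path(grid, start, goal):
    """Find a path through free cells of an occupancy grid.

    grid[r][c] is 1 for an obstacle (wall, debris) and 0 for free space.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk back to reconstruct the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # the area is sealed off

# A small corridor with one blocked doorway:
floor = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
route = safe_path(floor, (0, 0), (2, 0))
```

A real system would rebuild the grid continuously from radar returns and replan as heat or debris closes routes, but the planning step reduces to the same search.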

As it approaches a fire, the robot analyses the burning object. Its onboard AI helps identify the material involved and selects an appropriate extinguishing method. Sensors simultaneously assess flame intensity and send real-time updates to command centres, giving responders clearer situational awareness.

When actively fighting a fire, the robot can aim directly at the source and deploy extinguishing agents autonomously. The system continuously adjusts its actions based on incoming sensor data, reducing the need for constant human intervention during high-risk situations.

3. LG Electronics’ LG CLOiD: Automation for domestic spaces

At CES 2026, LG Electronics offered a glimpse into how household work could gradually shift from people to machines. The company introduced LG CLOiD, an AI-powered home robot designed to manage everyday chores by working directly with connected appliances within LG’s ThinQ ecosystem.

Designed for indoor living spaces, CLOiD features a compact upper body with two articulated arms, a head unit and a wheeled base that enables steady movement across floors. Its torso can tilt to adjust height, allowing it to reach items placed low or on kitchen counters. The arms and hands are built for careful handling, enabling the robot to grip common household objects rather than heavy tools. The head also functions as a mobile control unit, housing cameras, sensors, a display and voice interaction capabilities for communication and monitoring.

In practice, CLOiD acts as a task coordinator. It can retrieve items from appliances, operate ovens and washing machines and manage laundry cycles from start to finish, including folding and stacking clothes. By connecting multiple devices through the ThinQ system, the robot turns separate appliances into a single, coordinated workflow.

These capabilities are supported by LG’s Physical AI system. CLOiD uses vision to recognise objects and interpret its surroundings, language processing to understand instructions and action control to execute tasks step by step. Together, these systems allow the robot to follow routines, respond to user input and adjust task execution over time.
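The "task coordinator" role described above can be pictured as instructions decomposing into ordered steps across connected appliances. The chores, steps and device names below are invented for illustration and have nothing to do with LG's actual ThinQ API.

```python
# Toy sketch of a household task coordinator: a spoken chore maps to an
# ordered workflow across connected appliances. All routines and device
# names here are hypothetical, not LG ThinQ calls.

ROUTINES = {
    "do the laundry": [
        ("washer", "run cycle"),
        ("dryer", "run cycle"),
        ("robot", "fold and stack clothes"),
    ],
    "heat dinner": [
        ("oven", "preheat"),
        ("robot", "load dish"),
        ("oven", "bake"),
    ],
}

def plan(instruction):
    """Map an instruction to an ordered list of appliance actions."""
    steps = ROUTINES.get(instruction.strip().lower())
    if steps is None:
        return ["sorry, I don't know that chore"]
    return [f"{device}: {action}" for device, action in steps]

laundry = plan("Do the laundry")
```

The point of the sketch is the coordination pattern: separate appliances become one workflow once a single planner sequences them.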

4. Doosan Robotics’ Scan & Go: Automation at an industrial scale

At CES 2026, Doosan Robotics introduced Scan & Go, an AI-driven robotic system designed to automate large-scale surface repair and inspection. The solution targets environments with complex, irregular surfaces that are difficult to program for in advance, such as aircraft structures, wind turbine blades and large industrial installations.

Scan & Go operates by scanning surfaces on site and building an understanding of their shape in real time. Instead of relying on detailed digital models or manual coding, the system plans its movements based on live data. This enables it to adapt to variations in size, curvature and surface condition without extensive setup.

The underlying technology combines 3D sensing with AI-based motion planning. The system interprets surface data, generates tool paths and refines its actions as work progresses. In practical terms, this reduces manual intervention while maintaining consistency across large work areas.
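The "scan first, then generate tool paths" step can be sketched in miniature: given a grid of scanned surface heights, emit a back-and-forth raster path that keeps the tool a fixed standoff above each point. The grid spacing, standoff and scan data below are illustrative assumptions, not Doosan's method.

```python
# Hypothetical sketch of "scan, then plan" tool-path generation: a 2D
# height map from the scan becomes (x, y, z) waypoints in a boustrophedon
# (alternating-direction) raster. Spacing and standoff are invented values.

STANDOFF = 5.0  # tool clearance above the surface, in millimetres

def raster_tool_path(heights, spacing=10.0):
    """Turn a scanned height map into (x, y, z) waypoints.

    heights[r][c] is the surface height at row r, column c; rows alternate
    direction so the tool sweeps continuously without long retracts.
    """
    path = []
    for r, row in enumerate(heights):
        cols = range(len(row)) if r % 2 == 0 else range(len(row) - 1, -1, -1)
        for c in cols:
            path.append((c * spacing, r * spacing, heights[r][c] + STANDOFF))
    return path

# A gently sloped 2x3 patch: z follows the scanned surface at a fixed offset.
scan = [
    [0.0, 1.0, 2.0],
    [0.5, 1.5, 2.5],
]
waypoints = raster_tool_path(scan)
```

Because the path is derived from the live scan rather than a stored CAD model, a change in curvature or surface condition simply changes the waypoints, which is the adaptability the system is built around.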

By handling surface preparation and inspection tasks that are time-consuming and physically demanding, Scan & Go is positioned as a support tool for industrial teams operating at scale.

A shift from demonstration to deployment

Taken together, these robots signal a clear shift in how machines are being designed and deployed. Across factories, homes, emergency sites and industrial infrastructure, robotics is moving beyond demonstrations and into practical roles that support human work.

The unifying theme is not replacement, but relief—robots taking on tasks that are repetitive, hazardous or physically demanding. CES 2026 suggests that robotics is evolving from spectacle to utility, with a growing focus on systems that adapt to real environments, respond to genuine constraints and integrate into everyday workflows.