Technology

Artificial Intelligence

Backed by Menlo Ventures, BrainGrid tackles planning gaps as AI makes software building accessible to more founders.

As artificial intelligence makes it easier to write code, a different problem is starting to surface. Building software is no longer limited by technical skill alone. Increasingly, the challenge lies in deciding what to build, how to structure it, and how to turn an idea into something that actually works.

That shift sits at the centre of BrainGrid, a startup that has raised US$1 million in pre-seed funding led by Menlo Ventures, with participation from Next Tier Ventures and Brainstorm Ventures. The company is building what it describes as an AI-powered planning layer for people who want to create software but may not have a technical background.

The timing reflects a broader change in how products are being built. Tools like Claude Code and Cursor have made it possible to generate working code through simple prompts. For many first-time founders, this has lowered the barrier to entry. But writing code is only one part of the process. Turning that code into a reliable product requires structure, sequencing and clarity—areas where many projects begin to fall apart.

In traditional teams, this responsibility sits with product managers who define what needs to be built and in what order. Without that layer, even well-written code can lead to products that feel disjointed or incomplete. Features may not work together, integrations can break and the final product often does not match the original idea.

BrainGrid is designed to address that gap. Instead of focusing on generating code, it helps users map out the structure of a product before development begins. The aim is to give builders a clearer starting point so that the tools they use—whether human or AI—can produce more consistent results.
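
BrainGrid has not published what its plans actually look like, but the core technique of a planning layer is easy to sketch: write features down with explicit dependencies, then derive a build order before any code is generated. The Python sketch below is a hypothetical illustration of that idea, not BrainGrid’s data model.

```python
# Hypothetical sketch of a "planning layer": features carry explicit
# dependencies and are ordered into a build sequence before coding starts.
from dataclasses import dataclass, field
from graphlib import TopologicalSorter


@dataclass
class Feature:
    name: str
    description: str
    depends_on: list[str] = field(default_factory=list)


def build_order(features: list[Feature]) -> list[str]:
    """Return a valid implementation order, failing loudly on cycles."""
    graph = {f.name: set(f.depends_on) for f in features}
    return list(TopologicalSorter(graph).static_order())


plan = [
    Feature("auth", "Email sign-up and login"),
    Feature("profiles", "User profile pages", depends_on=["auth"]),
    Feature("payments", "Subscription billing", depends_on=["auth"]),
    Feature("dashboard", "Main app view", depends_on=["profiles", "payments"]),
]

print(build_order(plan))  # e.g. ['auth', 'profiles', 'payments', 'dashboard']
```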

The company says more than 500 builders have already used it to create software products across areas like fitness, healthcare and productivity. These range from first-time founders experimenting with new ideas to experienced developers working independently. In many cases, the products are already live and generating revenue, suggesting that the demand is not just for experimentation but for building something that can scale.

For investors, the appeal lies in the evolving role of software development. As AI takes on more of the technical work, the value shifts toward defining the problem and structuring the solution. In that sense, planning becomes less of a background task and more of a core capability.

The US$1 million raise is relatively modest, but it points to a larger trend. As more people gain access to AI tools, the number of potential builders expands. What remains limited is the ability to organise ideas into products that work in the real world. If that shift continues, the next wave of software may not be defined by who can code, but by who can plan.

Artificial Intelligence

MTR Lab partners with Beijing’s ZGC Science City to connect high-potential Chinese startups with global capital, deployment opportunities and international markets

As global tech ecosystems become more interconnected, the ability to move innovation across borders is becoming just as important as building it. A new partnership between MTR Lab, the investment arm of MTR Corporation, and ZGC Science City Ltd, a government-backed technology ecosystem based in Beijing’s Haidian district, reflects this shift.

At its core, the collaboration is designed to connect high-potential Chinese startups with global capital, real-world deployment opportunities and international markets. It focuses on sectors like AI, robotics, smart mobility and sustainable urban development—areas where China already has strong technical depth but where scaling beyond domestic markets can be more complex.

This is where the partnership begins to matter. ZGC Science City sits at the center of one of China’s most concentrated innovation clusters, with thousands of AI companies and a growing base of specialised and high-growth firms. MTR Lab, on the other hand, brings access to international markets, industry networks and practical deployment environments tied to infrastructure, transport and urban systems. Together, they are attempting to bridge a familiar gap: turning local innovation into globally relevant products.

In practice, the model is straightforward. ZGC Science City will introduce MTR Lab to startups working in priority sectors, creating a pipeline for potential investment and collaboration. From there, MTR Lab can support these companies through funding, pilot projects and access to overseas markets. The idea is not just to invest, but to help startups test and apply their technologies in real-world settings, particularly in complex urban environments.

The timing is notable. China’s AI and deep tech ecosystem has expanded rapidly, with thousands of companies contributing to advancements in automation, smart infrastructure and sustainability. At the same time, global demand for these technologies is rising, especially as cities look for more efficient and scalable solutions. Yet, moving from innovation to adoption often requires cross-border coordination—something individual startups may struggle to navigate alone.

This partnership also builds on a broader pattern. Corporate venture arms like MTR Lab are increasingly positioning themselves not just as investors, but as connectors between markets. By combining capital with access to infrastructure and deployment scenarios, they offer startups a way to move faster from development to real-world use. For ZGC Science City, the collaboration adds an international layer to its ecosystem, helping local companies extend beyond domestic growth.

What emerges is a model that goes beyond a typical investment announcement. It reflects a growing recognition that innovation today is rarely confined to one geography. Technologies may be developed in one ecosystem, refined in another and scaled globally through partnerships like this.

As cross-border collaboration becomes more central to how startups grow, partnerships like the one between MTR Lab and ZGC Science City point to a more connected innovation landscape—one where access, not just invention, defines success.

Health & Biotech

Endometriosis often takes years to diagnose. This ultrasound simulation innovation could help change that

Endometriosis affects roughly one in ten women worldwide, yet diagnosing the condition often takes years. In many cases, patients experience symptoms for nearly a decade before receiving a confirmed diagnosis. One reason is that detecting endometriosis through ultrasound requires specialized training, and clinicians do not always encounter enough real cases to build that expertise.

To address this gap, medical simulation company Surgical Science has introduced a new ultrasound training module designed specifically for identifying endometriosis. The system allows clinicians to practice scanning techniques in a virtual environment, helping them recognize signs of the disease without relying solely on real-patient cases.

A key feature of the simulator is training on the “sliding sign,” an ultrasound indicator used to detect deep endometriosis. Because the condition can appear differently from patient to patient, mastering this assessment in real clinical settings can be difficult. The simulator allows clinicians to repeat the process across multiple scenarios, improving their ability to identify the condition during routine examinations.

The module also incorporates the International Deep Endometriosis Analysis (IDEA) protocol, which provides a structured method for performing a complete pelvic ultrasound assessment. Additional training cases, region-based scenarios and certification options are included to support standardized learning.

Early training results suggest strong improvements in clinician confidence, including higher skill levels in transvaginal ultrasound and better recognition of deep endometriosis. By expanding access to structured ultrasound training, simulation tools like this could help reduce diagnostic delays and improve care for millions of women living with the condition.

Deep Tech

A global survey shows robot anxiety drops when people encounter robots in real life

Robots are often assumed to make people uneasy everywhere. But a new global study suggests something more nuanced. Robot anxiety tends to be highest in places where people rarely see robots in real life. Where robots are more visible, attitudes are often far more positive. The insight comes from research by Hexagon AB, which surveyed 18,000 participants across nine major markets, exploring how adults and children think about robots and how those views change depending on everyday exposure.

In the United Kingdom, anxiety about robots is the highest among the countries studied. Around 52% of adults say they feel worried that something might go wrong when they think about interacting with or working alongside robots. South Korea sits at the other end of the spectrum, with only 29% reporting similar concerns. One factor appears to explain much of the gap: familiarity.

British adults are among the least likely to have encountered robots in real life. Only about 30% say they have seen or used one. In contrast, countries where robots are more visible tend to report greater comfort. China offers the clearest example. Around 75% of adults there say they have seen or interacted with robots. At the same time, 81% say they feel excited about the technology’s future potential.

The study suggests that attitudes toward robots are not fixed. Instead, they shift depending on where people encounter them and what tasks they perform. When robots are seen solving clear, practical problems, confidence tends to rise.

Across the surveyed countries, adults report the highest comfort levels with robots working in factories and warehouses. Around 63% say they are comfortable with robots in those environments. These are settings where tasks are clearly defined and safety standards are well understood. Acceptance drops in more personal spaces. Only 46% say they feel comfortable with robots in the home, while comfort falls further to 39% when robots are imagined in classrooms.

In other words, context matters. People appear more willing to accept robots when they take on physically demanding or dangerous work. Half of the respondents say improved safety is one of the main advantages of robotics in those environments. A similar share points to productivity gains as another benefit.

Another finding challenges a common assumption about public fears. Job loss is often described as the biggest concern surrounding robotics. But the study suggests security risks worry people more.

Around 51% of adults say their biggest concern about robots at work is the possibility that the machines could be hacked or misused. That fear outweighs worries about physical malfunction or injury, which stand at 41%. Concerns about being replaced at work appear at the same level.

For many respondents, the issue is not simply whether robots can perform tasks. It is whether the systems controlling them are secure. According to researchers involved in the study, these concerns reflect how people evaluate emerging technologies. Instead of having a single opinion about robotics, people tend to judge each situation individually.

A robot helping assemble products in a factory may feel acceptable. The same technology operating in more sensitive environments can raise different questions. Dr. Jim Everett, an associate professor in moral psychology, says trust in artificial intelligence and robotics is often misunderstood. People are not simply asking whether they trust the technology, he notes. They are thinking about specific tools performing specific roles.

A robot assisting in a classroom or helping in healthcare carries different expectations than an AI system used in defense or surveillance. Even though these technologies are often grouped together in public debates, people evaluate them differently depending on their purpose.

Finally, the study highlights one more factor shaping public attitudes: experience. When people actually encounter robots, fear often declines. Michael Szollosy, a robotics researcher involved in the project, says reactions tend to change quickly when individuals meet a robot for the first time.

The idea of an autonomous machine can feel intimidating in theory. But when people see a small service robot or an industrial machine performing a straightforward task, the reaction is often much calmer. Exposure can shift perceptions from abstract fears to practical understanding.

That shift matters because robotics is moving steadily into everyday environments. From manufacturing and logistics to healthcare and public services, machines capable of autonomous or semi-autonomous work are becoming more common.

As that happens, the study suggests public confidence may depend less on technical breakthroughs and more on visibility and transparency. Burkhard Boeckem, chief technology officer at Hexagon AB, argues that trust grows when people understand what robots are designed to do and where their limits lie.

Anxiety tends to increase when systems feel invisible or poorly understood. Clear boundaries and clear explanations can have the opposite effect. When people see robots working safely alongside humans, performing well-defined tasks and operating within clear rules, the technology becomes easier to accept.

In that sense, the future of robotics may depend as much on public familiarity as on engineering. The machines themselves are advancing quickly. But the relationship between humans and robots is still being negotiated. For now, the study offers a simple insight: the more people encounter robots in everyday life, the less mysterious they become. And once the mystery fades, the conversation often changes from fear to curiosity.

Artificial Intelligence

AI actor Tilly Norwood releases a musical video arguing that artificial intelligence can expand creativity in film

As Hollywood prepares for this weekend’s Oscars, a different kind of performer is stepping into the spotlight — one that doesn’t physically exist.

Tilly Norwood, described as the world’s first AI actor, has released her debut musical comedy video, Take the Lead. The project arrives at a moment when artificial intelligence has become one of the most contentious topics in the film industry.

The message of the song is simple. AI should not be seen as a threat to actors. Instead, it can become another creative tool. The release also offers a first look at what Norwood’s creators call the “Tillyverse”. It is envisioned as a cloud-based entertainment world where AI characters can live, interact and perform.

Behind the character is actor and producer Eline van der Velden. She is the CEO of production company Particle6 and AI talent studio Xicoia. Van der Velden created Tilly as a way to experiment with how artificial intelligence could be used in storytelling.

The timing is not accidental. The entertainment industry has spent the past few years debating the role AI should play in filmmaking and acting. Questions about digital replicas, automated performances and creative ownership continue to divide artists and studios.

Norwood’s musical video enters that debate with a different tone. Instead of warning about AI replacing actors, the project suggests that the technology could expand what performers are able to do.

The video itself also serves as a technical experiment. The song Take the Lead was generated using the AI music platform Suno. The video was then produced using a combination of widely available AI tools and Particle6’s own creative process.

One of the newer techniques used in the project is performance capture. Van der Velden physically acted out Tilly’s movements and expressions so the digital character could mirror a human performance. But the production was far from automated. According to Particle6, a team of 18 people worked on the video. The group included a director, editor, production designer, costume designer, comedy writer and creative technologist. In other words, the project still relied heavily on human creativity.

“Tilly has always been a vehicle to test the creative capabilities and boundaries of AI,” van der Velden said. “It’s not about taking anyone’s job”. She added that even with powerful tools, good AI content still takes time, taste and creative direction.

The project also reflects how quickly production technology is evolving. Tools that once required large studios are now accessible to smaller creative teams experimenting with AI-driven storytelling.

For Particle6, the character of Tilly Norwood acts as a testing ground. Each project explores how AI performers might be developed, directed and integrated into entertainment. Whether audiences embrace digital actors remains an open question. Many in the industry are still wary of how AI could reshape creative work.

But projects like Take the Lead show another possibility. Instead of replacing performers, artificial intelligence could become part of the creative process itself. In that sense, Tilly Norwood may represent something more than a virtual performer. She is also an experiment in how humans and machines might collaborate in the future of entertainment.

Artificial Intelligence

A wearable ring, conversational AI and US$23M in funding. Sandbar wants to rethink how we interact with technology

Sandbar, a New York–based interface startup, has raised US$23 million in Series A funding to develop a wearable device that lets people interact with artificial intelligence via voice rather than screens.

Adjacent and Kindred Ventures, both venture firms focused on early-stage technology startups, led the round. The investment brings Sandbar’s total funding to US$36 million. Earlier backing included a US$10 million seed round led by True Ventures, a venture capital firm, as well as a US$3 million pre-seed round supported by Upfront Ventures, a venture firm, and Betaworks, a startup studio and investment firm.

Sandbar was founded by Mina Fahmi and Kirak Hong, who previously worked together at CTRL-labs, a neural interface startup acquired by Meta in 2019. Their earlier work explored how computers could respond more directly to human intent — an idea that continues to shape Sandbar’s approach to AI interfaces.

The new funding will help the company expand its team across machine learning, interaction design and software engineering as it prepares to launch its first product. That product, called Stream, combines a wearable ring with a conversational AI interface. The system allows users to speak to an AI assistant without unlocking a phone or opening an app.

The concept is simple. Instead of typing into a screen, users press a button on the ring and talk. The system can capture notes, organize ideas, retrieve information from the web or trigger actions through connected applications.

The ring includes a microphone, a touchpad and subtle haptic feedback. These elements allow the device to respond through gentle vibrations rather than visual alerts. According to the company, the ring only listens when the user presses the button — a design meant to address common concerns around always-on microphones.
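
Sandbar has not detailed Stream’s software stack, but the push-to-talk gating it describes is a familiar pattern. The sketch below illustrates it under loose assumptions; the Ring class and the transcribe and assistant_reply helpers are hypothetical stand-ins, not Sandbar’s API.

```python
# Hypothetical sketch of push-to-talk gating: the microphone is active only
# between button-down and button-up, and the device answers with a haptic
# pulse rather than a visual alert. All names here are invented.
class Ring:
    """Stand-in for the wearable's hardware interface."""

    def start_recording(self) -> None:
        print("mic open")            # in hardware: begin capturing audio

    def stop_recording(self) -> bytes:
        print("mic closed")          # in hardware: stop capture, return buffer
        return b"<audio>"

    def pulse(self, ms: int = 80) -> None:
        print(f"haptic pulse {ms}ms")  # in hardware: vibrate briefly


def transcribe(audio: bytes) -> str:
    return "add milk to the shopping list"  # placeholder speech-to-text


def assistant_reply(text: str) -> str:
    return f"noted: {text}"                 # placeholder assistant backend


def on_button_down(ring: Ring) -> None:
    ring.pulse()               # confirm the device is listening
    ring.start_recording()     # the mic opens only now, never before


def on_button_up(ring: Ring) -> None:
    audio = ring.stop_recording()   # the mic closes immediately on release
    print(assistant_reply(transcribe(audio)))
    ring.pulse(ms=150)              # haptic ack instead of a visual alert


ring = Ring()
on_button_down(ring)
on_button_up(ring)
```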

That design reflects a larger shift Sandbar believes is underway. As AI assistants become more capable, many startups are experimenting with new ways to interact with them. The focus is moving away from screens and keyboards toward interfaces that feel more natural and immediate.

Stream uses multiple AI models working together to process requests, search the web and structure information in real time. The company says users remain in control of their data and can choose whether to share information with other apps.
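
The company has not said how those models divide the work. One common way to compose them is to let a small model classify intent, a retrieval step search the web and a larger model structure the result; the sketch below illustrates that general pattern with placeholder functions, not Sandbar’s implementation.

```python
# Illustrative multi-model pipeline: intent classification, optional web
# retrieval, then structuring. Each step stands in for a separate model.
from dataclasses import dataclass


@dataclass
class Note:
    title: str
    body: str
    sources: list[str]


def classify_intent(utterance: str) -> str:
    # A small, fast model would do this in a real system.
    return "research" if utterance.startswith("look up") else "note"


def web_search(query: str) -> list[str]:
    return ["https://example.com/result"]  # placeholder retrieval step


def structure(utterance: str, sources: list[str]) -> Note:
    # A larger model would summarize and format the answer here.
    return Note(title=utterance[:40], body=utterance, sources=sources)


def handle(utterance: str) -> Note:
    intent = classify_intent(utterance)
    sources = web_search(utterance) if intent == "research" else []
    return structure(utterance, sources)


print(handle("look up train times to Boston"))
```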

Sandbar is also developing a feature called Inner Voice, which responds using a voice customized to the user. The feature will debut during a closed beta planned for this spring, giving the company time to refine how the software behaves in everyday use.

The startup currently employs a team of 15 people. Many have worked on well-known consumer devices including the iPhone, Fitbit, Kindle and Vision Pro. Recent hires include Sam Bowen, formerly of Amazon and Fitbit, who joined as vice president of hardware, and Brooke Travis, previously at Equinox, Dior and Gap, who now leads marketing.

Sandbar plans to begin shipping Stream in summer 2026 after completing early testing. As artificial intelligence tools become more integrated into daily life, the company is betting that the next shift in computing will not come from another app — but from new ways for people to interact with AI itself.

Deep Tech

A humanoid robot being escorted away by police in Macau has gone viral online, prompting jokes about what some called the world’s first “robot arrest.”

Police in Macau recently detained a humanoid robot after it frightened an elderly woman on a public street. The unusual encounter quickly spread online, prompting jokes about what some called the world’s first “robot arrest”.

On the evening of March 5, the robot was taken away by officers after the encounter triggered alarm among bystanders. Videos circulating on social media show an elderly woman confronting the robot on a sidewalk, visibly distressed and shouting that her “heart is pounding” while demanding to know why such “nonsense” was happening on the street. In the clip, the robot raises both hands toward the woman after she lashes out in fear — a gesture many viewers interpreted as a sign of apology.

Shortly afterwards, two officers from the Macau Public Security Police Force were seen escorting the robot and a man believed to be its operator away from the area. An officer is seen placing his right hand on the robot’s shoulder — the same posture police often use when presenting arrested suspects in official photographs.

Photos shared online show a humanoid robot with long limbs and exposed mechanical joints, built from a black metallic frame without an outer shell. In dim lighting, several commenters said it resembled a “moving skeleton” — a striking sight for pedestrians encountering it unexpectedly on the street.

Witnesses said the woman appeared severely shaken, and an ambulance was eventually called to take her to the hospital.

The incident also sparked discussion online about robots operating in public spaces. Some commenters argued that experimental technologies should be tested in controlled environments, while others said machines moving through public areas should have clearer designs or safety measures to avoid alarming pedestrians.

It remains unclear who deployed the robot or what purpose it was serving in the area at the time of the incident. Authorities have not released further details about the device or whether any action was taken following the encounter.

Deep Tech

Getting to the Moon was the first chapter. Interlune and Astrolab are working on how to operate there.

As plans for a long-term human presence on the Moon pick up pace, the focus is shifting from landing there to working there. It is one thing to reach the surface. It is another to build roads, prepare sites and extract materials in a way that can support real activity.

That is where Interlune and Astrolab come in. Interlune is a space resources company. Astrolab builds planetary rovers. The two are now working together to mount Interlune’s lunar digging system onto Astrolab’s Flexible Logistics and Exploration (FLEX) rover. They have completed a concept study and are planning hardware testing in Houston.

The aim is straightforward: combine a rover that can move reliably across the Moon with equipment that can dig, collect and handle lunar soil. Interlune is focused on harvesting natural resources from the Moon, starting with helium-3. To do that at scale, the system cannot sit in one place. It has to move across the surface, handle dust and operate in harsh conditions. "Reliable, autonomous mobility is crucial to the Interlune harvesting system and broader lunar infrastructure development", said Rob Meyerson, co-founder and CEO of Interlune. "Astrolab's FLEX is the right vehicle for the job".

By fitting its digging and collection hardware onto FLEX, Interlune is working toward a mobile system that can gather large amounts of lunar soil and support future construction needs. Beyond helium-3, the same setup could help prepare base sites, level ground, build protective barriers and lay the groundwork for other structures. In simple terms, it is about turning a rover into a working machine for the Moon.

The partnership also connects to Interlune’s work with Vermeer Corporation to develop equipment for continuous, high-volume digging adapted to lunar conditions. Taken together, the goal is to build systems that can support both commercial and government missions — whether that means resource extraction or preparing land for future bases.

For Astrolab, the collaboration strengthens the role of FLEX as more than just a transport vehicle.

"Working with Interlune further differentiates FLEX as the rover of choice for commercial and government Moon missions", said Jaret Matthews, Astrolab founder and CEO. "Interlune's expertise in developing and testing highly specialized regolith simulant will further enhance FLEX's ability to mitigate dust and operate in extreme environments".

Testing will be centered in Houston, which is becoming an important hub for commercial space development. Astrolab was the first company to lease space at the Texas A&M University Space Institute, currently under construction at NASA’s Johnson Space Center. Interlune operates the Houston-based Interlune Research Lab, where it creates and tests simulated versions of lunar soil.

That detail matters. Moon dust is fine, abrasive and difficult to manage. Before any hardware flies, it needs to prove it can survive and function in those conditions. By testing their systems in realistic soil simulants, the companies can refine how the rover moves and how the digging system performs.

The Houston lab is partially funded by the Texas Space Commission, reflecting the growing role of regional space initiatives in supporting private companies building beyond Earth. Overall, the collaboration is not about grand promises. It is about integrating hardware, running real tests and taking practical steps toward operating on the Moon.  

Artificial Intelligence

Structured AI interviews and human judgment combine to address the global talent shortage

As hiring pressures mount across global markets, ManpowerGroup is turning to technology to strengthen how it connects people to work. The workforce solutions major has announced a global partnership with Hubert, a startup focused on AI-driven structured interviews. The aim is simple: make hiring faster and fairer, without removing the human touch.

ManpowerGroup has spent decades operating at the center of the global labor market. The company works with employers across industries to fill roles, manage workforce planning and build talent pipelines. With millions of placements each year, it has a clear view of how strained hiring has become. A large share of employers today report difficulty finding skilled talent. At the same time, candidates expect more transparency, quicker feedback and flexibility in how they engage with employers.

Hubert enters this picture as a specialist in structured digital interviewing. The startup has built tools that allow candidates to complete interviews online, at any time, while being assessed against consistent criteria. Instead of relying on informal screening calls or resume filters, its system focuses on standardized questions tied directly to job requirements. The idea is to bring more consistency to early-stage hiring.
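
Hubert’s internals are not public, but structured interviewing itself follows a simple recipe: ask every candidate the same job-tied questions and score the answers against the same weighted rubric. The sketch below illustrates that general technique; the questions, weights and names are invented for illustration.

```python
# Illustrative structured-interview rubric: identical questions tied to job
# requirements, combined into one weighted score for consistent ranking.
# This is not Hubert's actual implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Question:
    text: str
    criterion: str   # the job requirement this question probes
    weight: float    # relative importance in the overall score


RUBRIC = [
    Question("Describe a safety check you perform before operating equipment.",
             criterion="safety_awareness", weight=2.0),
    Question("Which shifts are you available to work?",
             criterion="availability", weight=1.0),
]


def score_candidate(answers: dict[str, float]) -> float:
    """Combine per-question scores (0-5) into one weighted number,
    so every candidate is measured against the same standard."""
    total_weight = sum(q.weight for q in RUBRIC)
    return sum(answers[q.criterion] * q.weight for q in RUBRIC) / total_weight


print(score_candidate({"safety_awareness": 4.0, "availability": 5.0}))  # ~4.33
```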

The partnership brings these capabilities into ManpowerGroup’s global operations. AI-powered interviews will now support the first stage of screening, helping recruiters identify qualified candidates earlier in the process. This does not replace recruiters. Final decisions and contextual judgment remain with experienced hiring professionals. What changes is the speed and structure of the initial assessment.

For employers, this could mean earlier visibility into job-ready talent and less time spent on manual screening. For candidates, it offers more flexibility. A significant portion of interviews on Hubert’s platform are completed outside regular office hours, allowing applicants to engage when it suits them. That flexibility can make a difference in competitive labor markets where timing matters.

The collaboration is also positioned as a step toward reducing bias. By evaluating each candidate against the same transparent standards, the process becomes more consistent. While no system can remove bias entirely, structured assessments can reduce the variability that often comes with unstructured interviews.

At its core, the partnership addresses a gap many large organizations are facing. They need scale and speed, but they cannot afford to lose the human judgment that good hiring depends on. Manual processes are too slow. Fully automated systems can feel impersonal and risky. ManpowerGroup’s approach suggests a middle path, where technology handles repetition and structure while recruiters focus on potential and fit.

The move also reflects a broader shift in the workforce industry. AI is no longer being tested on the sidelines. It is being built into the foundation of hiring operations. For established players like ManpowerGroup, the challenge is not whether to adopt AI, but how to do so responsibly and at scale.

By working with Hubert, the company is signaling that the future of recruitment will likely blend structured digital tools with human expertise. In a market defined by talent shortages and rising expectations, that balance may prove critical.

Artificial Intelligence

AI meets AR: How Rokid Glasses bring multilingual, real-time intelligence to smart eyewear globally

Rokid, a Chinese company specializing in AI-powered smart eyewear and human–computer interaction, has rolled out a major software update for the international version of its Rokid Glasses. The update makes Rokid the first smart glasses manufacturer to natively support Google’s Gemini, alongside three other leading large language models: OpenAI’s ChatGPT, Alibaba’s Qwen and DeepSeek.

The integration is powered by Rokid’s device-to-cloud architecture, which enables users to switch between AI models on the fly. In practice, this means a traveler can receive a real-time translation in Japanese using one AI model, then quickly switch to ChatGPT to answer a technical query—without noticeable delay. The system also supports multi-modal inputs like voice and gestures, making interactions more intuitive for everyday use.
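
Rokid has not published the API behind this, but routing requests to interchangeable model backends is a standard pattern. The sketch below shows the general idea; the registry and session design are illustrative assumptions, not Rokid’s device-to-cloud implementation.

```python
# Illustrative model router: one interface, switchable backends.
from typing import Callable

# Each backend is just a function from prompt to reply. Real backends
# would call the Gemini, ChatGPT, Qwen or DeepSeek APIs instead.
Backend = Callable[[str], str]

BACKENDS: dict[str, Backend] = {
    "gemini":   lambda p: f"[gemini] {p}",
    "chatgpt":  lambda p: f"[chatgpt] {p}",
    "qwen":     lambda p: f"[qwen] {p}",
    "deepseek": lambda p: f"[deepseek] {p}",
}


class Session:
    """Keeps the active model switchable mid-conversation."""

    def __init__(self, default: str = "gemini") -> None:
        self.active = default

    def switch(self, name: str) -> None:
        if name not in BACKENDS:
            raise ValueError(f"unknown model: {name}")
        self.active = name

    def ask(self, prompt: str) -> str:
        return BACKENDS[self.active](prompt)


session = Session()
print(session.ask("Translate 'where is the station?' into Japanese"))
session.switch("chatgpt")
print(session.ask("Now explain the grammar of that sentence"))
```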

This is more than a routine software update. By combining AI models from both U.S. and Chinese developers, Rokid is making its smart glasses relevant to global users, with features that adapt to local languages and preferences while maintaining high performance.  

These technological advancements have directly fueled Rokid’s international growth. Shangpu Group data shows that Rokid Glasses ranked No. 1 in global sales of AI glasses with display functionality between November 2024 and October 2025. Crowdfunding milestones further reflect this momentum: the product became the fastest smart glasses to raise over 100 million Japanese yen on Japan’s MAKUAKE platform and broke Kickstarter records for smart eyewear.

Taken together, Rokid’s update highlights a shift in the smart glasses space: success increasingly comes from openness, flexibility and localized AI experiences rather than closed, single-platform ecosystems. By giving users choice, integrating global AI capabilities and bridging cultural and linguistic gaps, Rokid is positioning itself as a serious contender in the international AR and AI wearable market.