A rare policy consensus emerges as AI’s impact moves beyond innovation into governance and societal risk
Updated May 5, 2026 5:42 PM

A mechanical hand reaching for a human hand. PHOTO: UNSPLASH
A new survey from Povaddo, a policy research firm, suggests that concern about artificial intelligence is no longer limited to industry or academia. It is now firmly present within the policy community.
The survey draws on responses from 301 public policy professionals across the United States and Europe, including lawmakers, staffers and analysts involved in shaping and evaluating public policy. A majority of respondents—61%—say governments are falling short in addressing the negative impacts of AI.
There is also broad agreement that regulation needs to increase. In the United States, 92% of respondents support stronger AI regulation, compared to 70% in Europe. At a time when consensus is often difficult, the findings point to a shared view across policy circles that current frameworks are not keeping pace with technological development.
Differences emerge when looking at how AI is affecting national contexts. In the U.S., 57% of policy experts believe AI is already harming the labor market. In Europe, 34% say the same. U.S. respondents are also more likely to see AI as a greater threat to jobs than immigration, with 63% holding that view compared to 47% in Europe.
On misinformation, responses are closely aligned. A large majority of policy experts in both regions expect an AI-driven misinformation crisis within the next one to two years—87% in the U.S. and 82% in Europe. Many also believe that AI-generated or AI-amplified misinformation could affect elections and public health information.
Some respondents frame the risks in more fundamental terms. In the United States, 41% of policy experts say AI poses an existential threat to humanity. In Europe, 29% share that view. U.S. respondents are also more likely to believe that advances in AI could harm global security and stability.
The findings come as policymakers begin to respond more actively. In the U.S., Senators Josh Hawley, Richard Blumenthal and Mark Warner have introduced bipartisan legislation focused on AI accountability, including measures aimed at protecting workers and children.
In Europe, the introduction of the EU AI Act marks a more advanced regulatory approach. The framework sets out rules based on levels of risk and is widely seen as the first comprehensive attempt to govern AI at scale.
William Stewart, President and Founder of Povaddo, said: "What makes these findings so significant is who is saying it. These are the practitioners who work inside the policy process every day, spanning every corner of the policy world from defense to healthcare to finance, not activists or everyday citizens. These findings foreshadow real action. The current path of governments accelerating AI deployment while falling short on governance is not sustainable, and the people who know that best are the ones in this survey. You cannot have nine-in-ten policy insiders demanding more regulation and four-in-ten calling AI an existential threat without that eventually moving the needle in Washington and Brussels in terms of legislative or regulatory action."
Taken together, the survey reflects a shift in how AI is being discussed within policymaking circles. Concern is no longer limited to future risks. It is increasingly tied to current gaps in governance and the pace of deployment.
A global survey shows robot anxiety drops when people encounter robots in real life
Updated April 1, 2026 8:55 AM

Ameca, the humanoid robot with a grey rubber face. PHOTO: ADOBE STOCK
People often assume robots make people uneasy everywhere. But a new global study suggests something more nuanced. Robot anxiety tends to be highest in places where people rarely see robots in real life. Where robots are more visible, attitudes are often far more positive. That insight comes from a global study by Hexagon AB, which surveyed 18,000 participants across nine major markets. The research explored how adults and children think about robots and how those views change depending on everyday exposure.
In the United Kingdom, anxiety about robots is the highest among the countries studied. Around 52% of adults say they feel worried that something might go wrong when they think about interacting with or working alongside robots. South Korea sits at the other end of the spectrum, with only 29% reporting similar concerns. One factor appears to explain much of the gap: familiarity.
British adults are among the least likely to have encountered robots in real life. Only about 30% say they have seen or used one. In contrast, countries where robots are more visible tend to report greater comfort. China offers the clearest example. Around 75% of adults there say they have seen or interacted with robots. At the same time, 81% say they feel excited about the technology’s future potential.
The study suggests that attitudes toward robots are not fixed. Instead, they shift depending on where people encounter them and what tasks they perform. When robots are seen solving clear, practical problems, confidence tends to rise.
Across the surveyed countries, adults report the highest comfort levels with robots working in factories and warehouses. Around 63% say they are comfortable with robots in those environments. These are settings where tasks are clearly defined and safety standards are well understood. Acceptance drops in more personal spaces. Only 46% say they feel comfortable with robots in the home, while comfort falls further to 39% when robots are imagined in classrooms.
In other words, context matters. People appear more willing to accept robots when they take on physically demanding or dangerous work. Half of the respondents say improved safety is one of the main advantages of robotics in those environments, and a similar share points to productivity gains as another benefit. Another finding challenges a common assumption about public fears. Job loss is often described as the biggest concern surrounding robotics, but the study suggests security risks worry people more.
Around 51% of adults say their biggest concern about robots at work is the possibility that the machines could be hacked or misused. That fear outweighs worries about physical malfunction or injury, which stand at 41%. Concerns about being replaced at work sit at the same level as those malfunction worries.
For many respondents, the issue is not simply whether robots can perform tasks. It is whether the systems controlling them are secure. According to researchers involved in the study, these concerns reflect how people evaluate emerging technologies. Instead of having a single opinion about robotics, people tend to judge each situation individually.
A robot helping assemble products in a factory may feel acceptable. The same technology operating in more sensitive environments can raise different questions. Dr. Jim Everett, an associate professor in moral psychology, says trust in artificial intelligence and robotics is often misunderstood. People are not simply asking whether they trust the technology, he notes. They are thinking about specific tools performing specific roles.
A robot assisting in a classroom or helping in healthcare carries different expectations than an AI system used in defense or surveillance. Even though these technologies are often grouped together in public debates, people evaluate them differently depending on their purpose.
Finally, the study also highlights another important factor shaping public attitudes: experience. When people actually encounter robots, fear often declines. Michael Szollosy, a robotics researcher involved in the project, says reactions tend to change quickly when individuals meet a robot for the first time.
The idea of an autonomous machine can feel intimidating in theory. But when people see a small service robot or an industrial machine performing a straightforward task, the reaction is often much calmer. Exposure can shift perceptions from abstract fears to practical understanding.
That shift matters because robotics is moving steadily into everyday environments. From manufacturing and logistics to healthcare and public services, machines capable of autonomous or semi-autonomous work are becoming more common.
As that happens, the study suggests public confidence may depend less on technical breakthroughs and more on visibility and transparency. Burkhard Boeckem, chief technology officer at Hexagon AB, argues that trust grows when people understand what robots are designed to do and where their limits lie.
Anxiety tends to increase when systems feel invisible or poorly understood. Clear boundaries and clear explanations can have the opposite effect. When people see robots working safely alongside humans, performing well-defined tasks and operating within clear rules, the technology becomes easier to accept.
In that sense, the future of robotics may depend as much on public familiarity as on engineering. The machines themselves are advancing quickly. But the relationship between humans and robots is still being negotiated. For now, the study offers a simple insight: the more people encounter robots in everyday life, the less mysterious they become. And once the mystery fades, the conversation often changes from fear to curiosity.