Artificial Intelligence

Policy Experts Warn Governments Are Falling Behind on AI Regulation

A rare policy consensus emerges as AI’s impact moves beyond innovation into governance and societal risk

Updated

May 5, 2026 5:42 PM

A mechanical hand reaching for a human hand. PHOTO: UNSPLASH

A new survey from Povaddo, a policy research firm, suggests that concern about artificial intelligence is no longer limited to industry or academia. It is now firmly present within the policy community.

The survey draws on responses from 301 public policy professionals across the United States and Europe, including lawmakers, staffers and analysts involved in shaping and evaluating public policy. A majority of respondents—61%—say governments are falling short in addressing the negative impacts of AI.

There is also broad agreement that regulation needs to increase. In the United States, 92% of respondents support stronger AI regulation, compared to 70% in Europe. At a time when consensus is often difficult, the findings point to a shared view across policy circles that current frameworks are not keeping pace with technological development.

Differences emerge in how respondents see AI affecting their own national contexts. In the U.S., 57% of policy experts believe AI is already harming the labor market. In Europe, 34% say the same. U.S. respondents are also more likely to see AI as a greater threat to jobs than immigration, with 63% holding that view compared to 47% in Europe.

On misinformation, responses are closely aligned. A large majority of policy experts in both regions expect an AI-driven misinformation crisis within the next one to two years—87% in the U.S. and 82% in Europe. Many also believe that AI-generated or AI-amplified misinformation could affect elections and public health information.

Some respondents frame the risks in more fundamental terms. In the United States, 41% of policy experts say AI poses an existential threat to humanity. In Europe, 29% share that view. U.S. respondents are also more likely to believe that advances in AI could harm global security and stability.

The findings come as policymakers begin to respond more actively. In the U.S., Senators Josh Hawley, Richard Blumenthal and Mark Warner have introduced bipartisan legislation focused on AI accountability, including measures aimed at protecting workers and children.

In Europe, the introduction of the EU AI Act marks a more advanced regulatory approach. The framework sets out rules based on levels of risk and is widely seen as the first comprehensive attempt to govern AI at scale.

William Stewart, President and Founder of Povaddo, said: "What makes these findings so significant is who is saying it. These are the practitioners who work inside the policy process every day, spanning every corner of the policy world from defense to healthcare to finance, not activists or everyday citizens. These findings foreshadow real action. The current path of governments accelerating AI deployment while falling short on governance is not sustainable, and the people who know that best are the ones in this survey. You cannot have nine-in-ten policy insiders demanding more regulation and four-in-ten calling AI an existential threat without that eventually moving the needle in Washington and Brussels in terms of legislative or regulatory action".

Taken together, the survey reflects a shift in how AI is being discussed within policymaking circles. Concern is no longer limited to future risks. It is increasingly tied to current gaps in governance and the pace of deployment.

Artificial Intelligence

Neuron7’s Neuro Brings a New Kind of Intelligence — One That Refuses to Guess

Examining the shift from fast answers to verified intelligence in enterprise AI.

Updated

January 8, 2026 6:33 PM

Startup employee reviewing business metrics on an AI-powered dashboard. PHOTO: FREEPIK

Neuron7.ai, a company that builds AI systems to help service teams resolve technical issues faster, has launched Neuro. It is a new kind of AI agent built for environments where accuracy matters more than speed. From manufacturing floors to hospital equipment rooms, Neuro is designed for situations where a wrong answer can halt operations.

What sets Neuro apart is its focus on reliability. Instead of relying solely on large language models that often produce confident but inaccurate responses, Neuro combines deterministic AI — which draws on verified, trusted data — with autonomous reasoning for more complex cases. This hybrid design helps the system provide context-aware resolutions without inventing answers or “hallucinating”, a common issue that has made many enterprises cautious about adopting agentic AI.

“Enterprise adoption of agentic AI has stalled despite massive vendor investment. Gartner predicts 40% of projects will be canceled by 2027 due to reliability concerns”, said Niken Patel, CEO and Co-Founder of Neuron7. “The root cause is hallucinations. In service operations, outcomes are binary. An issue is either resolved or it is not. Probabilistic AI that is right only 70% of the time fails 30% of your customers and that failure rate is unacceptable for mission-critical service”.

That concern shaped how Neuro was built. “We use deterministic guided fixes for known issues. No guessing, no hallucinations — and reserve autonomous AI reasoning for complex scenarios. What sets Neuro apart is knowing which mode to use. While competitors race to make agents more autonomous, we're focused on making service resolution more accurate and trusted”, Patel explained.
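The routing logic Patel describes can be illustrated with a minimal sketch. This is not Neuron7's actual implementation or API; the names (`KNOWN_FIXES`, `autonomous_reasoning`, `resolve`) are hypothetical, and the sketch only shows the general pattern of checking a verified knowledge base first and falling back to model-driven reasoning for novel cases:

```python
# Hypothetical sketch of a deterministic-first resolution router.
# All names and data here are illustrative, not Neuron7's product API.

# Verified, curated fixes for known issues (the deterministic path).
KNOWN_FIXES = {
    "mri_error_e042": [
        "Power-cycle the gradient amplifier.",
        "Re-run the coil calibration routine.",
    ],
}

def autonomous_reasoning(issue: str) -> list[str]:
    """Placeholder for the model-driven path used on novel issues."""
    return [f"Escalate '{issue}' for autonomous analysis of service records."]

def resolve(issue: str) -> tuple[str, list[str]]:
    """Route known issues to verified fixes; reason autonomously otherwise."""
    if issue in KNOWN_FIXES:
        # Known issue: return the curated steps verbatim, no generation.
        return ("deterministic", KNOWN_FIXES[issue])
    # Novel issue: fall back to the (slower, probabilistic) reasoning path.
    return ("autonomous", autonomous_reasoning(issue))
```

The point of the pattern is the first branch: when a verified fix exists, the system never invokes a generative model at all, which is one straightforward way to keep hallucinations out of the known-issue path.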

At the heart of Neuro is the Smart Resolution Hub, Neuron7’s central intelligence layer that consolidates service data, knowledge bases and troubleshooting workflows into one conversational experience. This means a technician can describe a problem — say, a diagnostic error in an MRI scanner — and Neuro can instantly generate a verified, step-by-step solution. If the problem hasn’t been encountered before, it can autonomously scan through thousands of internal and external data points to identify the most likely fix, all while maintaining traceability and compliance.

Neuro’s architecture also makes it practical for real-world use. It integrates seamlessly with enterprise systems such as Salesforce, Microsoft, ServiceNow and SAP, allowing companies to embed it within their existing support operations. Early users of Neuron7’s platform have reported measurable improvements — faster resolutions, higher customer satisfaction and reduced downtime — thanks to guided intelligence that scales expert-level problem solving across teams.

The timing of Neuro’s debut feels deliberate. As organizations look to move past the hype of generative AI, trust and accountability have become the new benchmarks. AI systems that can explain their reasoning and stay within verifiable boundaries are emerging as the next phase of enterprise adoption.

“The market has figured out how to build autonomous agents”, Patel said. “The unsolved problem is building accurate agents for contexts where errors have consequences. Neuro fills that gap”.

Neuron7 is building a system that knows its limits — one that reasons carefully, acts responsibly and earns trust where it matters most. In a space dominated by speculation, that discipline may well redefine what “intelligent” really means in enterprise AI.