Artificial Intelligence

Policy Experts Warn Governments Are Falling Behind on AI Regulation

A rare policy consensus emerges as AI’s impact moves beyond innovation into governance and societal risk

Updated

May 5, 2026 5:42 PM

A mechanical hand reaching for a human hand. PHOTO: UNSPLASH

A new survey from Povaddo, a policy research firm, suggests that concern about artificial intelligence is no longer limited to industry or academia. It is now firmly present within the policy community.

The survey draws on responses from 301 public policy professionals across the United States and Europe, including lawmakers, staffers and analysts involved in shaping and evaluating public policy. A majority of respondents—61%—say governments are falling short in addressing the negative impacts of AI.

There is also broad agreement that regulation needs to increase. In the United States, 92% of respondents support stronger AI regulation, compared to 70% in Europe. At a time when consensus is often difficult, the findings point to a shared view across policy circles that current frameworks are not keeping pace with technological development.

Differences emerge when looking at how AI is affecting national contexts. In the U.S., 57% of policy experts believe AI is already harming the labor market. In Europe, 34% say the same. U.S. respondents are also more likely to see AI as a greater threat to jobs than immigration, with 63% holding that view compared to 47% in Europe.

On misinformation, responses are closely aligned. A large majority of policy experts in both regions expect an AI-driven misinformation crisis within the next one to two years—87% in the U.S. and 82% in Europe. Many also believe that AI-generated or AI-amplified misinformation could affect elections and public health information.

Some respondents frame the risks in more fundamental terms. In the United States, 41% of policy experts say AI poses an existential threat to humanity. In Europe, 29% share that view. U.S. respondents are also more likely to believe that advances in AI could harm global security and stability.

The findings come as policymakers begin to respond more actively. In the U.S., Senators Josh Hawley, Richard Blumenthal and Mark Warner have introduced bipartisan legislation focused on AI accountability, including measures aimed at protecting workers and children.

In Europe, the introduction of the EU AI Act marks a more advanced regulatory approach. The framework sets out rules based on levels of risk and is widely seen as the first comprehensive attempt to govern AI at scale.

William Stewart, President and Founder of Povaddo, said: "What makes these findings so significant is who is saying it. These are the practitioners who work inside the policy process every day, spanning every corner of the policy world from defense to healthcare to finance, not activists or everyday citizens. These findings foreshadow real action. The current path of governments accelerating AI deployment while falling short on governance is not sustainable, and the people who know that best are the ones in this survey. You cannot have nine-in-ten policy insiders demanding more regulation and four-in-ten calling AI an existential threat without that eventually moving the needle in Washington and Brussels in terms of legislative or regulatory action".

Taken together, the survey reflects a shift in how AI is being discussed within policymaking circles. Concern is no longer limited to future risks. It is increasingly tied to current gaps in governance and the pace of deployment.


How ChainGPT and Secret Network Bring Private, Verifiable AI Coding On-Chain

A step forward that could influence how smart contracts are designed and verified.

Updated

January 8, 2026 6:32 PM

ChainGPT's robot mascot. IMAGE: CHAINGPT

A new collaboration between ChainGPT, an AI company specialising in blockchain development tools, and Secret Network, a privacy-focused blockchain platform, is redefining how developers can safely build smart contracts with artificial intelligence. Together, they have achieved a notable industry first: an AI model trained exclusively to write and audit Solidity code is now running inside a Trusted Execution Environment (TEE). For the blockchain ecosystem, this marks a turning point in how AI, privacy and on-chain development can work together.

For years, smart-contract developers have faced a trade-off. AI assistants could speed up coding and security reviews, but only if developers uploaded their most sensitive source code to external servers. That meant exposing intellectual property, confidential logic and even potential vulnerabilities. In an industry where trust is everything, this risk held many teams back from using AI at all.

ChainGPT’s Solidity-LLM aims to solve that problem. It is a specialised large language model trained on over 650,000 curated Solidity contracts, giving it a deep understanding of how real smart contracts are structured, optimised and secured. And now, by running inside SecretVM, the Confidential Virtual Machine that powers Secret Network’s encrypted compute layer, the model can assist developers without ever revealing their code to outside parties.

“Confidential computing is no longer an abstract concept,” said Luke Bowman, COO of the Secret Network Foundation. “We've shown that you can run a complex AI model, purpose-built for Solidity, inside a fully encrypted environment and that every inference can be verified on-chain. This is a real milestone for both privacy and decentralised infrastructure”.
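The article does not describe the verification mechanism itself, so as a purely illustrative sketch of what "every inference can be verified on-chain" could mean, here is one common pattern: the enclave publishes a hash commitment binding the model identifier, the input and the output, and anyone who later holds the plaintexts can recompute the commitment and compare. The function names, record format and model ID below are hypothetical and are not the actual SecretVM or ChainGPT protocol.

```python
import hashlib
import json

def attest_inference(code: str, analysis: str, model_id: str) -> str:
    """Hypothetical commitment a TEE might publish on-chain: a hash
    binding the model ID to hashes of the input code and output analysis,
    so the plaintexts never need to leave the enclave."""
    record = json.dumps(
        {
            "model": model_id,
            "input": hashlib.sha256(code.encode()).hexdigest(),
            "output": hashlib.sha256(analysis.encode()).hexdigest(),
        },
        sort_keys=True,
    )
    return hashlib.sha256(record.encode()).hexdigest()

def verify_inference(code: str, analysis: str, model_id: str,
                     commitment: str) -> bool:
    """Recompute the commitment from the plaintexts and compare it to the
    value recorded on-chain."""
    return attest_inference(code, analysis, model_id) == commitment

# Illustrative use: the developer checks that a received audit report
# matches the commitment the enclave published for their contract.
contract = "pragma solidity ^0.8.0; contract C { uint256 x; }"
report = "No issues found."
commitment = attest_inference(contract, report, "solidity-llm-v1")
assert verify_inference(contract, report, "solidity-llm-v1", commitment)
```

Note that this sketch only shows the commit-and-recheck shape of verifiable inference; a production design would also need remote attestation of the enclave and a signature over the commitment, details the article does not cover.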

SecretVM makes this workflow possible by using hardware-backed encryption to protect all data while computations take place. Developers don’t interact with the underlying hardware or cryptography. Instead, they simply work inside a private, sealed environment where their code stays invisible to everyone except them—even node operators. For the first time, developers can generate, test and analyse smart contracts with AI while keeping every detail confidential.

This shift opens new possibilities for the broader blockchain community. Developers gain a private coding partner that can streamline contract logic or catch vulnerabilities without risking leaks. Auditors can rely on AI-assisted analysis while keeping sensitive audit material protected. Enterprises working in finance, healthcare or governance finally have a path to adopt AI-driven blockchain automation without raising compliance concerns. Even decentralised organisations can run smart-contract agents that make decisions privately, without exposing internal logic on a public chain.

The system also supports secure model training and fine-tuning on encrypted datasets. This enables collaborative AI development without forcing anyone to share raw data—a meaningful step toward decentralised and privacy-preserving AI at scale.

By combining specialised AI with confidential computing, ChainGPT and Secret Network are shifting the trust model of on-chain development. Instead of relying on centralised cloud AI services, developers now have a verifiable, encrypted environment where they keep full control of their code, their data and their workflow. It’s a practical solution to one of blockchain’s biggest challenges: using powerful AI tools without sacrificing privacy.

As the technology evolves, the roadmap includes confidential model fine-tuning, multi-agent AI systems and cross-chain use cases. But the core advancement is already clear: developers now have a way to use AI for smart contract development that is fast, private and verifiable—without compromising the security standards that decentralised systems rely on.