Artificial Intelligence


A step forward that could influence how smart contracts are designed and verified.

A new collaboration between ChainGPT, an AI company specialising in blockchain development tools, and Secret Network, a privacy-focused blockchain platform, is redefining how developers can safely build smart contracts with artificial intelligence. Together, they’ve achieved a major industry first: an AI model trained exclusively to write and audit Solidity code is now running inside a Trusted Execution Environment (TEE). For the blockchain ecosystem, this marks a turning point in how AI, privacy and on-chain development can work together.

For years, smart-contract developers have faced a trade-off. AI assistants could speed up coding and security reviews, but only if developers uploaded their most sensitive source code to external servers. That meant exposing intellectual property, confidential logic and even potential vulnerabilities. In an industry where trust is everything, this risk held many teams back from using AI at all.

ChainGPT’s Solidity-LLM aims to solve that problem. It is a specialised large language model trained on over 650,000 curated Solidity contracts, giving it a deep understanding of how real smart contracts are structured, optimised and secured. And now, by running inside SecretVM, the Confidential Virtual Machine that powers Secret Network’s encrypted compute layer, the model can assist developers without ever revealing their code to outside parties.

“Confidential computing is no longer an abstract concept,” said Luke Bowman, COO of the Secret Network Foundation. “We've shown that you can run a complex AI model, purpose-built for Solidity, inside a fully encrypted environment and that every inference can be verified on-chain. This is a real milestone for both privacy and decentralised infrastructure”.

SecretVM makes this workflow possible by using hardware-backed encryption to protect all data while computations take place. Developers don’t interact with the underlying hardware or cryptography. Instead, they simply work inside a private, sealed environment where their code stays invisible to everyone except them—even node operators. For the first time, developers can generate, test and analyse smart contracts with AI while keeping every detail confidential.

This shift opens new possibilities for the broader blockchain community. Developers gain a private coding partner that can streamline contract logic or catch vulnerabilities without risking leaks. Auditors can rely on AI-assisted analysis while keeping sensitive audit material protected. Enterprises working in finance, healthcare or governance finally have a path to adopt AI-driven blockchain automation without raising compliance concerns. Even decentralised organisations can run smart-contract agents that make decisions privately, without exposing internal logic on a public chain.

The system also supports secure model training and fine-tuning on encrypted datasets. This enables collaborative AI development without forcing anyone to share raw data—a meaningful step toward decentralised and privacy-preserving AI at scale.

By combining specialised AI with confidential computing, ChainGPT and Secret Network are shifting the trust model of on-chain development. Instead of relying on centralised cloud AI services, developers now have a verifiable, encrypted environment where they keep full control of their code, their data and their workflow. It’s a practical solution to one of blockchain’s biggest challenges: using powerful AI tools without sacrificing privacy.

As the technology evolves, the roadmap includes confidential model fine-tuning, multi-agent AI systems and cross-chain use cases. But the core advancement is already clear: developers now have a way to use AI for smart contract development that is fast, private and verifiable—without compromising the security standards that decentralised systems rely on.

Clinically grounded, game-based and always available — MIRDC’s AI system is redefining how children learn to communicate.

Speech and language delays are common, yet access to therapy remains limited. In Taiwan, only about 2,200 licensed speech-language pathologists serve hundreds of thousands of children who need support—especially those with autism spectrum disorders or significant communication challenges. As a result, many children miss crucial periods of language development simply because help isn’t available soon enough.

MIRDC’s new AI-powered interactive speech therapy system aims to close that gap. Instead of focusing solely on articulation, it targets a wider range of language skills that many children struggle with: oral expression, comprehension, sentence building and conversational ability. This makes it a more complete tool for childhood speech and language development.

The system combines game-based learning, AI-driven guidance and automated language assessment into one platform that can be used both in clinics and at home. This integrated design helps children practice more consistently and gives therapists and parents clearer insight into their progress.

The interactive game modules are built around clinically validated therapy methods. Imitation exercises, picture cards, storybooks and conversational prompts are turned into structured game levels, each aligned with a specific developmental goal. This step-by-step approach helps children move from simple naming tasks to more complex comprehension and response skills, all within a sequenced curriculum.

A key differentiator is the system’s real-time AI speech interpretation. As the child talks, the AI analyzes the response and generates tailored therapeutic cues—such as imitation, modeling, expansion or extension—based on the conversation. These are the same strategies used by speech-language pathologists, but now children can access them continuously, supporting more effective at-home practice and reducing long gaps between sessions.

After each session, the system automatically conducts a data-driven language assessment using 20 objective indicators across semantics, syntax and pragmatics. This provides clinicians and families with measurable, easy-to-understand reports that show how the child is progressing and which skills need more attention—something many traditional tools do not offer.
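To make the rollup concrete, here is a minimal sketch of how per-indicator scores could be averaged into the three domains the article names. MIRDC has not published its scoring internals, so the indicator names, weights and 0–100 scale below are hypothetical.

```python
# Illustrative sketch only: the indicator names and equal weighting
# are assumptions, not MIRDC's actual methodology.
from statistics import mean

# Hypothetical mapping of indicators to the three domains named above.
DOMAINS = {
    "semantics": ["vocabulary_breadth", "word_retrieval", "category_naming"],
    "syntax": ["sentence_length", "grammatical_accuracy", "clause_use"],
    "pragmatics": ["turn_taking", "topic_maintenance", "conversational_repair"],
}

def summarize_session(indicator_scores: dict) -> dict:
    """Roll 0-100 indicator scores up into per-domain averages."""
    report = {}
    for domain, indicators in DOMAINS.items():
        scores = [indicator_scores[i] for i in indicators if i in indicator_scores]
        report[domain] = round(mean(scores), 1) if scores else None
    return report

session = {"vocabulary_breadth": 72, "word_retrieval": 65, "category_naming": 80,
           "sentence_length": 55, "grammatical_accuracy": 60, "clause_use": 50,
           "turn_taking": 85, "topic_maintenance": 78, "conversational_repair": 70}
print(summarize_session(session))
# → {'semantics': 72.3, 'syntax': 55.0, 'pragmatics': 77.7}
```

A report of this shape would let a clinician see at a glance that, for this child, syntax needs the most attention.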

By offering a personalized, scalable and clinically grounded solution, MIRDC’s AI therapy system helps address the ongoing shortage of speech-language services. It doesn’t replace therapists; instead, it extends their reach, allows for more consistent practice and helps families support their child’s communication at home.

As an added recognition of its impact, the system recently earned two R&D 100 Awards, including the Silver Award for Corporate Social Responsibility. But at its core, the project remains focused on a simple mission: making high-quality speech therapy accessible to every child who needs a voice.

Redefining sensor performance with advanced physical AI and signal processing.

Atomathic, the company once known as Neural Propulsion Systems, is stepping into the spotlight with a bold claim: its new AI platforms can help machines “see the invisible”. With the commercial launch of AIDAR™ and AISIR™, the company says it is opening a new chapter for physical AI, AI sensing and advanced sensor technology across automotive, aviation, defense, robotics and semiconductor manufacturing.

The idea behind these platforms is simple yet ambitious. Machines gather enormous amounts of signal data, yet they still struggle to understand the faint, fast or hidden details that matter most when making decisions. Atomathic says its software closes that gap. By applying AI signal processing directly to raw physical signals, the company aims to help sensors pick up subtle patterns that traditional systems miss, enabling faster reactions and more confident autonomous system performance.

"To realize the promise of physical AI, machines must achieve greater autonomy, precision and real-time decision-making—and Atomathic is defining that future," said Dr. Behrooz Rezvani, Founder and CEO of Atomathic. "We make the invisible visible. Our technology fuses the rigor of mathematics with the power of AI to transform how sensors and machines interact with the world—unlocking capabilities once thought to be theoretical. What can be imagined mathematically can now be realized physically."

This technical shift is powered by Atomathic’s deeper mathematical framework. The core of its approach is a method called hyperdefinition technology, which uses the Atomic Norm and fast computational techniques to map sparse physical signals. In simple terms, it pulls clarity out of chaos. This enables ultra-high-resolution signal visualization in real time—something the company claims has never been achieved at this scale in real-time sensing.
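For context, the atomic norm is an established object in the sparse-recovery literature; the general formulation below is from that literature and is not a description of Atomathic's proprietary hyperdefinition method. Given a dictionary of atoms $\mathcal{A}$ (for example, complex sinusoids in line-spectral estimation), the atomic norm of a signal $x$ is

```latex
\|x\|_{\mathcal{A}} \;=\; \inf\{\, t > 0 \;:\; x \in t \cdot \operatorname{conv}(\mathcal{A}) \,\},
```

and recovering a sparse signal from a noisy measurement $y$ is typically posed as the denoising problem

```latex
\hat{x} \;=\; \arg\min_{x} \; \tfrac{1}{2}\,\|y - x\|_2^2 \;+\; \tau\,\|x\|_{\mathcal{A}},
```

which favours signals built from few atoms, i.e. it "pulls clarity out of chaos" by explaining the measurement with the sparsest plausible combination of elementary signal components.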

AIDAR and AISIR are already being trialled and integrated across multiple sectors and they’re designed to work with a broad range of hardware. That hardware-agnostic design is poised to matter even more as industries shift toward richer, more detailed sensing. Analysts expect the automotive sensor market to surge in the coming years, with radar imaging, next-gen ADAS systems and high-precision machine perception playing increasingly central roles.

Atomathic’s technology comes from a tight-knit team with deep roots in mathematics, machine intelligence and AI research, drawing talent from institutions such as Caltech, UCLA, Stanford and the Technical University of Munich. After seven years of development, the company is ready to show its progress publicly, starting with demonstrations at CES 2026 in Las Vegas.

If the future of autonomy depends on machines perceiving the world with far greater fidelity, Atomathic is betting that the next leap forward won’t come from more hardware, but from rethinking the math behind the signal and redefining what physical AI can do.

HKU professor apologizes after PhD student’s AI-assisted paper cites fabricated sources.

It’s no surprise that artificial intelligence, while remarkably capable, can also go astray—spinning convincing but entirely fabricated narratives. From politics to academia, AI’s “hallucinations” have repeatedly shown how powerful technology can go off-script when left unchecked.

Take Grok-2, for instance. In July 2024, the chatbot misled users about ballot deadlines in several U.S. states, just days after President Joe Biden dropped his re-election bid against former President Donald Trump. A year earlier, a U.S. lawyer found himself in court for relying on ChatGPT to draft a legal brief—only to discover that the AI tool had invented entire cases, citations and judicial opinions. And now, the academic world has its own cautionary tale.

Recently, a journal paper from the Department of Social Work and Social Administration at the University of Hong Kong was found to contain fabricated citations—sources apparently created by AI. The paper, titled “Forty Years of Fertility Transition in Hong Kong,” analyzed the decline in Hong Kong’s fertility rate over the past four decades. Authored by doctoral student Yiming Bai, along with Yip Siu-fai, Vice Dean of the Faculty of Social Sciences, and other university officials, the study identified falling marriage rates as a key driver behind the city’s shrinking birth rate. The authors recommended structural reforms to make Hong Kong’s social and work environment more family-friendly.

But the credibility of the paper came into question when inconsistencies surfaced among its references. Out of 61 cited works, some included DOI (Digital Object Identifier) links that led to dead ends, displaying “DOI Not Found.” Others claimed to originate from academic journals, yet searches yielded no such publications.

Speaking to HK01, Yip acknowledged that his student had used AI tools to organize the citations but failed to verify the accuracy of the generated references. “As the corresponding author, I bear responsibility”, Yip said, apologizing for the damage caused to the University of Hong Kong and the journal’s reputation. He clarified that the paper itself had undergone two rounds of verification and that its content was not fabricated—only the citations had been mishandled.

Yip has since contacted the journal’s editor, who accepted his explanation and agreed to re-upload a corrected version in the coming days. A formal notice addressing the issue will also be released. Yip said he would personally review each citation “piece by piece” to ensure no errors remain.

As for the student involved, Yip described her as a diligent and high-performing researcher who made an honest mistake in her first attempt at using AI for academic assistance. Rather than penalize her, Yip chose a more constructive approach, urging her to take a course on how to use AI tools responsibly in academic research.

Ultimately, in an age where generative AI can produce everything from essays to legal arguments, there are two lessons to take away from this episode. First, AI is a powerful assistant, but only that. The final judgment must always rest with us. No matter how seamless the output seems, cross-checking and verifying information remain essential. Second, as AI becomes integral to academic and professional life, institutions must equip students and employees with the skills to use it responsibly. Training and mentorship are no longer optional; they’re the foundation for using AI to enhance, not undermine, human work.

Because in this age of intelligent machines, staying relevant isn’t about replacing human judgment with AI, it’s about learning how to work alongside it.

Examining the shift from fast answers to verified intelligence in enterprise AI.

Neuron7.ai, a company that builds AI systems to help service teams resolve technical issues faster, has launched Neuro, a new kind of AI agent built for environments where accuracy matters more than speed. From manufacturing floors to hospital equipment rooms, Neuro is designed for situations where a wrong answer can halt operations.

What sets Neuro apart is its focus on reliability. Instead of relying solely on large language models that often produce confident but inaccurate responses, Neuro combines deterministic AI — which draws on verified, trusted data — with autonomous reasoning for more complex cases. This hybrid design helps the system provide context-aware resolutions without inventing answers or “hallucinating”, a common issue that has made many enterprises cautious about adopting agentic AI.

“Enterprise adoption of agentic AI has stalled despite massive vendor investment. Gartner predicts 40% of projects will be canceled by 2027 due to reliability concerns”, said Niken Patel, CEO and Co-Founder of Neuron7. “The root cause is hallucinations. In service operations, outcomes are binary. An issue is either resolved or it is not. Probabilistic AI that is right only 70% of the time fails 30% of your customers and that failure rate is unacceptable for mission-critical service”.

That concern shaped how Neuro was built. “We use deterministic guided fixes for known issues. No guessing, no hallucinations — and reserve autonomous AI reasoning for complex scenarios. What sets Neuro apart is knowing which mode to use. While competitors race to make agents more autonomous, we're focused on making service resolution more accurate and trusted”, Patel explained.
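The deterministic-first routing Patel describes can be sketched in a few lines. Neuron7 has not published Neuro's internals, so the knowledge base, symptom strings and function names below are invented for illustration only.

```python
# Hypothetical sketch of deterministic-first routing; not Neuro's actual code.

KNOWN_FIXES = {  # verified knowledge base: symptom -> vetted, step-by-step fix
    "E-104 coolant fault": ["Check coolant level", "Inspect pump relay", "Reset controller"],
}

def autonomous_reasoning(symptom: str) -> dict:
    """Placeholder for the fallback path that searches wider service data."""
    return {"mode": "autonomous", "steps": [f"Investigate '{symptom}' across service logs"]}

def resolve(symptom: str) -> dict:
    # Deterministic path first: answers come from verified data,
    # so there is nothing to hallucinate.
    if symptom in KNOWN_FIXES:
        return {"mode": "deterministic", "steps": KNOWN_FIXES[symptom]}
    # Only genuinely novel issues fall through to open-ended reasoning.
    return autonomous_reasoning(symptom)

print(resolve("E-104 coolant fault")["mode"])    # deterministic
print(resolve("unseen vibration issue")["mode"])  # autonomous
```

The point of the sketch is the ordering: the probabilistic path is a fallback, not the default, which is what "knowing which mode to use" amounts to in practice.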

At the heart of Neuro is the Smart Resolution Hub, Neuron7’s central intelligence layer that consolidates service data, knowledge bases and troubleshooting workflows into one conversational experience. This means a technician can describe a problem — say, a diagnostic error in an MRI scanner — and Neuro can instantly generate a verified, step-by-step solution. If the problem hasn’t been encountered before, it can autonomously scan through thousands of internal and external data points to identify the most likely fix, all while maintaining traceability and compliance.

Neuro’s architecture also makes it practical for real-world use. It integrates seamlessly with enterprise systems such as Salesforce, Microsoft, ServiceNow and SAP, allowing companies to embed it within their existing support operations. Early users of Neuron7’s platform have reported measurable improvements — faster resolutions, higher customer satisfaction and reduced downtime — thanks to guided intelligence that scales expert-level problem solving across teams.

The timing of Neuro’s debut feels deliberate. As organizations look to move past the hype of generative AI, trust and accountability have become the new benchmarks. AI systems that can explain their reasoning and stay within verifiable boundaries are emerging as the next phase of enterprise adoption.

“The market has figured out how to build autonomous agents”, Patel said. “The unsolved problem is building accurate agents for contexts where errors have consequences. Neuro fills that gap”.

Neuron7 is building a system that knows its limits — one that reasons carefully, acts responsibly and earns trust where it matters most. In a space dominated by speculation, that discipline may well redefine what “intelligent” really means in enterprise AI.

From information gaps to global access — how AI is reshaping the pursuit of knowledge.

Encyclopaedias have always been mirrors of their time — from heavy leather-bound volumes in the 19th century to Wikipedia’s community-edited pages online. But as the world’s information multiplies faster than humans can catalogue it, even open platforms struggle to keep pace. Enter Botipedia, a new project from INSEAD, The Business School for the World, that reimagines how knowledge can be created, verified and shared using artificial intelligence.

At its core, Botipedia is powered by proprietary AI that automates the process of writing encyclopaedia entries. Instead of relying on volunteers or editors, it uses a system called Dynamic Multi-method Generation (DMG) — a method that combines hundreds of algorithms and curated datasets to produce high-quality, verifiable content. This AI doesn’t just summarise what already exists; it synthesises information from archives, satellite feeds and data libraries to generate original text grounded in facts.

What makes this innovation significant is the gap it fills in global access to knowledge. While Wikipedia hosts roughly 64 million English-language entries, languages like Swahili have fewer than 40,000 articles — leaving most of the world’s population outside the circle of easily available online information. Botipedia aims to close that gap by generating over 400 billion entries across 100 languages, ensuring that no subject, event or region is overlooked.

"We are creating Botipedia to provide everyone with equal access to information, with no language left behind", says Phil Parker, INSEAD Chaired Professor of Management Science, creator of Botipedia and holder of one of the pioneering patents in the field of generative AI. "We focus on content grounded in data and sources with full provenance, allowing the user to see as many perspectives as possible, as opposed to one potentially biased source".

Unlike many generative AI tools that depend on large language models (LLMs), Botipedia adapts its methods based on the type of content. For instance, weather data is generated using geo-spatial techniques to cover every possible coordinate on Earth. This targeted, multi-method approach helps boost both the accuracy and reliability of what it produces — key challenges in today’s AI-driven content landscape.
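Method selection by content type can be pictured as a dispatch table. Botipedia's DMG system is proprietary, so the routing table and generator names below are assumptions made purely for the sake of example.

```python
# Illustrative only: not Botipedia's actual DMG implementation.

def geospatial_generator(topic: str) -> str:
    return f"Climate summary for {topic}, derived from gridded coordinate data."

def archival_generator(topic: str) -> str:
    return f"Entry on {topic}, synthesised from archival sources with provenance."

# The content type decides the generation method,
# rather than one LLM handling everything.
METHOD_TABLE = {
    "weather": geospatial_generator,
    "history": archival_generator,
}

def generate_entry(content_type: str, topic: str) -> str:
    generator = METHOD_TABLE.get(content_type, archival_generator)
    return generator(topic)

print(generate_entry("weather", "Nairobi"))
```

Routing each entry to the method best suited to its data is what distinguishes a multi-method system from a single general-purpose model.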

The innovation is also energy-efficient. Its DMG system operates at a fraction of the processing power required by GPU-heavy models like ChatGPT, making it a sustainable alternative for large-scale content generation.

By combining AI precision, linguistic inclusivity and academic credibility, Botipedia positions itself as more than a digital library — it’s a step toward universal, unbiased access to verified knowledge.

"Botipedia is one of many initiatives of the Human and Machine Intelligence Institute (HUMII) that we are establishing at INSEAD", says Lily Fang, Dean of Research and Innovation at INSEAD. "It is a practical application that builds on INSEAD-linked IP to help people make better decisions with knowledge powered by technology. We want technologies that enhance the quality and meaning of our work and life, to retain human agency and value in the age of intelligence".

By harnessing AI to bridge gaps of language, geography and credibility, Botipedia points to a future where access to knowledge is no longer a privilege, but a shared global resource.

The upgraded CodeFusion Studio 2.0 simplifies how developers design, test and deploy AI on embedded systems.

Analog Devices (ADI), a global semiconductor company, launched CodeFusion Studio™ 2.0 on November 3, 2025. The new version of its open-source development platform is designed to make it easier and faster for developers to build AI-powered embedded systems that run on ADI’s processors and microcontrollers.

“The next era of embedded intelligence requires removing friction from AI development”, said Rob Oshana, Senior Vice President of the Software and Digital Platforms group at ADI. “CodeFusion Studio 2.0 transforms the developer experience by unifying fragmented AI workflows into a seamless process, empowering developers to leverage the full potential of ADI's cutting-edge products with ease so they can focus on innovating and accelerating time to market”.

The upgraded platform introduces new tools for hardware abstraction, AI integration and automation. These help developers move more easily from early design to deployment.

CodeFusion Studio 2.0 enables complete AI workflows, allowing teams to use their own models and deploy them on everything from low-power edge devices to advanced digital signal processors (DSPs).

Built on Microsoft Visual Studio Code, the new CodeFusion Studio offers built-in checks for model compatibility, along with performance testing and optimization tools that help reduce development time. Building on these capabilities, a new modular framework based on Zephyr OS lets developers test and monitor how AI and machine learning models perform in real time. This gives clearer insight into how each part of a model behaves during operation and helps fine-tune performance across different hardware setups.

The CodeFusion Studio System Planner has also been redesigned to handle more device types and complex, multi-core applications. With new built-in diagnostic and debugging features — like integrated memory analysis and visual error tracking — developers can now troubleshoot problems faster and keep their systems running more efficiently.

This launch marks a deeper pivot for ADI. Long known for high-precision analog chips and converters, the company is expanding its edge-AI and software capabilities to enable what it calls Physical Intelligence — systems that can perceive, reason, and act locally.  

“Companies that deliver physically aware AI solutions are poised to transform industries and create new, industry-leading opportunities. That's why we're creating an ecosystem that enables developers to optimize, deploy and evaluate AI models seamlessly on ADI hardware, even without physical access to a board”, said Paul Golding, Vice President of Edge AI and Robotics at ADI. “CodeFusion Studio 2.0 is just one step we're taking to deliver Physical Intelligence to our customers, ultimately enabling them to create systems that perceive, reason and act locally, all within the constraints of real-world physics”.

Robots that learn on the job: AgiBot tests reinforcement learning in real-world manufacturing.

Shanghai-based robotics firm AgiBot has taken a major step toward bringing artificial intelligence into real manufacturing. The company announced that its Real-World Reinforcement Learning (RW-RL) system has been successfully deployed on a pilot production line run in partnership with Longcheer Technology, marking one of the first real applications of reinforcement learning in industrial robotics.

The project represents a key shift in factory automation. For years, precision manufacturing has relied on rigid setups: robots that need custom fixtures, intricate programming and long calibration cycles. Even newer systems combining vision and force control often struggle with slow deployment and complex maintenance. AgiBot’s system aims to change that by letting robots learn and adapt on the job, reducing the need for extensive tuning or manual reconfiguration.

The RW-RL setup allows a robot to pick up new tasks within minutes rather than weeks. Once trained, the system can automatically adjust to variations, such as changes in part placement or size tolerance, maintaining steady performance throughout long operations. When production lines switch models or products, only minor hardware tweaks are needed. This flexibility could significantly cut downtime and setup costs in industries where rapid product turnover is common.
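The "adjust to variations" behaviour can be illustrated with a toy error-feedback loop. This is a generic illustration of learning from repeated attempts, not AgiBot's RW-RL algorithm, which has not been published; the learning rate and drift values are made up.

```python
# Toy illustration of on-the-job adaptation; not AgiBot's actual algorithm.

def update_offset(offset: float, error: float, lr: float = 0.5) -> float:
    """Nudge the estimated part-placement offset toward the observed error."""
    return offset + lr * error

# Simulated drift: parts now arrive 2.0 mm off from the taught position.
true_shift, offset = 2.0, 0.0
for _ in range(10):
    error = true_shift - offset   # feedback measured after each attempt
    offset = update_offset(offset, error)

print(round(offset, 3))  # → 1.998, converging on the 2.0 mm shift
```

The robot closes in on the new part position from its own feedback, with no fixture change and no reprogramming, which is the practical meaning of "learning on the job".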

The system’s main strengths lie in faster deployment, high adaptability and easier reconfiguration. In practice, robots can be retrained quickly for new tasks without needing new fixtures or tools — a long-standing obstacle in consumer electronics production. The platform also works reliably across different factory layouts, showing potential for broader use in complex or varied manufacturing environments.

Beyond its technical claims, the milestone demonstrates a deeper convergence between algorithmic intelligence and mechanical motion. Instead of being tested only in the lab, AgiBot’s system was tried in real factory settings, showing it can perform reliably outside research conditions.

This progress builds on years of reinforcement learning research, which has gradually pushed AI toward greater stability and real-world usability. AgiBot’s Chief Scientist Dr. Jianlan Luo and his team have been at the forefront of that effort, refining algorithms capable of reliable performance on physical machines. Their work now underpins a production-ready platform that blends adaptive learning with precision motion control — turning what was once a research goal into a working industrial solution.

Looking forward, the two companies plan to extend the approach to other manufacturing areas, including consumer electronics and automotive components. They also aim to develop modular robot systems that can integrate smoothly with existing production setups.