A new approach examines how individual cells respond to drugs, aiming to identify risks earlier in development.
Updated April 15, 2026, 6:01 PM

DeepCyte, a startup in the drug development space, is focusing on a long-standing problem: why drugs that appear safe in early testing still fail in clinical trials or are withdrawn later due to toxicity. DeepCyte has launched with US$1.5 million in seed funding to build tools that detect and explain the harmful effects of drugs at much earlier stages.
The startup’s approach focuses on how individual cells respond to a drug. Instead of analysing cells in bulk, it studies them one by one. This helps capture differences in how cells react, which are often missed in traditional testing methods.
Drug toxicity remains one of the main reasons for failure in drug development. Methods such as animal testing and bulk cell analysis do not always reflect how human cells behave. This gap has pushed the industry to look for more reliable and human-relevant ways to test drug safety.
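The limitation of bulk readouts can be shown with a toy calculation (all numbers here are invented for illustration): averaging a measurement across a population of cells can make a drug look safe even when a subpopulation is severely affected.

```python
import statistics

# Hypothetical per-cell viability readings (1.0 = fully healthy) for ten
# cells exposed to a drug. A small subpopulation responds badly, but the
# bulk average -- the only number a pooled assay would report -- looks fine.
per_cell_viability = [0.98, 0.97, 0.99, 0.96, 0.98, 0.97, 0.99, 0.15, 0.12, 0.98]

bulk_average = statistics.mean(per_cell_viability)           # about 0.81
affected = [v for v in per_cell_viability if v < 0.5]        # single-cell view
fraction_affected = len(affected) / len(per_cell_viability)  # 0.2

print(f"bulk average viability: {bulk_average:.2f}")
print(f"fraction of cells severely affected: {fraction_affected:.0%}")
```

The bulk average of roughly 0.81 suggests a mostly healthy population, while the per-cell view shows that one in five cells was badly hit, which is the kind of signal single-cell analysis is meant to surface.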
DeepCyte combines cell-level data with artificial intelligence. Its platform, MetaCore, studies what is happening inside individual cells by capturing detailed molecular information. This data is used to build large datasets that can train AI models.
Additionally, the company has developed an AI system called DeeImmuno. It is designed to predict whether a drug could be toxic and identify the biological reasons behind it. In internal testing on 100 drugs, the system identified different types of toxicity and their underlying mechanisms with a reported accuracy of 94 percent.
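Purely as an illustration of the general idea, and not DeepCyte's actual method, a toxicity predictor can be thought of as a classifier over per-cell features. The sketch below uses a toy nearest-centroid model with invented feature names and values.

```python
import math

# Toy training data: each "drug" is summarised by two hypothetical per-cell
# features (e.g. mean stress-marker expression, fraction of dying cells).
# Labels and numbers are invented for illustration only.
training = {
    "toxic": [(0.90, 0.70), (0.80, 0.80), (0.85, 0.60)],
    "safe":  [(0.10, 0.20), (0.20, 0.10), (0.15, 0.25)],
}

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {label: centroid(pts) for label, pts in training.items()}

def predict(features):
    # Assign the label of the nearest class centroid (Euclidean distance).
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], features))

print(predict((0.75, 0.65)))  # near the "toxic" centroid
print(predict((0.12, 0.18)))  # near the "safe" centroid
```

A production system would use far richer molecular profiles and a model that can also point to the biological mechanism behind a prediction, which is what DeepCyte says distinguishes its approach.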
The focus on explaining why a drug is toxic, not just whether it is, reflects a broader shift in the industry. Regulators such as the U.S. Food and Drug Administration and the European Medicines Agency have been encouraging methods that rely more on human cell data and clearer biological evidence.
The seed funding will be used to develop and scale these tools. The company aims to help drug developers make earlier decisions, which could reduce costly failures in later stages.
Whether tools like this become widely used will depend on how they perform in real-world settings. For now, DeepCyte’s approach highlights a growing effort to make drug testing more precise by focusing on how drugs affect cells at the most detailed level.
From information gaps to global access — how AI is reshaping the pursuit of knowledge.
Updated January 8, 2026, 6:33 PM
Encyclopaedias have always been mirrors of their time — from heavy leather-bound volumes in the 19th century to Wikipedia’s community-edited pages online. But as the world’s information multiplies faster than humans can catalogue it, even open platforms struggle to keep pace. Enter Botipedia, a new project from INSEAD, The Business School for the World, that reimagines how knowledge can be created, verified and shared using artificial intelligence.
At its core, Botipedia is powered by proprietary AI that automates the process of writing encyclopaedia entries. Instead of relying on volunteers or editors, it uses a system called Dynamic Multi-method Generation (DMG) — a method that combines hundreds of algorithms and curated datasets to produce high-quality, verifiable content. This AI doesn’t just summarise what already exists; it synthesises information from archives, satellite feeds and data libraries to generate original text grounded in facts.
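The idea of generating text from curated data rather than from a language model can be sketched in a few lines. The schema, field names, and template below are illustrative assumptions, not Botipedia's actual design: a structured, sourced fact record is rendered through a fixed template, so every figure in the output traces back to its source.

```python
# A minimal sketch of template-driven article generation. Given a structured
# fact record with provenance, a fixed template renders a verifiable entry --
# no language model involved.
facts = {
    "name": "Lake Example",
    "country": "Exampleland",
    "surface_km2": 120.5,
    "max_depth_m": 48,
    "source": "National Hydrology Survey, 2024",
}

TEMPLATE = (
    "{name} is a lake in {country} with a surface area of "
    "{surface_km2} km\u00b2 and a maximum depth of {max_depth_m} m. "
    "(Source: {source})"
)

entry = TEMPLATE.format(**facts)
print(entry)
```

Because the text is assembled deterministically from the record, checking the entry reduces to checking the underlying data, which is one way "full provenance" can be achieved at scale.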
What makes this innovation significant is the gap it fills in global access to knowledge. While Wikipedia hosts roughly 64 million entries across all of its language editions, languages like Swahili have fewer than 40,000 articles — leaving most of the world’s population outside the circle of easily available online information. Botipedia aims to close that gap by generating over 400 billion entries across 100 languages, ensuring that no subject, event or region is overlooked.
"We are creating Botipedia to provide everyone with equal access to information, with no language left behind", says Phil Parker, INSEAD Chaired Professor of Management Science, creator of Botipedia and holder of one of the pioneering patents in the field of generative AI. "We focus on content grounded in data and sources with full provenance, allowing the user to see as many perspectives as possible, as opposed to one potentially biased source".
Unlike many generative AI tools that depend on large language models (LLMs), Botipedia adapts its methods based on the type of content. For instance, weather data is generated using geo-spatial techniques to cover every possible coordinate on Earth. This targeted, multi-method approach helps boost both the accuracy and reliability of what it produces — key challenges in today’s AI-driven content landscape.
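One way a geo-spatial method can guarantee coverage of "every possible coordinate" is simply to enumerate the cells of a fixed latitude/longitude grid and emit an entry per cell. The grid step and entry wording below are assumptions for illustration, not Botipedia's actual parameters.

```python
# Illustrative only: enumerate a coarse lat/lon grid so that every region
# on Earth gets at least one generated entry. A real system would use a
# much finer grid and real weather data for each cell.
STEP = 30  # degrees

def grid_cells(step):
    for lat in range(-90, 90, step):
        for lon in range(-180, 180, step):
            yield (lat, lon)

cells = list(grid_cells(STEP))
print(len(cells))  # 6 latitude bands x 12 longitude bands = 72 cells

entry = f"Weather summary for grid cell at {cells[0][0]}, {cells[0][1]} deg."
print(entry)
```

Exhaustive enumeration is what makes this approach different from an LLM: coverage is a property of the loop, not of whatever happened to appear in training data.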
The innovation is also energy-efficient. Its DMG system operates at a fraction of the processing power required by GPU-heavy models like ChatGPT, making it a sustainable alternative for large-scale content generation.
By combining AI precision, linguistic inclusivity and academic credibility, Botipedia positions itself as more than a digital library — it’s a step toward universal, unbiased access to verified knowledge.
"Botipedia is one of many initiatives of the Human and Machine Intelligence Institute (HUMII) that we are establishing at INSEAD", says Lily Fang, Dean of Research and Innovation at INSEAD. "It is a practical application that builds on INSEAD-linked IP to help people make better decisions with knowledge powered by technology. We want technologies that enhance the quality and meaning of our work and life, to retain human agency and value in the age of intelligence".
By harnessing AI to bridge gaps of language, geography and credibility, Botipedia points to a future where access to knowledge is no longer a privilege, but a shared global resource.