Artificial Intelligence

What Happens When AI Writes the Wrong References?

HKU professor apologizes after PhD student’s AI-assisted paper cites fabricated sources.

Updated

January 8, 2026 6:33 PM

The University of Hong Kong in Pok Fu Lam, Hong Kong Island. PHOTO: ADOBE STOCK

It’s no surprise that artificial intelligence, while remarkably capable, can also go astray—spinning convincing but entirely fabricated narratives. From politics to academia, AI’s “hallucinations” have repeatedly shown how powerful technology can go off-script when left unchecked.

Take Grok, for instance. In July 2024, the chatbot misled users about ballot deadlines in several U.S. states, just days after President Joe Biden dropped his re-election bid against former President Donald Trump. A year earlier, a U.S. lawyer faced court sanctions for relying on ChatGPT to draft a legal brief, only to discover that the AI tool had invented entire cases, citations and judicial opinions. And now, the academic world has its own cautionary tale.

Recently, a journal paper from the Department of Social Work and Social Administration at the University of Hong Kong was found to contain fabricated citations: sources apparently created by AI. The paper, titled “Forty Years of Fertility Transition in Hong Kong,” analyzed the decline in Hong Kong’s fertility rate over the past four decades. Authored by doctoral student Yiming Bai, along with Yip Siu-fai, Vice Dean of the Faculty of Social Sciences, and other university officials, the study identified falling marriage rates as a key driver behind the city’s shrinking birth rate. The authors recommended structural reforms to make Hong Kong’s social and work environment more family-friendly.

But the credibility of the paper came into question when inconsistencies surfaced among its references. Out of 61 cited works, some included DOI (Digital Object Identifier) links that led to dead ends, displaying “DOI Not Found.” Others claimed to originate from academic journals, yet searches yielded no such publications.
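Checks like this are straightforward to reproduce. Every DOI is meant to resolve through the doi.org proxy, which answers with a redirect for registered identifiers and a 404 (“DOI Not Found”) for ones that were never registered. The short Python sketch below illustrates the idea; the sample DOIs and the use of the requests library are illustrative assumptions, not details from the actual investigation.

# Minimal sketch of a DOI sanity check, not the method used in the HKU case.
# Assumes Python 3 with the third-party "requests" library installed.
import requests
references = [
    "10.1038/nature12373",       # a real, registered DOI
    "10.9999/made-up.2024.001",  # hypothetical, unregistered DOI
]
for doi in references:
    url = f"https://doi.org/{doi}"
    # Query the resolver without following the redirect: a 3xx status means
    # the DOI is registered; a 404 is the "DOI Not Found" case.
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if 300 <= resp.status_code < 400:
        print(f"{doi}: registered, redirects to {resp.headers.get('Location')}")
    elif resp.status_code == 404:
        print(f"{doi}: DOI Not Found")
    else:
        print(f"{doi}: unexpected response (HTTP {resp.status_code})")

Even then, a DOI that resolves only proves the identifier exists; confirming that it points to the work actually being cited still requires a human reader.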

Speaking to HK01, Yip acknowledged that his student had used AI tools to organize the citations but failed to verify the accuracy of the generated references. “As the corresponding author, I bear responsibility,” Yip said, apologizing for the damage caused to the University of Hong Kong and the journal’s reputation. He clarified that the paper itself had undergone two rounds of verification and that its content was not fabricated; only the citations had been mishandled.

Yip has since contacted the journal’s editor, who accepted his explanation and agreed to re-upload a corrected version in the coming days. A formal notice addressing the issue will also be released. Yip said he would personally review each citation “piece by piece” to ensure no errors remain.

As for the student involved, Yip described her as a diligent and high-performing researcher who made an honest mistake in her first attempt at using AI for academic assistance. Rather than penalize her, Yip chose a more constructive approach, urging her to take a course on how to use AI tools responsibly in academic research.

Ultimately, in an age where generative AI can produce everything from essays to legal arguments, there are two lessons to take away from this episode. First, AI is a powerful assistant, but only that. The final judgment must always rest with us. No matter how seamless the output seems, cross-checking and verifying information remain essential. Second, as AI becomes integral to academic and professional life, institutions must equip students and employees with the skills to use it responsibly. Training and mentorship are no longer optional; they’re the foundation for using AI to enhance, not undermine, human work.

Because in this age of intelligent machines, staying relevant isn’t about replacing human judgment with AI; it’s about learning how to work alongside it.

Keep Reading

Artificial Intelligence

AI Startup BrainGrid Raises US$1M to Help Non-Technical Founders Plan and Build Software Products

Backed by Menlo Ventures, BrainGrid tackles planning gaps as AI makes software building accessible to more founders.

Updated

April 1, 2026 8:37 AM

A phone screen with app icons. PHOTO: UNSPLASH

As artificial intelligence makes it easier to write code, a different problem is starting to surface. Building software is no longer limited by technical skill alone. Increasingly, the challenge lies in deciding what to build, how to structure it, and how to turn an idea into something that actually works.

That shift sits at the centre of BrainGrid, a startup that has raised US$1 million in pre-seed funding led by Menlo Ventures, with participation from Next Tier Ventures and Brainstorm Ventures. The company is building what it describes as an AI-powered planning layer for people who want to create software but may not have a technical background.

The timing reflects a broader change in how products are being built. Tools like Claude Code and Cursor have made it possible to generate working code through simple prompts. For many first-time founders, this has lowered the barrier to entry. But writing code is only one part of the process. Turning that code into a reliable product requires structure, sequencing and clarity—areas where many projects begin to fall apart.

In traditional teams, this responsibility sits with product managers who define what needs to be built and in what order. Without that layer, even well-written code can lead to products that feel disjointed or incomplete. Features may not work together, integrations can break and the final product often does not match the original idea.

BrainGrid is designed to address that gap. Instead of focusing on generating code, it helps users map out the structure of a product before development begins. The aim is to give builders a clearer starting point so that the tools they use—whether human or AI—can produce more consistent results.

The company says more than 500 builders have already used it to create software products across areas like fitness, healthcare and productivity. These range from first-time founders experimenting with new ideas to experienced developers working independently. In many cases, the products are already live and generating revenue, suggesting that the demand is not just for experimentation but for building something that can scale.

For investors, the appeal lies in the evolving role of software development. As AI takes on more of the technical work, the value shifts toward defining the problem and structuring the solution. In that sense, planning becomes less of a background task and more of a core capability.

The US$1 million raise is relatively modest, but it points to a larger trend. As more people gain access to AI tools, the number of potential builders expands. What remains limited is the ability to organise ideas into products that work in the real world. If that shift continues, the next wave of software may not be defined by who can code, but by who can plan.