Hong Kong

Startup Funding in Hong Kong: University Programmes for Student Founders and Early-Stage Startups

If you are building a startup in Hong Kong, your first source of support may be closer than you think.

Updated

May 7, 2026 1:16 PM

Main Building of the University of Hong Kong. PHOTO: ADOBE STOCK

Across Hong Kong’s public universities, entrepreneurship is now part of the campus ecosystem. Many universities offer startup funding, mentorship, training, workspace, investor access and pathways into larger incubation programmes such as Hong Kong Science and Technology Park (HKSTP) and Cyberport.

For student founders, researchers and alumni, this can be a useful place to begin. You may be able to test an idea, build a prototype, form a company or apply for early funding through your own university before looking for external investors.  

The challenge is knowing where to start. Each university has its own startup programmes, eligibility rules and funding structure. Some are designed for student ideas. Others are built for research commercialization, deep tech ventures or startups already preparing to raise investment. Below is a practical guide to startup support and university startup funding at five major publicly funded universities in Hong Kong.

The University of Hong Kong (HKU): Startup support for student founders, deep tech and research commercialization
The Centennial Campus of the University of Hong Kong. PHOTO: ADOBE STOCK

HKU offers a wide range of entrepreneurship support through HKU Techno-Entrepreneurship Core, also known as HKU TEC. Its programmes cover early ideas, deep tech projects, Greater Bay Area (GBA) expansion, research commercialization and investor matching.

HKU is especially relevant for founders working with university research, intellectual property or technology-led business ideas. It also has entry-level support for students and graduates who are still testing an idea.

HKU SEED Programme
Who it is for: Early-stage student and graduate startup ideas.
Funding or investment: Opportunity to receive up to HK$100,000 through the HKSTP Ideation Programme.
Key eligibility: The principal applicant must be an HKU member with at least 20% ownership. Open to individual, team or Hong Kong company applicants.
Main support: Three-week entrepreneurship training, coaching, HKSTP Ideation pathway, iDendron membership for awardees, networking and competition nomination.

HKU DeepTech100+
Who it is for: Deep tech projects and research-backed startups.
Funding or investment: Up to HK$1.39 million.
Key eligibility: The person-in-charge must be an HKU member with at least 20% ownership, or the startup must be an HKU IP licensee.
Main support: One-year HKU TEC and HKSTP co-incubation, training, HKSTP facilities, iDendron membership and a fast-track route to HKSTP incubation.

Tech-Up GBA Innovators Programme
Who it is for: HKU-linked startups expanding into the Greater Bay Area.
Funding or investment: Up to HK$600,000, including grant and interest-free loan components.
Key eligibility: The startup team must include HKU linkage or HKU IP. Young entrepreneur rules (e.g. the PIC and core team members must be between the ages of 18 and 39) also apply.
Main support: Up to two years of mentorship, GBA training, workspace in Hong Kong and Shenzhen/Qianhai, professional services and market access support.

TSSSU@HKU (Technology Start-Up Support Scheme for Universities)
Who it is for: HKU technology startups commercializing R&D.
Funding or investment: HK$400,000 to HK$1.5 million per year, for up to three years under each track.
Key eligibility: The applying startup must have at least two members. The PIC must be an HKU student, staff member or alumnus. HKU members must hold at least 20% ownership in total.
Main support: R&D funding, business setup support, iDendron membership, networking and possible Qianhai grant matching.

HKU Entrepreneurship Engine Fund (EEF)
Who it is for: HKU-linked startups raising seed to Series A capital.
Funding or investment: Investment partners may invest US$0.5 million to US$5 million.
Key eligibility: At least 20% ownership must be held by HKU members, or the startup must license HKU IP.
Main support: Connection to EEF investment partners for independent evaluation.

iDendron@HKU
Who it is for: HKU founders needing workspace and community support.
Funding or investment: Not applicable.
Key eligibility: HKU-linked founders and eligible startups.
Main support: Co-working space, hot desks (HK$900 for six months), meeting rooms, mentoring, events and startup community access.

Best fit: HKU works well for student founders, researchers and alumni who want a structured route from idea stage to technology commercialization.

City University of Hong Kong (CityUHK): HK Tech 300 and a clear startup pathway
City University of Hong Kong. PHOTO: ADOBE STOCK

CityUHK’s main startup platform is HK Tech 300. It is one of the clearest university startup pathways in Hong Kong because it is built in stages: training, seed funding, angel investment and access to external funding.

The programme is open to CityUHK students, alumni, research staff and members of the public using CityUHK intellectual property or technology.

HK Tech 300 Training
Who it is for: Teams learning startup basics.
Funding or investment: Sponsored training worth more than HK$10,000 per project team.
Key eligibility: Open to eligible CityUHK-linked teams and external founders using CityUHK IP or technology.
Main support: Startup basics, business plan development, pitching and team formation.

HK Tech 300 Seed Fund
Who it is for: Early teams turning ideas into startups.
Funding or investment: HK$100,000 per successful team.
Key eligibility: The person-in-charge must show association with CityUHK.
Main support: 6 to 12 months of funding support, product development milestones and preparation for the Angel Fund application.

HK Tech 300 Angel Fund
Who it is for: Startups ready to validate a business model.
Funding or investment: Up to HK$1 million angel investment.
Key eligibility: Usually for eligible teams after Seed Fund progress or equivalent readiness.
Main support: Business model validation, MVP development, investor exposure and incubation support.

HK Tech 300 Launching Stage
Who it is for: Startups ready for larger support.
Funding or investment: Access to external funds of up to HK$10 million.
Key eligibility: For eligible startups after the incubation phase.
Main support: Referrals to ITC, HKSTP, Cyberport and other partner programmes.

HK Tech 300 International and National Startup Competitions
Who it is for: Startups entering CityUHK’s ecosystem through competitions.
Funding or investment: Competition-linked opportunities, including access to HK Tech 300 support.
Key eligibility: Competition-specific rules apply.
Main support: Pitching, exposure, business matching and possible funding pathways.

Best fit: CityUHK is a strong choice for founders who want a step-by-step startup journey with clear funding stages.

Hong Kong University of Science and Technology (HKUST): Startup funding for tech founders and research teams
Hong Kong University of Science and Technology. PHOTO: ADOBE STOCK

HKUST has a broad startup ecosystem with support for students, alumni, researchers and faculty. Its entrepreneurship pathway covers idea exploration, prototyping, MVP testing, research commercialization and investment.

The university’s startup support is especially strong for technology companies, deep tech projects and teams commercialising HKUST research.

HKUST IPIC Incubation - Stage 1: Ideation (through Entrepreneurship 101 Training or Entrepreneurship Bootcamp)
Who it is for: Students, alumni, researchers and faculty across different startup stages.
Funding or investment: Stage 1 includes HK$3,000 in-kind company registration support.
Key eligibility: HKUST-linked founders.
Main support: Structured pathway from ideation to prototyping, implementation and commercialization.

HKUST IPIC Incubation - Stage 2: Prototyping (through HKUST Dream Builder)
Who it is for: Student-led teams building a proof of concept or MVP.
Funding or investment: Up to HK$100,000 per startup team.
Key eligibility: The main applicant must be a full-time current HKUST student. At least two full-time current HKUST students must play founder or co-founder roles.
Main support: Funding, training, mentorship, workspace at theBASE and external outreach.

HKUST IPIC Incubation - Stage 3: Implementation (through HKUST x HKSTP Co-Ideation Programme)
Who it is for: Early-stage HKUST-linked startups.
Funding or investment: Up to HK$100,000.
Key eligibility: The team must include at least one HKUST member. HKUST members must hold at least 10% ownership if a company is formed.
Main support: Six-month programme, three milestones, coaching, HKSTP training and preparation for HKSTP incubation.

Lo Kwee Seong Tech-Ship Fund
Who it is for: Faculty-student teams commercializing HKUST research.
Funding or investment: Extra support of up to HK$200,000, listed through HKUST’s IPIC pathway.
Key eligibility: Faculty research and student entrepreneurship collaboration.
Main support: Commercialization support for faculty technologies and student startup teams.

Bridge Gap Fund
Who it is for: HKUST researchers developing university IP for commercial use.
Funding or investment: Typically up to HK$500,000 for 12 months.
Key eligibility: Must use HKUST IP. The PI must be full-time HKUST faculty.
Main support: Prototype development, market research, customer discovery, IP development and DeepTech Incubation Programme access.

TSSSU@HKUST
Who it is for: HKUST technology startups commercializing R&D.
Funding or investment: Up to HK$1.5 million per year under TSSSU-O or TSSSU+.
Key eligibility: The applying company must be registered in Hong Kong. HKUST members must usually hold at least 10% ownership.
Main support: Startup setup, R&D, manpower, equipment, promotion and marketing support.

HKUST Entrepreneurship Fund (E-Fund)
Who it is for: HKUST technology startups raising investment.
Funding or investment: Initial investment of up to HK$5 million per startup.
Key eligibility: The applying startup must be at least 10% owned by HKUST faculty, staff, students or alumni and established for no more than seven years.
Main support: Early-stage investment, co-investment model and long-term capital support.

HKUST Greater Bay Area Youth Entrepreneurship Fund Programme
Who it is for: Young HKUST-linked founders building in Hong Kong or the GBA.
Funding or investment: Up to HK$250,000.
Key eligibility: HKUST-linked founder requirements and youth entrepreneur rules apply.
Main support: GBA startup funding, mentorship, product development and market expansion support.

RAISe+ Scheme via HKUST
Who it is for: Research teams with large-scale commercialization potential.
Funding or investment: Scheme-level support can range from HK$10 million to HK$100 million per approved project.
Key eligibility: Research commercialization teams with industry-matching requirements.
Main support: Large-scale R&D transformation and commercialization support.

Best fit: HKUST is especially useful for tech startups, deep tech teams and founders who need a route from prototype to commercialization.

The Hong Kong Polytechnic University (PolyU): Startup funding for product, applied research and GBA expansion
Hong Kong Polytechnic University. PHOTO: ADOBE STOCK

PolyU’s startup support is practical and product-focused. Its programmes cover early ideas, seed-stage teams, Greater Bay Area expansion, translational research and investment.

This makes PolyU a good fit for founders working on engineering, hardware, applied technology, social impact or commercialization of university research.

Ideation Funding Scheme
Who it is for: Student teams with early ideas.
Funding or investment: HK$5,000 basic prize, with possible nomination to other entrepreneurship programmes.
Key eligibility: The team must be formed by PolyU students. The principal applicant must be a current student of the collaborating faculty or school.
Main support: Early idea validation and entrepreneurship learning.

PolyVentures Micro Fund Scheme
Who it is for: Seed-stage teams preparing to form a startup, or companies incorporated within the past 24 months.
Funding or investment: Up to HK$1.41 million in total support from PolyU and HKSTP Ideation or Incubation routes.
Key eligibility: The principal applicant can be a current student, alumnus, staff member, translational startup postdoc or key owner-operator of a PolyU technology licensee.
Main support: HK$20,000 cash prize for shortlisted teams, HK$100,000 PolyU Seed Fund for awardees, HKSTP pathway and mentorship.

PolyU GBA Innovation and Entrepreneurship Incubation Programme
Who it is for: Young entrepreneurs entering the Greater Bay Area market.
Funding or investment: HK$600,000 seed funding.
Key eligibility: Funding is granted to the successful applicant’s Hong Kong limited company and released by milestones.
Main support: Two-year incubation, mentorship, training, expert advice, Hong Kong and mainland co-working spaces and GBA network access.

Translational Startup Postdoc Programme
Who it is for: Recent PhD graduates commercializing PolyU research.
Funding or investment: Annual remuneration of up to HK$348,000 and project support, including prototyping (a maximum of HK$50,000 per year) and outreach funding (a maximum of HK$15,000 per year).
Key eligibility: The applicant needs a PolyU academic supervisor and a recent or near-completed doctoral degree.
Main support: Free workspace at InnoHub, mentorship, KTEO support, investor access and pathways to the Micro Fund, Angel Fund and EIF.

PolyVentures Angel Fund Scheme / TSSSU route
Who it is for: Technology startups needing larger commercialization support.
Funding or investment: Up to HK$800,000 in matching funds over up to three years.
Key eligibility: The applying startup team must include PolyU linkage and meet the scheme requirements.
Main support: Startup setup, R&D, manpower, equipment, marketing and commercialization support.

PolyU Entrepreneurship Investment Fund (EIF)
Who it is for: PolyU-linked startups raising early-stage investment.
Funding or investment: Up to HK$4 million.
Key eligibility: The applying startup must have at least one PolyU member holding at least 10% equity or must license PolyU IP.
Main support: Equity, convertible note or SAFE investment, co-investment support, R&D facilities, mentoring and industry networks.

RAISe+ Scheme via PolyU (EIF)
Who it is for: Research teams commercializing major R&D outcomes.
Funding or investment: Scheme-level support can reach HK$10 million to HK$100 million per approved project.
Key eligibility: Research commercialization and industry matching requirements apply.
Main support: Large-scale research transformation and commercialization support.

ASCEND Tech for Good Programme (EIF)
Who it is for: Youth-led tech-for-good startups.
Funding or investment: Up to HK$3 million per successful applicant.
Key eligibility: Hong Kong-registered startup or company with youth-led requirements.
Main support: Two-year incubation and support for digital equity and social impact ventures.

Best fit: PolyU is well suited for product-led startups, applied technology projects, GBA expansion and founders who want industry-facing support.

The Chinese University of Hong Kong (CUHK): Startup support from idea stage to technology commercialization
Chung Chi College, Chinese University of Hong Kong. PHOTO: ADOBE STOCK

CUHK offers support for student founders, researchers and alumni through the Pi Centre and the Knowledge Transfer Office. Its ecosystem covers pre-incubation, TSSSU funding, early translational research, social impact projects and Greater Bay Area entrepreneurship.

CUHK is especially useful for students who want to start with an idea and later move into funding, mentorship or external incubation.

PILOTS Lite x HKSTP Co-Ideation / Pi Centre
Who it is for: CUHK students at the idea or pre-incubation stage.
Funding or investment: Up to HK$130,000.
Key eligibility: Open to CUHK undergraduate and postgraduate students, full-time or part-time. The principal applicant must be a current CUHK student. Applicants must not already have registered a business for the project.
Main support: One-year programme, seed funding, workshops, mentoring, networking, free co-working space and fast-track preparation for incubators.

TSSSU@CUHK
Who it is for: CUHK technology startups commercializing R&D.
Funding or investment: TSSSU-O: up to HK$600,000 per year. TSSSU+: up to HK$1 million per year in matching funds. Both can run for up to three years.
Key eligibility: The PIC must be a current full-time student, current full-time professor or an alumnus who graduated within the last 36 months. Technology readiness requirements apply.
Main support: Financial support, potential HKSTP incubation, investor access, industry partner access and mentorship.

IdeaBooster Fund @CUHK
Who it is for: CUHK researchers developing early translational projects.
Funding or investment: HK$100,000 to HK$200,000 per project.
Key eligibility: Mainly for full-time CUHK academic staff, eligible teaching or research staff and selected postgraduate research students on a case-by-case basis.
Main support: Early project development, impact-focused research translation and a fast-track interview opportunity with HKSTP Co-Ideation.

Knowledge Transfer Venture Impact Fund (KT-VIF) @CUHK
Who it is for: CUHK academic-led ventures with scalable social impact.
Funding or investment: Up to HK$300,000 for two years.
Key eligibility: Mainly for CUHK professoriate or research academic staff-led teams.
Main support: Business development consultancy, publicity support and partnership liaison.

Knowledge Transfer Impact Project Fund (KT-IPF) @CUHK
Who it is for: Late-prototype research projects with social innovation potential.
Funding or investment: Up to HK$200,000.
Key eligibility: Mainly for full-time CUHK staff on professoriate or research academic ranks. The venture must be CUHK-affiliated and have been incorporated for more than three years.
Main support: Support for turning research-based prototypes into real-world social-impact solutions.

Greater Bay Area Entrepreneurship Scheme / BESGO FoundRise
Who it is for: CUHK students and young alumni building GBA-focused ventures.
Funding or investment: Up to HK$600,000 per selected team.
Key eligibility: Students and young alumni, with scheme-specific selection.
Main support: Two-year funding and incubation, mentorship, expert guidance and GBA venture-building support.

RAISe+ via CUHK
Who it is for: Research teams with large-scale commercialization potential.
Funding or investment: Scheme-level support can reach HK$10 million to HK$100 million per approved project.
Key eligibility: University research commercialization teams.
Main support: Large-scale R&D commercialization and industry matching.

ASCEND Tech for Good Programme (EIF)
Who it is for: Youth-led tech-for-good startups.
Funding or investment: Up to HK$3 million per successful applicant.
Key eligibility: Hong Kong-registered startup or company with youth-led requirements.
Main support: Two-year incubation and support for digital equity and social impact ventures.

Best fit: CUHK is a good starting point for student founders who need pre-incubation support, and for researchers moving early-stage ideas toward commercial use.

Which Hong Kong university startup programme should you choose?

There is no single best programme for every founder. The right choice depends on your stage, your university connection and the type of startup you are building.

You have an idea but no company yet: HKU SEED, CityUHK HK Tech 300 Training, HKUST Dream Builder, PolyU Ideation Funding Scheme, CUHK Pi Centre.
You are building a prototype or MVP: HKUST Dream Builder, HKUST x HKSTP Co-Ideation, CityUHK Seed Fund, PolyU Micro Fund, CUHK PILOTS Lite.
You are commercializing university research or IP: HKU DeepTech100+, HKU TSSSU, HKUST Bridge Gap Fund, HKUST TSSSU, PolyU EIF, CUHK TSSSU.
You want Greater Bay Area startup support: HKU Tech-Up GBA, PolyU GBA Innovation and Entrepreneurship Incubation Programme, HKUST GBA Youth Entrepreneurship Fund, CUHK GBA Entrepreneurship Scheme.
You are ready for investment: HKU EEF, CityUHK Angel Fund, HKUST E-Fund, PolyU EIF.

The bottom line

Hong Kong’s university startup ecosystem is bigger than many founders realize. If you are a student, alumnus, researcher or university-linked founder, your campus may already offer a route into funding, mentorship, workspace and incubation.

The key is to choose a programme that matches your current stage. Some founders should start with idea validation. Others may be ready for seed funding, TSSSU support or investment.

Before applying, check the latest deadline and eligibility rules on the official university page. These programmes change often, and some funding rounds open only once or twice a year.

Keep Reading

Artificial Intelligence

Are LLMs the Future? The Great AI Schism Among Scientists

Brains, bots and the future: Who’s really in control?

Updated

January 8, 2026 6:32 PM

Adoration and disdain, the polarised reactions for generative AI. ILLUSTRATION: YORKE YU

When British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton joked that his ex-girlfriend once used ChatGPT to help her break up with him, he wasn’t exaggerating.  The father of deep learning was pointing to something stranger: how machines built to mimic language have begun to mimic thought — and how even their creators no longer agree on what that means.

In that one quip — part humor, part unease — Hinton captured the paradox at the center of the world’s most important scientific divide. Artificial intelligence has moved beyond code and circuits into the realm of psychology, economics and even philosophy. Yet among those who know it best, the question has turned unexpectedly existential: what, if anything, do large language models truly understand?  

Across the world’s AI labs, that question has split the community into two camps — believers and skeptics, prophets and heretics. One side sees systems like ChatGPT, Claude, and Gemini as the dawn of a new cognitive age. The other insists they’re clever parrots with no grasp of meaning, destined to plateau as soon as the data runs out. Between them stands a trillion-dollar industry built on both conviction and uncertainty.

Hinton, who spent a decade at Google refining the very neural networks that now power generative AI, has lately sounded like a man haunted by his own invention. Speaking to Scott Pelley in a CBS 60 Minutes interview aired October 8, 2023, Hinton said, “I think we're moving into a period when for the first time ever we may have things more intelligent than us.” He said it not with triumph, but with visible worry.

Yoshua Bengio, his longtime collaborator, sees it differently. Speaking at the All In conference in Montreal, he told TIME that future AI systems "will have stronger and stronger reasoning abilities, more and more knowledge," while cautioning about ensuring they "act according to our norms". And then there’s Gary Marcus, the cognitive scientist and enduring critic, who dismisses the hype outright: “These systems don’t understand the world. They just predict the next word.”    

It’s a rare moment in science when three pioneers of the same field disagree so completely — not about ethics or funding, but about the very nature of progress. And yet that disagreement now shapes how the future of AI will unfold.

In the span of just two years, large language models have gone from research curiosities to corporate cornerstones. Banks use them to summarize reports. Lawyers draft contracts with them. Pharmaceutical firms explore protein structures through them. Silicon Valley is betting that scaling these models — training them on ever-larger datasets with ever-denser computers — will eventually yield something approaching reasoning, maybe even intelligence.

It’s the “bigger is smarter” philosophy, and it has worked — so far. OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini have grown exponentially in capability. They can write code, explain math, outline business plans, even simulate empathy. For most users, the line between prediction and understanding has already blurred beyond meaning. Kelvin So, who is now conducting AI research at PolyU SPEED, commented, “AI scientists today are inclined to believe we have learnt a bitter lesson in the advancement from the traditional AI to the current LLM paradigm. That said, scaling law, instead of human-crafted complicated rules, is the ultimate law governing AI.”

But inside the labs, cracks are showing. Scaling models has become staggeringly expensive, and the returns are diminishing. A growing number of researchers suspect that raw scale alone cannot unlock true comprehension — that these systems are learning syntax, not semantics; imitation, not insight.

That belief fuels a quiet counter-revolution. Instead of simply piling on data and GPUs, some researchers are pursuing hybrid intelligence — systems that combine statistical learning with symbolic reasoning, causal inference, or embodied interaction with the physical world. The idea is that intelligence requires grounding — an understanding of cause, consequence, and context that no amount of text prediction can supply.

Yet the results speak for themselves.  In practice, language models are already transforming industries faster than regulation can keep up. Marketing departments run on them. Customer support, logistics and finance teams depend on them. Even scientists now use them to generate hypotheses, debug code and summarize literature. For every cautionary voice, there are a dozen entrepreneurs who see this technology as a force reshaping every industry. That gap — between what these models actually are and what we hope they might become — defines this moment. It’s a time of awe and unease, where progress races ahead even as understanding lags behind.  

Part of the confusion stems from how these systems work. A large language model doesn’t store facts like a database. It predicts what word is most likely to come next in a sequence, based on patterns in vast amounts of text. Behind this seemingly simple prediction mechanism lies a sophisticated architecture. The tokenizer is one of the key innovations behind modern language models. It takes text and chops it into smaller, manageable pieces the AI can understand. These pieces are then turned into numbers, giving the model a way to “read” human language. By doing this, the system can spot context and relationships between words — the building blocks of comprehension.  
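The text-to-numbers step can be sketched in a few lines of Python. This is a toy word-level tokenizer invented for illustration; production tokenizers such as byte-pair encoders split text into subword pieces rather than whole words, but the principle (text in, integer IDs out) is the same.

```python
# Toy word-level tokenizer: maps text to integer IDs and back.
# Real tokenizers split into subword pieces, but the principle
# -- text in, numbers out -- is identical.

def build_vocab(corpus):
    """Assign an integer ID to every distinct lowercase word."""
    words = sorted({w for line in corpus for w in line.lower().split()})
    return {w: i for i, w in enumerate(words)}

def encode(text, vocab):
    """Turn text into the list of IDs the model actually sees."""
    return [vocab[w] for w in text.lower().split()]

def decode(ids, vocab):
    """Map IDs back to words."""
    inv = {i: w for w, i in vocab.items()}
    return " ".join(inv[i] for i in ids)

corpus = ["the model reads numbers", "the model predicts the next word"]
vocab = build_vocab(corpus)
ids = encode("the model predicts", vocab)
print(ids)                  # [5, 0, 3]: one ID per word
print(decode(ids, vocab))   # round-trips back to "the model predicts"
```

Everything downstream of this step, including the attention mechanism described next, operates only on these numbers, never on the raw text.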

Inside the model, mechanisms such as multi-head attention enable the system to examine many aspects of information simultaneously, much as a human reader might track several storylines at once.
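A single attention head can be sketched in plain Python. This is a deliberately simplified, illustrative version: real transformers apply learned projections to produce the query, key and value vectors and run many such heads in parallel, while here the vectors are hand-picked toy numbers.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    Scores each key against the query, normalizes the scores into
    weights, and returns the weighted blend of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# Three token positions with 2-dimensional vectors; the query most
# resembles the first key, so the output leans toward values[0].
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
out = attention([1.0, 0.0], keys, values)
print(out)
```

The "multi-head" part simply means running several of these scoring passes side by side, each free to attend to a different aspect of the sequence.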

Reinforcement learning, pioneered by Richard Sutton, a professor of computing science at the University of Alberta, and Andrew Barto, Professor Emeritus at the University of Massachusetts, mimics human trial-and-error learning. The AI develops “value functions” that predict the long-term rewards of its actions.  Together, these technologies enable machines to recognize patterns, make predictions and generate text that feels strikingly human — yet beneath this technical progress lies the very divide that cuts to the heart of how intelligence itself is defined.
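The "value function" idea can be shown with a minimal tabular example. This is a toy sketch of temporal-difference learning on an invented five-state corridor, not how any production system is trained: an agent drifts toward a reward at the right end, and each state's learned value comes to predict the long-term reward reachable from it.

```python
import random

# Toy temporal-difference (TD) learning on a 5-state corridor.
# Reward 1.0 is given only on reaching the rightmost state; the
# learned values rise from left to right, predicting future reward.
N = 5
ALPHA, GAMMA = 0.1, 0.9   # learning rate and discount factor
values = [0.0] * N

random.seed(0)
for _ in range(2000):
    s = 0
    while s < N - 1:
        # Move right with probability 0.8, otherwise left (trial and error).
        s_next = min(s + 1, N - 1) if random.random() < 0.8 else max(s - 1, 0)
        reward = 1.0 if s_next == N - 1 else 0.0
        # TD update: nudge V(s) toward reward + discounted V(s_next).
        values[s] += ALPHA * (reward + GAMMA * values[s_next] - values[s])
        s = s_next

print([round(v, 2) for v in values])  # values increase toward the goal
```

The same prediction-of-long-term-reward idea, scaled up enormously and applied to human preference signals, is what tunes raw language models into helpful assistants.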


Yet at scale, that simple process begins to yield emergent behavior — reasoning, problem-solving, even flashes of creativity that surprise their creators. The result is something that looks, sounds and increasingly acts intelligent — even if no one can explain exactly why.

That opacity worries not just philosophers, but engineers. The “black box problem” — our inability to interpret how neural networks make decisions — has turned into a scientific and safety concern. If we can’t explain a model’s reasoning, can we trust it in critical systems like healthcare or defense?

Companies like Anthropic are trying to address that with “constitutional AI,” embedding human-written principles into model training to guide behavior. Others, like OpenAI, are experimenting with internal oversight teams and adversarial testing to catch dangerous or misleading outputs. But no approach yet offers real transparency. We’re effectively steering a ship whose navigation system we don’t fully understand.  “We need governance frameworks that evolve as quickly as AI itself,” says Felix Cheung, Founding Chairman of RegTech Association of Hong Kong (RTAHK). “Technical safeguards alone aren't enough — transparent monitoring and clear accountability must become industry standards.”

Meanwhile, the commercial race is accelerating. Venture capital is flowing into AI startups at record speed. OpenAI’s valuation reportedly exceeds US$150 billion; Anthropic, backed by Amazon and Google, isn’t far behind. The bet is simple: that generative AI will become as indispensable to modern life as the internet itself.

And yet, not everyone is buying into that vision. The open-source movement — championed by players like Meta’s Llama, Mistral in France, and a fast-growing constellation of independent labs — argues that democratizing access is the only way to ensure both innovation and accountability. If powerful AI remains locked behind corporate walls, they warn, progress will narrow to the priorities of a few firms.

But openness cuts both ways. Publicly available models are harder to police, and their misuse — from disinformation to deepfakes — grows as easily as innovation does. Regulators are scrambling to balance risk and reward. The European Union’s AI Act is the world’s most comprehensive attempt at governance, but even it struggles to define where to draw the line between creativity and control.

This isn’t just a scientific argument anymore. It’s a geopolitical one. The United States, China, and Europe are each pursuing distinct AI strategies: Washington betting on private-sector dominance, Beijing on state-led scaling, Brussels on regulation and ethics. Behind the headlines, compute power is becoming a form of soft power. Whoever controls access to the chips, data, and infrastructure that fuel AI will control much of the digital economy.  

That reality is forcing some uncomfortable math. Training frontier models already consumes energy on the scale of small nations. Data centers now rise next to hydroelectric dams and nuclear plants. Efficiency — once a technical concern — has become an economic and environmental one. As demand grows, so does the incentive to build smaller, smarter, more efficient systems. The industry’s next leap may not come from scale at all, but from constraint.

For all the noise, one truth keeps resurfacing: large language models are tools, not oracles. Their intelligence — if we can call it that — is borrowed from ours. They are trained on human text, human logic, human error. Every time a model surprises us with insight, it is, in a sense, holding up a mirror to collective intelligence.

That’s what makes this schism so fascinating. It’s not really about machines. It’s about what we believe intelligence is — pattern or principle, simulation or soul. For believers like Bengio, intelligence may simply be prediction done right. For critics like Marcus, that’s a category mistake: true understanding requires grounding in the real world, something no model trained on text can ever achieve.

The public, meanwhile, is less interested in metaphysics. To most users, these systems work — and that’s enough. They write emails, plan trips, debug spreadsheets, summarize meetings. Whether they “understand” or not feels academic. But for the scientists, that distinction remains critical, because it determines where AI might ultimately lead.

Even inside the companies building them, that tension shows. OpenAI’s Sam Altman has hinted that scaling can’t continue forever. At some point, new architectures — possibly combining logic, memory, or embodied data — will be needed. DeepMind’s Demis Hassabis says something similar: intelligence, he argues, will come not just from prediction, but from interaction with the world.

It’s possible both are right. The future of AI may belong to hybrid systems — part statistical, part symbolic — that can reason across multiple modes of information: text, image, sound, action. The line between model and agent is already blurring, as LLMs gain the ability to browse the web, run code, and call external tools. The next generation won’t just answer questions; it will perform tasks.

For startups, the opportunity — and the risk — lies in that transition. The most valuable companies in this new era may not be those that build the biggest models, but those that build useful ones: specialized systems tuned for medicine, law, logistics, or finance, where reliability matters more than raw capability. The winners will understand that scale is a means, not an end.

And for society, the challenge is to decide what kind of intelligence we want to live with. If we treat these models as collaborators — imperfect, explainable, constrained — they could amplify human potential on a scale unseen since the printing press. If we chase the illusion of autonomy, they could just as easily entrench bias, confusion, and dependency.

The debate over large language models will not end in a lab. It will play out in courts, classrooms, boardrooms, and living rooms — anywhere humans and machines learn to share the same cognitive space. Whether we call that cooperation or competition will depend on how we design, deploy, and, ultimately, define these tools.

Perhaps Hinton’s offhand remark about being psychoanalyzed by his own creation wasn’t just a joke. It was an omen. AI is no longer something we use; it’s something we’re reflected in. Every model trained on our words becomes a record of who we are — our reasoning, our prejudices, our brilliance, our contradictions. The schism among scientists mirrors the one within ourselves: fascination colliding with fear, ambition tempered by doubt.

In the end, the question isn’t whether LLMs are the future. It’s whether we are ready for a future built in their image.