We bring you concise, up-to-the-minute coverage of the founders, funding rounds, and technologies shaping tomorrow. Expect clear explainers, deal roundups, and stories that cut through the noise—so you can spot the next big move in tech, fast.
Market Trends
From TV to YouTube, the Oscars’ global shift reveals how entertainment, access and platforms are reshaping cultural institutions.
The Oscars are moving to YouTube. The Academy of Motion Picture Arts and Sciences has signed a multi-year agreement that makes YouTube the exclusive global home of the Oscars from 2029 through 2033. From the ceremony itself to red carpet coverage, behind-the-scenes access and the Governors Ball, the entire experience will live on a platform most people already open every day.
On the surface, it looks like a distribution shift. In reality, it signals a broader strategic reset. For decades, television delivered scale for cultural institutions. Today, reach and discovery live on platforms, not channels. By choosing YouTube, the Academy is quietly acknowledging that cultural relevance today is built where audiences already are. In that context, YouTube is no longer just a place to watch clips but an emerging piece of cultural infrastructure.
What also stands out is how the Oscars are being reframed. This partnership is not limited to one night a year. Alongside the ceremony, YouTube will host year-round Academy programming through the Oscars YouTube channel. That includes nominations announcements, the Governors Awards, the Student Academy Awards, the Scientific and Technical Awards, filmmaker interviews, podcasts and education programs. Instead of a single broadcast moment, the Oscars are turning into an always-on ecosystem.
Accessibility is another central pillar of the deal. The Oscars will be free to watch globally, supported by closed captioning and audio tracks in multiple languages. This is less about nice-to-have features and more about staying relevant in a global, digital-first world. Younger audiences and viewers outside traditional Western markets expect access by default. The Academy is clearly building with that expectation in mind.
There is also a deeper exchange happening between heritage and technology. YouTube gains cultural weight by hosting one of the world’s most established creative institutions. The Academy, in turn, gains technological legitimacy and a clearer path into the future.
That balance extends to how the transition is being handled. The Academy’s domestic broadcast partnership with Disney ABC will continue through the 100th Oscars in 2028, and the international arrangement with Disney’s Buena Vista International remains in place until then. This is not an abrupt break from legacy media but a carefully phased shift. Change is being managed without burning bridges.
“We are thrilled to enter into a multifaceted global partnership with YouTube to be the future home of the Oscars and our year-round Academy programming,” said Academy CEO Bill Kramer and Academy President Lynette Howell Taylor. “The Academy is an international organization and this partnership will allow us to expand access to the work of the Academy to the largest worldwide audience possible — which will be beneficial for our Academy members and the film community. This collaboration will leverage YouTube’s vast reach and infuse the Oscars and other Academy programming with innovative opportunities for engagement while honoring our legacy. We will be able to celebrate cinema, inspire new generations of filmmakers and provide access to our film history on an unprecedented global scale.”
From YouTube’s side, the partnership places the platform firmly in the center of global cultural moments. “The Oscars are one of our essential cultural institutions, honoring excellence in storytelling and artistry,” said Neal Mohan, CEO, YouTube. “Partnering with the Academy to bring this celebration of art and entertainment to viewers all over the world will inspire a new generation of creativity and film lovers while staying true to the Oscars’ storied legacy.”
Google Arts & Culture extends the partnership beyond the ceremony. Select Academy Museum exhibitions and materials from the Academy’s 52-million-item collection will be made digitally accessible worldwide, bringing film history and education onto the same platform.
Taken together, the deal is less about where the Oscars will stream and more about how cultural institutions are adapting to the changing landscape. The Academy is positioning itself to be present year-round, globally accessible and aligned with the platforms that shape everyday viewing.
Strategy & Leadership
Inside a partnership showing how open-source platforms and startups are scaling autonomous driving beyond the lab.
Autonomous driving is often discussed in terms of futuristic cars and distant timelines. This investment is about something more immediate. Japan-based TIER IV has invested in Turing Drive, a Taiwanese startup that builds autonomous driving systems designed for controlled, everyday environments such as factories, ports, airports and industrial campuses. The investment establishes a capital and business alliance between the two companies, with a shared focus on developing autonomous driving technology and expanding operations across Asia.
Rather than targeting open roads and city traffic, Turing Drive’s work centres on places where vehicles follow fixed routes and move at low speeds. These include logistics hubs, manufacturing facilities and commercial sites where automation is already part of daily operations. According to the release, Turing Drive has deployments across Taiwan, Japan and other regions and works closely with vehicle manufacturers to integrate autonomous systems into special-purpose vehicles.
The investment also connects Turing Drive more closely with Autoware, an open-source autonomous driving software ecosystem supported by TIER IV. Turing Drive joined the Autoware Foundation in September 2024 and develops its systems using this shared software framework. TIER IV’s own Pilot.Auto platform, which is built around Autoware, is used across applications such as factory transport, public transit, freight movement and autonomous mobility services.
Through the alliance, TIER IV plans to work with Turing Drive to further develop autonomous driving systems for these controlled environments, while strengthening its presence in Taiwan and the broader Asia-Pacific region. The collaboration brings together software development and on-the-ground deployment experience within markets where autonomous driving is already being tested in real operational settings.
“This partnership with Turing Drive represents a significant step forward in accelerating the deployment of autonomous driving across Asia”, said TIER IV CEO Shinpei Kato. “At TIER IV, our mission has always been to make autonomous driving accessible to all. By collaborating with Turing Drive, which has demonstrated remarkable achievements in real-world deployments in Taiwan, we aim to deliver autonomous driving that enables a safer, more sustainable and more inclusive society”.
“We are thrilled to establish this strategic alliance with TIER IV, a global leader in open-source autonomous driving”, said Weilung Chen, chairman of Turing Drive. “In Taiwan, autonomous driving deployment is gaining significant momentum, particularly across logistics hubs, ports, airports and industrial campuses. By combining our field expertise with TIER IV's world-class Pilot.Auto platform, we aim to accelerate the development of practical, commercially viable mobility services powered by autonomous driving”. Overall, the investment highlights how autonomous driving in Asia is being shaped by operational needs and gradual integration, rather than headline-grabbing demonstrations.
Funding & Deals
From pre-orders to market entry, Rokid’s Taiwan campaign reflects how AI hardware is being introduced to consumers today.
Rokid has reached a significant crowdfunding milestone in Taiwan. Its Rokid Glasses campaign surpassed NT$62 million in pre-order funding on zeczec, Taiwan’s creative-oriented crowdfunding platform. The campaign ranked No. 1 across all categories on the platform in 2025 and entered the Top 10 funded campaigns in zeczec’s history, setting new records for AI and XR-related projects.
The campaign launched on October 28 and became one of the platform’s most prominent technology initiatives of the year. According to the company, the outcome followed growing visibility for Rokid Glasses after product showcases in New York, Berlin, Singapore and Paris, positioning the Taiwan campaign within a broader international rollout.
The crowdfunding achievement coincided with Rokid’s official market entry in Taiwan. On December 10, the company debuted Rokid Glasses locally, introducing the product to media, partners and early users in the region. The Taiwan launch mirrored earlier international events and connected the online crowdfunding campaign with a physical market presence.
Rokid Glasses combine augmented reality displays with built-in AI functions, including real-time multilingual translation, live transcription, navigation, object recognition and voice assistance. These capabilities were central to how the product was presented during both the crowdfunding campaign and the Taiwan launch, setting it apart from a traditional consumer electronics release.
The Taiwan campaign builds on Rokid’s prior crowdfunding history. The company previously raised more than US$4 million on Kickstarter, where Rokid Glasses became the highest-funded XR wearable project on the platform. The zeczec campaign extends that track record into one of Asia’s most established consumer electronics markets.
“Taiwan has one of the world's most mature and discerning consumer electronics markets”, said Justo Chang, Head of Global Channels at Rokid. “Reaching the top of Taiwan's crowdfunding platform is a great commercial achievement. We are excited to finally introduce Rokid Glasses to Taiwan”.
More broadly, the campaign highlights how crowdfunding platforms continue to function as launch and distribution channels for emerging AI and XR hardware. In Rokid’s case, product rollout, market entry and public participation converged within a single campaign, marking a notable moment for AI-enabled wearables in Taiwan’s technology landscape.
Artificial Intelligence
The hidden cost of scaling AI: infrastructure, energy, and the push for liquid cooling.
As artificial intelligence models grow larger and more demanding, the quiet pressure point isn’t the algorithms themselves—it’s the AI infrastructure that has to run them. Training and deploying modern AI models now requires enormous amounts of computing power, which creates a different kind of challenge: heat, energy use and space inside data centers. This is the context in which Supermicro and NVIDIA’s collaboration on AI infrastructure begins to matter.
Supermicro designs and builds large-scale computing systems for data centers. It has now expanded its support for NVIDIA’s Blackwell generation of AI chips with new liquid-cooled server platforms built around the NVIDIA HGX B300. The announcement isn’t just about faster hardware. It reflects a broader effort to rethink how AI data center infrastructure is built as facilities strain under rising power and cooling demands.
At a basic level, the systems are designed to pack more AI chips into less space while using less energy to keep them running. Instead of relying mainly on air cooling (fans, chillers and large amounts of electricity), these liquid-cooled AI servers circulate liquid directly across critical components. That approach removes heat more efficiently, allowing servers to run denser AI workloads without overheating or wasting energy.
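The efficiency argument comes down to simple thermodynamics: water carries far more heat per unit volume than air. A back-of-the-envelope comparison (illustrative figures only, not Supermicro's specifications) of the coolant flow needed to carry a rack's heat away:

```python
# Back-of-the-envelope: coolant flow needed to remove a rack's heat.
# Q = mdot * c * dT  ->  mdot = Q / (c * dT)
# Illustrative figures; real rack power and temperature rise vary by design.

RACK_HEAT_W = 100_000      # assume a 100 kW rack, plausible for dense GPU racks
DELTA_T_K = 10             # coolant temperature rise across the rack

# Specific heat (J/kg/K) and density (kg/m^3) at room conditions
C_AIR, RHO_AIR = 1005, 1.2
C_WATER, RHO_WATER = 4186, 1000

def flow_required(q_w, c, rho, dt):
    """Return (mass flow kg/s, volume flow m^3/s) needed to absorb q_w watts."""
    mdot = q_w / (c * dt)
    return mdot, mdot / rho

air_kg, air_m3 = flow_required(RACK_HEAT_W, C_AIR, RHO_AIR, DELTA_T_K)
water_kg, water_m3 = flow_required(RACK_HEAT_W, C_WATER, RHO_WATER, DELTA_T_K)

print(f"air:   {air_kg:6.2f} kg/s  ({air_m3:8.4f} m^3/s)")
print(f"water: {water_kg:6.2f} kg/s  ({water_m3:8.5f} m^3/s)")
print(f"volume ratio, air vs water: about {air_m3 / water_m3:,.0f}x")
```

Under these assumptions, moving the same 100 kW takes thousands of times more air by volume than water, which is why fans and chillers become the bottleneck long before the liquid loop does.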
Why does that matter outside a data center? Because AI doesn’t scale in isolation. As models become more complex, the cost of running them rises quickly, not just in hardware budgets, but in electricity use, water consumption and physical footprint. Traditional air-cooling methods are increasingly becoming a bottleneck, limiting how far AI systems can grow before energy and infrastructure costs spiral.
This is where the Supermicro–NVIDIA partnership fits in. NVIDIA supplies the computing engines—the Blackwell-based GPUs designed to handle massive AI workloads. Supermicro focuses on how those chips are deployed in the real world: how many GPUs can fit in a rack, how they are cooled, how quickly systems can be assembled and how reliably they can operate at scale in modern data centers. Together, the goal is to make high-density AI computing more practical, not just more powerful.
The new liquid-cooled designs are aimed at hyperscale data centers and so-called AI factories—facilities built specifically to train and run large AI models continuously. By increasing GPU density per rack and removing most of the heat through liquid cooling, these systems aim to ease a growing tension in the AI boom: the need for more computing power without an equally dramatic rise in energy consumption.
Just as important is speed. Large organizations don’t want to spend months stitching together custom AI infrastructure. Supermicro’s approach packages compute, networking and cooling into pre-validated data center building blocks that can be deployed faster. In a world where AI capabilities are advancing rapidly, time to deployment can matter as much as raw performance.
Stepping back, this development says less about one product launch and more about a shift in priorities across the AI industry. The next phase of AI growth isn’t only about smarter models—it’s about whether the physical infrastructure powering AI can scale responsibly. Efficiency, power use and sustainability are becoming as critical as speed.
Artificial Intelligence
Humanoids are moving from research labs into real industries — and capital is finally catching up.
Humanoid robots are shifting from sci-fi speculation to engineering reality, and the pace of progress is prompting investors to reassess how the next decade of physical automation will unfold. ALM Ventures has launched a new US$100 million early-stage fund aimed squarely at this moment—one where advances in robot control, embodied AI and spatial intelligence are beginning to converge into something commercially meaningful.
ALM Ventures Fund I is designed for the earliest stages of company formation, targeting seed and pre-seed teams building the foundations of humanoid deployment. It’s a concentrated fund that seeks to take early ownership in a sector that many now consider the next major technological frontier.
For Founder and General Partner Modar Alaoui, the timing is not accidental. “After years of research, humanoids are finally entering a phase where performance, reliability and cost are converging toward commercial viability”, he said. “What the category needs now is focused capital and deep technical diligence to turn prototypes into scalable, enduring companies”.
That framing captures a shift happening across robotics: the field is moving out of the lab and into early commercial readiness. Improvements in perception systems, model-based reasoning and motion control are accelerating the transition. Advances in simulation are also lowering the complexity and cost of integrating humanoid platforms into real environments. As these systems become more capable, the gap between research prototypes and market-ready products is narrowing.
ALM Ventures is positioning itself at this inflection point. Fund I’s thesis centers on the core technologies required to scale humanoids safely and economically. This includes next-generation robot platforms, spatial reasoning engines, embodied intelligence models, world-modeling systems and the infrastructure needed for early deployment. Rather than chasing every robotics trend, the fund is concentrating on the essential layers that will determine whether humanoids can work reliably outside controlled settings.
The firm isn’t starting from zero. During the fund’s formation, ALM Ventures made ten early investments that directly align with its investment focus. The portfolio includes companies building at different layers of the humanoid stack, such as Sanctuary AI, Weave Robotics, Emancro, High Torque Robotics, MicroFactory, Mbodi, Adamo, Haptica Robotics, UMA and O-ID. The list reflects a broad but intentional spread, from hardware to intelligence to manufacturing approaches, all oriented toward enabling scalable physical AI.
Beyond capital, ALM Ventures has been shaping the ecosystem through its global Humanoids Summit series in Silicon Valley, London and Tokyo. The series gives the firm early visibility into emerging technologies, pre-incorporation teams and the senior leaders steering the global robotics landscape. That vantage point has helped the firm identify where commercialization is truly taking root and where bottlenecks still exist.
The rise of humanoids is often compared to the early days of self-driving cars: a long arc of research suddenly meeting an acceleration point. What separates this moment is that advances in embodied AI and spatial intelligence are giving robots a more intuitive understanding of the physical world, making them easier to deploy, teach and scale. ALM Ventures’ Fund I is an attempt to capture that transition while shaping the companies that could define the next technological era.
With US$100 million dedicated to the earliest builders in the space, ALM Ventures is signaling its belief that humanoids are not just another robotics cycle—they may be the next major platform shift in AI.
Fintech & Payments
As global financial landscapes shift, Noah outlines a new AI-first approach to helping families protect and grow their wealth.
Noah Holdings, one of Asia’s leading wealth management firms serving global Chinese high-net-worth families, hosted its annual Black Diamond Summit in Macau from December 7–11. The city has become a significant gathering place for Noah’s community, where clients, partners, and experts converge each year to explore how global trends are transforming wealth and family life. This year’s theme, “AI Together, Co-Generating the Future”, set the tone for a conversation about how modern wealth management must adapt in an age defined by artificial intelligence.
More than 3,000 attendees joined discussions that connected technology, global mobility, and long-term family planning. The Summit built on earlier sessions held in Shanghai, creating a continuous dialogue around one central question: how can families prepare for a world that is becoming more digital, more complex and more interconnected?
A major moment came when Noah introduced “Noya”, its new AI Relationship Manager. Noya is now part of the upgraded iARK Hong Kong and Singapore apps. It is built to support licensed human advisors, not replace them. The goal is simple: combine human judgment with AI intelligence to help clients understand their wealth more clearly and manage it across borders. Noya offers real-time insights, deeper personalisation, cleaner access to global financial information, smoother coordination between regions, and end-to-end execution through Noah’s global booking centres.
The Summit’s tone shifted toward long-term thinking when Co-Founder and Chairwoman Norah Wang delivered her keynote, “From Chaos to Clarity: Building a Global Operating System for Wealth Management”. She reflected on twenty years of serving more than 400,000 clients and explained that families today face new pressures. As she put it, “The real pain point for Chinese families today is not investment performance, but navigating the growing complexities of a global lifestyle”. Her message was straightforward: wealth is no longer just about returns. It is about managing uncertainty in a world where technology, geopolitics, and mobility collide.
Wang described how two major shifts have shaped modern wealth—first the Internet Era, which changed how people built wealth, and now what she calls the AI Civilisation Era, which is changing how people must protect it. She outlined the forces that influence today’s decisions: geopolitical shifts, persistent inflation, the rising importance of security and supply-chain technologies, the spread of AI, and the need for stronger family governance across generations. Each of these factors adds complexity, and families need tools that help them see the bigger picture.
To respond to this reality, Noah presented its integrated global wealth infrastructure, built on three pillars.
Together, these pillars function as an AI-supported system designed to simplify global complexity and help families preserve long-term stability.
One of the most discussed conversations featured Noah’s CEO, Zander Yin, and Tony Shale, Co-Founder & Chairman of Asian Private Banker China. They spoke about how AI is transforming private banking in Asia. Their view was that wealth management is moving from a product-centred model to one led by insight, trust, and human-tech collaboration. AI may accelerate analysis, but human expertise will continue to guide judgment, relationships, and long-term strategy.
The closing message of the Summit centred on redefining what prosperity means in an AI-driven age. For Noah, wealth is no longer a destination. It is an ongoing journey through a world that is increasingly fast-moving and unpredictable. As Wang noted, “With AI reshaping the very foundations of civilisation, wealth and financial freedom represent not a static endpoint, but a continuous journey. Here, we find our purpose: to help global Chinese investors navigate an increasingly complex world and achieve true prosperity, supported by resilient wealth management infrastructure and deep human expertise”.
The Summit ended on that note—a reminder that the future of wealth is not only about financial assets, but about clarity, confidence and the ability to adapt as the world transforms.
Deep Tech
Can SPhotonix’s optical memory technology protect data better than today’s storage?
SPhotonix, a young deep-tech startup, is working on something unexpected for the data storage world: tiny, glass-like crystals that can hold enormous amounts of information for extremely long periods of time. The company works where light and data meet, using photonics—the science of shaping and guiding light—to build optical components and explore a new form of memory called “5D optical storage”.
It’s based on research that began more than twenty years ago, when Professor Peter Kazansky showed that a small crystal could preserve data, from the human genome to all of Wikipedia, essentially forever.
Their new US$4.5 million pre-seed round, led by Creator Fund and XTX Ventures, is meant to turn that science into real products. And the timing aligns with a growing problem: the world is generating far more digital data than current storage systems can handle. Most of it isn’t needed every day, but it can’t be thrown away either. This long-term, rarely accessed cold data is piling up faster than existing storage infrastructure can manage, and maintaining giant warehouses of servers just to keep it all alive is becoming expensive and environmentally unsustainable.
This is the problem SPhotonix is stepping in to solve. They want to store huge amounts of information in a stable format that doesn’t degrade, doesn’t need electricity to preserve data and doesn’t require constant swapping of hardware. Instead of racks of spinning drives, the idea is a durable optical crystal storage system that could last for generations.
The company’s underlying technology—called FemtoEtch™—uses ultrafast lasers to engrave microscopic patterns inside fused silica. These precisely etched structures can function as high-performance optical components for fields like aerospace, microscopy and semiconductor manufacturing. But the same ultra-controlled process can also encode information in five dimensions within the crystal, transforming the material into a compact, long-lasting archive capable of holding massive amounts of information in a very small footprint.
The new funding allows SPhotonix to expand its engineering team, grow its R&D facility in Switzerland and prepare the technology for real-world deployment. Investors say the opportunity is significant: global data generation has more than doubled in recent years and traditional storage systems—drives, disks, tapes—weren’t designed for the scale or longevity modern data demands.
While the company has been gaining attention in research circles (and even made an appearance in the latest Mission Impossible film), its next step is all about practical adoption. If the technology reaches commercial viability, it could offer an alternative to the energy-hungry, short-lived storage hardware that underpins much of today’s digital infrastructure.
As digital information continues to multiply, preserving it safely and sustainably is becoming one of the biggest challenges in modern computing. SPhotonix’s work points toward a future where long-lasting, low-maintenance optical data storage becomes a practical alternative to today’s fragile systems. It offers a more resilient way to preserve knowledge for the decades ahead.
Artificial Intelligence
Rethinking 3D modelling for a world that generates too much, too quickly.
MicroCloud Hologram Inc. (NASDAQ: HOLO), a technology service provider recognized for its holography and imaging systems, is now expanding into a more advanced realm: a quantum-driven 3D intelligent model. The goal is to generate detailed 3D models and images with far less manual effort — a need that has only grown as industries flood the world with more visual data every year.
The concept is straightforward, even if the technology behind it isn’t. Traditional 3D modeling workflows are slow, fragmented and depend on large teams to clean datasets, train models, adjust parameters and fine-tune every output. HOLO is trying to close that gap by combining quantum computing with AI-powered 3D modeling, enabling the system to process massive datasets quickly and automatically produce high-precision 3D assets with much less human involvement.
To achieve this, the company developed a distributed architecture comprising several specialized subsystems. One subsystem collects and cleans raw visual data from different sources. Another uses quantum deep learning to understand patterns in that data. A third converts the trained model into ready-to-use 3D assets based on user inputs. Additional modules manage visualization, secure data storage and system-wide protection — all supported by quantum-level encryption. Each subsystem runs in its own container and communicates through encrypted interfaces, allowing flexible upgrades and scaling without disrupting the entire system.
Why this matters: Industries ranging from gaming and film to manufacturing, simulation and digital twins are rapidly increasing their reliance on 3D content. The real bottleneck isn’t creativity — it’s time. Producing accurate, high-quality 3D assets still requires a huge amount of manual processing. HOLO’s approach attempts to lighten that workload by utilizing quantum tools to speed up data processing, model training, generation and scaling, while keeping user data secure.
According to the company, the system’s biggest advantages include its ability to handle massive datasets more efficiently, generate precise 3D models with fewer manual steps, and scale easily thanks to its modular, quantum-optimized design. Whether quantum computing will become a mainstream part of 3D production remains an open question. Still, the model shows how companies are beginning to rethink traditional 3D workflows as demand for high-quality digital content continues to surge.
Artificial Intelligence
Where smarter storage meets smarter logistics.
E-commerce keeps growing and with it, the number of products moving through warehouses every day. Items vary more than ever — different shapes, seasonal packaging, limited editions and constantly updated designs. At the same time, many logistics centers are dealing with labour shortages and rising pressure to automate.
But today’s image-recognition AI isn’t built for this level of change. Most systems rely on deep-learning models that need to be adjusted or retrained whenever new products appear. Every update — whether it’s a new item or a packaging change — adds extra time, energy use and operational cost. And for warehouses handling huge product catalogs, these retraining cycles can slow everything down.
KIOXIA, a company known for its memory and storage technologies, is working on a different approach. In a new collaboration with Tsubakimoto Chain and EAGLYS, the team has developed an AI-based image recognition system that is designed to adapt more easily as product lines grow and shift. The idea is to help logistics sites automatically identify items moving through their workflows without constantly reworking the core AI model.
At the center of the system is KIOXIA’s AiSAQ software paired with its Memory-Centric AI technology. Instead of retraining the model each time new products appear, the system stores new product data — images, labels and feature information — directly in high-capacity storage. This allows warehouses to add new items quickly without altering the original AI model.
Because storing more data can lead to longer search times, the system also indexes the stored product information and transfers the index into SSD storage. This makes it easier for the AI to retrieve relevant features fast, using a Retrieval-Augmented Generation–style method adapted for image recognition.
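The core retrieval idea can be sketched in a few lines: an embedding model turns an image into a feature vector, and recognition is a nearest-neighbour search over stored product vectors, so adding a product is a database insert rather than a retraining run. This is a minimal illustration of that pattern, not KIOXIA's AiSAQ API; the class names, dimensions and random "embeddings" are all assumptions, and a production system would use a disk-resident approximate-nearest-neighbour index instead of a brute-force scan:

```python
import numpy as np

# Minimal sketch of retrieval-based recognition: classify a query embedding by
# the nearest stored product vector. New products are registered by inserting a
# vector + label; the embedding model itself is never retrained.
# (Illustrative only -- not KIOXIA's AiSAQ implementation.)

rng = np.random.default_rng(0)
DIM = 128  # assumed embedding size

class ProductIndex:
    def __init__(self):
        self.vectors = np.empty((0, DIM))
        self.labels = []

    def add(self, label, vec):
        """Register a new product without touching the embedding model."""
        v = vec / np.linalg.norm(vec)              # normalize for cosine search
        self.vectors = np.vstack([self.vectors, v])
        self.labels.append(label)

    def identify(self, vec):
        """Return the label of the most similar stored product vector."""
        v = vec / np.linalg.norm(vec)
        return self.labels[int(np.argmax(self.vectors @ v))]

index = ProductIndex()
protos = {name: rng.normal(size=DIM) for name in ["box_A", "box_B", "tube_C"]}
for name, vec in protos.items():
    index.add(name, vec)

# A query embedding near box_B (its prototype plus small noise) is recognized:
query = protos["box_B"] + 0.1 * rng.normal(size=DIM)
print(index.identify(query))  # box_B
```

In this scheme, a packaging change means overwriting one stored vector; the indexing-on-SSD step described above is what keeps the lookup fast as the catalog grows.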
The collaboration will be showcased at the 2025 International Robot Exhibition in Tokyo. Visitors will see the system classify items in real time as they move along a conveyor, drawing on stored product features to identify them instantly. The demonstration aims to illustrate how logistics sites can handle continuously changing inventories with greater accuracy and reduced friction.
Overall, as logistics networks become increasingly busy and product lines evolve faster than ever, this memory-driven approach provides a practical way to keep automation adaptable and less fragile.
Artificial Intelligence
Brains, bots and the future: Who’s really in control?
When British-Canadian cognitive psychologist and computer scientist Geoffrey Hinton joked that his ex-girlfriend once used ChatGPT to help her break up with him, he wasn’t exaggerating. The father of deep learning was pointing to something stranger: how machines built to mimic language have begun to mimic thought — and how even their creators no longer agree on what that means.
In that one quip — part humor, part unease — Hinton captured the paradox at the center of the world’s most important scientific divide. Artificial intelligence has moved beyond code and circuits into the realm of psychology, economics and even philosophy. Yet among those who know it best, the question has turned unexpectedly existential: what, if anything, do large language models truly understand?
Across the world’s AI labs, that question has split the community into two camps — believers and skeptics, prophets and heretics. One side sees systems like ChatGPT, Claude, and Gemini as the dawn of a new cognitive age. The other insists they’re clever parrots with no grasp of meaning, destined to plateau as soon as the data runs out. Between them stands a trillion-dollar industry built on both conviction and uncertainty.
Hinton, who spent a decade at Google refining the very neural networks that now power generative AI, has lately sounded like a man haunted by his own invention. Speaking to Scott Pelley in a CBS 60 Minutes interview aired October 8, 2023, Hinton said, “I think we're moving into a period when for the first time ever we may have things more intelligent than us.” He said it not with triumph, but with visible worry.
Yoshua Bengio, his longtime collaborator, sees it differently. Speaking at the All In conference in Montreal, he told TIME that future AI systems “will have stronger and stronger reasoning abilities, more and more knowledge,” while cautioning about the need to ensure they “act according to our norms.” And then there’s Gary Marcus, the cognitive scientist and enduring critic, who dismisses the hype outright: “These systems don’t understand the world. They just predict the next word.”
It’s a rare moment in science when three pioneers of the same field disagree so completely — not about ethics or funding, but about the very nature of progress. And yet that disagreement now shapes how the future of AI will unfold.
In the span of just two years, large language models have gone from research curiosities to corporate cornerstones. Banks use them to summarize reports. Lawyers draft contracts with them. Pharmaceutical firms explore protein structures through them. Silicon Valley is betting that scaling these models — training them on ever-larger datasets with ever-denser computers — will eventually yield something approaching reasoning, maybe even intelligence.
It’s the “bigger is smarter” philosophy, and it has worked — so far. OpenAI’s GPT-4, Anthropic’s Claude, and Google’s Gemini have grown exponentially in capability. They can write code, explain math, outline business plans, even simulate empathy. For most users, the line between prediction and understanding has already blurred beyond meaning. Kelvin So, who is now conducting AI research at PolyU SPEED, commented, “AI scientists today are inclined to believe we have learnt a bitter lesson in the advancement from the traditional AI to the current LLM paradigm. That said, scaling law, instead of human-crafted complicated rules, is the ultimate law governing AI.”
But inside the labs, cracks are showing. Scaling models has become staggeringly expensive, and the returns are diminishing. A growing number of researchers suspect that raw scale alone cannot unlock true comprehension — that these systems are learning syntax, not semantics; imitation, not insight.
That belief fuels a quiet counter-revolution. Instead of simply piling on data and GPUs, some researchers are pursuing hybrid intelligence — systems that combine statistical learning with symbolic reasoning, causal inference, or embodied interaction with the physical world. The idea is that intelligence requires grounding — an understanding of cause, consequence, and context that no amount of text prediction can supply.
Yet the results speak for themselves. In practice, language models are already transforming industries faster than regulation can keep up. Marketing departments run on them. Customer support, logistics and finance teams depend on them. Even scientists now use them to generate hypotheses, debug code and summarize literature. For every cautionary voice, there are a dozen entrepreneurs who see this technology as a force reshaping every industry. That gap — between what these models actually are and what we hope they might become — defines this moment. It’s a time of awe and unease, where progress races ahead even as understanding lags behind.
Part of the confusion stems from how these systems work. A large language model doesn’t store facts like a database. It predicts what word is most likely to come next in a sequence, based on patterns in vast amounts of text. Behind this seemingly simple prediction mechanism lies a sophisticated architecture. The tokenizer is one of the key innovations behind modern language models. It takes text and chops it into smaller, manageable pieces the AI can understand. These pieces are then turned into numbers, giving the model a way to “read” human language. By doing this, the system can spot context and relationships between words — the building blocks of comprehension.
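To make the idea concrete, here is a toy sketch of tokenization. Real tokenizers learn subword pieces from data (byte-pair encoding and similar schemes); this illustrative example simply uses an invented fixed vocabulary of whole words to show the text-to-numbers step.

```python
# Toy illustration of tokenization: text is chopped into pieces and
# each piece is mapped to an integer id the model can work with.
# The vocabulary here is invented for illustration only.
vocab = {"<unk>": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}

def tokenize(text):
    """Lowercase, split on whitespace, map each piece to an id;
    unknown words fall back to the <unk> id."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # [1, 2, 3, 4, 1, 5]
```

A production tokenizer would instead break rare words into learned subword fragments, so almost nothing maps to an unknown token, but the principle is the same: language in, numbers out.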
Inside the model, mechanisms such as multi-head attention enable the system to examine many aspects of information simultaneously, much as a human reader might track several storylines at once.
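The "several storylines at once" intuition can be sketched in a few lines of NumPy. This is a minimal, illustrative version of multi-head self-attention: it omits the learned query/key/value and output projections that real transformer layers use, keeping only the core split-attend-concatenate pattern.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, n_heads):
    """Minimal multi-head self-attention sketch: split the feature
    dimension into n_heads, attend within each head, concatenate.
    Learned Q/K/V and output projections are omitted for clarity."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for h in range(n_heads):
        q = k = v = x[:, h * d_head:(h + 1) * d_head]
        scores = q @ k.T / np.sqrt(d_head)   # similarity between positions
        weights = softmax(scores, axis=-1)   # each position attends to all others
        heads.append(weights @ v)            # weighted mix of the values
    return np.concatenate(heads, axis=-1)

x = np.random.default_rng(0).normal(size=(4, 8))  # 4 tokens, 8 features
out = multi_head_attention(x, n_heads=2)
print(out.shape)  # (4, 8)
```

Each head sees a different slice of the representation, which is what lets the model track multiple kinds of relationships in parallel.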
Reinforcement learning, pioneered by Richard Sutton, a professor of computing science at the University of Alberta, and Andrew Barto, Professor Emeritus at the University of Massachusetts, mimics human trial-and-error learning. The AI develops “value functions” that predict the long-term rewards of its actions. Together, these technologies enable machines to recognize patterns, make predictions and generate text that feels strikingly human — yet beneath this technical progress lies the very divide that cuts to the heart of how intelligence itself is defined.
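The "value function" idea can be illustrated with a tabular temporal-difference update, the simplest form of the learning rule Sutton and Barto formalized. The five-state chain and reward below are invented purely for illustration.

```python
# Toy sketch of a learned value function via tabular TD(0).
# States, rewards, and the trajectory are invented for illustration.
alpha, gamma = 0.1, 0.9              # learning rate, discount factor
values = {s: 0.0 for s in range(5)}  # value estimate per state

# A fixed episode: (state, reward, next_state) transitions ending in a reward.
trajectory = [(0, 0.0, 1), (1, 0.0, 2), (2, 0.0, 3), (3, 1.0, 4)]

for _ in range(200):                  # replay the episode many times
    for s, r, s_next in trajectory:
        target = r + gamma * values[s_next]        # reward plus discounted future value
        values[s] += alpha * (target - values[s])  # nudge the estimate toward the target

# Estimates rise as states get closer to the eventual reward.
print([round(values[s], 2) for s in range(5)])
```

The "long-term reward" prediction emerges on its own: states far from the payoff acquire discounted value simply by bootstrapping from their successors.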
Yet at scale, that simple process begins to yield emergent behavior — reasoning, problem-solving, even flashes of creativity that surprise their creators. The result is something that looks, sounds and increasingly acts intelligent — even if no one can explain exactly why.
That opacity worries not just philosophers, but engineers. The “black box problem” — our inability to interpret how neural networks make decisions — has turned into a scientific and safety concern. If we can’t explain a model’s reasoning, can we trust it in critical systems like healthcare or defense?
Companies like Anthropic are trying to address that with “constitutional AI,” embedding human-written principles into model training to guide behavior. Others, like OpenAI, are experimenting with internal oversight teams and adversarial testing to catch dangerous or misleading outputs. But no approach yet offers real transparency. We’re effectively steering a ship whose navigation system we don’t fully understand. “We need governance frameworks that evolve as quickly as AI itself,” says Felix Cheung, Founding Chairman of RegTech Association of Hong Kong (RTAHK). “Technical safeguards alone aren't enough — transparent monitoring and clear accountability must become industry standards.”
Meanwhile, the commercial race is accelerating. Venture capital is flowing into AI startups at record speed. OpenAI’s valuation reportedly exceeds US$150 billion; Anthropic, backed by Amazon and Google, isn’t far behind. The bet is simple: that generative AI will become as indispensable to modern life as the internet itself.
And yet, not everyone is buying into that vision. The open-source movement — championed by players like Meta’s Llama, Mistral in France, and a fast-growing constellation of independent labs — argues that democratizing access is the only way to ensure both innovation and accountability. If powerful AI remains locked behind corporate walls, they warn, progress will narrow to the priorities of a few firms.
But openness cuts both ways. Publicly available models are harder to police, and their misuse — from disinformation to deepfakes — grows as easily as innovation does. Regulators are scrambling to balance risk and reward. The European Union’s AI Act is the world’s most comprehensive attempt at governance, but even it struggles to define where to draw the line between creativity and control.
This isn’t just a scientific argument anymore. It’s a geopolitical one. The United States, China, and Europe are each pursuing distinct AI strategies: Washington betting on private-sector dominance, Beijing on state-led scaling, Brussels on regulation and ethics. Behind the headlines, compute power is becoming a form of soft power. Whoever controls access to the chips, data, and infrastructure that fuel AI will control much of the digital economy.
That reality is forcing some uncomfortable math. Training frontier models already consumes energy on the scale of small nations. Data centers now rise next to hydroelectric dams and nuclear plants. Efficiency — once a technical concern — has become an economic and environmental one. As demand grows, so does the incentive to build smaller, smarter, more efficient systems. The industry’s next leap may not come from scale at all, but from constraint.
For all the noise, one truth keeps resurfacing: large language models are tools, not oracles. Their intelligence — if we can call it that — is borrowed from ours. They are trained on human text, human logic, human error. Every time a model surprises us with insight, it is, in a sense, holding up a mirror to collective intelligence.
That’s what makes this schism so fascinating. It’s not really about machines. It’s about what we believe intelligence is — pattern or principle, simulation or soul. For believers like Bengio, intelligence may simply be prediction done right. For critics like Marcus, that’s a category mistake: true understanding requires grounding in the real world, something no model trained on text can ever achieve.
The public, meanwhile, is less interested in metaphysics. To most users, these systems work — and that’s enough. They write emails, plan trips, debug spreadsheets, summarize meetings. Whether they “understand” or not feels academic. But for the scientists, that distinction remains critical, because it determines where AI might ultimately lead.
Even inside the companies building them, that tension shows. OpenAI’s Sam Altman has hinted that scaling can’t continue forever. At some point, new architectures — possibly combining logic, memory, or embodied data — will be needed. DeepMind’s Demis Hassabis says something similar: intelligence, he argues, will come not just from prediction, but from interaction with the world.
It’s possible both are right. The future of AI may belong to hybrid systems — part statistical, part symbolic — that can reason across multiple modes of information: text, image, sound, action. The line between model and agent is already blurring, as LLMs gain the ability to browse the web, run code, and call external tools. The next generation won’t just answer questions; it will perform tasks.
For startups, the opportunity — and the risk — lies in that transition. The most valuable companies in this new era may not be those that build the biggest models, but those that build useful ones: specialized systems tuned for medicine, law, logistics, or finance, where reliability matters more than raw capability. The winners will understand that scale is a means, not an end.
And for society, the challenge is to decide what kind of intelligence we want to live with. If we treat these models as collaborators — imperfect, explainable, constrained — they could amplify human potential on a scale unseen since the printing press. If we chase the illusion of autonomy, they could just as easily entrench bias, confusion, and dependency.
The debate over large language models will not end in a lab. It will play out in courts, classrooms, boardrooms, and living rooms — anywhere humans and machines learn to share the same cognitive space. Whether we call that cooperation or competition will depend on how we design, deploy, and, ultimately, define these tools.
Perhaps Hinton’s offhand remark about being psychoanalyzed by his own creation wasn’t just a joke. It was an omen. AI is no longer something we use; it’s something we’re reflected in. Every model trained on our words becomes a record of who we are — our reasoning, our prejudices, our brilliance, our contradictions. The schism among scientists mirrors the one within ourselves: fascination colliding with fear, ambition tempered by doubt.
In the end, the question isn’t whether LLMs are the future. It’s whether we are ready for a future built in their image.