Startup Profiles

Startup Applied Brain Research Raises Seed Funding to Develop On-Device Voice AI

Why investors are backing Applied Brain Research’s on-device voice AI approach.

Updated

January 28, 2026 5:53 PM

Plastic model of a human's brain. PHOTO: UNSPLASH

Applied Brain Research (ABR), a Canada-based startup, has closed its seed funding round to advance its work in “on-device voice AI”. The round was led by Two Small Fish Ventures, with its general partner Eva Lau joining ABR’s board, reflecting investor confidence in the company’s technical direction and market focus.

The round was oversubscribed, meaning more investors wanted to participate than the company had planned for. That response reflects growing interest in technologies that reduce reliance on cloud-based AI systems.

ABR is focused on a clear problem in voice-enabled products today. Most voice features depend on cloud servers to process speech, which can cause delays, increase costs, raise privacy concerns and limit performance on devices with small batteries or limited computing power.

ABR’s approach is built around keeping voice AI fully on-device. Instead of relying on cloud connectivity, its technology allows devices to process speech locally, enabling faster responses and more predictable performance while reducing data exposure.

Central to this approach is the company’s TSP1 chip, a processor designed specifically for handling time-based data such as speech. Built for real-time voice processing at the edge, TSP1 allows tasks like speech recognition and text-to-speech to run on smaller, power-constrained devices.

This specialization is particularly relevant as voice interfaces become more common across emerging products. Many edge devices, such as wearables or mobile robots, cannot support traditional voice AI systems without compromising battery life or responsiveness. The TSP1 addresses this limitation: according to the company, it runs full speech-to-text and text-to-speech at under 30 milliwatts, roughly 10 to 100 times less power than many existing alternatives. That level of efficiency makes advanced voice interaction feasible on devices where power consumption has long been a limiting factor.
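To put those figures in perspective, here is a back-of-envelope calculation. Only the sub-30-milliwatt claim and the "10 to 100 times" range come from the company; the battery size is an illustrative assumption, not a spec of any real device.

```python
# Illustrative arithmetic on the power figures above.
battery_wh = 1.0             # assumed ~1 Wh wearable battery (not from the article)
tsp1_w = 0.030               # <30 mW claimed for speech-to-text + text-to-speech
conventional_w = 0.030 * 50  # midpoint of the claimed "10 to 100x" range

hours_tsp1 = battery_wh / tsp1_w          # continuous voice processing on TSP1
hours_conventional = battery_wh / conventional_w

print(round(hours_tsp1, 1))          # roughly 33 hours
print(round(hours_conventional, 1))  # under an hour
```

On these assumptions, the gap is the difference between all-day voice processing and a battery drained before lunch, which is the wearables argument in a nutshell.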

That efficiency makes the technology applicable across a wide range of use cases. In augmented reality glasses, it supports responsive, hands-free voice control. In robotics, it enables real-time voice interaction without cloud latency or ongoing service costs. For wearables, it expands voice functionality without severely impacting battery life. In medical devices, it allows on-device inference while keeping sensitive data local. And in automotive systems, it enables consistent voice experiences regardless of network availability.

For investors, this combination of timing and technology is what stands out. Voice interfaces are becoming more common, while reliance on cloud infrastructure is increasingly seen as a limitation rather than a strength. ABR sits at the intersection of those two shifts.

With fresh funding in place, ABR is now working with partners across AR, robotics, healthcare, automotive and wearables to bring that future closer. For startup watchers, it’s a reminder that some of the most meaningful AI advances aren’t about bigger models but about making intelligence fit where it actually needs to live.


Artificial Intelligence

Are Workplace Chats Becoming the Next Layer of AI Memory?

As workplace knowledge spreads across chats, AI firms are building systems that can structure, retrieve and preserve it over time.

Updated

May 11, 2026 5:24 PM

A messaging app on a phone. PHOTO: ADOBE STOCK

Votee AI, an enterprise AI company headquartered in Hong Kong, has partnered with its Toronto-based research lab Beever AI to launch Beever Atlas. The new platform is designed to turn workplace chats into searchable knowledge that AI systems can retrieve and understand.

The release focuses on a growing issue inside organisations. Much of today’s workplace knowledge now exists inside chat platforms such as Slack, Microsoft Teams, Discord and Telegram. Important discussions, project decisions and technical information often disappear into long message histories that are difficult to search later.

Beever AI developed the platform to organise those conversations into a structured system for AI assistants. The software connects with Telegram, Discord, Mattermost, Microsoft Teams and Slack, then converts conversations into linked records of people, projects, files and decisions.

The collaboration combines Votee AI’s enterprise infrastructure work with Beever AI’s research around AI memory systems. The companies are releasing two versions of the product. The open-source edition is aimed at individual developers, researchers and creators. The enterprise edition is designed for banks, government agencies and larger organisations with stricter security requirements.

The release also reflects a broader shift happening across the AI industry. Companies are increasingly looking at how AI systems store and retrieve long-term knowledge, rather than relying solely on large context windows or search-based retrieval.

Earlier this year, OpenAI founding member and former director of AI at Tesla, Andrej Karpathy, discussed the growing need for what he described as “LLM Knowledge Bases.” He argued that AI systems need structured and evolving memory rather than depending only on context windows and vector search.

Beever Atlas approaches that problem through workplace communication. Instead of focusing mainly on uploaded files, the system is designed around conversations that happen daily across team chat platforms. It can also process images, PDFs, voice notes and video files within the same searchable system.

The companies say the software is designed to work directly with AI assistants and coding tools such as Cursor, AWS Kiro and Qwen Code. Integrations for OpenClaw and Hermes Agent are expected later in 2026.

Pak-Sun Ting, Co-Founder and CEO of Votee AI, said: "Hong Kong has always been known for property and finance. Beever Atlas is proof that world-class AI infrastructure can emerge from an HK-headquartered company and be shared openly with the world. Every growing organization faces the same silent liability: conversational knowledge loss. Beever Atlas turns this perishable resource into a compounding organizational asset."

A large part of the enterprise version focuses on privacy and access control. The system mirrors permissions from Slack and Microsoft Teams so users can only retrieve information they are already authorised to access. Permission updates are reflected automatically when access changes inside company systems.
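The permission-mirroring idea can be sketched in a few lines. This is a hypothetical illustration of the general pattern, not Beever Atlas's actual implementation: retrieval results are filtered against channel membership synced from the chat platform, so a user only sees records from channels they can already read.

```python
# Hypothetical sketch: filter knowledge records by mirrored chat permissions.
# All names (visible_records, channel_members) are illustrative only.

def visible_records(user, records, channel_members):
    """records: list of (record_id, source_channel) pairs.
    channel_members: mapping of channel -> set of authorised users,
    kept in sync with the chat platform's own permission model."""
    return [record_id for record_id, channel in records
            if user in channel_members.get(channel, set())]

# A user in #dev but not #hr only retrieves #dev records:
members = {"dev": {"alice", "bob"}, "hr": {"carol"}}
print(visible_records("alice", [("r1", "dev"), ("r2", "hr")], members))  # ['r1']
```

The key design point the article describes is that the AI layer never holds broader access than the underlying chat platform grants, and revocations propagate automatically.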

The enterprise edition also includes audit logs, encryption controls and data retention settings for organisations handling sensitive internal data. Companies can run the software entirely inside their own infrastructure using Docker and connect it to their preferred AI models through LiteLLM.

The companies argue that organising information is more useful than simply storing chat archives. Jacky Chan, Co-Founder and CTO of Votee AI, said: "The key technical decision was to treat agent memory as a knowledge engineering problem, not a retrieval problem. Structure beats similarity — a typed graph of who works on what is more useful to an AI than vector search over a Slack archive."
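The "structure beats similarity" point can be made concrete with a toy example. This is an illustrative sketch (the entity names and relations are invented, not taken from the product): a typed graph answers "who works on X?" exactly, whereas vector search over raw chat logs can only return messages that look similar to the question.

```python
# Hypothetical sketch of a typed knowledge graph for workplace facts.
from collections import defaultdict

# Typed edges: (subject, relation, object). Names are illustrative only.
edges = [
    ("alice", "works_on", "atlas-connector"),
    ("bob",   "works_on", "atlas-connector"),
    ("alice", "decided",  "use-postgres"),
]

# Index edges by (relation, object) for exact structured lookup.
graph = defaultdict(list)
for subject, relation, obj in edges:
    graph[(relation, obj)].append(subject)

def who(relation, target):
    """Exact answer from graph structure — no embedding similarity involved."""
    return sorted(graph[(relation, target)])

print(who("works_on", "atlas-connector"))  # ['alice', 'bob']
```

The trade-off Chan is pointing at: the graph requires extraction work up front, but the answer is deterministic and complete, while similarity search returns a ranked guess.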

The software also includes protections against prompt injection attacks and systems designed to reduce hallucinated responses. According to the companies, the AI is designed to return “I don't know” with citations when confidence is low instead of generating unsupported answers.
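The low-confidence behaviour described above follows a common abstain-with-threshold pattern. The sketch below is a generic illustration of that pattern, assuming the system scores candidate answers and carries source citations; the threshold value and function names are invented, not taken from Beever Atlas.

```python
# Hypothetical sketch of answering with citations, abstaining when unsure.
CONFIDENCE_THRESHOLD = 0.75  # illustrative value, not from the product

def answer_with_citations(candidates):
    """candidates: list of (answer_text, confidence, sources) tuples,
    e.g. produced by a retrieval step over the knowledge base."""
    best = max(candidates, key=lambda c: c[1], default=None)
    if best is None or best[1] < CONFIDENCE_THRESHOLD:
        # Abstain rather than hallucinate, but still surface what was consulted.
        consulted = sorted({s for _, _, sources in candidates for s in sources})
        return {"answer": "I don't know", "citations": consulted}
    answer, _, sources = best
    return {"answer": answer, "citations": sources}

print(answer_with_citations([("Alice owns the connector", 0.9, ["msg:1042"])]))
print(answer_with_citations([("Maybe Bob?", 0.3, ["msg:88"])])["answer"])  # I don't know
```

Returning the consulted sources even when abstaining is what makes the "I don't know with citations" behaviour auditable rather than a dead end.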

As workplace communication becomes increasingly fragmented across chat platforms, companies are beginning to treat internal conversations as information that AI systems can organise, retrieve and build on. Beever Atlas reflects a broader push to turn everyday workplace communication into long-term organisational memory.