As workplace knowledge spreads across chats, AI firms are building systems that can structure, retrieve and preserve it over time.
Updated May 11, 2026, 5:24 PM

A messaging app on a phone. PHOTO: ADOBE STOCK
Votee AI, an enterprise AI company headquartered in Hong Kong, has partnered with its Toronto-based research lab Beever AI to launch Beever Atlas. The new platform is designed to turn workplace chats into searchable knowledge that AI systems can retrieve and understand.
The release focuses on a growing issue inside organisations. Much of today’s workplace knowledge now exists inside chat platforms such as Slack, Microsoft Teams, Discord and Telegram. Important discussions, project decisions and technical information often disappear into long message histories that are difficult to search later.
Beever AI developed the platform to organise those conversations into a structured system for AI assistants. The software connects with Telegram, Discord, Mattermost, Microsoft Teams and Slack, then converts conversations into linked records of people, projects, files and decisions.
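To make the idea of "linked records" concrete, here is a minimal sketch of what distilling chat messages into typed, connected records could look like. The class and field names are purely illustrative assumptions; the companies have not published Beever Atlas's actual schema.

```python
# Hypothetical sketch: chat history distilled into typed, linked records.
# Names (Person, Decision, Project) are illustrative, not Beever Atlas's schema.
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str

@dataclass
class Decision:
    summary: str
    decided_by: list[Person]
    source_message_ids: list[str]  # links back to the original chat messages

@dataclass
class Project:
    name: str
    members: list[Person] = field(default_factory=list)
    decisions: list[Decision] = field(default_factory=list)

# A fragment of conversation becomes structured, queryable data:
alice = Person("Alice")
payments = Project("Payments migration", members=[alice])
payments.decisions.append(
    Decision("Adopt Postgres for the ledger", [alice],
             ["slack:C01/1714003200.000100"])
)

# An assistant can now answer "who decided X?" by traversing links
# rather than searching raw message text:
who = [p.name for d in payments.decisions for p in d.decided_by
       if "Postgres" in d.summary]
print(who)  # ['Alice']
```

The point of the sketch is the links: each decision keeps references both to the people involved and to the source messages, so answers can carry citations back to the original chat.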
The collaboration combines Votee AI’s enterprise infrastructure work with Beever AI’s research around AI memory systems. The companies are releasing two versions of the product. The open-source edition is aimed at individual developers, researchers and creators. The enterprise edition is designed for banks, government agencies and larger organisations with stricter security requirements.
The release also reflects a broader shift happening across the AI industry. Companies are increasingly looking at how AI systems store and retrieve long-term knowledge, rather than relying solely on large context windows or search-based retrieval.
Earlier this year, OpenAI founding member and former director of AI at Tesla Andrej Karpathy discussed the growing need for what he described as “LLM Knowledge Bases.” He argued that AI systems need structured and evolving memory rather than depending only on context windows and vector search.
Beever Atlas approaches that problem through workplace communication. Instead of focusing mainly on uploaded files, the system is designed around conversations that happen daily across team chat platforms. It can also process images, PDFs, voice notes and video files within the same searchable system.
The companies say the software is designed to work directly with AI assistants and coding tools such as Cursor, AWS Kiro and Qwen Code. Integrations for OpenClaw and Hermes Agent are expected later in 2026.
Pak-Sun Ting, Co-Founder and CEO of Votee AI, said: "Hong Kong has always been known for property and finance. Beever Atlas is proof that world-class AI infrastructure can emerge from an HK-headquartered company and be shared openly with the world. Every growing organization faces the same silent liability: conversational knowledge loss. Beever Atlas turns this perishable resource into a compounding organizational asset."
A large part of the enterprise version focuses on privacy and access control. The system mirrors permissions from Slack and Microsoft Teams so users can only retrieve information they are already authorised to access. Permission updates are reflected automatically when access changes inside company systems.
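In outline, permission mirroring means retrieval is filtered by membership data copied from the chat platform. The sketch below illustrates that idea under one simplifying assumption: each indexed record carries the member list of the channel it came from. This is an illustration of the general technique, not Beever Atlas's actual mechanism.

```python
# Minimal sketch of permission-aware retrieval. Assumes each indexed record
# keeps the member set of its source channel, mirrored from the chat platform.
# Illustrative only -- not Beever Atlas's published implementation.
records = [
    {"text": "Q3 roadmap decision", "channel": "#exec", "members": {"dana"}},
    {"text": "API deprecation notice", "channel": "#eng", "members": {"dana", "eli"}},
]

def retrieve(query: str, user: str) -> list[str]:
    """Return matching records, but only from channels the user can already see."""
    return [r["text"] for r in records
            if user in r["members"] and query.lower() in r["text"].lower()]

print(retrieve("decision", "eli"))   # [] -- eli is not in #exec
print(retrieve("decision", "dana"))  # ['Q3 roadmap decision']
```

When the platform's membership data changes, re-mirroring the `members` sets changes what each user can retrieve, with no separate permission model to keep in sync.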
The enterprise edition also includes audit logs, encryption controls and data retention settings for organisations handling sensitive internal data. Companies can run the software entirely inside their own infrastructure using Docker and connect it to their preferred AI models through LiteLLM.
The companies argue that organising information is more useful than simply storing chat archives. Jacky Chan, Co-Founder and CTO of Votee AI, said: "The key technical decision was to treat agent memory as a knowledge engineering problem, not a retrieval problem. Structure beats similarity — a typed graph of who works on what is more useful to an AI than vector search over a Slack archive."
The software also includes protections against prompt injection attacks and systems designed to reduce hallucinated responses. According to the companies, the AI is designed to return “I don't know” with citations when confidence is low instead of generating unsupported answers.
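An abstention policy of that kind can be sketched in a few lines: answer only when the retrieved evidence clears a confidence threshold, otherwise return "I don't know" along with the closest citations. The scores, threshold, and citation format below are invented for illustration; the companies have not published how Beever Atlas scores confidence.

```python
# Toy sketch of a confidence-gated answer policy. The 0.7 threshold and the
# evidence scores are made up for illustration, not Beever Atlas's values.
THRESHOLD = 0.7

def answer(question: str, evidence: list[tuple[str, float, str]]) -> str:
    """evidence: (snippet, confidence score, citation) tuples from retrieval."""
    best = max(evidence, key=lambda e: e[1], default=None)
    if best is None or best[1] < THRESHOLD:
        cites = ", ".join(c for _, _, c in evidence) or "none"
        return f"I don't know (closest sources: {cites})"
    snippet, _, cite = best
    return f"{snippet} [{cite}]"

weak = [("Maybe Q2?", 0.4, "slack:#planning/123")]
strong = [("Launch is set for June 3.", 0.92, "teams:launch-thread/77")]
print(answer("When do we launch?", weak))    # abstains, cites closest source
print(answer("When do we launch?", strong))  # answers with a citation
```

Returning citations even when abstaining gives users a trail to follow, which is the behaviour the companies describe.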
As workplace communication becomes increasingly fragmented across chat platforms, companies are beginning to treat internal conversations as information that AI systems can organise, retrieve and build on. Beever Atlas reflects a broader push to turn everyday workplace communication into long-term organisational memory.
The quiet infrastructure shift powering the next generation of data centers
Updated February 12, 2026, 1:21 PM

Peripheral Component Interconnect Express (PCIe) port on a motherboard, coloured yellow. PHOTO: UNSPLASH
Modern data centers operate on a simple but fundamental principle: computers must share data extremely quickly. As AI and cloud systems grow, servers are no longer confined to a single rack. They are spread across many racks, sometimes across entire rooms. At that scale, moving data quickly and cleanly becomes harder.
Montage Technology, a Shanghai-based semiconductor company, builds the chips and connection systems that help servers exchange data without delays. This week, the company announced a new Active Electrical Cable (AEC) solution based on PCIe 6.x and CXL 3.x — two important standards used to connect CPUs, GPUs, network cards and storage inside modern data centers.
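For a rough sense of the scale involved, the published PCIe 6.0 specification signals 64 GT/s per lane. The arithmetic below uses that public figure for a common 16-lane link, ignoring FLIT and protocol overheads for simplicity:

```python
# Back-of-the-envelope PCIe 6.0 link bandwidth, using the spec's raw
# signalling rate. Protocol and encoding overheads are ignored.
gt_per_s = 64            # PCIe 6.0 raw rate per lane, in gigatransfers/s
lanes = 16               # a common x16 link (e.g. for a GPU)
bytes_per_transfer = 1 / 8

raw_gb_s = gt_per_s * lanes * bytes_per_transfer
print(raw_gb_s)  # 128.0 GB/s per direction, before overhead
```

Keeping a signal clean at those rates over longer cable runs is exactly the problem retimers and active cables exist to solve.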
In simple terms, Montage’s new AEC product helps different parts of a data center “talk” to each other faster and more reliably, even when those parts are physically far apart.
As data centers grow to support AI and cloud workloads, their architecture is changing. Instead of everything sitting inside one rack, systems now stretch across multiple racks and even multiple rows. This creates a new problem: the longer the distance between machines, the harder it is to keep data signals clean and fast.
This is where Active Electrical Cables come in. Unlike regular copper cables, AECs include small electronic components inside the cable itself. These components strengthen and clean up the data signal as it travels, so information can move farther without getting distorted or delayed.
Montage’s solution uses its own retimer chip based on PCIe 6.x and CXL 3.x. A “retimer” refreshes the data signal so it arrives accurately at the other end. This allows servers, GPUs, storage devices and network cards to stay tightly connected even across longer distances inside large data centers.
The company also uses high-density cable designs and built-in monitoring tools so operators can track performance and fix issues faster. That makes large data centers easier to deploy and maintain.
According to Montage, the solution has already passed interoperability tests with CPUs, xPUs, PCIe switches and network cards. It has also been jointly developed with cable manufacturers in China and validated at the system level.
What makes this development important is not just speed but scale. AI models, cloud services and real-time applications demand that massive amounts of data move continuously between machines. If that movement slows down, everything else slows with it.
By improving how machines connect across racks, Montage’s AEC solution supports the kind of infrastructure that next-generation AI and cloud systems depend on.
Looking ahead, the company plans to expand its high-speed interconnect products further, including work on PCIe 7.0 and Ethernet retimer technologies.
Quietly, in the background of every AI system and cloud service, there is a network of cables and chips doing the hard work of moving data. Montage’s latest launch focuses on making that hidden layer faster, cleaner and ready for the scale that modern computing now demands.