As workplace knowledge spreads across chats, AI firms are building systems that can structure, retrieve and preserve it over time.
Updated
May 11, 2026 5:24 PM

Votee AI, an enterprise AI company headquartered in Hong Kong, has partnered with its Toronto-based research lab Beever AI to launch Beever Atlas. The new platform is designed to turn workplace chats into searchable knowledge that AI systems can retrieve and understand.
The release focuses on a growing issue inside organisations. Much of today’s workplace knowledge now exists inside chat platforms such as Slack, Microsoft Teams, Discord and Telegram. Important discussions, project decisions and technical information often disappear into long message histories that are difficult to search later.
Beever AI developed the platform to organise those conversations into a structured system for AI assistants. The software connects with Telegram, Discord, Mattermost, Microsoft Teams and Slack, then converts conversations into linked records of people, projects, files and decisions.
The collaboration combines Votee AI’s enterprise infrastructure work with Beever AI’s research around AI memory systems. The companies are releasing two versions of the product. The open-source edition is aimed at individual developers, researchers and creators. The enterprise edition is designed for banks, government agencies and larger organisations with stricter security requirements.
The release also reflects a broader shift happening across the AI industry. Companies are increasingly looking at how AI systems store and retrieve long-term knowledge, rather than relying solely on large context windows or search-based retrieval.
Earlier this year, OpenAI founding member and former director of AI at Tesla Andrej Karpathy discussed the growing need for what he described as “LLM Knowledge Bases.” He argued that AI systems need structured and evolving memory rather than depending only on context windows and vector search.
Beever Atlas approaches that problem through workplace communication. Instead of focusing mainly on uploaded files, the system is designed around conversations that happen daily across team chat platforms. It can also process images, PDFs, voice notes and video files within the same searchable system.
The companies say the software is designed to work directly with AI assistants and coding tools such as Cursor, AWS Kiro and Qwen Code. Integrations for OpenClaw and Hermes Agent are expected later in 2026.
Pak-Sun Ting, Co-Founder and CEO of Votee AI, said: "Hong Kong has always been known for property and finance. Beever Atlas is proof that world-class AI infrastructure can emerge from an HK-headquartered company and be shared openly with the world. Every growing organization faces the same silent liability: conversational knowledge loss. Beever Atlas turns this perishable resource into a compounding organizational asset."
A large part of the enterprise version focuses on privacy and access control. The system mirrors permissions from Slack and Microsoft Teams so users can only retrieve information they are already authorised to access. Permission updates are reflected automatically when access changes inside company systems.
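The companies have not published how this mirroring is implemented, but the pattern they describe, filtering retrieval results against access lists copied from the source chat platform, can be sketched in a few lines. All names here (`Record`, `channel_acl`, `retrieve`) are hypothetical illustrations, not Beever Atlas APIs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    text: str
    channel: str  # the chat channel the record was extracted from

# Hypothetical mirrored ACL: channel -> users allowed to see it in the
# source platform. In the described design, this map would be refreshed
# automatically as permissions change in Slack or Teams.
channel_acl = {
    "#general": {"alice", "bob"},
    "#finance": {"alice"},
}

def retrieve(query: str, user: str, records: list[Record]) -> list[Record]:
    """Return only matching records the user could already see in chat."""
    visible = [r for r in records if user in channel_acl.get(r.channel, set())]
    return [r for r in visible if query.lower() in r.text.lower()]

records = [
    Record("Q3 budget approved", "#finance"),
    Record("Budget town hall on Friday", "#general"),
]

# bob is not in #finance, so only the #general record comes back.
print([r.text for r in retrieve("budget", "bob", records)])
```

The key design point is that filtering happens at retrieval time against the mirrored ACL, so a permission revoked in the chat platform takes effect as soon as the mirror updates, without reprocessing stored records.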
The enterprise edition also includes audit logs, encryption controls and data retention settings for organisations handling sensitive internal data. Companies can run the software entirely inside their own infrastructure using Docker and connect it to their preferred AI models through LiteLLM.
The companies argue that organising information is more useful than simply storing chat archives. Jacky Chan, Co-Founder and CTO of Votee AI, said: "The key technical decision was to treat agent memory as a knowledge engineering problem, not a retrieval problem. Structure beats similarity — a typed graph of who works on what is more useful to an AI than vector search over a Slack archive."
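The distinction Chan draws can be made concrete. In a typed graph, "who works on what" is an exact edge lookup rather than a fuzzy similarity match over message text. The triples and function below are an illustrative sketch, not the product's actual schema.

```python
from collections import defaultdict

# Hypothetical typed edges as (subject, relation, object) triples,
# of the kind that might be extracted from chat messages.
triples = [
    ("alice", "works_on", "atlas"),
    ("bob",   "works_on", "atlas"),
    ("bob",   "works_on", "billing"),
    ("atlas", "decided",  "ship the open-source edition first"),
]

# Index edges by (relation, object) so typed queries are direct
# lookups rather than similarity searches over raw text.
by_relation_object = defaultdict(set)
for subj, rel, obj in triples:
    by_relation_object[(rel, obj)].add(subj)

def who_works_on(project: str) -> set[str]:
    return by_relation_object[("works_on", project)]

print(sorted(who_works_on("atlas")))  # ['alice', 'bob']
```

A vector search over the same messages might surface chats that mention the project, but the typed edge answers the question exactly and returns nothing when no such relation exists, which matters when an AI assistant acts on the result.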
The software also includes protections against prompt injection attacks and systems designed to reduce hallucinated responses. According to the companies, the AI is designed to return “I don't know” with citations when confidence is low instead of generating unsupported answers.
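The abstention behaviour described here is a common guardrail pattern: answer only when retrieval confidence clears a threshold, and attach citations either way. The sketch below assumes a simple per-hit confidence score; the scoring method, threshold value, and function names are all illustrative, not details the companies have disclosed.

```python
def answer(hits: list[tuple[str, float, str]], threshold: float = 0.7) -> str:
    """hits: (snippet, confidence score, citation) tuples from retrieval.

    Answers from the highest-confidence hit above the threshold;
    otherwise abstains, still citing the closest sources.
    """
    confident = [h for h in hits if h[1] >= threshold]
    if not confident:
        cites = ", ".join(c for _, _, c in hits)
        return f"I don't know (closest sources: {cites})"
    best_text, _, best_cite = max(confident, key=lambda h: h[1])
    return f"{best_text} [{best_cite}]"

# A single weak hit falls below the threshold, so the system abstains
# rather than generating an unsupported answer.
hits = [("Atlas launched in 2026", 0.41, "slack://C1/p9")]
print(answer(hits))
```

Returning citations even with the refusal is what makes the behaviour auditable: a user can inspect the near-miss sources and judge whether the abstention was warranted.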
As workplace communication becomes increasingly fragmented across chat platforms, companies are beginning to treat internal conversations as information that AI systems can organise, retrieve and build on. Beever Atlas reflects a broader push to turn everyday workplace communication into long-term organisational memory.
Keep Reading
The IT services firm strengthens its collaboration with Google Cloud to help enterprises move AI from pilot projects to production systems
Updated
March 17, 2026 1:02 AM

Enterprise interest in AI has moved quickly from experimentation to execution. Many organizations have tested generative tools, but turning those tools into systems that can run inside daily operations remains a separate challenge. Cognizant, an IT services firm, is expanding its partnership with Google Cloud to help enterprises move from AI pilots to fully deployed, production-ready systems.
Cognizant and Google Cloud are deepening their collaboration around Google’s Gemini Enterprise and Google Workspace. Cognizant is deploying these tools across its own workforce first, using them to support internal productivity and collaboration. The idea is simple: test and refine the systems internally, then package similar capabilities for clients.
The focus of the partnership is what Cognizant calls “agentic AI.” In practical terms, this refers to AI systems that can plan, act and complete tasks with limited human input. Instead of generating isolated outputs, these systems are designed to fit into business workflows and carry out structured tasks.
To make that workable at scale, Cognizant is building delivery infrastructure around the technology. The company is setting up a dedicated Gemini Enterprise Center of Excellence and formalizing an Agent Development Lifecycle. This framework covers the full process, from early design and blueprinting to validation and production rollout. The aim is to give enterprises a clearer path from AI concept to deployed system.
Cognizant also plans to introduce a bundled productivity offering that combines Gemini Enterprise with Google Workspace. The targeted use cases are operational rather than experimental. These include collaborative content creation, supplier communications and other workflow-heavy processes that can be standardized and automated.
Beyond productivity tools, Cognizant is integrating Gemini into its broader service platforms. Through Cognizant Ignition, enabled by Gemini, the company supports early-stage discovery and prototyping while helping clients strengthen their data foundations. Its Agent Foundry platform provides pre-configured and no-code capabilities for specific use cases such as AI-powered contact centers and intelligent order management. These tools are designed to reduce the amount of custom development required for each deployment.
Scaling is another element of the strategy. Cognizant, a multi-year Google Cloud Data Partner of the Year award winner, says it will rely on a global network of Gemini-trained specialists to deliver these systems. The company is also expanding work tied to Google Distributed Cloud and showcasing capabilities through its Google Experience Zones and Gen AI Studios.
For Google Cloud, the partnership reinforces its enterprise AI ecosystem. Cloud providers can offer models and infrastructure, but enterprise adoption often depends on service partners that can integrate tools into existing systems and manage ongoing operations. By aligning closely with Cognizant, Google strengthens its ability to move Gemini from platform capability to production deployment.
The announcement does not introduce a new AI model. Instead, it reflects a shift in emphasis. The core question is no longer whether AI tools exist, but how they are implemented, governed and scaled across large organizations. Cognizant’s expanded role suggests that execution frameworks, internal deployment and structured delivery models are becoming central to how enterprises approach AI.
In that sense, the partnership is less about new technology and more about operational maturity. It highlights how AI is moving from isolated pilots to managed systems embedded in business processes — a transition that will likely define the next phase of enterprise adoption.