
The Company Where Every Team Shares One Brain

When support, engineering, and sales share the same AI layer, information stops getting trapped in silos. Here's what that looks like — and why almost nobody is doing it yet.

Kenny Vaneetvelde · Apr 1, 2026 · 18 min read

I keep thinking about the same scenario. It's become a bit of an obsession, actually. So let me paint it for you.

Monday morning. A support agent gets a message from a customer: a feature in the billing module is returning wrong totals on invoices with multiple discount codes. The agent opens a chat with Claude.

"Customer is reporting incorrect invoice totals when multiple discount codes are applied. They sent this screenshot."

Claude searches through the system, figures out which plan the customer is on by querying for it, but also asks a few follow-up questions: Which discount codes were applied? How long has this been happening? The agent relays the answers.

Then Claude does something no human support agent could do in five minutes. It checks Jira for known issues related to the billing module. Nothing matches. It pulls up the ticket creation guidelines from the company Notion to understand what engineering needs in a bug report. It reads the relevant billing module code to identify where discount stacking is handled. It finds the likely culprit: a function that applies discounts sequentially instead of on the original subtotal, meaning discounts compound in ways they shouldn't.

Claude creates a Jira ticket. Not a vague "billing is broken" ticket, but one with reproduction steps, the specific function and file reference, a severity assessment, etc. The support agent reviews it, confirms the details, and submits.

From there, an automated workflow kicks in, triggered whenever a new ticket is created. The company's convention is that every new ticket gets a plan of action (a root cause analysis plus a proposed fix) prepared in the technical space of the company Notion, though this could just as well be Confluence, Obsidian, etc. So an instance of Claude checks the codebase, drafts a plan of action grounded in the actual code, and files it in Notion exactly according to the company guidelines.
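To make the "automated workflow" concrete, here's a minimal sketch of what the trigger handler could look like. This is illustrative only: `run_agent` and `publish_to_notion` are hypothetical injected callables standing in for a real agent runtime and a real Notion client, and the webhook wiring itself is assumed, not shown.

```python
def on_ticket_created(ticket, run_agent, publish_to_notion):
    """Fires when a ticket-tracker webhook reports a new issue.

    ticket: dict with at least 'key' and 'summary' (shape is assumed here).
    run_agent / publish_to_notion: injected stand-ins for the agent runtime
    and the Notion API client -- both hypothetical names.
    """
    prompt = (
        f"New ticket {ticket['key']}: {ticket['summary']}\n"
        "Read the relevant code, write a root cause analysis and a proposed "
        "fix, formatted per the company's plan-of-action guidelines."
    )
    # The agent does the investigative work; this handler only relays it.
    plan = run_agent(prompt)
    return publish_to_notion(page_title=f"Plan of action: {ticket['key']}",
                             body=plan)
```

The point of the shape: the workflow code stays dumb. All the judgment lives in the agent and, later, in the human who reviews the published plan.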

An assigned developer judges this a solid plan of action but has a few adjustments to make. These can be made manually in Notion, or Claude can be asked to make them. Once the dev is happy, they decide the ticket is small enough to be developed without human assistance, so a Claude agent equipped with a development environment picks up the ticket, pulls the surrounding code context, reviews the approved plan of action, and gets to work.

In this case, the fix is straightforward: a single function that needs to apply discounts to the original subtotal instead of compounding them. Claude drafts the code change and opens a pull request. A developer reviews it alongside a second AI agent that checks for regressions and edge cases. The tech lead has the final say before it gets merged. No AI-generated code ships without human sign-off. The AI proposes, humans decide. That's how you get the speed without tanking code quality.
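The bug itself is worth a quick sketch, because it's a classic. Function names here are illustrative, not from any real codebase; the point is the difference between compounding discounts and applying each one to the original subtotal.

```python
def apply_discounts_buggy(subtotal, discount_rates):
    """Buggy: each discount applies to the already-discounted total,
    so discounts compound in ways they shouldn't."""
    total = subtotal
    for rate in discount_rates:
        total -= total * rate
    return total

def apply_discounts_fixed(subtotal, discount_rates):
    """Fixed: every discount is computed against the original subtotal."""
    total_discount = sum(subtotal * rate for rate in discount_rates)
    return subtotal - total_discount
```

With a $100 subtotal and two 20% codes, the buggy version charges $64 (20% off, then 20% off the remaining $80) while the fixed version charges the intended $60. Small difference per invoice, but exactly the kind of thing customers notice.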

That same afternoon, a salesperson is prepping for a client meeting. They ask Claude to pull together a meeting brief: account history, recent support interactions, product roadmap updates relevant to this client's use case. Claude compiles the brief, but adds a flag at the top: "The billing module has a known bug affecting multi-discount invoices (BILL-2847). A fix is in progress but not yet deployed. I would avoid demoing discount stacking until the patch ships."

The salesperson adjusts the demo plan. The client never sees the broken feature. Nobody had to send a Slack message, attend a standup, or forward an email. Support's knowledge reached engineering's backlog and sales' meeting prep through one shared layer.

Every component of that scenario is available as of March 2026. The MCP servers connecting Claude to Jira, Notion, and the codebase are production-ready. The skills encoding how each team works are shareable across the organization. The infrastructure exists and is deployed at scale.

The real question is why almost nobody is doing it yet. That's what this article is about, and it's what I think about every day at BrainBlend AI, where we help companies build exactly this kind of cross-team AI integration.

What Organizational Silos Actually Cost


Before I get into the how, let me make the case for the why. Because when I talk to companies about connecting their teams through AI, the first question is always "why bother? Our current setup works fine."

It doesn't. You just can't see the cost.

Poor data quality alone costs the US economy an estimated $3.1 trillion per year, according to IBM. At the individual level, poor workplace communication costs roughly $12,506 per employee per year, according to a Grammarly/Harris Poll study. Employees waste 5.3 hours every week waiting for data from colleagues or recreating information that already exists somewhere in the company.

As of 2025, 90% of IT leaders say data silos create business challenges, and 83% say integration delays equate directly to lost revenue, according to MuleSoft's Connectivity Benchmark Report. Everyone knows this is a problem. Nobody seems to fix it. Companies have tried ERPs, shared databases, Slack channels, cross-functional meetings. Those helped at the margins. But they all require humans to actively push information across boundaries. Someone has to remember to forward the email, mention it in the standup, or post the update in the right channel.

And here's what frustrates me: AI adoption is making it worse. According to a 2025 enterprise AI adoption survey by Writer, 68% of executives report that generative AI has actually created tension between IT and other departments. Marketing has one AI tool, sales has another, engineering has three. Nobody's tracking what works. There's no shared context, no unified system, no way for one department to benefit from what another has learned.

Most companies are now replicating the silo problem inside their AI strategy.

Why Individual AI Tools Make Things Worse


As of early 2026, the conversation about AI in companies is stuck on individual productivity. Faster writing. Faster coding. Faster analysis. Every team gets their own AI tool, every worker becomes a bit more efficient, and the company declares an AI strategy.

I think that's the wrong frame entirely.

A study tracking 443 million hours of digital workplace activity across 163,000 employees found something most AI optimists don't want to hear. After AI adoption, time spent on email went up 104%. Chat and messaging went up 145%. Focused work sessions dropped 9%. Weekend work increased by 46-58%. The study's conclusion: "The data is unambiguous: AI does not reduce workloads."

This is the Jevons Paradox playing out in real time. When steam engines got more efficient, Britain used more coal, not less. When AI makes workers faster, companies assign more work, not less. 77% of employees in an Upwork study say AI tools have added to their workload. 81% of C-suite leaders admit they've increased worker demands in response. One executive captured the dynamic perfectly: "I got the eight hours to two hours, but now I can get 20 hours of work."

One developer on Reddit described going from 80 commits per month on one repo to 1,400+ commits across 39 repos, managing 17 AI agents running around the clock. His work flipped from "80% coding, 20% thinking" to "80% thinking, reviewing, deciding." His conclusion: "I didn't lose my job. I got the job of ten people."

Giving individuals better tools without changing how the organization works just creates overwhelmed individuals. The answer is not more individual AI tools. It's making AI work between teams, not just within them.

The Jevons trap hits when AI makes individuals faster and the organization responds by piling on more individual work. Cross-team AI changes the equation because it reduces coordination overhead itself: the meetings, the Slack threads, the email chains that exist only to move information from one team to another. You're not making each person do more. You're eliminating the relay work that nobody wanted to do in the first place.

This is what I think most people miss. When teams share the same AI infrastructure, and that infrastructure has access to each team's tools, support's bug report becomes engineering's action plan becomes sales' meeting prep. Not because someone forwarded an email, but because the system connects them. And almost nobody is framing it this way. The conversation in boardrooms, at conferences, and in developer communities is almost entirely about individual productivity, team management, or job replacement.

How This Actually Works


The technical foundation is a protocol called MCP (Model Context Protocol). It's a standardized protocol that lets any tool plug into your AI without a custom integration for each one. Instead of building bespoke connections between your AI and every tool your company uses, MCP provides a common interface.

As of March 2026, the out-of-the-box ecosystem is substantial. There are 38+ pre-built connectors spanning every major enterprise tool category: Jira, Slack, Notion, Salesforce, HubSpot, GitHub, Google Workspace, Figma, Snowflake, Amplitude, and dozens more. MCP adoption has gone mainstream: OpenAI, Microsoft, Google, Amazon, and Vercel have all integrated MCP support.

The key insight, and the thing I keep coming back to when I work with companies: MCP servers are the shared infrastructure. They define what the AI can reach; skills define how it does things. Skills can be built on top of MCP connections, and they're portable across every surface. When every team's tools are connected through the same MCP layer, the AI can pull context from any team's domain. Support's Jira tickets, engineering's codebase, sales' HubSpot data, product's Notion docs, all accessible through one shared layer.
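If you've never looked at MCP, the core idea is just a uniform tool interface: servers expose named, described tools, and any client can discover and invoke them the same way. Here's a toy sketch of that shape in plain Python. This is not the MCP SDK, just an illustration of the pattern; the `jira_search` tool in the usage below is a made-up stub.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[..., str]

class ToolRegistry:
    """Toy stand-in for an MCP server's tool surface: clients discover
    tools by name and description, then invoke them through one uniform
    call, with no bespoke per-tool integration."""

    def __init__(self):
        self._tools = {}

    def register(self, tool):
        self._tools[tool.name] = tool

    def list_tools(self):
        # Discovery: a client sees what's available before calling anything.
        return [(t.name, t.description) for t in self._tools.values()]

    def call(self, name, **kwargs):
        # Invocation: same entry point regardless of which tool it is.
        return self._tools[name].handler(**kwargs)
```

Because discovery and invocation are uniform, adding a 39th connector costs the same as adding the first. That's the property the "38+ pre-built connectors" figure rests on.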

Block (the company behind Square) is the clearest example of what this looks like at scale. They deployed MCP across 12,000 employees in 15 job functions within two months. They built 100+ internal MCP servers connecting Snowflake, GitHub, Jira, Slack, Google Drive, and 50+ internal platforms. Engineers, salespeople, fraud analysts, designers, and non-technical employees all use the same shared AI layer. One employee analyzed 80,000 sales leads in one hour instead of days.

Cognizant is deploying the same pattern to up to 350,000 employees globally. Cowork, which launched in January 2026, brought MCP-powered AI to non-technical users with department-specific plugins and private plugin marketplaces for organizations to share skills internally. And the Microsoft Copilot Cowork partnership extends this into the Microsoft 365 ecosystem. This is where enterprise infrastructure is heading.

What This Looks Like Across the Whole Company

The support-to-engineering-to-sales scenario in the opening is the one I think about most, but it's just one thread. When the infrastructure is in place, the same pattern plays out across every department. Let me walk through a few.

HR + Engineering. A new hire starts. The onboarding workflow triggers. Claude checks which team they're joining, pulls the tech stack from the engineering wiki, generates a personalized onboarding doc, and creates the relevant accounts. The engineering lead gets a summary of the new hire's background and where they'll need the most ramp-up support. The new hire can ask Claude questions from day one, and instead of getting generic answers, Claude reads the team's documentation and conventions to give answers that match how the team actually works. The MCP connectors for all of this exist. The missing piece is organizational: someone has to connect the systems and define the flow.

Product + Marketing + Sales. Product ships a feature. The release notes in GitHub trigger Claude to draft a customer-facing announcement. Marketing's skill adjusts the messaging for different channels. Sales gets a one-pager on how to position the feature in upcoming calls. Right now, sales-marketing misalignment is estimated to cost businesses more than $1 trillion each year, and SiriusDecisions (now Forrester) found that 60-70% of B2B marketing content goes completely unused by sales. The information exists. It just doesn't flow.

Finance + Operations. End-of-quarter reporting. Claude pulls data from Snowflake, cross-references with project timelines from Linear, and identifies discrepancies between budgeted and actual spend. The CFO gets a summary with flagged anomalies. Operations gets an early warning about projects running over budget, before the formal review where everyone pretends they didn't know.

Codified organizational knowledge. As of early 2026, there's a pattern emerging that I find really promising. Teams using Claude Code commit a file called CLAUDE.md to their repositories. It contains conventions, standards, common mistakes to avoid, and instructions for how the team works. Boris Cherny, the creator of Claude Code, described how his own team at Anthropic does this: "Our team shares a single CLAUDE.md for the Claude Code repo. We check it into git, and the whole team contributes multiple times a week."

This is codified team knowledge that compounds over time. Claude Code has a built-in hierarchy for it: enterprise-level (for org-wide policies), project-level (team-shared), and personal overrides. Scale this up and every team's AI knows how that team works. A marketing person using Cowork asks about a feature's limitations, and Claude reads the engineering team's CLAUDE.md to give an accurate answer instead of hallucinating one. That's organizational knowledge flowing through the AI layer without anyone having to explain anything to anyone.
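For concreteness, here's a hypothetical example of what a small team's CLAUDE.md might contain. Every specific in it (the endpoint, the conventions) is invented for illustration; the structure is the point.

```markdown
# CLAUDE.md (billing team)

## Conventions
- All monetary amounts are integer cents, never floats.
- Discounts apply to the original subtotal; never compound them sequentially.

## Common mistakes
- Do not call the legacy `/v1/invoice` endpoint; it ignores discount codes.

## How we work
- Every bug ticket needs reproduction steps and the affected file/function.
- Plans of action go in the technical space in Notion before code is written.
```

Note that the second convention would have prevented the bug from the opening scenario. That's what "knowledge that compounds" means in practice: the mistake gets written down once and every future AI session inherits it.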

What Happens to the People


This is the section where most AI articles get dishonest. They either promise "nobody will lose their job" (not credible) or predict mass unemployment (not supported by current data). I'm not going to do either.

A Harvard Business School study published in March 2026, analyzing nearly all U.S. job postings from 2019 through early 2025, found that automation-heavy roles shrank 13% after ChatGPT's launch, while augmentation-friendly roles grew 20%. The labor market isn't collapsing. It's shifting.

Anthropic's own Economic Index from January 2026 shows 52% of work-related Claude conversations are augmented (the user learns, iterates, gets feedback) versus 45% that are fully automated. MIT Sloan research found that AI adoption correlates with higher revenue, profits, and headcount at the company level. At Davos 2026, tech executives argued that AI excels at tasks but can't outperform humans at "entire jobs."

But it would be dishonest to leave it there. Anthropic's CEO Dario Amodei warned that AI could displace half of all entry-level white-collar jobs in the next 1-5 years. Junior developer hiring has been declining steadily since 2022. The bottom rung of the career ladder is getting pulled out right as AI makes it harder for juniors to learn by doing.

And the identity crisis is real. One engineering director described his team of 8 developers, who had worked together for over 9 years, experiencing an existential crisis after heavy AI adoption. Engineers who took pride in writing clean, tight, maintainable code were asking: "What is my job exactly?" Some of them will leave the field entirely, not because they can't do the work, but because the work no longer feels like theirs.

The craft is changing. That's not nothing. Pretending it's painless would be insulting to the people living through it.

But here's where I think the cross-team framing actually helps. The individual productivity model is what creates the most pain: one person doing the work of ten, drowning in review tasks, losing the craft that gave them meaning. The organizational model distributes that differently. People keep their roles. AI handles the information relay between them. The support agent still talks to customers. The developer still writes and reviews code. The salesperson still builds relationships. What changes is that information stops getting trapped in silos and starts flowing to where it's needed.

The teams that are navigating this well are the ones where engineers moved into more architectural and product ownership responsibilities. They spend more time writing, thinking, and discussing, and less time doing the rote implementation that AI now handles. That's augmentation. Not "do more with less." Do better with support.

The Hard Parts


If this all sounds too clean, it's because I haven't talked about the hard parts yet. And there are real ones.

Accountability. When AI agents start creating tickets, drafting pull requests, and preparing documents across teams, who's responsible when something goes wrong? When an AI agent touches a production system, the question isn't whether the code works. It's whether anyone actually reviewed the decision to run it.

If AI creates a Jira ticket that leads to a deploy that breaks production, who owns that? The answer has to involve human-in-the-loop checkpoints for high-stakes actions, clear audit trails, and defined ownership. In the cross-team scenario I described, that means defining who approves what at each stage. The support agent signs off on the ticket. The tech lead approves the code change. The salesperson decides whether to adjust the demo. AI handles the information relay, but a human owns each decision point.
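One way to make "human owns each decision point" enforceable rather than aspirational is to gate high-stakes actions in code. A minimal sketch, with assumed names throughout: the action list, `request_human_approval`, and the audit store are all stand-ins for whatever your real approval UI and logging look like.

```python
# Actions that must never execute without a named human approver.
# This list is illustrative; each company defines its own.
HIGH_STAKES = {"merge_pr", "deploy", "close_customer_ticket"}

def execute_action(action, payload, request_human_approval, audit_log):
    """Gate high-stakes agent actions behind a human, and record everything.

    request_human_approval: callable returning the approver's name,
    or None if rejected (a stand-in for a real approval UI).
    audit_log: append-only list standing in for a real audit store.
    Returns True if the action may proceed.
    """
    if action in HIGH_STAKES:
        approver = request_human_approval(action, payload)
        if approver is None:
            audit_log.append({"action": action, "status": "rejected"})
            return False
        audit_log.append({"action": action, "status": "approved",
                          "by": approver})
    else:
        # Low-stakes actions proceed, but still leave a trail.
        audit_log.append({"action": action, "status": "auto"})
    return True
```

The audit trail matters as much as the gate: when something breaks, "who approved this and when" should be a query, not an archaeology project.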

The service desk trap. AI service desks report impressive-looking numbers: high ticket deflection, halved resolution times. The dashboards look great. But employees are often more frustrated, because the chatbot resolves a ticket in 3 seconds by sending a knowledge base article the employee already tried. Ticket closed, metric recorded, and the laptop still won't connect to the VPN.

This is what happens when AI optimizes a single team's metrics without cross-organizational awareness. The service desk "wins" on its dashboard while everyone else loses productivity. It's the silo problem replicated inside the AI implementation itself. The fix: measure outcomes across teams, not just within them.

Complexity. Connecting every team through shared AI infrastructure sounds like a massive project. It doesn't have to be, if you're incremental about it. MCP is a standardized protocol with dozens of pre-built connectors. You don't have to connect everything on day one. But let's not pretend the maintenance is zero. Managing MCP server configurations, handling permissions across teams, dealing with edge cases, that's real ongoing work.

Stale information. If AI treats shared information as ground truth and that information is outdated, errors propagate faster and further than they would through human communication. A wrong assumption in one team's documentation becomes "fact" across the whole organization because AI serves it with confidence. Obsidian Security data shows AI agents are routinely granted 10x more access than they actually use, and a proof-of-concept attack demonstrated how a malicious support ticket could use an AI agent to exfiltrate internal data through prompt injection. These are real risks that need real governance.

None of these problems are unsolvable. But they're real, and any company that ignores them will get burned.

How to Actually Get There


So how do you go from "we use ChatGPT sometimes" to the scenario I described in the opening? Having thought about this a lot, both in my own work and in conversations with companies at various stages, here's what I think the path looks like.

Find your champions and your resistors. You need both. Champions are the people who will experiment, build the first integrations, and show others what's possible. But you also need the people who are most skeptical. BCG research found that teams with managers who actively use AI themselves see up to 4x higher adoption rates. And the biggest skeptic who gets convinced? They become your most credible internal advocate, because they surface the real concerns about reliability, security, and workflow disruption that make the implementation actually stick.

Start with a pain point, not a vision. Do not try to "transform the company with AI." Find one specific, measurable problem where information doesn't flow between teams. Ticket handoff from support to engineering. Feature announcements from product to marketing. Status updates from engineering to leadership. Build one integration. Measure the result. Expand from there. Go from team to team and ask them what they actually need to succeed. The answer is almost never "an AI strategy." It's usually something concrete like "I need to know when engineering ships a fix for the bug my customer reported."

Unify the stack. Get teams on the same AI platform. One set of MCP connections. One shared skills repository. This is harder than it sounds (platform decisions involve procurement, IT, and leadership buy-in), but it's the only path that avoids rebuilding silos inside the AI layer. Cowork already supports private plugin marketplaces and org-wide skill provisioning as of February 2026. The alternative is every team picking their own tools, and you're right back where you started.

Build organizational memory incrementally. Start with CLAUDE.md files in each team's repos. Document conventions, standards, common questions. Add shared MCP connections one at a time. Jira first. Then Notion. Then Slack. You don't need to wire up the whole company on day one.

Invest in organizational readiness, not just technology. According to RAND, by some estimates more than 80% of AI projects fail, roughly twice the rate of IT projects that don't involve AI. That's not a technology problem. It's an organizational problem: data quality, process clarity, team alignment. Enterprises with a formal AI strategy report 80% success rates versus 37% for those without one.

I should be transparent here: this is exactly what my company BrainBlend AI does. We help organizations do the assessment, the infrastructure setup, and the team alignment that makes the technology actually stick. So yes, I have skin in this game. But I'm writing this article because I genuinely believe this is where enterprise AI needs to go, not because it's a sales pitch. The scenario in the opening is what gets me excited about this work.

Give teams space to figure it out. The companies where AI adoption actually works give teams freedom to experiment. The ones that mandate daily AI usage tied to performance metrics end up with developers prompting nonsense for five minutes to hit quota. Freedom and trust produce results. Mandates produce theater.

Most of these steps require organizational authority. If you're not in a position to drive that decision yourself, the most valuable move is to document one specific cross-team information failure, concretely, with data, and bring it to someone who can.

The Company That Knows What It Knows

The promise here is not that AI will run your company. It's that your company will finally know what it knows.

As of early 2026, critical information is still locked in people's heads, buried in Slack threads, scattered across tools that don't talk to each other. Someone in support knows about a bug that sales doesn't. Someone in engineering knows about a limitation that product hasn't accounted for. Someone in HR knows about a team capacity problem that the project manager can't see.

A shared AI layer doesn't replace any of those people. It makes sure their knowledge reaches the people who need it, when they need it.

The technology is not the bottleneck. It hasn't been for a while. The bottleneck is the same one it has always been: organizational willingness to change how information flows. That's hard. It requires finding champions and convincing skeptics and starting small and building trust incrementally.

But the companies that figure it out will operate at a speed that siloed organizations simply cannot match. Not because their people are faster, but because their knowledge actually flows.

That's what I'm building toward. And I think it's worth building toward.


Kenny Vaneetvelde is the creator of Atomic Agents, an open-source multi-agent AI framework, and co-founder of BrainBlend AI, where the focus is on building the cross-team AI systems this article describes. Get in touch if you want to talk about what that looks like for your organization.