The Talent Equation
In the spring of 2023, somewhere between the euphoric hype around ChatGPT and the quiet desperation of enterprises that couldn't hire fast enough, a small team in Berlin began operating on a premise so counterintuitive it bordered on contrarian: that the scarcest resource in the artificial intelligence revolution was not compute, not data, not even capital — but the specific human beings who could make any of it work. Not software engineers in the general sense. Not "data scientists" with Coursera certificates. The truly rare ones. The researchers who had published at NeurIPS or ICML, the machine learning engineers who had shipped production models at scale, the PhDs who understood transformer architectures not as abstractions but as systems they had personally broken and rebuilt. The people who could look at a company's data infrastructure and, within weeks, tell you whether your AI ambitions were viable or delusional.
Brainpool AI — the company built to aggregate, deploy, and monetize this human scarcity — is a study in the economics of expertise at the frontier. It is not a staffing agency, though it places people. It is not a consulting firm, though it sells advisory hours. It is not a marketplace, though it matches supply with demand. It is something more precisely calibrated to the particular distortions of the AI talent market: a curated network of elite AI experts, deployed on-demand to enterprises that need specialized intelligence they cannot build internally, wrapped in a platform layer that makes the matchmaking repeatable and the quality verifiable.
The bet is elegant in its specificity. While every major tech company hoards AI talent through seven-figure compensation packages, and while every enterprise outside the top twenty scrambles to compete, Brainpool positioned itself as the intermediary — the entity that could access the world's best AI minds without requiring them to leave their academic posts, their research labs, or their existing companies. A fractional model for the most expensive talent category on earth.
By the Numbers
Brainpool AI at a Glance
500+: vetted AI experts in the network
2019: year founded in Berlin
200+: enterprise clients served
48 hrs: target expert-matching turnaround
30+: countries represented in the expert network
87%: repeat engagement rate (estimated)
€5M+: estimated cumulative funding raised
The numbers are still early-stage. The revenue is not yet in the hundreds of millions. The headcount is measured in dozens, not thousands. But the structural insight underneath Brainpool — that AI expertise is a commodity-in-name-only, that the distance between the 90th percentile ML engineer and the 99th percentile is the distance between a prototype and a production system — is the kind of insight that either stays niche forever or becomes the basis for an enormous platform business. The question is which.
Berlin, Before the Gold Rush
The founding story of Brainpool begins not with a single eureka moment but with accumulated frustration. Stefan Perkovic and his co-founders — operating in the Berlin startup ecosystem in the late 2010s — watched the AI talent crisis unfold in real time. German industry, the Mittelstand and the DAX corporations alike, was pouring billions into digitalization strategies, and every strategy deck ended with the same bottleneck: "We need AI talent we cannot find."
Perkovic had the profile of someone who understood both sides of the equation. A background that straddled technology and business development, with the particular combination of technical fluency and commercial instinct that defines the best marketplace founders. He wasn't an AI researcher himself — and this matters, because the company he built would depend on the credibility of people who were. The insight was positional, not technical: he saw the gap between supply and demand not from the supply side (the researchers) or the demand side (the enterprises) but from the clearing mechanism between them. There was no efficient clearing mechanism. That was the entire opportunity.
Brainpool was incorporated in 2019, which in the AI talent timeline placed it squarely in the pre-ChatGPT era — a period when demand for AI expertise was already intense but hadn't yet become the defining corporate obsession it would become after November 2022. The timing was simultaneously early and prescient. Early, because the total addressable market for on-demand AI expertise was still measurable in the low billions. Prescient, because the explosion of generative AI would turn that market into something vastly larger, and Brainpool would already have the network in place when the wave hit.
The initial model was deceptively simple: recruit the best AI minds in the world — PhDs, postdocs, senior researchers at top labs — and offer them project-based work with enterprises that needed their specific expertise. Not full-time employment. Not relocation. Not the golden handcuffs of a Google or Meta compensation package. Instead, Brainpool offered flexibility, intellectual variety, and income supplementation — a proposition that turned out to resonate powerfully with a class of experts who valued autonomy above nearly everything else.
The world's best AI talent doesn't want to be employed by a corporation. They want to work on interesting problems. Our job is to make that possible at scale.
— Stefan Perkovic, CEO of Brainpool AI
The Shape of Scarcity
To understand why Brainpool exists, you have to understand the specific topology of the AI talent market — not the generic "there's a shortage of engineers" narrative that has been true in technology for three decades, but the particular, severe, and structurally unique shortage at the frontier of machine learning.
The numbers are stark. By most credible estimates, there are fewer than 50,000 people on earth who possess the combination of theoretical depth and practical engineering skill required to design, train, and deploy large-scale AI systems. Of those, perhaps 10,000 are at the true frontier — the researchers and engineers who publish in top venues, who have led teams that built production systems serving millions of users, who understand not just the math but the infrastructure, the failure modes, the subtle art of getting a model to actually work in the messy real world.
This population is concentrated overwhelmingly in a handful of institutions: Google DeepMind, OpenAI, Meta FAIR, Microsoft Research, Anthropic, and perhaps twenty top universities (Stanford, MIT, CMU, ETH Zurich, Tsinghua, University of Toronto, University College London). The geographic clustering is intense — the San Francisco Bay Area alone houses a disproportionate share, with London, Beijing, and a handful of other cities accounting for most of the rest.
For a German automotive manufacturer trying to build autonomous driving capabilities, or a Swiss pharmaceutical company trying to apply machine learning to drug discovery, or a Nordic energy company trying to optimize grid operations with AI — the talent simply isn't available through conventional hiring. The compensation arms race has pushed senior ML researcher salaries above $500,000 in base pay at top companies, with total compensation packages regularly exceeding $1 million. For companies outside the tech oligopoly, competing on these terms is not just expensive but structurally impossible.
This is the gap Brainpool occupies. Not staffing in the traditional sense — which implies fungible labor — but something closer to what McKinsey does for management talent or what top-tier investment banks do for capital markets expertise: a mechanism for accessing scarce, high-value human capital on terms that work for both sides of the equation.
The Curation Machine
The core of Brainpool's value proposition — and the thing that separates it from the dozens of AI freelancing platforms that emerged in the same period — is curation. Not curation in the vague startup-marketing sense of "we pick good people," but a rigorous, multi-stage vetting process that is designed to solve the specific information asymmetry problem that plagues the AI talent market.
The problem is this: when an enterprise needs AI expertise, the people making the purchasing decision — typically a CTO, VP of Engineering, or Head of Data Science — often lack the technical depth to evaluate whether a candidate truly possesses frontier-level capability. The difference between someone who can fine-tune a pre-trained model and someone who can architect a novel training pipeline from scratch is enormous, but it is invisible to a non-specialist. Credentials help but are insufficient — a PhD from a top program is necessary but not remotely sufficient. Publication records help but can be gamed or misleading. The only reliable signal is peer evaluation: does this person's work earn the respect of others operating at the same level?
Brainpool built its vetting process around this insight. Every expert in the network is evaluated by other experts already in the network — a peer-review mechanism that mirrors academic review processes but is oriented toward practical capability rather than theoretical contribution. The evaluation considers not just what someone knows but what they've shipped: production systems, deployed models, real-world applications that survived contact with messy data and organizational complexity.
The acceptance rate — reportedly below 10% of applicants — is designed to create a quality signal that enterprises can trust. When Brainpool matches a client with an expert, the implicit guarantee is that this person has been vetted by people who actually understand what frontier AI work looks like. This is the moat in miniature: the network itself becomes the credentialing mechanism, and the credentialing mechanism becomes the reason both sides — experts and enterprises — choose Brainpool over alternatives.
How Brainpool screens AI experts
Stage 1 (Application review): credentials, publication record, and professional history screened by the internal team
Stage 2 (Technical assessment): domain-specific evaluation of practical capabilities, not just theoretical knowledge
Stage 3 (Peer review): existing network members evaluate candidates within their specialization
Stage 4 (Reference validation): track record of real-world deployments and outcomes verified
Stage 5 (Onboarding): matched to initial engagements, with quality feedback loops informing future matching
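The arithmetic of a funnel like this is worth making explicit: only the reported sub-10% overall acceptance rate is public, but a quick sketch shows how even generous per-stage pass rates compound to that figure. The individual rates below are illustrative assumptions, not disclosed numbers.

```python
# Illustrative vetting funnel. The per-stage pass rates are assumptions;
# only the sub-10% compounded acceptance rate is reported.
from math import prod

stage_pass_rates = {
    "application_review": 0.40,    # assumption
    "technical_assessment": 0.50,  # assumption
    "peer_review": 0.55,           # assumption
    "reference_validation": 0.85,  # assumption
}

# Sequential filters multiply: candidates must clear every stage.
overall = prod(stage_pass_rates.values())
print(f"Compounded acceptance rate: {overall:.1%}")
```

The point of the sketch is that no single stage needs to be brutal; four moderately selective stages in sequence are enough to land below one acceptance in ten.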
The result is a network that grows slowly by design. Five hundred experts is not a large number — it is deliberately small. The density of quality in that network is the product, and diluting it would destroy the very thing that makes Brainpool work.
The Demand Side: Enterprises in the Fog
If the supply side of Brainpool's equation is elite AI talent, the demand side is something more amorphous and, in many ways, more interesting: enterprises that know they need AI but are profoundly uncertain about what they need, how much they need, and whether their existing data infrastructure can support what they think they want.
This uncertainty is not a failure of corporate intelligence. It is a rational response to a technology whose capabilities are evolving faster than any organization's ability to develop institutional knowledge about it. A Fortune 500 company's board reads about generative AI in the Financial Times, the CEO announces an "AI-first" strategy, the CTO is tasked with execution, and suddenly the organization needs to answer questions that require expertise it does not possess and cannot acquire through conventional means.
The questions are specific and consequential. Should we fine-tune an open-source large language model or build on top of a commercial API? What is the realistic accuracy ceiling for our computer vision use case given the quality of our labeled data? Can we actually deploy this recommendation engine without violating GDPR? Is the vendor telling us the truth about what their platform can do, or are we about to spend $20 million on infrastructure that won't deliver?
These are not questions a generalist consultant can answer. They require someone who has personally built the thing being discussed — who knows, from experience, where the bodies are buried in production ML systems. And they require that person to be available for weeks or months, not permanently, because the enterprise's need is project-specific: build the initial system, transfer knowledge to the internal team, and move on.
Brainpool's engagement model maps precisely to this pattern. A typical engagement begins with a scoping phase — Brainpool's team works with the client to define the problem precisely, often reformulating it in the process (one of the most valuable services: telling a company that the problem they think they have is not the problem they actually have). Then, from the network, one to five experts are matched based on domain specialization, technical requirements, and practical fit. The engagement runs for weeks to months, with Brainpool managing the relationship, the deliverables, and the quality assurance.
The economics are compelling on both sides. The enterprise gets access to talent it could never hire full-time, at a fraction of the cost of a permanent senior ML hire (no equity, no benefits, no multi-year commitment). The expert gets interesting work at premium hourly rates, with the flexibility to maintain their primary position — a professorship, a research role, an existing company — while earning supplemental income on problems they find intellectually engaging.
The Post-ChatGPT Inflection
And then, on November 30, 2022, everything changed.
The release of ChatGPT didn't create the AI talent shortage. It detonated it. Within months, every company in the world — not just tech companies, not just enterprises with existing data science teams, but every company — decided it needed an AI strategy. The demand for AI expertise went from intense to insane. McKinsey estimated that by 2023, AI-related job postings had increased by over 300% compared to pre-ChatGPT levels. Salaries spiked further. The already-thin pool of frontier talent was stretched to its absolute limit.
For Brainpool, this was the moment of maximum leverage — and maximum risk. Maximum leverage because the company had spent three years building exactly the network that the world now desperately needed. The 500+ vetted experts, the matching infrastructure, the client relationships, the reputation for quality — all of this was already in place when the wave hit. Brainpool didn't have to start building; it had to start scaling.
Maximum risk because the same forces that created overwhelming demand also created overwhelming competition. Every consulting firm — McKinsey, BCG, Bain, Accenture — launched AI practices. Every staffing firm pivoted to "AI talent solutions." New platforms emerged weekly: Toptal deepened its AI bench, Andela expanded beyond Africa into global AI talent, dozens of niche competitors appeared in the generative AI space specifically. The question was whether Brainpool's curation advantage — its carefully built network of peer-reviewed experts — would hold against the sheer volume of money being thrown at the problem by larger, better-funded competitors.
The early evidence suggested it would, at least for the specific market segment Brainpool targeted. The consulting firms could offer strategy decks but not hands-on technical execution. The staffing firms could provide bodies but not quality assurance. The generic freelancing platforms could offer scale but not the peer-reviewed credentialing that gave enterprise buyers confidence. Brainpool sat in a specific niche — elite, vetted, fractional AI expertise deployed to complex enterprise problems — that was hard to replicate without years of network building.
We don't compete with staffing agencies. We compete with the idea that you can solve AI problems without the people who actually understand AI.
— Brainpool company communications, 2023
The European Advantage
Brainpool's Berlin origins matter more than geography might suggest. The company is positioned at the intersection of two structural advantages that are specific to the European market — and that create defensibility against American competitors in ways that are not immediately obvious.
The first is regulatory. The European Union's AI Act, which entered into force in stages beginning in 2024, created an entirely new category of enterprise demand: compliance-oriented AI expertise. Companies operating in Europe — or selling to European customers — suddenly needed to understand not just how to build AI systems but how to build AI systems that could survive regulatory scrutiny. Risk assessment, bias auditing, transparency requirements, documentation standards — the AI Act created a compliance surface area that required deep technical knowledge married to regulatory fluency. Brainpool's European roots and its network of European AI researchers gave it a natural advantage in serving this demand.
The second is cultural. European enterprises — particularly in Germany, the Nordics, and Switzerland — tend toward more cautious, evidence-based technology adoption than their American counterparts. They are less likely to throw money at a vendor based on a demo and more likely to want an independent expert to evaluate whether a proposed solution will actually work. This purchasing behavior favors Brainpool's model — the trusted expert who provides an honest assessment — over the vendor-driven sales motion that dominates the American market.
The combination creates a beachhead that is genuinely defensible. An American AI staffing firm trying to serve a German automotive OEM faces not just geographic friction but regulatory ignorance, cultural mismatch, and the absence of the local academic relationships that Brainpool has spent years building. These are not insurmountable barriers, but they are real, and they buy time.
Platform Ambitions and the Matching Problem
The existential question for Brainpool — the one that determines whether it becomes a large platform company or remains a profitable niche services business — is whether it can transform the art of expert matching into the science of expert matching. Whether the institutional knowledge currently held by a small team of humans who understand both the supply side (what each expert is truly good at) and the demand side (what each enterprise actually needs) can be encoded into a platform layer that enables matching at scale without degrading quality.
This is the classic marketplace transition challenge, and Brainpool's leadership appears to understand it clearly. The early-stage company relied on high-touch, relationship-driven matching — the team knew the experts personally, understood their capabilities in nuanced ways that no database could capture, and could make judgment calls about fit that went beyond keyword matching. This approach produces excellent outcomes at small scale but cannot support ten thousand clients and five thousand experts.
The platform layer being built — details of which are sparse, as befits a company at this stage — is designed to solve this problem without sacrificing the quality signal. The key insight is that matching AI experts to enterprise problems is itself a problem amenable to AI-assisted solutions, though the irony of this is not lost on anyone involved. Structured data about experts (publications, domain specializations, industry experience, language capabilities, past engagement outcomes) combined with structured data about client needs (technical requirements, timeline, budget, industry vertical) can be used to generate candidate matches that human account managers then refine.
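A minimal sketch of that structured-matching idea: score each expert against a client brief on skill overlap, industry fit, and past-outcome rating, then hand a shortlist to human account managers. Every field name, weight, and scoring rule here is a hypothetical stand-in; the actual platform internals are not public.

```python
from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    skills: set[str]        # e.g. {"time-series", "edge-deployment"}
    industries: set[str]
    outcome_rating: float   # 0..1, derived from past engagement feedback

@dataclass
class Brief:
    required_skills: set[str]
    industry: str

def match_score(expert: Expert, brief: Brief) -> float:
    """Blend skill overlap, industry fit, and track record.
    The 0.6/0.2/0.2 weights are illustrative assumptions."""
    if not brief.required_skills:
        return 0.0
    skill_fit = len(expert.skills & brief.required_skills) / len(brief.required_skills)
    industry_fit = 1.0 if brief.industry in expert.industries else 0.0
    return 0.6 * skill_fit + 0.2 * industry_fit + 0.2 * expert.outcome_rating

def shortlist(experts: list[Expert], brief: Brief, k: int = 3) -> list[Expert]:
    """Top-k candidate matches for a human account manager to refine."""
    return sorted(experts, key=lambda e: match_score(e, brief), reverse=True)[:k]
```

The design choice worth noting is the last line of the pipeline: the algorithm proposes, the human disposes, which is exactly the hybrid the passage describes.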
The risk is the one that haunts every marketplace transitioning from curated to platform: that the platform layer introduces enough noise to dilute the quality signal that attracted both sides in the first place. If enterprises start receiving expert matches that are "pretty good" instead of "exactly right," the premium pricing model breaks down, and Brainpool becomes just another staffing platform competing on price and speed rather than quality.
The Talent Wants to Be Free
Perhaps the most underappreciated element of Brainpool's model is the supply-side dynamic: why the experts participate. The assumption might be that it's simply about money — and money matters, certainly. But the deeper driver is something more specific to the peculiar sociology of elite AI researchers.
These are people who, almost universally, chose academic or research careers because they valued intellectual freedom over compensation. They took the professorship at ETH Zurich instead of the offer from Google not because they couldn't get the Google offer but because they wanted to choose their own problems. They stayed in research labs because the work was more interesting than production engineering, even when the compensation differential was enormous.
Brainpool's model respects this preference structure. It doesn't ask experts to leave their primary positions. It doesn't impose corporate structure on their work. It offers them, instead, a curated stream of interesting problems from industry — problems that are often more practically grounded than what they encounter in their academic work, that expose them to real-world data at scale, and that provide both financial compensation and the satisfaction of seeing their expertise applied to consequential systems.
The result is a supply-side loyalty that is difficult for competitors to replicate. An expert who has worked with Brainpool on three engagements, who has been well-matched each time, who has been paid promptly and treated professionally, is unlikely to switch to a competitor platform offering marginally better rates. The switching cost is not financial — it's informational. Brainpool knows what they're good at, and that knowledge means better matches, which means more interesting work, which means continued participation. This is a network effect operating on the supply side, and it compounds.
I've turned down full-time offers from two major tech companies. With Brainpool, I get the interesting industry problems without giving up my research. That's the whole point.
— Anonymous Brainpool network member, AI researcher at a European university
The Competitive Landscape: Navigating Between Giants
Brainpool operates in a competitive landscape that is simultaneously crowded and stratified. Understanding where it sits requires mapping the market by quality tier rather than business model.
At the top of the market — the strategy layer — sit McKinsey, BCG, and the other traditional consulting firms, all of which have built substantial AI practices. McKinsey's QuantumBlack, BCG's GAMMA, and Bain's AI capabilities serve the C-suite with high-level strategy and organizational transformation. These firms command premium fees ($500–$1,000+ per hour for senior partners), have unmatched corporate relationships, and carry the imprimatur of institutional credibility. What they lack — and what creates the opening for Brainpool — is deep technical execution capability. A McKinsey team can tell you what AI strategy to pursue. They are less equipped to tell you how to make it work at the infrastructure level, and they almost certainly cannot build the system for you.
At the bottom of the market — the commodity layer — sit the large staffing firms and general-purpose freelancing platforms. Upwork, Fiverr, and their equivalents offer vast pools of self-described AI talent at relatively low rates. The problem is quality variance: the signal-to-noise ratio is catastrophic. For an enterprise with an urgent, high-stakes AI problem, spending weeks evaluating candidates on a commodity platform is not a viable option.
In between — the execution layer — is where Brainpool competes, alongside a small number of direct competitors. Toptal, which positions itself as the "top 3% of freelance talent," is the closest analogue in model design, though Toptal is a generalist platform with an AI vertical rather than an AI-native network. Expert.ai and similar companies offer AI solutions but are primarily product companies, not talent networks. Various boutique AI consulting firms — Element AI (acquired by ServiceNow in 2020), Faculty AI in London, Alexander Thamm in Munich — compete for the same enterprise budgets but with different models (full-time employees rather than fractional experts).
Competitive Positioning
Where Brainpool sits in the AI talent market
| Tier | Players | Strengths | Weakness vs. Brainpool |
|---|---|---|---|
| Strategy | McKinsey, BCG, Bain | C-suite access, brand | Lack deep technical execution |
| Platform (general) | Toptal, Upwork | Scale, speed | Quality variance, no AI-native curation |
| Boutique consulting | Faculty AI, Alexander Thamm | Deep expertise, team model | Less flexible, higher cost, limited scale |
| Enterprise AI vendors | Palantir, C3.ai, DataRobot | Productized platforms, enterprise scale | Product-led rather than vendor-neutral advice |
The strategic question is whether the curated network model can achieve sufficient scale to matter, or whether the market consolidates around either the platform giants (Toptal adding AI depth) or the consulting firms (McKinsey building technical execution capability) — squeezing the middle tier from both sides.
What Money Can't Buy
Brainpool's fundraising history is modest by the standards of the current venture climate — total disclosed funding in the low single-digit millions of euros across seed and early-stage rounds, with investors drawn primarily from the European venture ecosystem. This capital efficiency is both a virtue and a constraint.
The virtue is that the business model is inherently capital-light. Brainpool doesn't need to hire the experts — they're in the network, paid per engagement, with Brainpool taking a margin on the match. There's no inventory cost, no warehouse, no manufacturing. The primary cost structure is the matching infrastructure (technology and people), sales and marketing, and network development. This means the business can reach profitability at relatively modest revenue levels compared to companies that need to build physical infrastructure or maintain large full-time workforces.
The constraint is that in a market where competitors are raising hundreds of millions — Andela alone has raised over $380 million — Brainpool's limited capital restricts its ability to invest aggressively in brand building, international expansion, and platform technology. The question is whether quality of network can outcompete quantity of capital. History suggests it can, for a while — niche marketplaces with genuine quality signals often outperform well-funded generalist competitors in the early and middle stages of market development. But eventually, capital advantages compound, and the larger players develop their own quality mechanisms.
The fundraising strategy appears deliberate rather than constrained — the founders seem to prefer retaining control and growing at a pace the business model can sustain organically, supplemented by targeted venture capital rather than growth-at-all-costs financing. In the European startup ecosystem, this is more common than in Silicon Valley, and it produces different outcomes: slower growth, higher margins, greater founder control, and less vulnerability to the whims of late-stage venture investors.
The Knowledge Compound
There's a compounding effect in Brainpool's model that is easy to miss because it operates at the level of institutional knowledge rather than at the level of financial metrics.
Every engagement Brainpool facilitates generates data — not just the transactional data of hours billed and invoices paid, but the much richer data of what worked. Which expert was the right fit for which type of problem? Which domain specializations translated across industries and which didn't? What are the actual (as opposed to theoretical) capabilities of a particular expert? Where do enterprise clients consistently misjudge their own needs? What types of AI projects succeed and which fail, and why?
This institutional knowledge — accumulated across hundreds of engagements over multiple years — is extraordinarily difficult to replicate. A new competitor entering the market can recruit experts and build a platform, but they cannot recreate the pattern-matching intelligence that comes from having facilitated hundreds of expert-client relationships and observed the outcomes. This knowledge informs better matching, which produces better outcomes, which generates more data, which informs even better matching. It is a flywheel that spins on learning rather than on capital.
The question is whether Brainpool can formalize this knowledge — encoding it into algorithms, processes, and institutional memory that survive beyond the specific individuals who currently hold it — before the knowledge becomes too dependent on a small team's tacit understanding.
The Fracture Lines
No business is without its fractures, and Brainpool's are visible to anyone who looks closely enough.
The first is concentration risk on the supply side. A network of 500 experts is a specific kind of asset — high-quality but narrow. If a critical mass of top experts leaves the network — recruited away by a well-funded competitor, poached into full-time roles by tech giants, or simply tired of the model — the quality signal degrades rapidly. The network's value is non-linear: losing the top 10% of experts disproportionately damages the brand, because those are the experts who anchor the credibility of the entire network.
The second is the inherent tension between quality and scale. Every marketplace in history has confronted this trade-off, and most resolve it by relaxing quality standards as growth pressure mounts. Brainpool's below-10% acceptance rate is a strategic choice, but it is also a growth constraint. If the market demands ten thousand experts and Brainpool has five hundred, either the company relaxes its standards to meet demand (destroying the moat) or it turns away revenue (limiting the growth story that attracts venture capital).
The third is the AI-eats-itself problem. As AI tools become more capable, some of the tasks currently performed by human AI experts will become automatable. AI-assisted code generation, automated model selection, no-code ML platforms — all of these reduce the scope of problems that require a human expert. Brainpool's long-term viability depends on the assertion that the hardest, most valuable AI problems will always require human expertise. This is probably true for the next decade. Whether it remains true on a longer time horizon is genuinely uncertain.
The fourth — and perhaps most subtle — is the brand awareness deficit. In a market where enterprise procurement decisions are influenced heavily by brand trust, Brainpool is still unknown to most potential clients. A CTO in Houston who needs AI expertise is far more likely to call McKinsey, Accenture, or Google Cloud than to search for a Berlin-based AI expert network they've never heard of. Overcoming this requires either massive marketing investment (which conflicts with capital efficiency) or viral word-of-mouth growth (which is slow and unpredictable).
The Room Where It Happens
What does a Brainpool engagement actually look like in practice? The specifics matter, because the granular mechanics of service delivery are where the company's value proposition either holds or breaks.
Consider a composite example drawn from publicly available case descriptions. A mid-sized European industrial manufacturer — revenues in the €2–5 billion range, thousands of employees, significant physical infrastructure — decides it wants to implement predictive maintenance using machine learning. The company has data: years of sensor readings from factory equipment, maintenance logs, failure records. It has an IT department capable of managing databases and running standard analytics. What it lacks is anyone who has actually built a predictive maintenance ML system and deployed it in a production environment.
The company contacts Brainpool. Within 48 hours — the target turnaround time for initial matching — Brainpool proposes a shortlist of two or three experts from its network, each with directly relevant experience: say, one who has built predictive maintenance systems in a similar industrial context, one who specializes in time-series analysis at scale, and one who has deployed ML models into edge computing environments (necessary because factory floors often lack reliable cloud connectivity).
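Brainpool's actual matching system is not public, so the mechanics above can only be sketched under assumptions. A minimal illustrative heuristic — invented for this example, not drawn from the company — might rank network members by overlap between their proven domains and the client brief, with a small bonus for production deployment experience:

```python
# Illustrative sketch of expert-to-brief matching by tag overlap.
# Brainpool's real matching system is not public; the expert profiles,
# tags, and scoring weights below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Expert:
    name: str
    tags: set = field(default_factory=set)  # domains of proven experience
    deployments: int = 0                    # production systems shipped

def score(expert: Expert, brief_tags: set) -> float:
    """Rank by overlap with the client brief; experts with no
    overlapping domain score zero regardless of track record."""
    overlap = len(expert.tags & brief_tags)
    if overlap == 0:
        return 0.0
    return overlap + 0.1 * min(expert.deployments, 5)

def shortlist(experts, brief_tags, k=3):
    """Return the top-k candidates with a nonzero match score."""
    ranked = sorted(experts, key=lambda e: score(e, brief_tags), reverse=True)
    return [e.name for e in ranked[:k] if score(e, brief_tags) > 0]

brief = {"predictive-maintenance", "time-series", "edge-deployment"}
network = [
    Expert("A", {"predictive-maintenance", "time-series"}, deployments=4),
    Expert("B", {"time-series", "forecasting"}, deployments=2),
    Expert("C", {"edge-deployment", "embedded-ml"}, deployments=6),
    Expert("D", {"nlp", "transformers"}, deployments=3),
]
print(shortlist(network, brief))  # ['A', 'C', 'B'] — D has no overlap
```

The real value, of course, sits in what a heuristic like this cannot capture: the referral-vetted judgment about who is genuinely at the frontier of a subfield.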
The client selects one or two experts. The engagement begins with a scoping phase — typically one to two weeks — during which the expert evaluates the client's data quality, infrastructure, and organizational readiness. This phase frequently produces the most valuable output of the entire engagement: an honest assessment of what is and isn't possible given the client's actual (as opposed to imagined) data and capabilities. In many cases, the expert reformulates the problem entirely — "you don't need predictive maintenance, you need better data collection" — saving the client months of misdirected investment.
If the project proceeds, the expert works on-site or remotely for weeks to months, building the initial system, training the internal team, and establishing the processes that will allow the company to maintain and improve the system after the engagement ends. The knowledge transfer is explicit and central — Brainpool's model depends on engagements ending, not on creating permanent dependency.
The margins are healthy. Brainpool reportedly charges a significant markup over what it pays the expert — perhaps 30–50% — while still offering the client a lower total cost than hiring a full-time senior ML engineer (once you account for salary, equity, benefits, recruiting fees, and the cost of a bad hire). The expert earns a premium hourly rate — well above consulting-firm associate rates, though below partner rates — while maintaining the flexibility and autonomy they value.
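The back-of-envelope economics can be made concrete. Every figure below is an illustrative assumption — rates, salary, and overhead percentages are invented for the example — except the 30–50% markup range, which comes from the reported model:

```python
# Back-of-envelope comparison of a fractional engagement vs. a full-time
# senior ML hire. All figures are illustrative assumptions; only the
# 30-50% markup range comes from the reported model.

def engagement_cost(expert_rate_eur: float, hours: float, markup: float) -> float:
    """Client pays the expert's hourly rate plus Brainpool's markup."""
    return expert_rate_eur * hours * (1 + markup)

def fulltime_cost(salary: float, benefits_pct: float, recruiting_fee_pct: float) -> float:
    """First-year loaded cost of a full-time hire (ignoring equity and
    the expected cost of a bad hire, both of which push this higher)."""
    return salary * (1 + benefits_pct + recruiting_fee_pct)

# Assumed: EUR 250/h expert rate, a six-month half-time engagement
# (~500 hours), 40% markup at the midpoint of the reported range.
fractional = engagement_cost(250, 500, 0.40)

# Assumed: EUR 140k salary, 30% benefits/overhead, 25% recruiting fee.
fte = fulltime_cost(140_000, 0.30, 0.25)

print(f"fractional: EUR {fractional:,.0f}")  # EUR 175,000
print(f"full-time:  EUR {fte:,.0f}")         # EUR 217,000
```

Under these assumptions the fractional engagement comes in below the loaded first-year cost of the hire — before counting the months of vacancy while recruiting, which is often the decisive factor.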
We spent six months trying to hire an ML engineer. Brainpool gave us access to someone better than anyone we interviewed, and they were working on our problem within a week.
— Brainpool client testimonial, publicly available case study
A Network of Networks
The most interesting structural feature of Brainpool's network is that it is self-reinforcing: it grows through the academic and professional social networks that the company did not create but has learned to leverage.
AI research is an intensely collaborative, networked discipline. The researchers in Brainpool's network know each other — they've co-authored papers, served on program committees together, advised each other's PhD students, worked at the same labs. When Brainpool needs to recruit a new expert in, say, reinforcement learning for robotics, the most effective recruiting channel is not a job board or a LinkedIn campaign. It is a phone call to an existing network member who says, "You should talk to my colleague at INRIA — she's the best person in Europe on this topic."
This referral-based growth mechanism has two crucial properties. First, it is cheap — essentially zero customer acquisition cost on the supply side. Second, it is quality-preserving — people at the frontier of a discipline know who else is at the frontier, and they tend not to refer people who would embarrass them. The network curates itself.
The limitation is that referral-based growth is inherently slow and produces a network that mirrors the existing social graph of AI research — which is geographically and institutionally concentrated in ways that may not match global demand. Brainpool's network is reportedly strong in European institutions and moderately strong in North American ones, but its depth in rapidly growing AI ecosystems — China, India, the Middle East — is less clear. If the company's growth strategy requires penetrating these markets, the existing referral networks may not be sufficient.
The Quiet Compound
In the end, Brainpool's story is not one of explosive, venture-backed hypergrowth. It is a story about patient accumulation — of expertise, relationships, institutional knowledge, and reputation — in a market that rewards exactly these qualities. The AI talent market is not a winner-take-all market in the way that consumer internet markets are. There is no single AI expert network that captures 70% market share the way Google captures search. The market is large enough, fragmented enough, and sufficiently dependent on trust and quality that multiple strong players can coexist.
Brainpool's position in this market is defined by a choice: the choice to be smaller and better rather than larger and mediocre. To vet rigorously even when it means turning away supply. To charge premiums even when it means losing price-sensitive clients. To grow through referrals even when it means growing slower than venture-backed competitors. These are not default choices — they are strategic decisions that define the company's identity and, ultimately, its ceiling.
The $500,000-a-year AI researcher at Google, working on problems chosen by someone else, inside a bureaucracy optimized for the company's priorities rather than their own intellectual curiosity — that person represents one model for how frontier talent can be deployed. Brainpool represents another: the model in which the best minds in the world work on the problems they find most interesting, for the organizations that most need them, on terms that respect their autonomy. Whether this model can scale to match the first model's economic output is the open question. But the researcher who, tonight, finishes a Brainpool engagement reformulating a logistics company's data architecture, then returns to her university lab to work on the theoretical problem that first drew her to the field — that person is the proof that something real is being built. Her participation is not for sale at any price. But it is available, for the right problem, through the right intermediary, if the matching is done with care.
On the wall of Brainpool's Berlin office, there is reportedly a whiteboard tracking the number of PhD-level experts in the network who hold positions at the top fifty AI research institutions in the world. The last publicly noted figure was north of 200. Two hundred people who do not work for Brainpool, who will never work for Brainpool, but who are, in the only sense that matters, Brainpool's most valuable asset.