The $6 Billion Dashboard
In the fourth quarter of 2024, a single enterprise software company processed more than 100 trillion observability events per day — not per year, not per quarter, per day — a number so large it loses all intuitive meaning until you realize it represents, in aggregate, the operational nervous system of nearly every consequential internet application on Earth. Datadog's annualized revenue run rate crossed $2.86 billion that quarter, up 26% year-over-year, with free cash flow margins hovering near 33%. The company was, by most measures, the fastest-growing infrastructure software platform of its generation that also happened to be wildly profitable. And yet the thing that made Datadog unusual — the thing that separated it from the graveyard of monitoring startups that preceded it — was not the scale of its telemetry ingestion or the elegance of its dashboards. It was a deceptively simple architectural wager made more than a decade earlier: that the three pillars of observability — metrics, traces, and logs — which every incumbent treated as separate products, separate databases, separate P&Ls, belonged together in a single, unified platform. That bet, which seemed almost naively ambitious in 2010, would come to define modern cloud infrastructure monitoring and produce one of the most capital-efficient compounding machines in enterprise software history.
The company's market capitalization fluctuated around $50 billion through early 2025, which placed it in rarefied territory: roughly the size of ServiceNow when ServiceNow was being called the next Salesforce, larger than Splunk ever got before Cisco swallowed it for $28 billion, and significantly more valuable than any pure-play observability competitor. Datadog had become, functionally, the operating console for the cloud — the screen that every on-call engineer stared at when something went wrong at 3 a.m., and increasingly the platform that development teams, security teams, and business stakeholders used during the other twenty-three hours.
By the Numbers
Datadog at Scale
- FY2024 revenue: $2.68B
- Year-over-year revenue growth (Q4 2024): ~26%
- Customers (as of Q4 2024): 29,200+
- Customers with ARR ≥ $100K: 3,490+
- Free cash flow margin: ~33%
- Observability events processed per day: 100T+
- Products on the platform: 22+
- Approximate market capitalization (early 2025): ~$50B
The financial profile alone does not explain the fascination. What explains it is the how — the specific sequence of product decisions, go-to-market innovations, and cultural commitments that allowed two French-born engineers with no prior CEO experience to build what is arguably the most successful bottoms-up enterprise platform since Slack, if not since AWS itself.
Two Engineers, One Obsession
Olivier Pomel and Alexei Lê-Quôc met at IBM Research in the mid-2000s, two infrastructure engineers swimming in the deep end of distributed systems at a moment when the world had not yet decided that distributed systems would be the only systems. Pomel — compact, direct, with the particular intensity of someone who has been frustrated by bad tooling for so long that building better tooling has become a moral project — had grown up in Paris and studied at École Centrale, one of France's grandes écoles. Lê-Quôc, quieter, more technically obsessive, had a similar pedigree and a complementary temperament: where Pomel would eventually become the company's strategic and commercial brain, Lê-Quôc would become the engineering conscience, the person who ensured the platform's architecture remained coherent even as it expanded into two dozen product areas.
They left IBM for Wireless Generation, an education technology company in New York, where they found themselves managing infrastructure for a growing web application — and hating every monitoring tool available to them. The tools were siloed. Metrics lived in one system. Logs in another. Application performance data in a third. When something broke, the first thirty minutes of any incident weren't spent fixing the problem; they were spent correlating data across three different interfaces to understand the problem. It was, Pomel would later reflect, like trying to diagnose an illness by sending the patient's blood work to one hospital, their X-rays to another, and their medical history to a third, then asking the doctors to coordinate by email.
We kept switching between tools, trying to correlate what was happening in the infrastructure with what was happening in the application, and it was incredibly frustrating. That frustration was the origin of Datadog.
— Olivier Pomel, CEO, Datadog — Interview, 2019
In 2010, they incorporated Datadog, Inc. in New York City. The name — part whimsy, part obscure internal joke — came from a term used at a previous job for a cross-functional data workflow. They were not the first people to realize monitoring was broken. Nagios had been around since 1999. Splunk since 2003. New Relic since 2008. The market was littered with incumbents, each owning one pillar and defending it fiercely. What Pomel and Lê-Quôc proposed was not a better point solution but the abolition of point solutions — a unified platform where every form of telemetry lived in a single data store, correlated automatically, queried through a single interface.
This was not just a product vision. It was an architectural commitment with profound implications for how the company would be built. A unified platform meant a unified data model, which meant the first product couldn't be a quick MVP bolted to a generic database — it had to be built on a purpose-designed storage and query engine capable of handling metrics at massive scale, with the flexibility to eventually ingest logs, traces, and whatever new telemetry types the cloud era would invent. The foundation had to be right, or the entire strategy would collapse under its own weight.
The Cloud Bet Before the Cloud Won
The timing was simultaneously terrible and perfect. In 2010, most enterprise infrastructure still ran on-premises. AWS was five years old and growing fast, but the vast majority of Fortune 500 workloads still sat in corporate data centers monitored by corporate monitoring tools — tools that assumed servers were physical, numbered, persistent. You monitored this server, that database, those network switches. The monitoring paradigm was inherently static.
Pomel and Lê-Quôc bet on a different world. They bet that cloud adoption would accelerate, that infrastructure would become ephemeral — containers, serverless functions, auto-scaling groups where individual instances lived for minutes before being terminated and replaced — and that monitoring tools built for static infrastructure would be structurally incapable of handling this new reality. Datadog was designed, from the first line of code, for the cloud. Its agent was lightweight, designed to be installed on virtual machines and containers that might exist for sixty seconds. Its data model was tag-based rather than host-based, which meant you didn't monitor "server-47" but rather "all instances tagged production, us-east-1, checkout-service" — a seemingly small distinction that turned out to be the foundational insight of the entire company.
The Tag-Based Data Model
Datadog's foundational architectural decision
Unlike traditional monitoring tools that organized telemetry by host or server, Datadog built its data model around arbitrary key-value tags. Every metric, every log line, every trace span could be tagged with any combination of attributes — environment, service, team, region, version, customer tier. This meant:
- Dynamic infrastructure was first-class. When an auto-scaling group spun up fifty new instances, they inherited tags automatically and appeared in existing dashboards without configuration.
- Cross-cutting queries were trivial. "Show me the p99 latency of the checkout service in production in us-east-1 for customers on the enterprise tier" was a single query, not a multi-tool treasure hunt.
- Correlation was built-in. Because metrics, logs, and traces shared the same tag namespace, you could click from a spike in a metric to the exact log lines and traces from the same service, same time window, same deployment.
This architecture was expensive to build and required enormous engineering discipline. It also created a compounding moat: the more data a customer sent through Datadog, the richer the tag graph became, and the harder it was to replicate the contextual intelligence elsewhere.
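The mechanics of that tag model can be sketched in a few lines. The toy in-memory store below is invented for illustration (the class name `TagStore` and its API are not Datadog's; the real engine is a purpose-built distributed system), but it shows why tag-filtered queries require no per-host configuration: a new instance simply arrives with its tags attached.

```python
# Toy sketch of a tag-based telemetry store (illustrative only, not
# Datadog's actual engine). Data is indexed by arbitrary key-value tags,
# not by host name, so ephemeral instances need no per-host setup.
class TagStore:
    def __init__(self):
        self.points = []  # (timestamp, value, tags) tuples

    def ingest(self, timestamp, value, **tags):
        """Accept a data point with any combination of tags."""
        self.points.append((timestamp, value, tags))

    def query(self, **filters):
        """Return values whose tags match every key=value filter."""
        return [
            value for _, value, tags in self.points
            if all(tags.get(k) == want for k, want in filters.items())
        ]

store = TagStore()
# Autoscaled instances inherit tags at launch -- no dashboard edits needed.
store.ingest(1000, 120, env="production", region="us-east-1", service="checkout")
store.ingest(1001, 340, env="production", region="us-east-1", service="checkout")
store.ingest(1002, 95,  env="staging",    region="us-east-1", service="checkout")

# A cross-cutting query is a single call, not a multi-tool treasure hunt.
latencies = store.query(env="production", service="checkout")
print(latencies)  # [120, 340]
```

Because logs and trace spans would share the same tag namespace, the same `query(env="production", service="checkout")` filter can, in principle, pivot across all three telemetry types — which is the correlation property the callout above describes.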
The early years were lean. Datadog raised a $1.5 million seed round in 2010 and a $6.3 million Series A in 2012, modest amounts even by early-2010s standards. The first product was infrastructure monitoring — a SaaS dashboard for cloud server metrics — and the initial customers were the only people who understood why this mattered: DevOps engineers at cloud-native startups. These were not enterprises. They were teams of five or ten engineers at companies most people had never heard of, paying a few hundred dollars a month to watch their AWS instances.
But here was the crucial dynamic that would define Datadog's go-to-market motion for the next decade: those engineers loved the product. They loved it the way developers loved GitHub, the way designers loved Figma. It was fast, it was elegant, it required almost no configuration, and it just worked in a domain where every other tool required a PhD in YAML. The Net Promoter Scores were astronomical. Engineers who used Datadog at one company demanded it at their next company. The bottoms-up flywheel was spinning before anyone at Datadog had hired an enterprise sales rep.
The Platform Play That Everyone Talked About and Nobody Executed
Between 2012 and 2016, Datadog did something rare in enterprise software: it resisted the temptation to go wide too early. The infrastructure monitoring product was refined, scaled, and hardened. The storage engine was rebuilt. The query language was extended. The agent was optimized for every conceivable deployment environment — bare metal, VMs, Docker containers, Kubernetes pods, AWS Lambda functions. By 2016, the product was mature enough, the customer base was large enough, and the data architecture was flexible enough to execute the platform expansion that had been the plan all along.
Application Performance Monitoring (APM) — distributed tracing for microservices — launched in 2017. Log Management followed in 2018. Each new product was not a separate system bolted onto the side; it was a new data type flowing into the same unified store, sharing the same tag namespace, queryable through the same interface. The experience of using Datadog's APM felt like using Datadog's infrastructure monitoring because it was the same product, extended.
2010: Datadog founded by Olivier Pomel and Alexei Lê-Quôc in New York City.
2012: Series A funding ($6.3M). Infrastructure monitoring product reaches early traction with cloud-native startups.
2014: Series B funding ($31M) led by Index Ventures. Customer count passes 1,000.
2016: Series C funding ($94.5M) at ~$600M valuation. Company begins platform expansion beyond core infrastructure monitoring.
2017: Launches APM (Application Performance Monitoring) — distributed tracing as a second major product pillar.
2018: Launches Log Management — the third pillar of observability, unified with metrics and traces.
2019: IPO on NASDAQ (DDOG) at $27/share, valuing the company at ~$7.8B. Revenue: $363M.
2020: Launches Synthetics, Real User Monitoring (RUM), Network Performance Monitoring. Revenue: $603M.
2021: Launches CI Visibility, Database Monitoring, Cloud Security Management. Revenue: $1.03B, crossing $1B.
This cadence — two to four new products per year, each integrated into the existing platform, each expanding the TAM and deepening the customer relationship — was not accidental. It was the execution of a product strategy that Pomel described as "following the engineer." Where did the engineer go after they saw a metric spike? They went to the logs. After the logs? To the traces. After the traces? To the code deployment that caused the issue. After the deployment? To the CI/CD pipeline that built it. After the pipeline? To the security scan that should have caught the vulnerability. Datadog followed this thread, relentlessly, product by product, until the platform encompassed the entire lifecycle of a cloud application from code commit to customer experience.
By 2024, Datadog offered more than 22 products spanning infrastructure monitoring, APM, log management, real user monitoring, synthetic monitoring, network monitoring, database monitoring, security (cloud SIEM, cloud security posture management, application security management), CI/CD visibility, cloud cost management, incident management, and — increasingly — AI-powered analysis of all of the above. The average large customer used six or more products. Customers using four or more products accounted for roughly 50% of ARR.
We now have over 3,490 customers with ARR of $100,000 or more. These customers represent the majority of our ARR, and they are using more and more products on the platform every quarter.
— Olivier Pomel, Q4 2024 Earnings Call
The Consumption Trap
There is a paradox at the heart of Datadog's business model, and it is the same paradox that haunts every usage-based software company: the thing that makes revenue grow also makes revenue unpredictable.
Datadog charges primarily on consumption — the volume of metrics ingested, the number of infrastructure hosts monitored, the gigabytes of logs indexed, the number of APM spans analyzed. When customers grow — when they deploy more services, generate more traffic, expand into new regions — their Datadog bill grows automatically, without a sales rep picking up the phone. This is the magic of usage-based pricing: revenue expands with the customer's business. Datadog's net revenue retention rate consistently exceeded 120% for most of its history as a public company, meaning the average customer's spending grew by more than 20% annually before accounting for any new customers.
But the mechanism that compounds revenue in good times compresses it in bad times. In 2022 and early 2023, as the post-pandemic cloud spending boom turned to hangover, Datadog's growth decelerated sharply. Revenue growth fell from 74% year-over-year in Q2 2022 to 33% in Q1 2023 to 25% in Q2 2023. Customers weren't churning — few companies rip out their monitoring stack — but they were optimizing. Engineers were writing fewer logs. Reducing retention periods. Downsampling metrics. Turning off monitors for non-critical services. Every optimization was rational from the customer's perspective and painful from Datadog's perspective.
The stock, which had peaked above $180 in late 2021, cratered to below $70 by late 2022. The narrative shifted overnight from "Datadog is the next great platform" to "usage-based pricing is a structural vulnerability." Wall Street, with its characteristic subtlety, re-rated the entire company based on two quarters of deceleration.
Pomel's response was characteristically unsentimental. On the Q3 2022 earnings call, he acknowledged the optimization headwinds but refused to alter the long-term strategy. No pivot to seat-based pricing. No restructuring charges. No layoffs — Datadog was one of remarkably few tech companies to avoid significant layoffs during the 2022–2023 downturn. Instead, the company leaned harder into the playbook that had worked: ship more products, expand the platform, and trust that the secular migration to cloud would resume.
It did. By Q4 2024, revenue growth had reaccelerated to 26%, the customer base had expanded to over 29,200, and the number of large customers (≥$100K ARR) had grown to 3,490. The optimization headwinds had not disappeared, but they had been overwhelmed by the sheer volume of new workloads — and new product adoption — flowing through the platform. The consumption trap, it turned out, was symmetrical: it compressed growth on the way down and amplified it on the way up.
The Land-and-Expand Machine
The go-to-market engine that Datadog built is one of the most efficient in enterprise software, and understanding it requires appreciating how radically different it is from the traditional enterprise sales playbook.
The classic enterprise software sale begins with a top-down mandate: a CIO or VP of Engineering evaluates vendors, selects one, negotiates an enterprise license agreement, and deploys it across the organization. The sales cycle is six to twelve months. The average contract value is high. The cost of customer acquisition is enormous. And the product, more often than not, is something that gets "implemented" over months by a team of consultants rather than something that individual engineers voluntarily adopt.
Datadog inverted this. The initial entry point is almost always a single engineering team — sometimes a single engineer — who signs up for a free trial, installs the agent on a few servers, and starts seeing dashboards within minutes. The product is designed to deliver value in under fifteen minutes, which is not a marketing claim but an engineering constraint baked into the onboarding flow. There is no implementation phase. There is no consultant. There is a developer who is frustrated, a Google search, a brew install datadog-agent, and a dashboard that lights up with real-time data.
From there, the expansion is organic and relentless. The initial team adopts infrastructure monitoring. They tell the adjacent team. The adjacent team starts using APM. Someone in security hears about Cloud SIEM. The DevOps team adopts CI Visibility. Six months later, what started as a $500/month experiment is a $50,000/month platform relationship, and the first enterprise sales rep enters the picture — not to sell Datadog, but to formalize an adoption that has already happened.
This motion — bottoms-up adoption followed by top-down expansion — is what produces Datadog's extraordinary net revenue retention. The initial land is small, often trivially small. The expand is massive and multi-year. A customer that starts with $10,000 in ARR might be at $500,000 three years later, not because a sales rep convinced them to buy more but because their cloud footprint grew and their engineers kept finding new Datadog products that solved real problems.
The Multi-Product Adoption Curve
How customers deepen their platform usage over time
| Products Used | % of ARR (est.) | Typical Customer Profile |
|---|---|---|
| 1 product | ~10% | New customer, single team, early adoption |
| 2–3 products | ~25% | Expanding within engineering org |
| 4–5 products | ~30% | Cross-functional (DevOps + Security) |
| 6+ products | ~35% | Platform standard, enterprise-wide deployment |
The efficiency shows up in the numbers. Datadog's sales and marketing expense as a percentage of revenue has consistently declined even as absolute spending has grown — from roughly 35% of revenue in 2019 to around 26% in 2024. The company acquires customers cheaply through bottoms-up adoption, then expands them efficiently through product-led growth and targeted enterprise sales. The payback period on customer acquisition is estimated at twelve to eighteen months, among the shortest in enterprise software.
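The payback estimate can be made concrete with a back-of-envelope sketch. All inputs below are invented for illustration, assuming a typical ~80% SaaS gross margin; the formula itself — acquisition spend recouped from monthly gross profit — is the standard one.

```python
# CAC payback: months to recoup the sales & marketing spend behind a new
# customer cohort from the gross profit that cohort generates.
def cac_payback_months(sm_spend, new_arr, gross_margin):
    monthly_gross_profit = (new_arr * gross_margin) / 12
    return sm_spend / monthly_gross_profit

months = cac_payback_months(
    sm_spend=26_000_000,   # S&M spent acquiring the cohort (illustrative)
    new_arr=24_000_000,    # ARR the cohort lands with (illustrative)
    gross_margin=0.80,     # assumed ~80% SaaS gross margin
)
print(round(months, 1))  # 16.2
```

With numbers in this range the payback lands inside the twelve-to-eighteen-month window cited above; a top-down enterprise vendor with the same ARR but double the S&M spend would take twice as long.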
The Engineering Culture as Competitive Weapon
One of the most underappreciated aspects of Datadog's success is the velocity of its product development, and that velocity is a direct function of organizational design.
Pomel and Lê-Quôc run the company as an engineering-first organization — not in the performative Silicon Valley sense where every CEO claims to be "an engineer at heart," but in the structural sense where product development is the primary strategic function and everything else is organized to serve it. Engineering and product teams are organized in small, autonomous pods — typically five to eight engineers — each responsible for a product area from inception through deployment and customer feedback. These pods have enormous autonomy and are expected to ship features at a cadence that would make most enterprise software companies uncomfortable.
The result is visible in the output: Datadog shipped more than twenty major products and hundreds of features between 2017 and 2024. The pace of new product introductions — typically two to four per year — has not slowed as the company has scaled from 500 to over 5,000 employees. This is unusual. Most enterprise software companies experience a dramatic slowdown in product velocity as they grow, as organizational complexity, technical debt, and coordination costs accumulate. Datadog has, so far, defied this pattern.
We think of ourselves as a company that builds products. Everything else — sales, marketing, go-to-market — exists to support that. The day we stop shipping great products is the day we start losing.
— Olivier Pomel, DASH 2023 Conference Keynote
Part of this is attributable to the unified platform architecture. Because all products share a common data model, storage engine, and tag namespace, new products don't start from zero. The infrastructure for data ingestion, storage, querying, alerting, and visualization already exists. A new product — say, Database Monitoring — needs to build the data collection (agents that understand PostgreSQL or MySQL wire protocols) and the product-specific UX, but it inherits the entire platform layer. This is the compounding advantage of the platform bet: each new product is cheaper to build than the last, and each new product makes the platform more valuable because it adds another data type to the unified graph.
Part of it is cultural. Datadog's engineering culture is notably intense — long hours, high standards, a perfectionism about latency and reliability that borders on the obsessive. The company has been transparent about this. The hiring bar is exceptionally high, the interview process is famously rigorous, and the expectation is that engineers will operate with startup-level urgency regardless of the company's scale. This culture is not universally loved — Glassdoor reviews reflect the predictable tension between ambition and work-life balance — but it has produced results that are difficult to argue with.
The Security Pivot
If the first decade of Datadog's life was defined by observability — the art of understanding what your software is doing — the second decade is increasingly defined by a more ambitious and more competitive bet: security.
The logic is elegant, almost inevitable. Datadog already collects the data that security teams need — logs, network flows, infrastructure configurations, application traces, deployment histories. The difference between "monitoring for performance" and "monitoring for threats" is, at the data layer, primarily a difference in the rules applied to the data, not the data itself. A spike in API errors might be a performance issue or it might be an attack. An unauthorized configuration change might be an operational mistake or it might be a compromise. The telemetry is the same; the interpretation differs.
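The "same telemetry, different rules" point is easy to make concrete. In the toy sketch below (events, field names, and thresholds all invented for illustration), a single event stream feeds both a latency monitor and a brute-force detection — only the rule changes.

```python
from collections import Counter

# One event stream, two interpretations. These records stand in for the
# unified telemetry store; the fields and thresholds are illustrative.
events = [
    {"service": "api", "status": 401, "source_ip": "203.0.113.9",  "latency_ms": 40},
    {"service": "api", "status": 401, "source_ip": "203.0.113.9",  "latency_ms": 38},
    {"service": "api", "status": 200, "source_ip": "198.51.100.7", "latency_ms": 950},
    {"service": "api", "status": 401, "source_ip": "203.0.113.9",  "latency_ms": 41},
]

# Performance rule: flag slow requests.
slow = [e for e in events if e["latency_ms"] > 500]

# Security rule: flag repeated auth failures from one IP (possible brute force).
failures = Counter(e["source_ip"] for e in events if e["status"] == 401)
suspicious = [ip for ip, n in failures.items() if n >= 3]

print(len(slow), suspicious)  # 1 ['203.0.113.9']
```

The data layer is identical in both branches; the performance team and the security team are simply running different queries over it, which is the structural argument for bolting security onto an observability platform.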
Starting in 2021, Datadog began launching security products in rapid succession: Cloud Security Posture Management (CSPM), Cloud Workload Security, Application Security Management, Cloud SIEM, Sensitive Data Scanner. By 2024, the security suite had expanded to include Software Composition Analysis, Cloud Security Management (a unified product), and AI-powered threat detection. The company reported that its security products were adopted by thousands of customers and were growing significantly faster than the overall business.
This is where the competitive landscape gets violent. Cloud security is a massive, fragmented market — estimated at $40–60 billion in TAM depending on the analyst — and it is already crowded with well-funded incumbents: Palo Alto Networks, CrowdStrike, Wiz (which turned down a $23 billion acquisition offer from Google), and dozens of point solutions. Datadog's entry into security is a direct assault on these companies' margins and growth trajectories.
The bull case for Datadog in security is that the unified data model gives it a structural advantage. Security analysts using Datadog can pivot from a security alert directly into the application trace that generated the suspicious behavior, directly into the infrastructure metrics of the host, directly into the deployment event that introduced the vulnerability — all in a single interface, without context-switching between tools. This is not just a convenience; it dramatically reduces mean time to resolution, which is the metric that matters most in incident response.
The bear case is that security is a trust market. CISOs are paid to be paranoid, and they tend to buy from companies whose entire identity is security. Convincing a CISO that the monitoring company is also a serious security platform requires overcoming deep institutional skepticism. Palo Alto Networks has roughly fifteen thousand employees focused on security. CrowdStrike has built its brand on stopping breaches. Datadog has to convince these buyers that a platform approach — where security is one of twenty-two products rather than the only product — is superior.
The resolution of this debate will define Datadog's next chapter.
The AI Opportunity (and the AI Paradox)
When the generative AI wave broke in late 2022, it created a peculiar asymmetry in the observability market. On one hand, every company building AI applications needed to monitor them — LLM inference latency, token usage, model accuracy, hallucination rates, prompt injection attempts, cost per query. This was a greenfield observability problem, and Datadog moved fast, launching LLM Observability in 2023 and expanding it throughout 2024. The product allows teams to trace the full lifecycle of an AI request — from the initial prompt through retrieval-augmented generation, model inference, and response — with the same granularity that Datadog brought to traditional application monitoring.
On the other hand, AI threatened to disrupt Datadog's own product. If an AI agent could look at metrics, logs, and traces and tell you what was wrong — automatically, in natural language, without a human staring at a dashboard — then the dashboard itself might become less central. Datadog's response was to lean into this aggressively. Bits AI, the company's AI assistant, launched in 2023 and was integrated across the platform by 2024. It could summarize incidents, suggest root causes, auto-generate monitors, and answer natural-language questions about system state. The goal was not to resist AI but to embed it so deeply into the Datadog experience that the AI was the product — that the intelligence layer running on top of the unified data store was the moat, not the dashboards.
AI-native customers are among our fastest-growing cohorts. They have complex, expensive infrastructure, and they need observability from day one. This is a significant tailwind for us.
— Olivier Pomel, Q2 2024 Earnings Call
The company reported that customers building AI applications — the "AI-native" cohort — were growing significantly faster than the broader customer base and were among the highest-spending customers per engineer. This makes intuitive sense: AI workloads are GPU-intensive, latency-sensitive, expensive, and difficult to debug without purpose-built tooling. The model inference stack is, in many ways, the most complex software infrastructure humans have ever built, and it needs monitoring commensurate with that complexity.
But the AI paradox cuts deeper than new workloads. The fundamental question is whether AI — specifically, autonomous AI agents that can manage infrastructure without human intervention — will eventually reduce the need for observability tooling by reducing the number of humans who need to understand what's happening. If an AI agent can auto-remediate issues before any human even knows they occurred, does anyone need a dashboard? Datadog's bet is that the answer is no — that AI increases the complexity of systems faster than it reduces the need to understand them, that the demand for observability grows with the sophistication of the technology, not in spite of it.
That bet is not yet proven. But if the last fifteen years of cloud computing are any guide — during which the proliferation of monitoring tools accelerated alongside the automation of infrastructure — it is a reasonable one.
The Acquisition Question
Datadog has been notably disciplined in its approach to M&A, which is worth examining because the temptation to acquire has been enormous. The observability and security markets are littered with interesting startups, and most platform companies in Datadog's position — flush with cash, high stock price, expanding TAM — go on acquisition binges. Think Salesforce buying Slack, or Palo Alto Networks' string of security acquisitions, or ServiceNow's steady accumulation of AI companies.
Datadog has done a handful of small acquisitions — Madumbo (AI-driven browser testing), Sqreen (application security), Ozcode (debugging), CoScreen (collaborative troubleshooting), Cloudcraft (infrastructure diagramming) — but nothing transformative, nothing that required more than a few hundred million dollars. The company has consistently chosen to build rather than buy, even in categories where acquisitions could have accelerated time-to-market by years.
This is a direct reflection of Pomel's philosophy. Building internally preserves architectural coherence — every product shares the same data model, the same UX patterns, the same underlying infrastructure. Acquisitions introduce technical debt, cultural friction, and integration costs that threaten the very thing that makes the platform work. Pomel has spoken repeatedly about the importance of architectural purity, and the acquisition record suggests he means it.
The risk is speed. While Datadog builds, competitors like CrowdStrike and Palo Alto Networks are assembling security platforms through rapid acquisition. Wiz grew from zero to $500 million in ARR in under four years, partly through aggressive hiring and talent acquisition. The question is whether Datadog's build-first approach produces a better product over a five-year horizon — which seems likely, based on the company's track record — or whether the market moves too fast for organic development.
The Quiet Balance Sheet
For all the attention paid to Datadog's product strategy and growth rate, its financial discipline is perhaps the most underrated aspect of the story.
The company ended 2024 with approximately $3.4 billion in cash, cash equivalents, and marketable securities.
Free cash flow for the full year was approximately $880 million — a 33% margin, extraordinary for a company growing at 26%. Operating margins have expanded steadily, from roughly breakeven at the 2019 IPO to mid-20s by 2024. Gross margins hover near 80%, characteristic of pure SaaS businesses but remarkable for a company ingesting and processing 100 trillion events per day — the infrastructure costs alone are staggering, and the ability to maintain 80% gross margins at that scale reflects deep optimization of the storage and compute stack.
The company has been essentially self-financing since its IPO. It raised approximately $200 million in total venture capital before going public and has not issued significant equity since. Stock-based compensation — the bane of every SaaS investor — has been relatively restrained by industry standards, with net share-count dilution running in the low single digits annually. Datadog has even begun buying back stock, authorizing a $1 billion repurchase program in early 2025.
This financial profile — high growth, high margins, low dilution, massive cash generation — is vanishingly rare in enterprise software. The comparable set is short: maybe ServiceNow, maybe Veeva Systems in its prime, maybe Atlassian on its best day. It is the profile of a company that has found a durable growth engine and is running it with genuine operational discipline.
The Pomel Premium
In enterprise software, the CEO premium is real but rarely discussed. Companies led by technical founders who remain as CEO through the scaling phase tend to make better long-term product decisions — often at the cost of short-term commercial optimization — because they instinctively understand that in infrastructure software, the product is the strategy. The product is the moat. Everything else — sales, marketing, partnerships — is leverage applied to a product advantage.
Pomel is the archetype. He has no MBA. He has no previous CEO experience. He does not give keynotes with the polished fluency of a Marc Benioff or a Satya Nadella. What he does is sit in product reviews, debate architectural decisions with engineering leads, and make allocation decisions about where to invest the next hundred engineers with the precision of someone who personally understands the technical tradeoffs. When Datadog decided to build its own storage engine rather than use off-the-shelf databases — a decision that consumed years of engineering effort and delayed feature development — it was Pomel who made the call, understanding that control over the storage layer was essential for the platform's long-term economics and performance.
He has maintained approximately 10% ownership of the company through its IPO and subsequent dilution, which aligns his incentives with shareholders in a way that hired-gun CEOs rarely achieve. Lê-Quôc, as CTO, maintains a similar alignment. Together, they control enough of the company to resist short-term pressure — from Wall Street, from customers demanding feature requests, from the temptation to sacrifice architectural coherence for quarterly revenue.
The Pomel premium shows up in the numbers. It shows up in the 22-product platform that actually works as a platform. It shows up in the engineering velocity. It shows up in the refusal to do dilutive acquisitions. And it shows up in the stock price, which, despite the 2022 drawdown, has turned the company's initial $7.8 billion IPO valuation into a $50 billion franchise: a better than sixfold gain in just over five years, an annualized return of roughly 40%.
100 Trillion Signals a Day
There is a moment in the evolution of every platform company where the data becomes more valuable than the software. Google's search algorithms matter, but the accumulated data about human intent — trillions of queries, refined over decades — is what makes the algorithms work. Facebook's code is copyable; Facebook's social graph is not.
Datadog is approaching this inflection. The 100 trillion events per day flowing through its platform represent, in aggregate, a real-time model of how the world's cloud infrastructure behaves — under load, under failure, under attack, under normal conditions. No other company has this data at this scale and this breadth. New Relic has a fraction of it. Splunk (now Cisco) has a different fraction. The cloud providers — AWS, Azure, GCP — have their own telemetry, but it is siloed by cloud, unable to see across multi-cloud environments, and lacks the application-layer context that Datadog captures.
This data advantage compounds with AI. The ability to train models on 100 trillion daily events — to learn what "normal" looks like across millions of services, to detect anomalies that no human-written rule would catch, to predict failures before they happen — is a capability that cannot be replicated by a startup with clever algorithms and no data. It is the deep moat that Datadog is building beneath the visible moat of product breadth and customer relationships.
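The simplest version of learning what "normal" looks like is a rolling statistical baseline: flag any point that deviates sharply from the recent history of the metric. The sketch below is a toy illustration of that idea, not Datadog's actual detection algorithm — real systems model seasonality, multi-service correlations, and far subtler failure signatures:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=50, threshold=4.0):
    """Flag points that deviate sharply from a rolling baseline.

    A toy stand-in for a learned notion of 'normal': keep a sliding
    window of recent values and flag anything more than `threshold`
    standard deviations from the window's mean.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(history) >= 10:  # need a minimal baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# A steady (hypothetical) latency series with one injected spike.
series = [100 + (i % 5) for i in range(200)]
series[150] = 500  # simulated incident
print(detect_anomalies(series))  # → [(150, 500)]
```

The limits of this approach are exactly why the data moat matters: a static z-score rule drowns in false positives on real, seasonal, bursty telemetry, and tuning it per-service by hand does not scale to millions of services — which is what training on trillions of daily events is meant to solve.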
The company does not talk about this much. Pomel is not given to sweeping pronouncements about the data flywheel or the AI singularity. But the strategic logic is visible in every product decision: ingest more data, from more sources, with more granularity, and build the intelligence layer on top of the data. The dashboards are the visible surface. The data is the iceberg.
On an average weekday in early 2025, somewhere between two and three million Datadog agents were running on infrastructure around the world — on servers in AWS's us-east-1, on Kubernetes pods in a fintech's European cluster, on edge compute nodes processing autonomous vehicle telemetry, on GPUs running inference for the latest foundation models. Each agent sent metrics, logs, and traces into Datadog's ingestion pipeline, where they were tagged, indexed, correlated, and stored. In aggregate, those signals formed the most comprehensive real-time map of global cloud infrastructure ever assembled — a map that grew denser and more valuable with every passing second, every new customer, every new product that added another layer of telemetry to the unified data store. Somewhere in New York, an engineer refreshed a dashboard.