Estimated Read Time: 8 min
At the heart of private equity underwriting is the Leveraged Buyout (LBO) model. It helps investors project returns on invested capital and assess risk. At Jamestown, we review multiple LBO models daily. This article combines our private market experience with public market data to help searchers build realistic, credible models. We assume readers already have a basic understanding of LBOs, and we focus specifically on organic models—excluding more complex inorganic scenarios.
The LBO model itself is not a blend of art and science—it’s pure science. Our internal LBO engine is rigorously implemented in the Rust programming language and is accurate down to the penny—if the inputs are correct. The real challenge is that the inputs are estimates, and because they’re forward-looking, they will never be perfect. Still, we can aim to make them as accurate and grounded as possible. Below is a table outlining the major inputs that go into the model.
Input | Source Of Estimate | Accuracy | Impact |
---|---|---|---|
Purchase Price | LOI/APA | HIGH | HIGH |
Debt Terms | Loan Docs | HIGH | HIGH |
Equity Terms | Term Sheet/Legal Docs | HIGH | HIGH |
Deal Costs | Attorney/DD firm | MEDIUM | LOW |
Current EBITDA | CIM/QoE/Due Diligence | HIGH | MEDIUM |
Future EBITDA | ??? | LOW | HIGH |
Exit Multiple | Comparable Transactions | MEDIUM | MEDIUM |
As the table shows, most inputs come with a high degree of certainty because they can be pulled directly from legal agreements. But a few key variables deserve closer attention:
Deal Costs: These can be tricky to estimate, especially under hourly billing arrangements. The good news is that they tend to be relatively small—typically 1–2% of the purchase price—so we classify them as “medium uncertainty, low impact.”
Exit Multiple: We can often bound this input using comps from transaction databases and broker data. Your entry multiple may also serve as a good proxy, especially if the deal was competitive. Because the exit multiple is non-compounding and only affects returns at the sale, we treat it as a "medium impact" factor; the sensitivity sketch after this list makes the contrast with EBITDA growth concrete.
Future EBITDA: This is by far the most sensitive input. It’s hard to predict and has massive consequences. If EBITDA underperforms, you may not be able to service debt. If it outperforms, the leverage can supercharge returns.
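To see why we weight these inputs differently, here is a minimal sensitivity sketch on a simplified deal. All figures are illustrative assumptions, not from any real transaction, and the model ignores amortization, cash sweep, taxes, and fees so the mechanics stay visible.

```python
# Minimal LBO sensitivity sketch (illustrative assumptions only).
# Ignores debt amortization, cash sweep, taxes, and working capital to keep the math visible.

def moic(entry_ebitda, entry_multiple, exit_multiple, ebitda_cagr,
         hold_years=5, debt_pct=0.60, deal_cost_pct=0.015):
    """Multiple on invested capital for a simplified organic LBO."""
    entry_ev = entry_ebitda * entry_multiple
    debt = entry_ev * debt_pct
    equity_in = entry_ev - debt + entry_ev * deal_cost_pct   # deal costs funded by equity
    exit_ebitda = entry_ebitda * (1 + ebitda_cagr) ** hold_years
    exit_equity = exit_ebitda * exit_multiple - debt          # debt repaid at par, no amortization
    return exit_equity / equity_in

base = moic(1.0, 5.0, 5.0, 0.05)
one_more_turn = moic(1.0, 5.0, 6.0, 0.05)   # exit multiple +1.0x: one-time, non-compounding
faster_growth = moic(1.0, 5.0, 5.0, 0.10)   # EBITDA CAGR +5 pts: compounds every year

print(f"base MOIC:           {base:.2f}x")
print(f"exit multiple +1.0x: {one_more_turn:.2f}x")
print(f"EBITDA CAGR +5 pts:  {faster_growth:.2f}x")
```

The exit multiple moves returns once, at the sale; a change in the growth assumption compounds through every year of the hold, which is why it dominates the output.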
Because of its uncertainty and significance, future EBITDA is where most models tend to break down. It's tempting to assume 20% annual EBITDA growth and plug in that number; with those inputs, any deal will look attractive. But experienced investors care about realized IRR, not projected IRR, and about who they're backing. Overly optimistic models can damage credibility and quickly disqualify a searcher from further consideration.
The standard approach to modeling future EBITDA is to start with trailing twelve-month (TTM) EBITDA and apply a compound annual growth rate (CAGR). Alternatively, you can model revenue and margin growth separately, which allows for more transparency and granularity. We find both approaches reasonable, but the question remains: what number should you pick for the CAGR?
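Either projection is only a few lines of code. Here is a hedged sketch of both approaches with placeholder inputs; the hard part is not the arithmetic but choosing the growth rate you feed in.

```python
# Two common ways to project future EBITDA (placeholder inputs, not a recommendation).

def project_ebitda_cagr(ttm_ebitda, cagr, years):
    """Approach 1: apply a single CAGR to TTM EBITDA."""
    return [ttm_ebitda * (1 + cagr) ** t for t in range(1, years + 1)]

def project_ebitda_rev_margin(ttm_revenue, rev_cagr, margins):
    """Approach 2: grow revenue and apply an explicit EBITDA margin path per year."""
    return [ttm_revenue * (1 + rev_cagr) ** (t + 1) * m for t, m in enumerate(margins)]

print(project_ebitda_cagr(2.0, 0.05, 5))
print(project_ebitda_rev_margin(10.0, 0.05, [0.20, 0.20, 0.21, 0.21, 0.22]))
```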
Public companies are required to report earnings quarterly. We can use this data to get some indication of what private markets look like. While public and private companies operate at very different scales, the two are inherently linked and exposed to the same macroeconomic forces. The goal here is not to get a perfect answer but to use the available data to get into the ballpark.
The chart below shows the empirical year-over-year distribution of revenue growth among public companies. The average is 5.1%, with a standard deviation of 17.7%. So if you're projecting +20% revenue growth for a single year, you're effectively saying the business will outperform the average by about one standard deviation—a bold claim.
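As a rough sanity check, you can translate a growth assumption into a z-score against those published stats. The real distribution is empirical and fat-tailed, so the normal approximation below is only indicative.

```python
from statistics import NormalDist

mean_g, sd_g = 0.051, 0.177   # one-year revenue growth stats from the chart above
assumed_growth = 0.20

z = (assumed_growth - mean_g) / sd_g
p_exceed = 1 - NormalDist(mean_g, sd_g).cdf(assumed_growth)
print(f"z-score: {z:.2f}, rough chance of hitting +20% in a given year: {p_exceed:.0%}")
```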
Flipping a coin and getting heads once has a 50% probability. Getting heads five times in a row has only about a 3% probability (0.5^5 ≈ 3.1%). The math behind multi-year CAGR projections works similarly. While a single year of 20% growth is already ambitious, sustaining that level for five consecutive years—as we often see in pitch decks—is extraordinarily rare. It happens, but it shouldn't be your base case.
To better model multi-year scenarios, we extrapolated the one-year empirical data using stochastic methods. (For the technically inclined: we model growth using log-returns, not raw percentage change, since revenue can't go below zero.) The chart below shows the resulting five-year forecast—displayed as the probability of underperformance, not just density by CAGR.
Let us note some key observations:
0% CAGR (green line): This is typically thought of as extremely pessimistic, but there is still a 26% chance that the deal realizes a CAGR worse than this.
5% CAGR (yellow line): Roughly the break-even point. At this value there is a 50/50 chance the deal underperforms or outperforms.
20% CAGR (red line): This has a 95% probability of underperformance. Said differently, there is only a 5% chance that the deal will meet or exceed expectations.
In short, if your model assumes 20% CAGR for five years straight, you should have exceptionally strong evidence to support it.
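For the curious, here is a minimal Monte Carlo sketch of the kind of extrapolation described above. It is not our production model: it assumes independent years and a log-normal annual growth multiple calibrated to the published one-year stats, which is enough to reproduce the shape of the argument.

```python
import math, random

random.seed(0)

# Published one-year revenue growth stats from the empirical distribution above.
mean_g, sd_g = 0.051, 0.177

# Calibrate a log-normal one-year multiple (1 + g) to those raw moments,
# so simulated revenue can never go below zero -- matching the log-return approach.
s2 = math.log(1 + (sd_g / (1 + mean_g)) ** 2)
mu = math.log(1 + mean_g) - s2 / 2

def simulate_cagr(years=5):
    """One path: compound independent log-normal annual growth, return the realized CAGR."""
    log_total = sum(random.gauss(mu, math.sqrt(s2)) for _ in range(years))
    return math.exp(log_total / years) - 1

paths = [simulate_cagr() for _ in range(100_000)]
for target in (0.00, 0.05, 0.20):
    p_under = sum(c < target for c in paths) / len(paths)
    print(f"P(5-year CAGR below {target:.0%}) is roughly {p_under:.0%}")
```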
Humans are great at spotting patterns—sometimes even when they don’t exist. Imagine you’re reviewing a CIM showing consistent 15% annual revenue growth for the past five years. It feels intuitive to project that trend forward in your model. And we see this kind of projection all the time.
But how reliable is that pattern? We can measure this directly using autocorrelation—how strongly past growth predicts future growth. Unfortunately, the data shows that autocorrelation is fairly weak. Below, we show several companies to illustrate just how unpredictable revenue growth is from year to year.
Ticker | Autocorrelation |
---|---|
AAPL | -17% |
AMZN | 90% |
TM | -10% |
WMT | 65% |
XOM | 11% |
Across all companies in the dataset, the average revenue growth autocorrelation is just 19.3%. That’s remarkably low. Based on this, we reject any model that forecasts exceptional future growth solely because of exceptional past growth. The data just doesn’t support that assumption—at least not in the general case.
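If you want to run this check yourself, lag-1 autocorrelation of a growth series takes only a few lines. The growth numbers below are made up for illustration, not pulled from any filing.

```python
from statistics import mean

def lag1_autocorrelation(series):
    """Pearson correlation between consecutive values of a series."""
    x, y = series[:-1], series[1:]
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    norm_x = sum((a - mx) ** 2 for a in x) ** 0.5
    norm_y = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (norm_x * norm_y)

# Year-over-year revenue growth rates (hypothetical example, not real company data).
growth = [0.12, 0.15, 0.09, 0.14, 0.02, 0.11, 0.13]
print(f"lag-1 autocorrelation: {lag1_autocorrelation(growth):.0%}")
```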
AMZN stands out as a clear outlier. The company has demonstrated consistent revenue growth year after year for decades. While it's tempting to attribute this to “survivorship bias” or some other statistical explanation, the reality is more straightforward: Amazon has built a service that offers a compelling value proposition for consumers and a business model protected by a formidable moat. They’ve strategically positioned themselves in a market with a legitimately expanding total addressable market (TAM) and no meaningful competition. Look for these properties in a CIM, not simply revenue growth.
Up to this point, we've focused on public company data. It's useful for establishing guardrails—but there are key differences when it comes to SMBs. Public companies are large, complex, and always pushing for growth. They’re also more sensitive to macroeconomic swings. In contrast, SMBs tend to be simpler, more transparent, and often insulated from broad economic trends. Many have been coasting in “cruise control” mode for years leading up to a sale.
Organic growth plans are where searchers have the chance to beat the statistics. If there’s genuine unmet demand and limited competition, projecting an above-average CAGR may be reasonable. But self-skepticism is critical. Growth strategies that rely on launching new revenue lines or stealing share from competitors are difficult to execute—and often modeled with unrealistic optimism. We frequently see plans assume 100% success with zero cost. That’s rarely how things play out.
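One antidote is to probability-weight each initiative and charge it the cost of attempting it, rather than assuming every idea lands for free. A minimal sketch, with entirely hypothetical initiatives and numbers:

```python
# Probability-weight growth initiatives instead of assuming they all succeed at zero cost.
# Every initiative name, probability, and dollar figure below is hypothetical.

initiatives = [
    # (name, incremental annual EBITDA if it works, probability of success, annual cost to attempt)
    ("new service line",           300_000, 0.35, 120_000),
    ("second territory",           200_000, 0.50,  80_000),
    ("price increase on renewals", 150_000, 0.70,  10_000),
]

expected_uplift = sum(ebitda * p - cost for _, ebitda, p, cost in initiatives)
best_case = sum(ebitda - cost for _, ebitda, _, cost in initiatives)

print(f"best-case uplift (what pitch decks assume): ${best_case:,.0f}")
print(f"probability-weighted uplift:                ${expected_uplift:,.0f}")
```

The gap between the two numbers is usually large, and it is a useful prompt for the diligence questions that actually narrow the uncertainty.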
The payoff for approaching growth with thoughtful skepticism is huge. It helps you make better bids in competitive processes, uncover hidden gems, and steer clear of underperformers. More than that, skeptical planning is one of the best ways to deeply understand a business. Ask questions, form hypotheses, test them, keep digging, repeat.