There has been a bunch of Twitter conversation this week about venture capital economics and performance, stemming from an HBR post titled “Venture Capitalists Get Paid Well to Lose Money.” It was penned by Diane Mulcahy, a senior fellow at the Ewing Marion Kauffman Foundation (where she evaluates venture fund investments).
Mulcahy makes three basic points:
- The VC market has performed terribly for more than a decade, but individual VCs still get paid exceedingly well (thanks to long-term management fees).
- VCs should have more skin in the game via larger GP fund commitments.
- There has been too little innovation on the VC model.
Of these, it’s the first one that interests me most. Mulcahy cites Cambridge Associates data, although she has previously written about performance issues using returns from Kauffman’s own venture portfolio. The problem for me is that neither data sample is terribly reliable.
To be clear, I’m not casting aspersions on Mulcahy’s work. This isn’t her fault. It’s the industry’s fault.
Cambridge Associates, for example, only includes a total of 93 U.S. VC funds in its samples between 2009 and 2011 – which works out to less than a quarter of the total number of funds raised in those years. That figure may be considered statistically significant in the abstract, but is it representative of VC fund sizes? What if it includes a majority of the multi-billion dollar funds? Or almost none of them? Is there reporting bias – either via the ‘best’ firms holding their data close (since they don’t really need Cambridge’s recommendation) or via the ‘worst’ firms not wanting to air their dirty laundry?
Cambridge, of course, won’t disclose the names of included funds – even though releasing a list of GPs would not actually help anyone figure out fund-by-fund performance data. That’s probably because its sample may only include funds it has recommended to its clients, which is yet another way bias gets introduced.
Other data providers aren’t much better, with none of them basing benchmarks on anything close to half of the industry.
In short, we just don’t know how venture capital performs as an asset class. What we’re left with is vague data suggesting that it underperforms, and VCs tweeting that they outperform.
There is a case to be made that none of this matters. Venture capitalist Brad Feld once said during a conference that he doesn’t care if VC survives as an “asset class,” so long as his firm and a few others keep being able to raise new funds and invest in promising startups.
But, from my perspective, LP allocations often are set by board-level executives who use benchmarks as a starting point. If venture (net of fees) is regularly bested by the S&P 500, then perhaps it doesn’t get an allocation in the first place. Fewer funds get raised and, eventually, fewer startups get funded and fewer jobs are created. Or perhaps the fee structure just needs a tweak in order to keep VC above the public market mark, thus maintaining those allocations.
So we need better data, which begins with more GPs sharing their information with third-party data providers (perhaps right after fundraising, so as to put lawyers at ease). And please don’t argue that such disclosures will harm your competitive position, unless you also want to argue that shops like Union Square Ventures and Spark Capital (which both have publicly-reporting LPs) are desperately grasping to overcome the disadvantage of me knowing their fund IRRs. Moreover, what I’m really talking about here is building better benchmarks, rather than seeking fund-name disclosures.
If VC truly is underperforming (as all of the data seems to indicate), then Mulcahy is correct that the industry’s economic model needs an overhaul. And LPs probably need to force the issue, since VCs aren’t going to alter lucrative fees voluntarily. If the industry is overperforming (as it’s supposed to), then we’re all good. But right now there is no way any of us can make such determinations, and the future of startup funding may depend on it.