If you wanted to see how your nonprofit’s fundraising stacks up against “industry standards,” you’d likely turn to a benchmark report like the M+R 2015 Benchmark report.
That’s fine if you know what you’re looking at. But nonprofit benchmark reports have several flaws, and if you’re not careful you can draw the wrong conclusions.
1. Benchmark reports are biased
For one thing, the results are biased. I don’t mean that the researchers behind the M+R study and others like it deliberately skew their results. Instead I mean biased in a statistical sense: these reports aren’t based on a random sample of nonprofits.
Studies like M+R’s draw their data from organizations that volunteered to share their information with the authors. Some studies even examine organizations the authors have preexisting relationships with, i.e., their clients.
That means most benchmark studies aren’t representative of all nonprofits—which is problematic since the conceit of such studies, their disclaimers aside, is that they are representative.
About all you can say is that the results of the study are true of the organizations studied. That’s interesting, but not very helpful for evaluating your own marketing.
2. You can’t draw useful conclusions from benchmark data
Let’s say you work at an animal welfare group. You’re looking to run some Google ads and want to see if there’ll be competition from your peer groups.
Turn to page 38 of the M+R 2015 Benchmark report. You’ll find that 63 percent of animal welfare groups run paid search ads. So you’re behind the curve.
But wait! Is that what the report really says? Let’s take a look at the data.
The M+R report looks at 85 nonprofits, grouped into seven sectors. Of these, only eight are classified as “wildlife and animal welfare.”
What does it mean that there are 85 total nonprofits and just eight animal welfare groups? It means we can’t draw useful conclusions about animal welfare groups (or any other group) from this data.
Not with any reliability, anyway. For animal welfare groups, the margin of error is ±34 percent at a 95 percent confidence level.1 For everyone studied, it’s ±10 percent. And this assumes we are dealing with a random sample, which we aren’t.
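If you want to check that arithmetic yourself, the textbook normal-approximation formula for the margin of error of a proportion gets you close. Here’s a minimal sketch in Python; the function name is mine, the 58 percent overall figure is inferred from the 48–68 range below, and with a sample as small as eight the approximation is only a ballpark:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion at roughly 95 percent
    confidence (z = 1.96), via the normal approximation
    z * sqrt(p * (1 - p) / n). Very small samples strain this formula."""
    return z * math.sqrt(p * (1 - p) / n)

# 63 percent of the 8 animal welfare groups sampled run paid search ads.
print(f"Animal welfare (n=8): ±{margin_of_error(0.63, 8):.0%}")   # ~±33%, close to the report's ±34%
# Assumed: the overall rate is about 58 percent across all 85 nonprofits studied.
print(f"All nonprofits (n=85): ±{margin_of_error(0.58, 85):.0%}")  # ~±10%
```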
So according to M+R’s report, anywhere from 29 to 97 percent of animal groups use paid search ads. Between 48 and 68 percent of all nonprofits do so.
Yeah. That narrows it down.
In fact, you’d need a sample of 370 (randomly sampled) nonprofits just to get a margin of error of ±5 percent. Even then, the range for animal welfare groups would only narrow to between 58 and 68 percent.
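A figure like that comes from inverting the same formula to solve for the sample size. Another quick sketch under the same assumptions (the answer shifts with the proportion you plug in, which is why published rules of thumb land anywhere from roughly 360 to 385):

```python
import math

def sample_size(moe, p=0.5, z=1.96):
    """Smallest random sample giving the desired margin of error
    for a proportion, via n = z^2 * p * (1 - p) / moe^2."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

print(sample_size(0.05))        # 385, with the most conservative p = 0.5
print(sample_size(0.05, 0.63))  # 359, using the reported 63 percent
```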
3. Sector-by-sector breakdowns are meaningless
Nonprofit benchmark studies typically categorize their results into sectors: education nonprofits, internationally focused nonprofits, environmental nonprofits, and so forth.
This breakdown is useful only insofar as groups that focus on the same set of issues are in any way comparable when it comes to their business and marketing practices. Which is a ridiculous assumption to make.
Can you really compare the ASPCA, which had $166 million in revenue in 2013, to your local animal shelter? By the usual benchmarking taxonomy, you should, because both are animal-welfare groups.
Or let’s think about the retail world for a minute. If you ran the Apple Store, would you benchmark your sales results against other technology stores, like Best Buy? This is a really odd comparison to make, as this table shows:
Retailer        Sales/square foot
Apple Store     $4,551
Tiffany's       $3,043
Coach           $1,532
Best Buy        $808
Sources: Forbes/eMarketer; Seeking Alpha
The Apple Store isn’t even in the same league as Best Buy on sales per square foot, a common measure of retail performance. Yet a simplistic definition of sectors, like the one nonprofit benchmark studies use, would lead you to believe they’re peers.
So when you compare yourself to your peers, make sure they really are your peers. Benchmark reports may not give you the data you need to do that.
Why are you measuring yourself against other nonprofits at all?
Charities are notorious basket cases, rife with poor management and bad practices. Because they don’t face the same market pressures that for-profit businesses do, those bad practices can persist for years before anyone fixes them.
So don’t measure your success as a fundraiser against biased data in benchmark studies. Instead, measure your success against your goals: are you meeting revenue targets, retaining enough donors, and acquiring enough donors?
I welcome your feedback. Tell me what you think in the comments.
1. In simple terms, the margin of error measures how precise your data is: the bigger your sample, the smaller your margin of error, and the more precise your estimate.
Let’s say a poll reports that a politician is favored by 50 percent of voters, with a margin of error of ±3 percent. That means his actual popularity (the number you’d get if you could somehow talk to every single voter instead of a random sample) is most likely somewhere between 47 and 53 percent. The sample can’t pinpoint it exactly.
A margin of error of ±34 percent means your sample is basically useless for predicting the actual figure.