Bristol Organizations

An Organization Made Up Of Organizations

By Nick Ellinger, VP of Marketing Strategy, DonorVoice


A significant factor in a donor’s decision to give is how he or she answers the question, “How am I going to feel if I make this gift?”  So the job of the fundraiser is to determine how the factors under an organization’s control can be presented most effectively.


One major set of issues involves “overhead,” “impact,” and “control.”


Overhead rates are worse than a below-average way of evaluating charities.  If they even rose to the standard of “mediocre,” we could tolerate their being part of the mix as people evaluate nonprofits.


In reality, they are an actively harmful way of evaluating nonprofits.  Up to a point, the nonprofits that do the “worst” on overhead are actually the better and more effective organizations: they avoid the starvation cycle outlined in Gregory and Howard’s article and invest enough to build the infrastructure of a vital, vibrant organization.


The problem stems from sloth: the overhead rate of a nonprofit is easy to measure.  It’s comforting to have one easy number that tells you how good something is, even if the metric is built on a foundation of absolute horse puckey.


Thankfully, people will look at actual effectiveness over overhead ratios if given the choice between the two.  But if people are given only “overhead” by which to gauge charities, they will crawl toward this mirage and, when there is no oasis there, try to drink the sand.


So it’s vital we give folks a way to measure whether nonprofits work and are truly effective.  That’s why DonorVoice and the DMA Nonprofit Federation worked together, using DonorVoice’s Pre-test Tool, to assess what can be done to get donors to look away from overhead and toward more meritorious measures (read: almost anything else).  If you’d like to see the DMA’s free webinar on the topic, it’s here.


First, a bit about the pre-test tool, which you can read more about here.  We create a grid of the factors we want to assess (in this case: trust indicators, who your gift helps, how overhead is presented, donor control, and donor identity) and several treatments for each variable.


Then we create a simulated communication using five different versions of each variable and ask donors which they prefer, much like your eye doctor asking whether you see the eye chart more clearly with lens A or with lens B.  From that data, you can determine not only which version of each variable works best, but which variable is most important to get right.
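(The Pre-test Tool’s actual model is DonorVoice’s own; as a rough sketch of the kind of analysis this sort of choice data enables, here is a minimal Python illustration.  The variable names follow the grid above, but the preference shares are invented for the example.  It picks a winning version for each variable and ranks the variables by how much the choice of version matters.)

```python
# Minimal sketch (not the DonorVoice Pre-test Tool): given hypothetical
# preference shares for each version of each variable, find the winning
# version per variable and rank variables by how much the choice matters.

# Invented data: share of head-to-head comparisons each version won.
preference_shares = {
    "trust indicators":      {"v1": 0.18, "v2": 0.22, "v3": 0.20, "v4": 0.19, "v5": 0.21},
    "who your gift helps":   {"v1": 0.10, "v2": 0.31, "v3": 0.24, "v4": 0.20, "v5": 0.15},
    "overhead presentation": {"v1": 0.28, "v2": 0.12, "v3": 0.22, "v4": 0.18, "v5": 0.20},
    "donor control":         {"v1": 0.21, "v2": 0.19, "v3": 0.20, "v4": 0.22, "v5": 0.18},
    "donor identity":        {"v1": 0.33, "v2": 0.25, "v3": 0.18, "v4": 0.14, "v5": 0.10},
}

results = []
for variable, versions in preference_shares.items():
    best = max(versions, key=versions.get)                     # winning version
    spread = max(versions.values()) - min(versions.values())   # crude importance proxy
    results.append((variable, best, spread))

# Variables with the biggest spread are the ones most important to get right.
for variable, best, spread in sorted(results, key=lambda r: r[2], reverse=True):
    print(f"{variable}: best version = {best}, spread = {spread:.2f}")
```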


This process also has the advantage of testing thousands of variations at once.  With five variables and five versions of each, we were able to find winners for each variable very quickly.  Using traditional testing methods, as Roger noted here, it would take 20 A/B tests per year for 125 years to get similar results.
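(If you want the back-of-the-envelope version of that scale, here it is; this is an illustration of the combinatorics only, assuming five variables with five versions each, not Roger’s exact math.)

```python
# Back-of-the-envelope combinatorics (an illustration, not Roger's exact math).
variables = 5                         # factors in the grid
versions_per_variable = 5             # versions tested for each factor

combinations = versions_per_variable ** variables
print(combinations)                   # 3125 distinct possible communications

tests_per_year = 20                   # the sequential A/B testing pace cited above
print(combinations / tests_per_year)  # on the order of 150 years, one test at a time
```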


I don’t know about you, but I don’t have that kind of time.


Part of the challenge with traditional A/B testing, then, is the pressure to show results.  As a result, the traditional approach is to test incrementally (red vs. blue envelope, teaser copy, a line here and there) in ways that lead nowhere in the long run.  As you’ll see, we were able to test some incremental concepts, but also some breakthrough concepts that, if implemented, have vast implications for how to raise funds.


The Test

For this test, we created a fictional cancer charity so those taking the test would have no predispositions about the brand.  (And we chose cancer because it is sufficiently widespread that you will get a mix of people who have or have had the disease, those who know someone who has, and those who have no personal experience.)   The sample was over 400 donors to other nonprofits.


Over a week of posts, we’ll go through a variable a day, showing what worked, what didn’t, and why.  Today we’ll start with donor identity, in part because the results will be so unsurprising to frequent Agitator readers.


The lessons learned were:

  • Donors preferred an identity statement that matched their own experience. If you personally knew what it was like to have cancer or to care for someone with cancer, that statement spoke to you and you liked copy that reflected it.

  • Getting an identity wrong was worse than not having an identity statement at all. For example, the statement “you haven’t experienced cancer in your life but you can imagine what it is like for those who have” polled worse than leaving this section blank.  This is because most people in the sample had had a personal experience with cancer, whether direct or indirect, so this inaccurate matching hurt results.

  • Overall results masked these differences by identity. That is, if you look at the overall results, it looks like identity hardly mattered.  But when you look just at people who held an identity (e.g., a direct connection), getting that identity right mattered very much.  So too may it be with your results: a test communication could look to have the same results as a control overall, yet have a substantial positive impact with one identity and a negative one with another (see the short sketch after this list).  That’s why it’s important to test different messages with different identities.
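To make that masking concrete, here is a small, hypothetical illustration; the response rates are invented for the example, not taken from the study.  A test message that looks flat overall can be lifting response among donors with a direct connection to cancer while depressing it among everyone else.

```python
# Hypothetical illustration (invented numbers, not from the study) of how an
# overall result can hide opposite effects by donor identity.

segments = {
    # segment: (share of sample, control response rate, test response rate)
    "direct connection to cancer": (0.5, 0.040, 0.052),  # test message wins here
    "no personal connection":      (0.5, 0.040, 0.028),  # test message loses here
}

overall_control = sum(share * control for share, control, _ in segments.values())
overall_test    = sum(share * test for share, _, test in segments.values())

print(f"overall control: {overall_control:.3f}")  # 0.040
print(f"overall test:    {overall_test:.3f}")     # 0.040 -- looks like a tie
for name, (share, control, test) in segments.items():
    print(f"{name}: control {control:.3f} vs test {test:.3f}")
```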

All in all, these data support what we’ve seen in the research literature: people value their identity over effectiveness information.  As one researcher commented, “they [donors] care about it [effectiveness], but not enough to sacrifice their own personal preferences when choosing a cause to support.”


We’ve thrown some shade at A/B testing here, but we should say that this is where A/B testing can come in handy: once you have a reason to believe.  That is, these pre-tests allow you to create a meaningful hypothesis about what will happen and why.  From there, you can put a test into practice: will your letter/email/phone call work better when its language matches the donor’s identity?

