The Production Solutions | PS Digital team believes in strategies and recommendations backed by best practices and real data. Each of our clients has a unique mission and supporter base that require a unique strategy to maximize a donor’s ability to connect with, and support, that mission.
But how does a nonprofit determine what that strategy is? That’s the question we posed to our Digital Producer Van Do. His response – TEST!
“Let’s start with a little background, to be sure we’re all talking about the same thing,” Van said when he sat down with us earlier this month. “Everyone is gearing up for year-end, and right now is an important time to not only understand what testing is, but which elements are best to test.”
A/B Testing (the most common form) is simply testing one variable at a time. “Testing two or more variables is called multivariate testing and we won’t cover that here. It requires a larger sample size, and it is easy to misinterpret the data,” said Van, who also noted that we will sit down to discuss multivariate testing later this year to help our nonprofit partners with upcoming fiscal planning.
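To make the “one variable at a time” idea concrete, here is a minimal Python sketch (with a hypothetical recipient list and made-up subject lines, not a PS Digital tool) of how an email file might be split into two versions that differ only in subject line:

```python
import random

# Hypothetical recipient list; in practice this comes from your CRM or email platform.
recipients = [f"supporter{i}@example.org" for i in range(10_000)]

# Versions A and B differ in exactly ONE variable: the subject line.
subject_a = "Your gift goes twice as far today"
subject_b = "Help us reach our year-end goal"

random.seed(42)            # make the random split reproducible
random.shuffle(recipients)

half = len(recipients) // 2
group_a = recipients[:half]  # receives subject_a
group_b = recipients[half:]  # receives subject_b

print(len(group_a), "recipients for A,", len(group_b), "for B")
```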
According to Van, in his more than 10 years of experience in digital production, the most common mistake nonprofits make is “sweating the small stuff.”
“When looking at testing, look to test things that will have a big impact on your constituents,” says Van. “Subject lines, graphics, layouts, calls-to-action, and copy are key elements to test. Results from these tests will speak volumes about your constituents’ preferences and give you greater confidence as you move forward with your campaign.”
“Testing, and the math behind it, doesn’t have to be scary,” Van adds. “Plan your test practically from the beginning by asking yourself a few questions: What do we want to accomplish from this campaign overall? What action(s) do we need our constituents to take to accomplish that goal? How can we increase our chances of them doing that?” These questions will help you determine which variables are best suited for your test.
In addition to selecting variables with the greatest likelihood of impacting your constituents’ actions, Van also suggests avoiding the pitfall of stopping your test prematurely. “Determining the sample size and threshold is important in any test. You need to decide how many responses, based on your sample size, will accurately answer the questions you’ve posed.”
According to James Dahl and Doug Mumford in their white paper Nine Common A/B Testing Pitfalls and How to Avoid Them, resisting the temptation to stop a test early is key to gaining useful data. “It is tempting to stop a test if one of the offers performs much better than the other in the first few days of the test,” write Dahl and Mumford. “However, when the number of observations is low, there is a high likelihood that a positive or negative lift will be observed just by chance, because the conversion rate is averaged over a low number of visitors.”
In short, says Van, “Determine the number of responses (based on your sample size) that will give you enough data points to draw meaningful conclusions. Once you do that, stick to it no matter what the data seem to be telling you early on – even if one version looks like a runaway winner out of the gate.”
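Van doesn’t prescribe a particular formula for setting that threshold, but one common approach is the standard two-proportion sample-size approximation. The Python sketch below (illustrative numbers only, assuming a 95% confidence level and 80% statistical power) estimates how many recipients each version would need before the test can answer your question:

```python
from math import sqrt, ceil

def required_sample_size(baseline_rate, relative_lift,
                         z_alpha=1.96,    # two-sided 95% confidence level
                         z_beta=0.8416):  # 80% statistical power
    """Approximate recipients needed PER VERSION for a two-proportion A/B test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Example: a 2% baseline response rate and a hoped-for 20% relative lift
# works out to roughly 21,000 recipients per version.
print(required_sample_size(0.02, 0.20))
```

Decide on that number before launch, then let the test run its course.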
By keeping the big picture of your test in view, tying it to your campaign goals, and being patient while the tests run, you too can avoid common and often costly testing mistakes.
“The goal of any test is to assess a situation, to make sure your message is actually effective. You don’t have to be afraid; in fact it’s probably more frightening not to test,” says Van. “We love testing at PS Digital and we’re here to help ease your fears!”
Still having test anxiety?
Feel free to reach out to a member of the Production Solutions | PS Digital team for help with testing your digital campaign strategies. In the meantime, here’s a glossary of some basic testing terms and tips to help you get started:
Conversion Rate: The percentage of users who take a desired action.
Sample Size: The number of observations in a sample; in other words, the total number of constituent actions needed on both the control and the test versions to complete the test.
Confidence Level: The probability that a value falls within a specified range of values. In practical terms, the reliability of your test result. Aim for 95%.
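As a tip for putting these three terms together, here is a hedged Python sketch (illustrative numbers only, not a PS Digital calculator) that uses a two-proportion z-test to check whether an observed difference in conversion rates clears the 95% confidence level:

```python
from math import sqrt, erf

def ab_significant(conv_a, sent_a, conv_b, sent_b, confidence=0.95):
    """Two-proportion z-test: is the difference in conversion rates
    statistically significant at the chosen confidence level?"""
    rate_a, rate_b = conv_a / sent_a, conv_b / sent_b
    pooled = (conv_a + conv_b) / (sent_a + sent_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return rate_a, rate_b, p_value, p_value < (1 - confidence)

# Illustrative numbers: 200 gifts from 10,000 emails vs. 240 gifts from 10,000.
# A 20% lift, but the p-value (about 0.054) just misses the 95% bar: more data needed.
print(ab_significant(200, 10_000, 240, 10_000))
```

This is also why Van’s advice about not stopping early matters: a lift that looks dramatic in the first few days can still fail the confidence check.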
Top 5 variables to test:
1. Subject lines
2. Graphics
3. Layouts
4. Calls-to-action
5. Copy
Additional resources on testing: