Like many authors, I run an email newsletter to engage with readers and market my book. Building an audience for that newsletter is a slow process and there are few shortcuts. Most of the time, it's about engaging online through your blog or social media and encouraging people to sign up. Yes, there are places out there that will sell you a list of email addresses, but it's a bad idea: purchasing such a list typically breaches privacy legislation in several jurisdictions because the people on it never consented, and these lists often contain spam traps (fake or abandoned email addresses monitored by anti-spam organisations to identify offenders).
One technique often used among indie authors to accelerate sign-ups is running giveaways. Typically, these work by offering a chance at winning a prize in return for signing up to the newsletter (or liking a Facebook page, and so on). As an author, how much mileage you get from these giveaways depends on your reach.
I've participated in a few of these competitions over the past twelve months, all with the same group. I've been careful to ensure the giveaways I've joined take an ethical and legal approach, particularly around the competition mechanics and compliance with privacy obligations. In each case, the entry form made it extremely clear that entering involved signing up to the newsletters of each participating author, the authors were clearly named, and to complete the entry the person had to explicitly acknowledge they were being added to each author's newsletter.
Why am I stressing this? Because even with crystal-clear terms, people enter and then immediately complain about receiving emails, or simply mark them as spam, which can genuinely hurt an email newsletter. The issue was driven home for me when the email service I used to manage my newsletter was blacklisted by one of the major spam-monitoring companies, affecting many of its users. Coming out of that, I double-checked I was following accepted best practice for managing my mailing list and the newsletters themselves. I ticked all the right boxes (only subscribers for whom I could demonstrate explicit consent, a clear privacy policy, emailing only once a month, and so on), but as an extra precaution I purged approximately 1,500 subscribers who hadn't opened one of my newsletters in the past six months.
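If you want to do a similar purge yourself, the sketch below shows the general idea, assuming your email service can export subscribers to a CSV with a last-opened date. The file name, column name, and date format here are placeholders of my own, not any particular provider's actual export format.

```python
import csv
from datetime import datetime, timedelta

# "subscribers.csv" and the "last_opened" column are placeholders; every
# email service exports this data a little differently.
CUTOFF = datetime.now() - timedelta(days=180)  # roughly six months

keep, purge = [], []
with open("subscribers.csv", newline="") as f:
    for row in csv.DictReader(f):
        last_opened = row.get("last_opened", "")  # blank if they've never opened
        if last_opened and datetime.fromisoformat(last_opened) >= CUTOFF:
            keep.append(row)
        else:
            purge.append(row)

print(f"Keeping {len(keep)} subscribers, purging {len(purge)} inactive ones")
```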
While reviewing my subscriber data, I decided to run an experiment with the next giveaway I participated in. Normally, on completion of a giveaway, I'd send the new subscribers a welcome email letting them know what to expect from my newsletter and giving them an easy unsubscribe button if they didn't want my emails. This time, I tested something new: I sent them a confirmation email explaining how and why I had their details (the giveaway that had just finished), what they would get in my newsletter and how often, and then asked them to confirm whether they wanted to remain subscribed. If they didn't click the confirm button, I promised not to contact them again.
Going into this, I knew the number of people confirming their subscription would be low (that's the nature of these giveaways), but I wanted to know two things: how many entrants would actually confirm, and whether those who did would be more engaged, and less likely to mark my emails as spam, than subscribers gained from previous giveaways.
I've completed stage one of this experiment (the confirmation email), so here are the initial numbers.
I've captured the statistics from this latest giveaway (a set of Neil Gaiman books) as well as from two previous campaigns (a Star Wars collection and a Dresden Files collection). The two previous campaigns were run in the usual welcome-email manner, while the Neil Gaiman campaign was run as described above. After sending out the confirmation email, I gave entrants over two weeks to confirm their subscription before extracting the data.
Open rates across the three campaigns were similar, with the test case sitting in between the two previous campaigns at 34%. This isn't a great open rate, but it's about what I'd expect from an onboarding exercise like this.
The click-through statistics here do not include clicks to confirm a subscription in the test case, only clicks on hyperlinks to other content I reference in the email (a link to my book, website, privacy policy, etc.). The test case had the lowest click-through rate, but all three campaigns were similar (and, to be honest, low).
As expected, spam complaints were noticeably lower in the test case (0.19%) than in the other two campaigns, though they remained higher than I expected. It's possible some of these were spam algorithms automatically flagging the email rather than deliberate complaints; in absolute terms, only 9 of the 4,731 emails were flagged as spam.
I wasn't sure what to expect when I started this experiment. I knew the subscription rate among entrants was likely to be low, and I was right: only 8% of entrants confirmed their subscription. Compared with the 89-92% retained from the other campaigns (people who didn't unsubscribe or mark the campaign email as spam), that looks like a pretty bad result. However, it will be telling to see how engaged these new subscribers turn out to be, and I'll be monitoring that over the next few months to answer that question.
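For anyone who likes to see the arithmetic, here is a rough back-of-the-envelope check on those headline figures. It assumes the 4,731 confirmation emails map one-to-one to entrants, which is close enough for this purpose.

```python
# Back-of-the-envelope check on the figures quoted above.
emails_sent = 4731      # confirmation emails sent for the test giveaway
spam_complaints = 9
confirm_rate = 0.08     # share of entrants who clicked the confirm button

spam_rate = spam_complaints / emails_sent
confirmed = emails_sent * confirm_rate
retained_low, retained_high = emails_sent * 0.89, emails_sent * 0.92

print(f"Spam complaint rate: {spam_rate:.2%}")                        # ~0.19%
print(f"Confirmed subscribers kept: {confirmed:.0f}")                 # ~380
print(f"Kept under the old approach: {retained_low:.0f}-{retained_high:.0f}")
```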
Since posting the above I've now sent out my next newsletter. As promised, I've kept the test group separate so I can compare how they engage against the control group from the earlier campaigns.
Across the board, the test group performed better.
The stats above are a nice improvement, but considering the tiny subscriber rate, the return here is... not great. If these stats played out over a campaign that attracted 5,000 entries, I'd predict roughly 50 of the triple-confirmed group would open the next newsletter, compared with an estimated 430 from the control group (using the very low open rates on this particular campaign).
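Spelled out, that projection looks something like the sketch below. The open rates used here are my rough working assumptions, chosen to reflect the low engagement on this campaign, rather than exact measured values.

```python
# Rough projection for a hypothetical giveaway with 5,000 entries.
# The open rates are illustrative assumptions, not measured values.
entries = 5000

# Confirmation approach: only entrants who confirm stay on the list.
confirm_rate = 0.08        # observed in this experiment
test_open_rate = 0.125     # assumed open rate for the confirmed group
test_opens = entries * confirm_rate * test_open_rate

# Old approach: everyone stays unless they unsubscribe or complain.
retention_rate = 0.90      # midpoint of the 89-92% seen previously
control_open_rate = 0.095  # assumed (very low) open rate for this campaign
control_opens = entries * retention_rate * control_open_rate

print(f"Projected opens, confirmed-only list: {test_opens:.0f}")     # ~50
print(f"Projected opens, old approach:        {control_opens:.0f}")  # ~430
```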