# How To Run A Beta Test... Or Not?

Originally published on 13 September 2004 at
http://www.namesuppressed.com/syneryder/2008/treo-650-mp3-problem.shtml
----------------------------------------------------------------------


I had high hopes for this beta test. It was a 0.1 upgrade release of a
product I'd been selling for 3 years. Most of the bugs had already
been squashed, so I intended to spend most of my time surveying
testers and soliciting ideas for improvements. I thought I could build
word-of-mouth by using a large beta testing team. The expectation was
that this would be a very quick and simple beta.

Oh man. How wrong could I have been?

Several aspects of the test went okay, but the changes I made to how
I run my beta tests caused problems. These are some of the lessons I
learned from the latest test.



## Build a huge database of beta candidates

One of the things namesuppressed has always done right is build a
database of beta testing candidates. It tracks contact info,
demographic data, hardware and software, and other metadata. I'm glad
for that - it's clear you need a *lot* of candidates just to find a
good testing team to choose from. Asking candidates to complete an
application form also helps to weed out testers who are only
interested in freebies.

35% of candidates in the database were deemed ineligible for the test.
Eligibility criteria included compatible software/hardware, a history
of answering emails, and a history of providing feedback. That left
65% of the database to choose from.
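
To make the filtering concrete, here's a minimal sketch of what such a
candidate database and eligibility check might look like. The schema,
column names and thresholds are my own illustrative assumptions, not
the actual namesuppressed database.

```python
# A minimal sketch of a beta candidate database and eligibility filter.
# The schema, columns and thresholds are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE candidates (
        id             INTEGER PRIMARY KEY,
        name           TEXT,
        email          TEXT,
        os             TEXT,     -- e.g. 'Windows XP'
        host_app       TEXT,     -- e.g. 'Paint Shop Pro 8'
        emails_sent    INTEGER,  -- emails we've sent this candidate
        emails_replied INTEGER,  -- how many of those they answered
        feedback_count INTEGER   -- feedback items from past tests
    )
""")
conn.executemany(
    "INSERT INTO candidates VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    [
        (1, "Alice", "alice@example.com", "Windows XP",
         "Paint Shop Pro 8", 5, 4, 3),
        (2, "Bob", "bob@example.com", "Mac OS 9",
         "Photoshop 6", 5, 1, 0),
    ],
)

# Eligibility: compatible software/hardware, a history of answering
# emails, and a history of providing feedback.
eligible = conn.execute("""
    SELECT name, email FROM candidates
    WHERE os LIKE 'Windows%'
      AND emails_replied >= emails_sent / 2
      AND feedback_count > 0
""").fetchall()
print(eligible)   # only Alice qualifies in this toy data
```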

Of the invitations I sent to potential candidates:

* 22% of the emails sent to candidates bounced
* 2.5% of candidates declined the invitation
* Only 40% of invitations sent resulted in acceptances
* The rest (more than 35%) never replied.



## Never roll your own if you don't have to

I decided to use the free mailing list software provided by my
webhost, even though it had some drawbacks. I thought I could
overcome the drawbacks by writing my own software to compensate for
them. In fact, I already sell software that does this. I thought I'd
make a couple of modifications to the program I sell and all would be
fine....

Uh, no. The extra software & modifications I wrote caused major
problems. People started getting two copies of every email, and my
code mangled HTML emails, even modifying them in ways that triggered
spam filters. Hotmail users never received any emails until the problem
was fixed - and then they were surprised by a sudden flood of
messages. Fixing those problems was one of the most stressful parts
of the whole beta.

I should have just started a free discussion group on Yahoo Groups, or
purchased an account with Topica. Both are proven solutions. By
writing my own software, I wasted a lot of time that could have been
better spent. But on the bright side, I fixed some bugs in my own
software.



## Make your testers opt-in themselves

Before the test began, I asked all testers personally if they would
like to join an email discussion group for the beta test. Amazingly,
everyone agreed, which even I hadn't expected. However, things soon
deteriorated. Some testers unsubscribed from the list as soon as the
test began; more complained later and unsubscribed (or asked to be
unsubscribed).

* About 10% of beta testers unsubscribed as soon as the test began
* Another 10% asked to unsubscribe during the test
* Some testers couldn't find the unsubscribe link

A problem was that some testers didn't recognize the emails when they
first started receiving them. Even though the list was "confirmed
opt-in" (we had written confirmation from each tester's email address),
it would have been better if testers had performed some action to
opt-in themselves (eg clicking a weblink). This would help them
realize they were joining an email list. Explaining how to identify
the emails would also help (eg "The subject line of all beta test
emails will begin with [betatest]" or something similar).



## Expect the unforeseen

It's very rare for a beta test to go exactly as you plan. The whole
idea of a beta test is to locate problems you couldn't find yourself.
That approach needs to be applied to the beta testing process too.

* About 10% of testers had personal issues that restricted their
 ability to test.
* Expect the beta test to take about twice as long as planned... so if
 you think you can get through with just 40 hours of work, expect to
 take 80 hours instead.



## Organize several beta testing groups

Anyone who has run beta tests before will know that it can be hard to
keep enthusiasm up for the duration of the testing period. The best
way to counter this is to bring in new groups of testers at regular
intervals (every second beta). I didn't have enough candidates to try
that this time. It does mean, however, that I can show you a graph of
enthusiasm levels, measured by the frequency of messages throughout
the testing period:

Chart showing frequency of beta tester feedback
http://www.namesuppressed.com/syneryder/synegfx/beta-email.gif

Notice that enthusiasm is highest at the very start, and is maintained
for about a week. It is renewed slightly for the second beta, but
doesn't last much beyond that. This is not a criticism of the beta
testers in any way; it's just something to be expected.
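
If you want to measure this yourself, a rough proxy is to count the
list messages per week from a saved mailbox. Here's a minimal sketch;
the mbox filename and the [betatest] subject tag are assumptions.

```python
# Minimal sketch: count beta list messages per week from a saved mbox
# file, as a rough proxy for tester enthusiasm over time.
import mailbox
from collections import Counter
from email.utils import parsedate_to_datetime

per_week = Counter()
for msg in mailbox.mbox("betatest-list.mbox"):   # assumed archive file
    subject = msg.get("Subject", "")
    date_header = msg.get("Date")
    if "[betatest]" not in subject or not date_header:
        continue
    sent = parsedate_to_datetime(date_header)
    year, week, _ = sent.isocalendar()
    per_week[(year, week)] += 1

for (year, week), count in sorted(per_week.items()):
    print(f"{year}-W{week:02d}: {count} messages")
```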



## Be exceptionally clear about expectations and etiquette

Our testers were confused about the purpose of our beta-tester email
list. Here's what some of them thought it was for:

**Announcements Only**
Some testers thought the list would only include announcements of new
beta downloads. We explained in the beta invitations that it was a
discussion group they could participate in, but apparently it wasn't
clear to everyone.

**Bug Reports Only**
Some testers thought the list was for making bug reports only. They
thought that telling everyone the bugs would reduce duplicate bug
reports. It's a nice idea, but when you have hundreds of messages
it's difficult for everyone to keep track. You either get lots of
duplicate reports anyway, or lots of people who don't report their
bugs because "someone else probably found that bug already".

**Socializing**
Lots of testers thought the list was so they could talk to other
testers about anything they wanted. Actually, we encouraged this
idea, thinking it would make testers feel comfortable. It didn't quite
work - some felt comfortable, others felt alienated.

**Tutorials**
Some testers thought the purpose was to share images created using the
program we were testing, and teach others how to create those images.
I hadn't anticipated that, and it was too late to accommodate it.
Tutorial lists involve lots of large attachments, and group members
with dialup connections or small email inboxes couldn't handle them.
Also, some people like tutorial lists while others really dislike
them, which caused a rift in the beta group.

So, what was our list really meant to be? A combination of the above
- we announced all new betas on the list, expected bugs to be reported
to the list, and expected some off-topic chat... even the occasional
picture post. But we didn't make this clear in our initial beta test
invitations, so no one knew what to expect or what the boundaries
were. I guess that's because we didn't know where to set the
boundaries either.



## Manage Conflicts Within The Testing Group

On the surface, everything was fine - the test group seemed friendly,
very active and quite productive. Behind the scenes, things were
falling apart. I received angry and upset emails from people who were
frustrated by the volume of emails, people frustrated at the level of
"off topic" discussion, people who felt shy or intimidated by other
testers, and even people who just didn't get on with the other
testers. Some statistics:

Chart showing beta tester emotions
http://www.namesuppressed.com/syneryder/synegfx/beta-positive.gif

* 66% of beta testers had some kind of negative experience.
* 33% of beta testers said they felt intimidated.
* 25% said they felt anger during the beta test.

I'm not sure what I needed to do to fix this. Certainly I needed a
better understanding of how to manage virtual communities, and how to
develop a stable culture within the group. Perhaps I needed to set up
two communities with different rules.



## Take a final beta test survey

I was disappointed with the lack of response to my final beta tester
survey. Testers were told - even in the beta invitations we sent -
that completing the survey was a necessary part of testing. Even so,
the survey response rate was low:

* 36% responded to the first survey request
* Another 18% responded when we sent a reminder.
* Overall, just 55% responded to the final survey

The surveys are extremely important. They provide measurable feedback
on pricing, product features and general opinions of the product. We
use the data to calculate maximal profit curves and select features to
add in later versions. It's crucial information, so dropping the
surveys isn't an option.
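
The article doesn't spell out how those profit curves are calculated,
but as a rough sketch: if the survey asks each tester the most they
would pay, you can estimate revenue at each candidate price and look
for the peak. The numbers below are made up purely for illustration.

```python
# Rough sketch of a revenue curve built from survey pricing answers.
# The willingness-to-pay figures are made-up illustration data, and
# this is not necessarily the author's actual method.
willingness_to_pay = [10, 15, 15, 20, 20, 25, 30, 40, 50]  # USD per tester

for price in sorted(set(willingness_to_pay)):
    buyers = sum(1 for w in willingness_to_pay if w >= price)
    revenue = price * buyers
    print(f"price ${price:>2}: {buyers} buyers, revenue ${revenue}")
# The peak of this curve suggests the revenue-maximizing price
# (marginal cost is near zero for downloadable software).
```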

Another reason for the surveys is to elicit feedback from quiet
testers. There are always some testers who never send bug reports or
talk on the mailing list. Many of them will respond to an anonymous
web survey though. The feedback is useful and often brutally honest -
exactly what you want.

So, how to increase the response rate?

**Explain that completing the survey is required**
You need to use the word "required" and make it stand out. Some
testers told me that they'd thought the survey was optional.

**Offer the survey early on**
Perhaps if the surveys are given out when enthusiasm is at its
highest, I would have had a better response rate. As it was, I handed
out the surveys during Beta 3, the time of lowest enthusiasm.

**Double the number of testers**
I've had a 50-55% survey response rate on at least two beta surveys.
By doubling the number of testers, I would hope to get twice the
number of responses.

**Only give an unlocking code once the survey is completed**
That's probably the best solution, and should at least demonstrate how
many testers are interested in the program. But depending on how it's
done, it may compromise the anonymity of the survey, and that may
reduce the quality of the answers you get.
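
If I went that route, one way to preserve some anonymity would be to
store the answers separately from the record of who completed the
survey. A minimal sketch, with all names and storage invented for
illustration:

```python
# Minimal sketch: hand out the beta unlock code only once the survey is
# completed, while keeping answers separate from the tester's identity.
# All names, values and storage here are illustrative assumptions.
UNLOCK_CODE = "BETA-1234-5678"   # placeholder for the real unlock code

completed = set()   # who has finished the survey (identity only)
responses = []      # the answers themselves, with no identity attached

def submit_survey(email, answers):
    """Record a completed survey and return the unlock code to send back."""
    responses.append(answers)   # stored anonymously
    completed.add(email)        # we only record *that* they completed it
    return UNLOCK_CODE

code = submit_survey("tester@example.com", {"fair_price_usd": 20, "rating": 4})
print(f"Thanks for completing the survey! Your unlock code is {code}")
```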



## In summary: what should we do in future?

**Use proven software solutions.**
If you need to use software in your testing, use a pre-written
program. Don't write your own unless necessary. You should only be
debugging one program in your tests.

**Set clear expectations and boundaries.**
Tell testers you expect them to give you feedback, and set a minimum
frequency (eg at least one email a week). Let them know if chit-chat
is okay, and how much is acceptable. Set maximum email attachment
sizes & frequency. List anything else you can think of, and tweak the
expectations as you get feedback/complaints from testers.

**Recruit 10 times as many people as you need.**
If you want a database with 30 really good beta testers in it, you'll
need *at least* 300 people to complete your beta tester application
form. And don't forget to start a database to keep track of them all.

**Invite 6 times as many testers as you need.**
Nope, this isn't the same as above... this is saying that when you
choose the beta testers from your database and invite them into your
beta team, invite 6 times as many as you need. If you want 10 survey
responses, you should probably invite 60 to test. This is because
only 40% accept the invitation, 10% have unforeseen problems, another
80% of that continue participating, and 50% of that provide
feedback.... you need to cover all these.
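
With the quoted rates the yield works out to roughly one response for
every seven invitations, so treat the 6x figure as a lower bound and
adjust for your own drop-off rates.

```python
# Quick check of the invitation funnel using the rates quoted above.
# Your own rates will vary from test to test.
import math

accept_rate   = 0.40   # accept the invitation
no_problems   = 0.90   # 10% hit unforeseen personal problems
keep_testing  = 0.80   # keep participating through the whole test
give_feedback = 0.50   # actually complete the final survey

yield_per_invite = accept_rate * no_problems * keep_testing * give_feedback
print(f"responses per invitation: {yield_per_invite:.3f}")   # ~0.144

wanted = 10
needed = math.ceil(wanted / yield_per_invite)
print(f"invitations needed for {wanted} responses: about {needed}")
```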

**Keep to a fixed schedule.**
I planned to release a new beta every 7 days, and judging from the
enthusiasm graph this would have been ideal. Two-week intervals would
have been too long.

**Introduce new testers every second beta.**
By the time the 3rd beta came around, everyone was understandably
jaded.

**Act when enthusiasm is highest.**
Get all the important jobs done very early on when enthusiasm is at
its highest.

**Survey your testers.**
It's better to have some feedback than no feedback at all. Survey them
when you deliver their 2nd beta.



## Project Statistics

Name:
Softener 1.20

Duration:
4 weeks [31 days, 120 hours work]

Development Platforms:
Windows 98SE, Windows XP Professional

Release Platforms:
Windows 95/98/98SE/ME/2000/XP

Lines Of Code:
3239 (Softener = 1661, nsPSPlugin = 1578)

Development Tools:
Textpad 4.6.2, Borland C++ Builder 3, FilterMeister 0.4.21, Ghost
Installer 3.7, Ghost Installer 4.1, PADGen, Resource Hacker, XVI Hex
Editor, Microsoft Virtual PC 2004, MySQL 3.23.44, MySQL Admin, MySQL
Front 2.5, Beyond Compare 2, Jasc Paint Shop Pro 7, Jasc Paint Shop
Pro 8, Jasc Paint Shop Pro 9 Beta, Jasc Paint Shop Pro Studio Beta,
Adobe Photoshop 6 Tryout, Adobe Photoshop CS Tryout, Megalux Ultimate
X 1.3, Microsoft Wordpad, ezmlm, namesuppressed WebScriber, other
bespoke namesuppressed software

----------------------------------------------------------------------
(C) 2004 Kohan Ikin / http://www.namesuppressed.com/syneryder/
----------------------------------------------------------------------