Beta Testers and Minimum Viable Product: An Important Connection

As we’ve grown the Orchid platform at G7, we’ve had a lot of internal push and pull around how best to structure an ‘early access’ program. This seems to go by a few names, but by ‘early access’ program, I mean any way in which you structure feedback from your early clients and users to help you define your product, value, and proof of concept.

How to Structure Early Customer Conversations
In school, we were taught to use customer surveys as a simple way to prove demand. Customer interviews and letters of intent were also recommended. These are great ideas because they have tangible outputs (quantitative data, namely), but in practice they tend to have major flaws. With surveys, you’re ‘burning an ask’: you, as the lowly entrepreneur, are only going to have a few chances to ask Ms. Customer for a favor, and you need to use them wisely. If you ask someone to fill out a survey, you have burned an ask, and if that survey is annoying, you’ve greatly reduced the chance of success in your next ask (say, asking for a purchase). On top of that, if you have an innovative idea, the customer rarely understands your concept well enough to provide well-informed answers. And further, if someone is willing to do a survey, they probably know you and want to be nice, so your response bias will suck.

The opposite end of the spectrum is to have completely unstructured discussions with early prospects. This doesn’t do much good either: you end up with a bunch of qualitative notes that are hard to turn into meaningful decisions. Ultimately, each of these approaches seems better suited to a different point in the venture, depending on the venture’s focus and market. In our case, we’re focused on mid to large enterprise clients, and we’ve just now started building the first iteration of our product. We’re very early and establishing our first set of active users.

MVP is a set of assumptions
Underlying all of this is what you’re building. When you set out with a great new idea, you have no choice but to make a ton of assumptions. In the case of software, assumptions about how the user will interact, why, what they expect to get out of it, how much time they’re willing to spend, and so on drive what your first build looks like. These assumptions are largely connected to what you assume will ultimately make you profitable (I assume people want a fun game on their phone and are willing to pay $2.00 for a week’s worth of entertainment). The idea behind a minimum viable product (MVP) is to build the leanest feature set possible that adequately addresses the core demand for your product. Talking to early adopters and potential users is a very inexpensive way of fine-tuning those assumptions, a first step in a larger discovery-driven planning approach.

Connecting Feedback to MVP
A friend and software entrepreneur building Hands On Test had a very good point: you must connect the structure of feedback in early customer testing with proving value in the features you’ve defined as your MVP, then use that feedback to tweak those features and prioritize future ones. Recognizing that we aren’t ready to ask anyone to complete a survey, we’re doing three things.

  1. Incentivize the Pilot

As we ask prospects to talk to us about what we’re doing and help us refine our platform, we’re offering them full access and the ability to shape it to what they need. We’re also pouring on as much free consulting as possible. With everyone, we have a loose understanding that the pilots will phase out in 6 to 12 months.

  2. Ask the same 3 questions

We are going the route of unstructured discussion, but we’re asking every prospect the same three to five questions in every meeting. These questions are heavily targeted at our core value proposition, ensuring that, at least at a theoretical level, potential users are in fact interested in our proposition and believe it is worth paying for.

  3. Track usage

As we begin rolling the platform out to our first set of users, we’re incorporating services like Intercom.io to help us track usage patterns. We’ll then compare this to the responses to our targeted questions to see whether user behavior matches what we expect it should. If it does, we feel like we’re headed in the right direction. If it’s off, it gives us clues about the actual intent of the user versus their ‘theoretical’ intent.
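To make that concrete, here is a minimal sketch of what this kind of event instrumentation might look like using Intercom’s JavaScript API. It is not our actual code; the event names, metadata fields, and identifiers are placeholders, and the specifics would differ for any given product.

```typescript
// Hypothetical instrumentation sketch: boot Intercom for a pilot user and
// record an event each time a core MVP feature is exercised, so observed
// behavior can later be lined up against what prospects said in interviews.
// All identifiers, event names, and metadata fields below are placeholders.

declare global {
  interface Window {
    // Intercom's JS API is command-based: Intercom('command', ...args)
    Intercom: (command: string, ...args: unknown[]) => void;
  }
}

// Boot the Intercom widget for the logged-in pilot user.
window.Intercom('boot', {
  app_id: 'YOUR_APP_ID',              // placeholder workspace id
  user_id: 'pilot-user-123',          // hypothetical pilot user identifier
  company: { id: 'pilot-company-1' }, // ties usage back to a pilot account
});

// Record one usage event per core-feature interaction.
export function trackCoreFeature(feature: string, durationSeconds: number): void {
  window.Intercom('trackEvent', 'core-feature-used', {
    feature,                          // e.g. 'report-builder' (made up)
    duration_seconds: durationSeconds,
  });
}
```

Capturing the feature name and time spent with each event is what would let per-feature usage be compared against the interview answers later, which is the comparison we care about.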

All things considered, I don’t think there’s any one perfect solution out there. I would be interested to see some evidence on whether early surveys are effective, particularly whether those survey takers remain loyal and eventually purchase the product. I would also like to hear from more product entrepreneurs about how much of their product was assumption driven versus customer feedback driven.

