• http://twitter.com/TriKro Tristan Kromer

    ” It’s always difficult to judge what people will actually do vs. what they say they will do, but this gives a strong indication that a subscription offering is in fact viable. ”

    Grooooaaan….It’s difficult, but hey sure go ahead and advise people to do it anyway. Despite the fact that the margin of error on user introspection on future purchasing behavior is +/- 100%.

  • Brent Chudoba

    Hey Tristan,

    I’m glad you commented on that topic since it’s an important one and something I wanted to expand on a bit in a future posting on my own blog, so I thought I’d post some feedback first. My comment in the post which you highlighted essentially says that people are bad at predicting what they will do in the future (this is something our methodologists at SurveyMonkey remind me of frequently), so you have to discount (hopefully not entirely) what consumers say in some respects if you are using forward-looking opinions to help make decisions. So what options do you have when deciding whether or not to build something and/or how much to spend ($ and time) on it? Here is how I was thinking about it:

    Suppose you want to launch a product or feature that you think will be successful (assuming that you wouldn’t consider launching or building it if you didn’t at least think it would be something people would want or would pay for, and that you’ve probably at least asked a few people, sought some user feedback, and done some research/analysis). Using the example in the blog post, Modify was considering a subscription service for ordering watches.

    We had Eric Ries come to SurveyMonkey to speak a few months ago and his talk led me to think of two extremes in how a team could launch a product like a subscription service for watches:

    1. Fully featured: Spend days/weeks/months building a fully featured product or solution with recurring billing, recommendation engines that looked at prior orders, etc.

    2. Zero featured: Stick a button on the site and drive people to a page that describes a watch subscription service, see how many people click (or even go so far as to take them through a billing funnel and see if they actually submit a card, without actually charging it). If anyone clicks, nicely tell them “sorry, we were testing out the feature, you get a free subscription when we actually launch this”.

    I assume that any real-life approach falls somewhere in between these two extremes and with at least some research to make sure people actually want what is being built.
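
    The “zero featured” approach above boils down to counting how many people make it through each step of a fake-door funnel. As a minimal sketch of summarizing such a test (the step names and counts below are illustrative, not from the post):

```python
# Tally a "zero featured" fake-door test: conversion rate of each
# funnel step relative to the number of people who saw the page.

def funnel_rates(steps):
    """Given ordered (name, count) pairs, return each step's count
    as a fraction of the first step's count."""
    if not steps or steps[0][1] == 0:
        return {}
    base = steps[0][1]
    return {name: count / base for name, count in steps}

# Illustrative numbers only.
rates = funnel_rates([
    ("viewed_page", 2000),
    ("clicked_subscribe", 180),
    ("submitted_card", 45),
])
print(rates)  # e.g. 9% clicked, 2.25% went as far as submitting a card
```

    The submitted-card rate is the most honest signal here, since it measures behavior rather than a stated intention.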

    Depending on what you want to do, and what the cost of your approach is, at what point do you do some research, and what instruments do you use? Launching a simple page test and coming up with a process and test plan isn’t free, and spending weeks of work to build a feature is even more expensive. So at what point do you think about using research tools to help make sure cost ($ and time) is worth it at all?

    If you came up with a metric of say, X% of any cost of future development should be spent on research, user experience testing, offline research etc., would that help guide product development and make it more successful? People typically spend some time and money on research anyway, and I view surveys as yet another instrument that can help in this process, and actually one (with the product I work on, SurveyMonkey Audience) that can be (not always) much faster and less expensive than other methods.

    While it is definitely possible that future introspection via survey data can have a wide margin of error, I do believe that a well crafted survey project can get you a good indication of how consumers will respond. I would trust the data provided by a large sample of consumers who answer the question, “In the past week, how many times have you visited the grocery store?” more than “How likely would you be to purchase this new product?”, but when using a forward looking question, my goals would typically be to make sure an idea isn’t totally crazy, the demand non-existent, or my initial pricing thoughts way off the mark.
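
    For the sampling-error piece of that margin of error (it says nothing about the say/do gap Tristan raises), a quick back-of-the-envelope check on a survey proportion might look like this (the respondent numbers are hypothetical):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Normal-approximation margin of error for a sample proportion
    at roughly 95% confidence (z = 1.96)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Say 120 of 400 respondents claim they would subscribe (p_hat = 0.30).
moe = margin_of_error(0.30, 400)
print(round(moe, 3))  # ~0.045, i.e. about +/- 4.5 points of sampling error
```

    Even with sampling error that small, the stated-intent number can still be badly off as a predictor of behavior, which is why forward-looking questions are better used to rule out “totally crazy” than to forecast demand precisely.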

  • josh@qasymphony

    Great article, and SurveyMonkey is a great tool. My question is: how do you get the survey out to the target population and get them to fill it out if you are a relatively new start-up? Our target market is a niche area (software testers), and we are still developing our brand, so it’s difficult to partner with a trade rag or community to sponsor it with us. Any suggestions on how to reach that population of testers?

  • Chris Bumgardner

    Hi Josh, we built our startup over the past year to solve that exact problem: reaching a targeted audience of professionals for user research. Check us out at https://www.AskYourUsers.com, we’d love to help.

  • http://www.facebook.com/paul.ballard.31337 Paul Ballard

    We offer a service that helps with this exact problem at http://www.startupsurveys.com! Please come and have a look and see how we can help you.

  • hemant

    Well, you have discussed surveys, but you didn’t take into account the errors that can arise. How do you negate those?

  • https://plus.google.com/115814021235533244972/posts Brent Chudoba

    What types of errors did you have in mind?

  • hemant

    Sir, my question is that if I survey 1,000 customers as you suggested, their replies can differ, as the respondents aren’t homogeneous: they can differ on the basis of their incomes, habits, preferences, etc. How can we then summarize their replies and come to a conclusion?

  • https://plus.google.com/115814021235533244972/posts Brent Chudoba

    It’s important to sample from a frame (group of people) that represents your target audience of customers or respondents. Since you need a diversity of opinions within the frame, you are exactly right that understanding how incomes, habits, and preferences may be similar or different is important.

    For example, if you are doing a generic study and want a perspective on the US population, the sample of respondents should be balanced to be representative of the US population (e.g., balanced along gender, age, and income groups). If you were to conduct a study of iPhone owners, you would not necessarily want the same balance, since iPhone owners are a select group and may have different income, education, and age tendencies; but getting a representative mix of the different types of iPhone owners is still important.

    This topic of sample composition is something researchers need to think through carefully. In our SurveyMonkey Audience product, we offer ways to create quota groups to target a specific mix of respondents. We also balance our samples by default to include a representative mix of people by age and gender, but can balance samples to whatever composition customers require, to ensure data is as reliable and actionable as possible.
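
    One common way to implement that kind of balancing after the fact is post-stratification: weight each demographic group so the weighted sample matches the target composition. A minimal sketch (the group names and target shares below are hypothetical, not SurveyMonkey’s actual method):

```python
def poststratify(counts, targets):
    """Compute a weight per stratum so that the weighted sample
    matches the target population shares.
    counts:  observed respondents per stratum
    targets: desired share of each stratum (should sum to 1)"""
    n = sum(counts.values())
    return {g: targets[g] / (counts[g] / n) for g in counts if counts[g] > 0}

# Hypothetical: the sample skews male against a 50/50 target.
weights = poststratify({"male": 600, "female": 400},
                       {"male": 0.5, "female": 0.5})
print(weights)  # male responses down-weighted (~0.833), female up-weighted (1.25)
```

    Each respondent’s answers are then multiplied by their stratum’s weight before summarizing, which is one answer to the question above about combining replies from a heterogeneous group.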
