Customer Development and Our Strategy Process

Welcome to the second post in my series on strategy development for EWB’s public sector agric team. Check out the first post in this series if you haven’t already at http://theborrowedbicycle.ca/2011/03/strategy-development-in-small-meal-sized-chunks/

This post will be an overview of some of the learnings we’ve taken from our recent experience trying to bring our Agriculture as a Business program to scale within the Ministry of Food and Agriculture (MoFA), and what we’re doing differently as we go forward.

The Past

In the earlier days of AAB we focused a lot of time and resources on ‘product’ development (the AAB curriculum), making sure it worked for developing effective farmer groups when the program was carried out by a well-trained Agriculture Extension Agent (AEA). We developed a curriculum that is an effective tool, and only started thinking seriously about the plan to spread (or scale) AAB after we were satisfied with the quality we had reached. In this way I think we made a mistake that many other organizations have made before us – investing in research and an idea we found exciting without a clear understanding of who would eventually need to own and run the program. We built a great product, but at the end of the day MoFA (not farmers) is our customer, and our product didn’t fit well enough with our market [1]. Fortunately we spend a lot of time directly on the ground with districts, so we have recognized this shortcoming and have been able to learn about what can be done differently.

Before I move on, I’ll point out that I just said MoFA, and not farmers, is our customer, which might come as a bit of a surprise. Don’t worry, the justification is in my next post and it will all come together.

Customer Development (to the Rescue?)

As we’ve been learning about what hasn’t worked in our approach to scaling AAB, I’ve seen a lot of useful ideas in the Customer Development methodology that Steve Blank writes about. Eric Ries gives a great introduction to some of the principles behind customer development at http://www.startuplessonslearned.com/2008/11/what-is-customer-development.html. My copy of Steve’s book The Four Steps to the Epiphany is well worn, and I agree with Eric that it is sometimes a bit of a dense read, but it is well worth it if you are interested in validating a model before spending the money to take it to scale.

Five key takeaways from the principles of customer development that we are adopting are:

1. Form and validate problem hypotheses (for our customer segments within MoFA, not just farmers) before investing in a solution.

One of the more difficult and often discussed elements of doing development work is “ownership” or “sustainability”. When an outside organization brings a new idea, intervention or solution, it is almost guaranteed that the intended beneficiary will accept it, unless the idea is truly atrocious. This makes perfect sense from the beneficiary’s perspective – free stuff! Why wouldn’t they accept? Especially if there is the possibility of a longer-term relationship with more benefits in the future. Consequently, it is very difficult to get honest feedback on whether an idea is valuable for a partner, or how much value it actually provides. Instead of simply presenting solution ideas, we are explicitly identifying the problem hypotheses our solutions address, and validating those before we even mention a solution. The intention is to first understand the value of solving a problem, as opposed to the value of a solution. This may seem like two sides of the same coin, but it changes our approach significantly, as I’ll discuss in the upcoming post on Getting Out of the Building.

2. Explicitly write down all of our solution hypotheses.

In order to have a strong understanding of exactly what it will take to reproduce results at scale, we need to be very explicit about every aspect of our impact model. Finding effective language and a solid framework to easily write down and categorize all of the important hypotheses has been a bottleneck in the past. We’re now using the Business (Impact) Model Canvas to do this. The next few posts will go through the basics of the Impact Model Canvas, analyzing AAB as a concrete example.

3. Test solution hypotheses as cheaply as possible without necessarily assembling the “whole product”.

In the past we have moved through iterations of our model by implementing and testing the entire thing as quickly as possible. As the end vision of our model evolved, our work on the ground tried to match it exactly. This is an expensive and difficult way to test small changes to parts of the model, because it doesn’t isolate the different assumptions about what works and what doesn’t. As we move to a model with explicit hypotheses for each component, we can find creative ways to test individual components without constructing the full thing. This will accelerate our learning and create knowledge about specific hypotheses that can be generalized beyond our current model. We will end up with a better understanding of how the system we’re operating in actually functions, all at a lower cost than a full model test. To me this is one of the most important changes in our approach and deserves at least one dedicated post. I’ll defer that until I can build on a concrete example to add clarity over the theoretical babble that is this paragraph.

4. Pilot in true at-scale conditions before investing in a scale up.

Another oversight that can end up wasting dollars and generating disappointing results is a false pilot stage. In my opinion a pilot should include the whole model under the conditions as they will exist at scale. Pilot results often determine funding, however, so it is no surprise that the highest-quality staff and facilitators, and sometimes additional resources, are given to pilots. The location of the pilot is often chosen to be more suitable than average as well. This is dangerous because it means scaling the piloted model is a gamble; the conditions at scale will be quite different. That may lead to a slight decrease in quality, which is not the end of the world, or to a totally different outcome that doesn’t come close to meeting expectations.

This is why I’m wary of projects like the Millennium Villages. Unfortunately, Jeffrey Sachs and the host of scientists and academics heavily involved in the handful of pilots across the world are not a scalable resource. While this setup can drive a lot of learning, before actually scaling I would expect to see a demonstration of the program working when villages receive resources typical of what they would receive at scale. We could easily fall into this same trap with AAB by using the districts that are running AAB successfully (higher-capacity districts with significant previous investment and continuing support from EWB staff) as evidence that we are ready to go to scale. We first need to demonstrate that districts with more typical capacity levels will be successful with the level of resources we could provide at scale.

5. Consider our “market type” when we evaluate expected timescales and investment for change.

One of the important insights in The Four Steps to the Epiphany is understanding the magnitude of investment required to enter different types of markets. As an example, we’ve had success in scaling several products in the realm of monitoring, evaluation and reporting within MoFA. In each case we built a product that was simply better than the existing system. It was easy to scale because MoFA understood the purpose of our product and the existing alternative it was replacing. AAB, on the other hand, is a completely new market. While developing farmer-based organizations has always been part of MoFA’s mandate, in many cases there was little focus on long-term investment beyond creating groups to access development projects. A program that simply builds strong groups for the groups’ sake is a foreign idea, and hence will take a much longer period of investment to catch on. When we evaluate the ideas we want to invest in, it is important to have a strong understanding of the market type we are entering and hence the resources required to reach scale (in the best scenario). This one warrants its own post as well, or you can check out Steve’s blog post on market types at http://steveblank.com/2009/09/10/customer-development-manifesto-part-4/.

While all of this is still very theoretical and untested in this context, I’m optimistic about these principles being a further improvement to our processes.

What the Process Looks Like

We’re taking some ideas from the strategy process in Business Model Generation to actually articulate the different steps we are going through.

  • Mobilize: We went through this step in February – we agreed as a team that we wanted to commit to this process and took a first stab at articulating all of the possible problem and solution hypotheses that we were interested in investigating.
  • Learn: This is where we’re getting out of the building. We are first probing our problem hypotheses – trying to understand MoFA’s current behaviour and which problems are actually valuable to solve from different perspectives within MoFA. We are also learning more and more about who within MoFA and in the donor community is interested in our different impact models and how to engage them. Our outputs from this step are clear validations (or refutations) of our problem hypotheses. Ideally this is accompanied by performance targets, or expectations for what a solution to these problems would provide. Best-case pure gold would be a statement from CIDA or the Ministry of Agriculture saying “sure, show us that you can build strong farmer-based organizations that are still functioning a year after the end of the program in 80% of cases, and we’ll fund it/scale it throughout MoFA”. This is analogous to the Search phase that I mentioned in my previous post.
  • Design: This is where we start actually testing elements of our model. This can be as simple as asking different people whether the model, or parts of it, make sense, or as complicated as running real, on-the-ground tests with results that support or refute a hypothesis. This is the Prototype phase from my previous post, and we would hope to see strong evidence for each of the hypotheses in our model before moving to build the entire working product. Evidence against any part of our model requires a re-think. Once we’re comfortable that we’ve got a full model that is worth investing in, we’ll move into testing the entire model. This is the Pilot stage, and a successful pilot that meets the objectives identified in the Learn phase is what we’re aiming for. This is also the phase where we need to validate not only that our model works for MoFA, but also that it provides positive impact to farmers, or whoever our ultimate end beneficiary is.
  • Execute: Once we’ve validated that we have a full model that works under scale conditions, it is time to execute and get implementation right. If we’ve done things well, we have buy-in from high-influence stakeholders with money or power who want to see us succeed, because we are solving a problem for them as well. This is the Scale stage.
  • Manage: What next? This cycle is iterative: our definition of Scale might change, we may want to move into a new context, or we may simply evolve the model in a new direction. This will introduce a whole host of new hypotheses to test and validate before hitting scale again.

Development is a messy process, so I don’t expect for a moment that things will run through this process smoothly. We’re testing and adapting as we go, finding ways to manage the huge amount of knowledge that comes out of the learning and exploration phases. Part of that learning is feedback from others on things we’re overlooking or pitfalls in this process, so let’s hear them!

Thanks for making it through another small (maybe medium-sized?) meal of a post. Even so, I feel like I’ve glossed over so many details, and I’ll try to find the time to flesh out some of the points from customer development. This has all been very theoretical up to this point, so next up we’ll be diving into the Impact Model Canvas with AAB as a real-life example.


[1] We built a lot of ownership and interest at the ground level with AEAs who were excited about the program, but that wasn’t enough to see the program spread and run effectively. It is important to recognize that we had not yet built the strong relationships we now have with people in higher-influence positions within donors and MoFA, so this approach was not necessarily an option from the beginning of AAB; running AAB is what allowed us to build those relationships, so this is not a criticism of our past approach. Now that we have them, however, I believe we should use them to validate our problem hypotheses early in the process.


11 Responses to Customer Development and Our Strategy Process

  1. Anthony says:

    Hey Ben,

    Though I understand that you’re going to be justifying that MoFA is EWB’s customer, I still can’t accept that farmers are not another customer… if not the primary customer.

    In the business model canvas, yes, MoFA should be considered a customer in itself since we hope to support their work. However, I believe that defining them as a customer alone does not fully capture their positioning as an actor in development. I believe they should really be considered a “business channel” that allows services and support to reach the primary customer… the poor.

    Especially being a big fan of Business Model Generation, I’d like to see how the canvas is used to discuss the strategy of team MoFA!

    Anthony

    • Ben says:

      Hey Anthony,

      Thanks for the thoughts! Totally agreed that farmers are a customer in the system, but what I’ll argue in my next post tomorrow is that they’re not our customer (they’re MoFA’s), and if we treat them as such we’ll end up with less effective, less scalable solutions. I won’t go into the details here as there’s a whole post on it coming up but would love to hear your thoughts once I post that one.

      And I’ve also been hearing that you’re rocking the Business Model Generation stuff down south – lots of posts on that coming up here!

      Cheers,

      Ben

  2. Mike says:

    Wow, that was a full feast indeed! This is a huge and ambitious articulation effort – hats off to you for it, Ben. One thing I’m excited to see/read come through more clearly is how the scaling happens – from this post, the Design->Execute transition seems to imply that you “figure out” the model in the design phase, and then scaling involves replicating that model. I’m super interested in ideas about how a model needs to change as it gets scaled, and the value of having a strategy for scale that is different from simply replicating a well-tested model.

    I think bringing the examples in hard into the next post will help open things up and ground some of the theory. Keep em coming Ben!

    • Ben says:

      Hey Mike,

      Agreed, this is a big question that still needs to be answered – is it actually possible to test and validate how a model will work at scale before actually scaling it? And I agree we need to look into ways of scaling that aren’t simply replication of an existing model. I still think that, with a little creativity, we can test different ways of scaling in a pilot format before investing in them.

      Any thoughts on what some of those strategies might be? Some of the lower cost but lower impact methods come to mind such as distributing program and training materials across a wider audience, but would love to hear what other strategies you’re thinking about.

  3. Pingback: Strategy Development in small-meal-sized chunks: Part 2 « What am I doing here?

  4. Dhaval says:

    Hey Ben,

    Great post, I really like the direction you’re taking the strategy development, especially the idea of hypothesis testing, and bringing that to Team MoFA. That was what drove my passion with GaRI, and I think it’s a great approach to working with the government, as it’s extremely important during scale up. My question is this: how does scaling occur during the hypothesis testing (HT) phase? I think HT requires two different stages, one initially and another post-pilot; however, they can be run simultaneously to ensure even greater learning and prevent an inefficient expenditure of precious resources. I like the way Team MoFA is going (we really need a different name, especially now that MoFA is going to be the primary player, articulated as such). I think it would be interesting to have multiple HT stages going on in different parts of the country, if the resources were there, some in other regions, to really understand what NEEDS to be there and what can be imitated to succeed… i.e. what EWB can bring to the table, and what’s outside our sphere of influence. Some of the things outside our sphere of influence are within others’, though, but it requires much more coordination to make an impact on those.

    Sadly, as pointed out by Erin in her blog, resources, both staff and money, are limited. I look forward to reading how Team MoFA navigates this. Great post again, Ben. Good luck.

    -Dhaval

    BTW, sorry about the randomness of this comment.

    • Ben says:

      Thanks Dhaval. Just wanted to clarify a point – do you mean testing hypotheses across different contexts? If I’m understanding correctly, we may prove a model works in the North, where we do most of our work, but that model may not work in districts in the South, where the context is entirely different. I think this is a really important point to get right – where do we draw those context boundaries, and how many tests do we do before we invest in scaling? Tough questions that I don’t think have a prescriptive answer. You can draw those boundaries along all sorts of dimensions as well, for example along district capacity lines. Maybe our model only works in districts with a high-capacity director and an officer who is passionate about driving the model forward. I think the key is to understand the limitations of the model and only invest resources where it makes sense. Easier said than done, however…

      Great to hear from you! Hope all is well back in Canada – hope my long posts aren’t taking up too much of your study time!

      • Dhaval says:

        Hey Ben,

        That’s exactly what I meant, and what I think is really important moving forward for any development program or organization. I’m looking forward to seeing how the JFs fit into this equation once they get there, and when they get back. I think that testing hypotheses across these different contexts will build your ability to argue for scaling the program somewhere in the future, because it will mean you can prove you know what you are talking about, rather than coming in like some other development organizations do and implementing projects and programs without so much as considering how they will fit into the vast diversity of situations present in these district departments and cultures.

        I’m really looking forward to reading where this inquiry takes you and the MoFA team.

        -Dhaval

  5. James says:

    Hey Ben,

    Very interesting post. Before I add anything more, I’d first like to state that I know very little about MoFA, its operations or EWB’s day-to-day relationship with them. My opinions are based on my exposure to district governments in Malawi, though through conversations with Ghanaian EWBers I’ve come to recognize quite a few similarities between the two. So please forgive any baseless generalizations I make.
    I fully agree that our programs should be partner focused, and as EWB readily cedes authority on cultural understanding and community interaction to field workers in our partner organizations (given our massive levels of ignorance), it follows that we should place our efforts on things we do understand (organization, leadership, systems analysis, information management, etc.), which interestingly enough are things local governments often struggle with (probably why we’re there).
    Now, something that I struggled with in my placement (which mind you was only 3.5 months) was how to balance focus on tangible, implementable programs (like WPM, community management trainings or in your case AAB), which were often constrained by low motivation, poor resources, corruption and bad leadership, against working on measures to change the enabling environment of the partner organization (better reporting, developing performance-based incentives, better communication with higher ministries and donors, etc.). I think it’s fair to say that we want to see more learning, innovation and experimentation in the development sector; however, by focusing on developing programs for partners to implement, are we not ignoring the root cause of this problem, namely the inability of our partner to generate such a program organically? It sounds from your post like our programs in MoFA are moving toward the latter, which on paper makes sense; however, the task of monitoring “impacts” becomes even more ambiguous (e.g. how do you quantify an increase in motivation? how does that translate into an increase in program effectiveness? and how does that increase in effectiveness benefit Dorothy?). I’m interested in how you will maintain accountability, focus and understanding as you delve into increasing levels of ambiguity about the impact of your work.

    Sorry if this doesn’t make sense,

    James

    • Ben says:

      Hey James,

      Thanks for a great comment – this highlights a big issue that we’ve been talking about on our team recently. One of the ways to tackle the challenge you’re talking about is to work on programs that address short term, medium term and long term needs. Erin has been talking about that a lot on our team, and having a basket of changes that we’re investing in can certainly help on the motivation side of things.

      The impact evaluation side is definitely much tougher. Quantifying an increase in motivation is definitely difficult, and linking that to impact at the end of the day is even more challenging. For now, some element of our motivation to work on these problems is how obviously they negatively affect the work that MoFA is trying to do. That’s not necessarily the greatest justification, and someday I think we’ll have to find more rigorous ways of evaluating the work, but for now we’re still in a searcher mode. In order to believe something is working, I think we will have to see some sort of change, and that change will start to guide us to solutions for the impact evaluation problem.

      Thanks again for bringing up a great (and unsolved) issue. I think you’ll appreciate my next post in the pipeline as well – I get a bit into the issues of investing in long-term change.

      Ben

    • Janine says:

      Great comment James, I definitely see the point I think you are trying to make, and it’s sparked thoughts about the appropriateness of developing programs for governments to use in general, vs., as you put it, enabling the government to come up with these things organically. I think Ben’s latest reply touches on this a bit, in that investing in different goals along different timeframes is a way of creating small wins, and maybe even hedging your bets. I am tending to think, though, that if our “customers” are not at the stage where they are able to develop these kinds of programmes themselves, will they be able to effectively take them up and work with them adaptively in the future? Are innovation and adaptation two facets of the same ability, or can they be separated and worked on individually?
