Welcome to the second post in my series on strategy development for EWB’s public sector agric team. Check out the first post in this series if you haven’t already at http://theborrowedbicycle.ca/2011/03/strategy-development-in-small-meal-sized-chunks/
This post will be an overview of some of the learnings we’ve taken from our recent experience trying to bring our Agriculture as a Business program to scale within the Ministry of Food and Agriculture (MoFA), and what we’re doing differently as we go forward.
In the earlier days of AAB we focused a lot of time and resources on ‘product’ development (the AAB curriculum), making sure it would develop effective farmer groups when the program was carried out by a well-trained Agriculture Extension Agent (AEA). We developed a curriculum that is an effective tool, and only started thinking seriously about the plan to spread (or scale) AAB after we were satisfied with the quality we had reached. In this way I think we made a mistake that many other organizations have made in the past – investing in research and an idea we thought was exciting without a clear understanding of who would eventually need to own and run the program. We built a great product, but at the end of the day MoFA (not farmers) is our customer, and our product didn’t fit well enough with our market. Fortunately we spend a lot of time directly on the ground with districts, so we have recognized this shortcoming and have been able to learn about what can be done differently.
Before I move on, I’ll just point out that I said MoFA, and not farmers, is our customer, which might come as a bit of a surprise. Don’t worry, the justification is in my next post and it will all come together.
Customer Development (to the Rescue?)
As we’ve been learning about what hasn’t worked in our approach to scaling AAB, I’ve seen a lot of useful ideas in the Customer Development methodology that Steve Blank writes about. Eric Ries gives a great introduction to some of the principles behind customer development at http://www.startuplessonslearned.com/2008/11/what-is-customer-development.html. My copy of Steve’s book The Four Steps to the Epiphany is well worn, and I agree with Eric that it is sometimes a bit of a dense read, but well worth it if you are interested in validating a model before spending the money to take it to scale.
Five key takeaways from the principles of customer development that we are adopting are:
1. Form and validate problem hypotheses (for our customer segments within MoFA, not just farmers) before investing in a solution.
One of the more difficult and often discussed elements of doing development work is “ownership” or “sustainability”. When an outside organization brings a new idea, intervention or solution, it is almost guaranteed that the intended beneficiary will accept, unless the idea is truly atrocious. This makes perfect sense from the beneficiary’s perspective – free stuff! Why wouldn’t they accept? Especially if there is the possibility of a longer-term relationship with more benefits in the future. Consequently it is very difficult to get honest feedback on whether an idea is valuable to a partner, or how much value it actually provides. Instead of simply presenting solution ideas, we are explicitly identifying the problem hypotheses that our solutions address, and validating those before we even mention a solution. The intention is to first understand the value of solving a problem, as opposed to the value of a solution. These may seem like two sides of the same coin, but the distinction changes our approach significantly, as I’ll discuss in the upcoming post on Getting Out of the Building.
2. Explicitly write down all of our solution hypotheses.
In order to have a strong understanding of exactly what it will take to reproduce results at scale, we need to be very explicit about every aspect of our impact model. Finding effective language and a solid framework to easily write down and categorize all of the important hypotheses has been a bottleneck in the past. We’re now using the Business (Impact) Model Canvas to do this. The next few posts will go through the basics of the Impact Model Canvas, analyzing AAB as a concrete example.
3. Test solution hypotheses as cheaply as possible without necessarily assembling the “whole product”.
In the past we have moved through iterations of our model by implementing and testing the entire thing as quickly as possible. As the end vision of our model evolved, our work on the ground tried to match it exactly. This is an expensive and difficult way to test small changes to parts of the model, as it doesn’t isolate the different assumptions about what works and what doesn’t. As we move to a model with explicit hypotheses for each component, we can find creative ways to test individual components without constructing the full thing. This will accelerate our learning and create knowledge about specific hypotheses that can be generalized outside of our current model. We will end up better understanding how the system we’re operating in actually functions, all at a lower cost than a full model test. To me this is one of the most important changes in our approach and deserves at least one dedicated post. I’ll defer that until I can build on a concrete example to add clarity over the theoretical babble that is this paragraph.
4. Pilot in true at-scale conditions before investing in a scale up.
Another oversight that can end up wasting dollars and generating disappointing results is a false pilot stage. In my opinion a pilot stage should include the whole model under the conditions as they will exist at scale. Pilot results often determine funding, however, so it is no surprise that the highest-quality staff and facilitators, and sometimes additional resources, are given to pilots. The location of the pilot is often chosen to be more suitable than average as well. This is dangerous because it makes scaling the pilot model a gamble; the conditions at scale will be quite different. That may mean a slight decrease in quality, which is not the end of the world, or a totally different outcome that does not meet expectations in the slightest.
This is why I’m wary of projects like the Millennium Villages. Unfortunately, Jeffrey Sachs and the host of scientists and academics heavily involved in the handful of pilots across the world are not a scalable resource. While this setup can drive a lot of learning, before actually scaling I would expect to see a demonstration of the program working when villages receive resources more typical of those they would receive at scale. We could easily fall into the same trap with AAB by using the districts that are running AAB successfully (higher-capacity districts with significant previous investment and continuing support from EWB staff) as evidence that we are ready to go to scale. We need to demonstrate first that districts with more typical capacity levels will be successful with the level of resources we would be able to provide at scale.
5. Consider our “market type” when we evaluate expected timescales and investment for change.
One of the important insights in The Four Steps to the Epiphany is understanding the magnitude of investment required to enter different types of markets. As an example, we’ve had success in scaling several products in the realm of monitoring, evaluation and reporting within MoFA. In each case we built a product that was simply better than the existing system. It was easy to scale because MoFA understood the purpose of our product and the existing alternative it was replacing. AAB, on the other hand, is a completely new market. While developing farmer-based organizations has always been part of MoFA’s mandate, in many cases there was little focus on long-term investment outside of creating groups to access development projects. A program to build strong groups simply for the groups’ sake is a foreign idea, and hence will take a much longer period of investment to catch on. When we evaluate the ideas we want to invest in, it is important to have a strong understanding of the market type we are entering and hence the resources required to reach scale (in the best scenario). This one warrants its own post as well, or you can check out Steve’s blog post on market types at http://steveblank.com/2009/09/10/customer-development-manifesto-part-4/.
While all of this is still very theoretical and untested in this context, I’m optimistic that these principles will be a further improvement to our processes.
What the Process Looks Like
We’re taking some ideas from the strategy process in Business Model Generation to actually articulate the different steps we are going through.
- Mobilize: We went through this step in February – we agreed as a team that we wanted to commit to this process and decided to take a first stab at articulating all of the possible problem and solution hypotheses that we were interested in investigating.
- Learn: This is where we’re getting out of the building. We are first probing on our problem hypotheses – trying to understand MoFA’s current behaviour and what problems are actually valuable to solve from different perspectives within MoFA. We are also learning more and more about who within MoFA and in the donor community is interested in our different impact models and how to engage them. Our outputs from this step are clear validations (or refutations) of our problem hypotheses. Ideally this is accompanied with performance targets, or expectations for what a solution to these problems would provide. Best-case pure gold would be a statement from CIDA or the Ministry of Agriculture saying “sure, show us that you can build strong farmer-based organizations that are functioning a year after the end of the program in 80% of the cases, and we’ll fund it/scale it throughout MoFA”. This is analogous to the Search phase that I mentioned in my previous post.
- Design: This is where we start actually testing elements of our model. This can be as simple as asking different people whether the model, or parts of the model, make sense, or as involved as doing real, on-the-ground tests with results that support or refute a hypothesis. This is the Prototype phase from my previous post, and we would hope to see strong evidence for each of the hypotheses in our model before moving to build the entire working product. Evidence against any part of our model requires a re-think. Once we’re comfortable that we’ve got a full model that is worth investing in, we’ll move into testing the entire model. This is the Pilot stage, and a successful pilot that meets the objectives identified in the Learn phase is what we’re aiming for. This is also the phase where we need to validate not only that our model works for MoFA, but also that it provides positive impact to farmers, or whoever our ultimate end beneficiary is.
- Execute: Once we’ve validated we have a full model that works under scale conditions, it is time to execute and get implementation right. If we’ve done things well, we have buy-in from high-influence stakeholders with money or power that want to see us succeed, as we are solving a problem for them as well. This is the Scale stage.
- Manage: What next? This cycle is iterative, our definition of Scale might change, we may want to move into a new context or simply evolve the model in a new direction. This will introduce a whole host of new hypotheses to test and validate before hitting scale again.
Development is a messy process, so I don’t expect for a moment that things will run through this process smoothly. We’re testing and adapting as we go, finding ways to manage the huge amount of knowledge that comes out of the learning and exploration phases. Part of that learning is feedback from others on things we’re overlooking or pitfalls in this process, so let’s hear them!
Thanks for making it through another small (maybe medium-sized?) meal of a post. Even so, I feel like I’ve glossed over many details, and will try to find the time to flesh out some of the points from customer development. This has all been very theoretical up to this point, so next up we will be diving into the Impact Model Canvas with AAB as a real-life example.
We built a lot of ownership and interest at the ground level with AEAs who were excited about the program, but that wasn’t enough to see the program spread and run effectively. It is important to recognize that we had not yet built the strong relationships we now have with people in higher-influence positions within donors and MoFA, so validating with them was not necessarily an option from the beginning of AAB – this is not a criticism of our past approach, as running AAB is what allowed us to build these relationships. Now that we have them, however, I believe we should use them to validate our problem hypotheses early in the process.