The other day Ron Schmeltzer tweeted about how broken the RFP process is for buying software. [Correction: Ron was talking about consulting RFPs, so the post below doesn’t really apply. But it did get me off my bum to write it, something I’ve had in my head for quite some time.] I quickly responded, asking him how he’d do it differently if he were a buyer. I have some thoughts on this, but was curious what he thought. Brett Miller responded with a blog post he wrote back in July on the pros and cons of the RFP process.
In general, my take on Brett’s points is that the cons outweigh the pros in almost every case, and that there are alternative ways to achieve the pros without invoking the cons.
I feel quite strongly that the RFP process for software purchasing is totally broken, and I have an idea for replacing it, based on work that some of the early founders of WebLayers did when I was selling Actional to them at Credit Suisse.
First let me explain why I think it’s broken. Then I’ll share my recommended fix.
1. RFPs are biased. Typically, RFPs are issued by companies after they’ve done some due diligence. That due diligence is “biased” by who they spoke to, and that bias finds its way into the form and function of the RFP. If all vendors have been involved prior to the RFP being issued, that’s fine. But if not, the RFP is weighted toward those who have participated. And that’s not always good for the buyer.
2. RFPs only provide a partial view into what’s important. RFPs often have hundreds of questions, some requiring complex answers. They’re meant to (1) get a complete comparison of relevant information, and (2) standardize the answers. On the first point: by the time a purchase is made and an implementation happens, the state of the various features will have changed, so knowing the current state isn’t necessarily helpful. I realize it provides a baseline, but that assumes none of the vendors stretches the truth. Also, a simple question like “Do you support WS-Security?” doesn’t have a simple answer like “yes”. Usually, the answer is something between “no” and “sort of”… there are interoperability issues, minimum platform requirements, questions of which pieces of the standard are supported, and of how that support is implemented. On the second point: standardized answers are not useful for a large portion of the questions, and in my opinion those are the important ones. What RFP writers should really want to understand is what makes each vendor unique, and how each vendor’s philosophy around the solution aligns with the needs of the organization.
3. RFP responses are difficult to write, and even more difficult to evaluate. Companies usually have a very short time to respond to an RFP, making the responses lower quality than anyone would really like. I know it’s surprising to buyers, but the product information you would expect to be cut-and-paste is often not available. Even a prior RFP response from three months earlier is probably out of date. And even the best “cut-and-paster” out there (I think I’m up there) is hard-pressed to weave multiple cut-and-paste sources into a professional-looking, consistent document. What about the review process? It’s time consuming and similarly biased. A grading system would certainly be unable to evaluate things like strategic alignment and unique differentiators… and anything less is subject to the preferences of the reader and whatever they happen to pick up when reading the responses. Try this: after all the responses have been read, put a simple ten-question list of features/capabilities in front of the readers and ask them to match each answer to the vendor that wrote it. Do you think they’d remember which vendor wrote which?
One final point: consider the time and effort it takes for the vendor teams to write the responses and for the buyer’s team to evaluate them all. Isn’t there a better way to spend our collective time, one that gets better technology out there faster and solves problems sooner?
So, what do I recommend?
Keep in mind that I’ve spent my career almost exclusively at vendors/integrators, so admittedly I’m probably leaving some administrative/purchasing requirements out. However, I think the following makes for a great place to start.
1. Use analysts only to get a view of the landscape and to make sure you know all the relevant vendors out there. Analysts don’t have the time to do much hands-on evaluation to validate what vendors tell them. And analysts have their own biases, which may or may not align with yours. Save the analysts for when you have specific questions about relative vendor comparisons and market trends.
2. Along with a non-subjective checklist of standards and IT requirements (such as interoperability with existing systems/platforms, support in particular countries, number of SIs trained on a product suite, etc.), deliver a set of use cases for how the product would be used. The use cases should include some long-term (and therefore less specific) items and some short-term ones. The short-term cases should really address the driving need for the evaluation. At least one use case should test performance and scalability, in order to prove out the scaling model and help drive to a final configuration (and therefore a final project cost). Other use cases should cover interoperability testing for integration with existing systems, and how the product gets migrated between development and production.
Slight aside: by checklist, I mean there should be nowhere on this list to explain anything. Answers should be unambiguous: lists, yes/no, dates, numbers, etc. This keeps it black-and-white. If something needs an explanation, it’s best to have the vendor answer in the context of a use case. (For what such a checklist might look like, see the sketch after this list.)
3. Ask vendor participants to fill in the checklist, give “essay” answers to the use cases, and then provide 10 items that they believe should be part of the evaluation. These 10 items will be all you need to understand how each vendor believes it competes with the others, its philosophical alignment to the problem space, and its unique value propositions. These 10 items should include use cases for how to test them, along with an explanation of why they are important to the proposed solution. Of course, some of these items might be non-technical: standards support, SI relationships, or whatever else the vendor thinks is important.
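To make the checklist aside concrete, here’s a minimal sketch of how the “no explanations allowed” rule could be enforced mechanically. This is purely illustrative, not any real procurement tool; the item names, answer types, and example questions are all hypothetical, just one way to structure a checklist so that free-text answers are impossible by construction.

```python
# A minimal sketch of a "black-and-white" checklist, assuming a simple
# in-house evaluation script. All names and example items are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Union

# The only answer types allowed: yes/no, a number, a date, or a choice
# from a fixed list. Anything needing explanation belongs in a use case.
Answer = Union[bool, int, float, date, str]

@dataclass
class ChecklistItem:
    question: str
    answer_type: str      # "yes_no" | "number" | "date" | "choice"
    choices: tuple = ()   # only used when answer_type == "choice"

    def validate(self, answer: Answer) -> bool:
        """Reject any answer that isn't one of the unambiguous types."""
        if self.answer_type == "yes_no":
            return isinstance(answer, bool)
        if self.answer_type == "number":
            return isinstance(answer, (int, float)) and not isinstance(answer, bool)
        if self.answer_type == "date":
            return isinstance(answer, date)
        if self.answer_type == "choice":
            return answer in self.choices
        return False

# Example items, in the spirit of the recommendations above. Note that
# even "WS-Security support" becomes a fixed choice, not an essay.
checklist = [
    ChecklistItem("Runs on your standard server platform?", "yes_no"),
    ChecklistItem("Number of SIs trained on the product suite", "number"),
    ChecklistItem("GA date of the currently shipping release", "date"),
    ChecklistItem("WS-Security support", "choice",
                  ("none", "partial: username token only", "full profile")),
]

# A vendor response is then just item index -> answer, checked mechanically:
response = {0: True, 1: 42, 2: date(2009, 6, 1),
            3: "partial: username token only"}
assert all(item.validate(response[i]) for i, item in enumerate(checklist))
```

The point of the sketch is the validate method: if a question can’t be answered with one of those four answer types, it doesn’t belong on the checklist; it belongs in a use case.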
I believe if we moved in this direction, we’d have a process that got customers what they need, faster, with higher-quality results. And the effort spent in the decision process (implementing the use cases in a POC) would be directly relevant to deploying the solution, so once the process is complete, you’ve done more than select a vendor: you’ve begun your implementation.