To use the RICE scoring model, you evaluate each of your competing ideas (new products, product extensions, features, etc.) by scoring them according to the following formula:

(Reach × Impact × Confidence) / Effort
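As a minimal sketch, the formula translates directly into a scoring helper (the function name and example inputs are illustrative, not part of the model):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """Return the RICE score for a single initiative.

    reach:      people affected in the chosen timeframe
    impact:     tiered impact value (e.g. 0.25 to 3)
    confidence: estimate confidence as a fraction (e.g. 0.8 for 80%)
    effort:     total work, typically in person-months
    """
    return (reach * impact * confidence) / effort

# Hypothetical initiative: reaches 150 customers, medium impact (1),
# 80% confidence, 2 person-months of effort.
print(rice_score(150, 1, 0.8, 2))  # → 60.0
```

Because effort sits in the denominator, two ideas with identical reach, impact, and confidence are separated purely by how cheap they are to build.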
The first factor in your RICE score is reach: an estimate of how many people your initiative will affect within a given timeframe.
You have to decide both what “reach” means in this context and the timeframe over which you want to measure it. You can choose any time period—one month, a quarter, etc.—and you can decide that reach will refer to the number of customer transactions, free-trial signups, or how many existing users try your new feature.
Your reach score will be the number you’ve estimated. If you expect your project will lead to 150 new customers within the next quarter, then your reach score is 150. If you estimate your project will deliver 1,200 new prospects to your trial-download page within the next month, and that 30% of those prospects will sign up, then your reach score is 360.
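The second estimate above is just the prospect count scaled by the expected signup rate. A quick sketch (the variable names and 30% rate are the example's assumptions):

```python
# Estimated prospects reaching the trial-download page next month,
# and the fraction expected to sign up.
prospects = 1200
signup_rate = 0.30

# Reach counts only the people who actually take the measured action.
reach = prospects * signup_rate
print(int(reach))  # → 360
```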
Impact can reflect a quantitative goal, such as how many new conversions your project will generate when users encounter it, or a more qualitative objective, such as increasing customer delight.
Even when using a quantitative metric (“How many people who see this feature will buy the product?”), measuring impact will be difficult, because you won’t necessarily be able to isolate your new project as the primary reason (or even a reason at all) for why your users take action. If measuring the impact of a project after you’ve collected the data will be difficult, you can assume that estimating it beforehand will also be a challenge.
Intercom developed a five-tiered scoring system for estimating a project’s impact:
3 = massive impact
2 = high impact
1 = medium impact
0.5 = low impact
0.25 = minimal impact
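The five tiers above can be kept as a simple lookup table so scorers pick a labeled tier rather than inventing an arbitrary number (the dictionary and tier keys below are an illustrative encoding of the scale, not Intercom's own code):

```python
# Intercom's five-tier impact scale, values taken from the list above.
IMPACT_SCALE = {
    "massive": 3,
    "high": 2,
    "medium": 1,
    "low": 0.5,
    "minimal": 0.25,
}

# During scoring, select a tier by name.
impact = IMPACT_SCALE["high"]
print(impact)  # → 2
```

Constraining impact to a handful of named values keeps estimates comparable across projects, which matters more than any individual estimate being precise.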