Tracking Adoption of Research Recommendations: The Recommendation-Adoption Score
Use the RAS (Recommendation-Adoption Score) to track how much research value reaches your user.
Brian Utesch and
Tammi Fitzwater
February 20, 2026
In the first article of this series, we defined research breakage as the gap between recommendations and actual change. While breakage is real, spotting it is tricky without data. A vague spreadsheet or a status line in a slide does not cut it. Without structure, adoption gets overstated, breakage stays hidden, and sooner or later credibility takes a hit.
This is where the recommendation-adoption score, or RAS, comes in. We developed the RAS as a real-world metric to track breakage in our work at Cisco. But the RAS is more than just a number: it tells you how much of the value you worked to create actually reaches the user.
In This Article:
Treat Recommendations like Inventory
Calculating the RAS
Recommendation-Adoption Score: The Formula
How Long Should You Track?
A Real Example of RAS in Practice
What RAS Is and Is Not
Where We Go Next
Treat Recommendations like Inventory
Retailers track products through every stage of the supply chain. If something breaks along the way, they want to know where and why. Design recommendations that come from research should be treated in the same way. Each recommendation represents time, energy, and money. Each one is a unit of value that either reaches the user intact or is lost.
That means a design recommendation cannot be fuzzy. It needs a clear description that someone could test later. It needs to identify exactly which user problem it solves and why that problem matters. It needs to be tied back to the evidence, following the chain from data to findings, to insight, to recommendation. Just as importantly, it needs a definition of “done” that is specific enough to prevent creative reinterpretation. If you cannot tell whether a recommendation shipped as intended, it is already at high risk of breakage.
In retail, if a shipment arrives damaged, someone logs it, reports it, and prevents it from happening again. “The warehouse” is not the owner. A person is.
The same is true for research. “The team” is not enough. Every recommendation should have a named owner with both the authority and the accountability to carry it through delivery: a product manager, a lead engineer, or another role close to the work. Without an owner, a recommendation is just floating in space, and floating recommendations almost always eventually sink.
Calculating the RAS
To calculate the RAS, each recommendation should have a status and a value rating.
RAS Status Categories
One of the easiest places for breakage to hide is in vague statuses like In Progress. That phrase can mean anything from “we are actively working on it” to “someone glanced at it once.”
RAS cuts through that by sticking to clear, verifiable statuses.
- Adopted indicates that the recommendation has shipped and the researcher has verified that the fix was implemented as intended. The recommendation needs no further work; it is in its final, completed form.
- Committed indicates the recommendation has been scoped and resourced. Verbal promises such as “we’re going to get to that soon” do not count; the recommendation must be in a release plan or roadmap. Review committed recommendations regularly, because product plans change, and make sure this status does not become a way to postpone a recommendation indefinitely.
- Communicated means the recommendation has been delivered and acknowledged but is not yet on a direct path to adoption. In other words, the researcher has handed the product team a clearly written, research-based recommendation that identifies a specific fix, but no plans or actions have followed. Ideally, a communicated recommendation should move to the next status within 60 days.
- Canceled covers the cases where adoption is not happening because the recommendation was rejected, deferred, or is no longer relevant. For example, if a recommendation suggests that a feature needs to be made more usable, but the business stops supporting that feature, then the recommendation is no longer relevant and is considered canceled. Canceled items are excluded from the scoring pool altogether.
These statuses make it harder for recommendations to disappear into limbo, which is where most breakage occurs.
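To make the bookkeeping concrete, the statuses above, together with the record-keeping requirements from the inventory section (a testable description, the user problem, a definition of “done,” and a named owner), can be sketched as a small data model. This is an illustrative sketch in Python, not Cisco’s actual tooling; all names here are our own.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    ADOPTED = "adopted"            # shipped and verified by the researcher
    COMMITTED = "committed"        # scoped, resourced, and on a roadmap
    COMMUNICATED = "communicated"  # delivered and acknowledged, no plan yet
    CANCELED = "canceled"          # rejected, deferred, or no longer relevant


@dataclass
class Recommendation:
    description: str         # clear enough that someone could test it later
    user_problem: str        # which user problem it solves and why it matters
    definition_of_done: str  # specific enough to prevent reinterpretation
    owner: str               # a named person with authority and accountability
    status: Status = Status.COMMUNICATED  # new items start as communicated
```

A record like this makes the later scoring mechanical: anything without an owner or a testable definition of “done” never enters the pool in the first place.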
RAS User-Value Ratings
Recommendations can come from all types of research: evaluative, generative, or exploratory. When calculating the user value of a recommendation, we consider both the status of the recommendation (Adopted, Committed, or Communicated) and its value to the user.
Adopted Recommendations
These are recommendations that are already in production and have fulfilled their mission. Not all adopted recommendations are equally important, however. Some fix mission-critical blockers, others smooth smaller friction points, and some are cosmetic. Treating them all the same would make it easy for teams to game the system by implementing only the easiest fixes. Therefore, the user value of an adopted recommendation depends on its impact:
- 3 points (high value): These recommendations address major problems tied to retention, adoption, or core task success. They lead to changes that users notice immediately. If the fix goes in, the product feels different in an obvious way.
- 2 points (medium value): These recommendations address important friction points, but the impact may not be as visible to all users. The fix improves the experience, but it might be noticed only in specific situations or by users who perform a specific task frequently.
- 1 point (low value): These fixes provide minor or cosmetic improvements. By themselves, they are unlikely to be noticed at all. They may polish rough edges or remove tiny irritants but will not shift overall perception.
- 0 points (no value): These are canceled recommendations: cases where adoption did not happen because the recommendation was deferred, rejected, or no longer relevant. As noted above, canceled items are excluded from the scoring pool.
Committed Recommendations
Committed recommendations are scoped, resourced, and placed in a release, but not yet shipped. They deserve some credit: big fixes take time, and some require design work, cross-team coordination, or engineering effort that does not happen overnight. Until they ship, however, these recommendations remain at risk. This is why committed items earn partial credit rather than the full value of an adopted fix.
The user value of a committed recommendation is 0.66 points, regardless of the potential impact of that recommendation.
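The value rules described so far (3, 2, or 1 point for an adopted item depending on impact, a flat 0.66 for committed items, and exclusion for canceled items) can be encoded in a few lines. This is a hedged sketch: the function name and shape are our own, not part of the published RAS definition, and the value for communicated recommendations is covered later in the article, so this sketch leaves it undefined.

```python
def user_value(status, impact=None):
    """Return the user-value points for a single recommendation.

    impact: 3 (high), 2 (medium), or 1 (low); required for adopted items.
    Canceled items return None because they are excluded from the
    scoring pool entirely rather than scored.
    """
    if status == "canceled":
        return None                # excluded from the pool, not counted
    if status == "adopted":
        if impact not in (1, 2, 3):
            raise ValueError("adopted items need an impact of 1, 2, or 3")
        return float(impact)       # value tracks the impact of the fix
    if status == "committed":
        return 0.66                # flat partial credit, regardless of impact
    # Communicated items are valued by a rule given later in the article.
    raise ValueError(f"no value rule defined here for status: {status}")
```

Keeping canceled items out of the pool (rather than scoring them as zero) matters when the individual values are later rolled up into a single score, since it changes the denominator.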
[...]