Explore more add-on discovery A/B test scenarios
On issue #143, andymckay wrote:
I love the idea of A/B testing. I would like to explore how we'd set this up from an engineering point of view, and how we'd track metrics around that testing. We haven't done that on AMO, and we totally should.
I’d love to discuss more things that we can test.
Here are my initial test scenarios, just for the disco pane:
- Install models: an on/off toggle (proposed) vs. “Add to Firefox” vs. “Install now” vs. “Free”, and so on. I think the on/off toggle makes the most sense, but I’d like to find out.
- How many curated add-ons is too many to show in the pane?
- Showing vs. not showing alternative add-ons.
I’m sure there are heaps of things we can test. Do you have more ideas?
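
To make the engineering side a bit more concrete, here is a minimal sketch of how we might bucket clients into variants deterministically on the client side. Everything in it is an assumption for illustration (the experiment IDs, variant names, and the FNV-1a hash choice are not an existing AMO implementation):

```typescript
// Hypothetical sketch: deterministic variant assignment for a disco pane experiment.

type Variant = string;

interface Experiment {
  id: string;          // e.g. "disco-install-model" (hypothetical)
  variants: Variant[]; // e.g. the install-model options from the list above
}

// Small, stable string hash (FNV-1a) so the same client always lands
// in the same bucket without any server-side state.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash >>> 0;
}

// Assign a client to one variant, uniformly and deterministically.
function assignVariant(experiment: Experiment, clientId: string): Variant {
  const bucket = fnv1a(`${experiment.id}:${clientId}`) % experiment.variants.length;
  return experiment.variants[bucket];
}

// Example usage for the install-model scenario.
const installModelTest: Experiment = {
  id: "disco-install-model",
  variants: ["on-off-toggle", "add-to-firefox", "install-now", "free"],
};

const variant = assignVariant(installModelTest, "some-anonymous-client-id");
console.log(`Render install control: ${variant}`);
```

Hashing the client ID against the experiment ID means a client sees the same variant on every visit without any server-side bookkeeping, which would keep the disco pane stateless for these tests.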
Hi Bram - I'm a strong believer in star ratings and the number of raters in a product listing. I propose we test these as well. See issue #143 for one example of where to place the stars in the add-on summary. Here is a link that shows one way to run the test (fewer stars with many reviewers vs. more stars with only 3, 4, and 5 reviewers): http://baymard.com/blog/user-perception-of-product-ratings
Hi Michelle, that was a great read! I agree that we should test ratings and compare them against our current approach.
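
On the metrics side of all this, here is a hedged sketch of what we might record per variant so that, for example, the ratings-display test could be compared by install conversion. The event shape and the `recordEvent` sink are assumptions, not an existing AMO or Telemetry API:

```typescript
// Hypothetical sketch of per-variant metric events for an experiment.

interface ExperimentEvent {
  experimentId: string;            // e.g. "disco-ratings-display" (hypothetical)
  variant: string;                 // e.g. "stars-with-review-count" vs. "stars-only"
  event: "exposure" | "install";   // saw the listing vs. completed an install
  addonId?: string;
  timestamp: number;
}

// In a real setup this would forward to Telemetry or an analytics pipeline;
// here it just buffers events in memory.
const eventBuffer: ExperimentEvent[] = [];

function recordEvent(e: ExperimentEvent): void {
  eventBuffer.push(e);
}

// Usage: log an exposure when the listing renders, and an install on success.
recordEvent({
  experimentId: "disco-ratings-display",
  variant: "stars-with-review-count",
  event: "exposure",
  addonId: "example-addon@example.com", // placeholder ID
  timestamp: Date.now(),
});
```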