What you need to know about sample ratio mismatches (SRMs)

An infographic about SRMs and why they ruin your A/B test result data.

Why randomization in experimentation is important

Confounds can skew your results. Confounds are factors that are likely to influence the outcome of a test but are not the focus of the study. They aren’t your independent variable, yet they can muddy the picture and throw off your results. Examples of confounds include user type (new or returning), customer location, and traffic source.

How to check whether your sample is randomized properly

Step 1. Start with a sample. There are 20 people in this example. Some have a particular confounding factor and some don’t.

Step 2. Choose the ratio you want. Set your testing tool to a ratio that splits the sample into the percentage you want exposed to the control vs. the percentage in the treatment group. This example shows a 50/50 split.

Step 3. Your testing tool randomly splits the sample into the ratio you want. Your tool does the coin-flip math with an algorithm (see the sketch below).

Step 4. After the sample is split, run a “Sample Ratio Mismatch (SRM)” check. An SRM check is how you tell whether your randomization worked. A good A/B testing tool checks for SRMs automatically and notifies you if one is found. Find a list of tools with an SRM-check feature by filtering for SRM at:
https://speero.com/ab-testing-tools

Or use the manual SRM checker at: LukasVermeer.nl/srm/microsite
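To make Step 3 concrete, here’s a minimal sketch of deterministic hash-based bucketing, one common way tools do the “coin-flip math.” The function name, the hashing scheme, and the 20-user loop are illustrative assumptions, not any specific tool’s implementation:

```python
import hashlib

def assign_variant(user_id: str, experiment_id: str,
                   control_share: float = 0.5) -> str:
    """Deterministically bucket a user by hashing their ID.

    The same user always lands in the same group, and across many
    users the split converges on the ratio you set.
    """
    key = f"{experiment_id}:{user_id}".encode()
    # Map the hash to a number in [0, 1), then compare to the ratio.
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 10_000 / 10_000
    return "control" if bucket < control_share else "treatment"

# Example: split 20 users 50/50, as in the infographic.
groups = [assign_variant(f"user-{i}", "exp-42") for i in range(20)]
print(groups.count("control"), "control /",
      groups.count("treatment"), "treatment")
```

With a sample this small, don’t expect exactly 10/10. Randomness only guarantees the ratio in the long run, which is exactly why Step 4 exists.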
How to understand the SRM check result

A sample ratio mismatch (SRM) happens when the split of participants between the treatment and the control is extremely unlikely given the assignment probability you set in your tool.

If the sample IS NOT split how you wanted: Your tool tells you the split and whether it’s too far from your 50/50 target to be plausible. In this example, 65% of people are in the control and 35% are in the treatment. Oh no! This is bad. There are 13 people and 5 confounds in the control, and 7 people and 5 confounds in the treatment. The same number of confounds, but diluted differently, so the confounds have a disproportionate impact on the treatment. When this happens, there’s a problem (an SRM), and the data is garbage. Stop the experiment, check your experiment setup, and try again.

If the sample IS split how you wanted: Your tool shows that the split is 50/50. The split just needs to fall within the range of what’s statistically likely. 50% of people are in the control and 50% are in the treatment. There are 5 confounds in each group, so their effect is neutralized: the confounds are split more or less evenly between the control and treatment groups. When the tool splits your sample more or less according to the ratio you set, your sample is likely randomized properly. You can use this data. Your confounds are neutralized, and your results are reliable.

This content is from the book Design for Impact: Your Guide to Designing Effective Product Experiments by Erin Weigel, published by Rosenfeld Media. Within the EU, you can buy the book at https://erindoesthings.com/design-for-impact/ Globally, Design for Impact can be purchased at https://rosenfeldmedia.com/books/design-for-impact/
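Under the hood, an SRM check is a goodness-of-fit test on the observed counts. Here’s a minimal sketch using scipy’s chi-squared test; the counts and the 0.001 threshold are illustrative assumptions (a stricter alpha than 0.05 is commonly used for SRM checks to keep false alarms rare):

```python
from scipy.stats import chisquare

def srm_check(control_count: int, treatment_count: int,
              expected_control_share: float = 0.5,
              alpha: float = 0.001) -> bool:
    """Return True if the observed split is suspicious (possible SRM)."""
    total = control_count + treatment_count
    expected = [total * expected_control_share,
                total * (1 - expected_control_share)]
    _, p_value = chisquare([control_count, treatment_count], f_exp=expected)
    return p_value < alpha

# 650 vs. 350 users against a 50/50 target: flagged as an SRM.
print(srm_check(650, 350))      # True  -> stop and debug
# 5,012 vs. 4,988 users: well within random noise.
print(srm_check(5_012, 4_988))  # False -> split looks fine
```

Note that the infographic’s 20-person example is only an illustration: with samples that small, the test has little power, which is why real SRM checks run on the full experiment traffic.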
This infographic explains why randomization is important and how to spot when your randomization has gone wrong.

Check out these websites to help you detect sample ratio mismatches in your test results:

  • https://speero.com/ab-testing-tools
  • LukasVermeer.nl/srm/microsite

What to do if you spot a sample ratio mismatch (SRM)

What should you do if you observe an SRM?

  • Check your experiment setup for bugs.
  • Fix the bug(s) if you find any.
  • Run the experiment again.

If you don’t find any bugs, rerun the experiment anyway. If you observe an SRM again, you missed a bug in your experiment setup. If you observe no SRM during the fresh run of the experiment, the data is likely OK. In that case, the SRM you observed was likely a false positive because Type I and Type II errors apply to SRM checks, too.
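To see why a one-off SRM flag can be a false positive, here’s a small simulation sketch (the run counts and alpha are illustrative assumptions): even with perfectly fair 50/50 randomization, the check fires at roughly its alpha rate.

```python
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(7)
ALPHA, RUNS, N = 0.001, 20_000, 10_000  # illustrative values

# Perfectly fair 50/50 assignment: no real SRM exists in any run.
control = rng.binomial(N, 0.5, size=RUNS)
p_values = np.array([chisquare([c, N - c]).pvalue for c in control])

# Expect roughly ALPHA * RUNS flags, i.e. about 20 false alarms here.
print((p_values < ALPHA).sum(), "false alarms in", RUNS, "clean runs")
```

This is also why rerunning the experiment is such a useful diagnostic: two independent false alarms in a row are far less likely than one.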

Experiment tracking and sample ratio mismatches

SRMs happen for all kinds of reasons. But they tend to crop up most with fancy JavaScript event tracking. This type of "frontend" tracking typically gets triggered “on-view” (when a certain thing becomes visible on the screen) or “on-click” (when a user interacts with a link, button, image, icon or whatever). When you track this way, sometimes the code doesn't run fast enough to track each person who may have been exposed to the experiment.

So, be very careful when you use frontend JavaScript tracking (or “client-side” tracking). A more reliable way to track your experiments is backend tracking (also called “server-side” or “software development kit (SDK)” tracking). Backend tracking gets triggered immediately upon the page-load request rather than whenever the JavaScript happens to load. This means everyone in the experiment gets tracked instead of being missed because the JavaScript didn’t load fast enough, as in the sketch below.
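Here’s a minimal sketch of what server-side exposure tracking can look like, using Flask purely as an example framework; the endpoint, cookie name, and log_exposure helper are all hypothetical:

```python
import hashlib
from flask import Flask, request

app = Flask(__name__)

def assign_variant(user_id: str) -> str:
    # Same deterministic hash-bucketing idea as the earlier sketch.
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "control" if h % 2 == 0 else "treatment"

def log_exposure(user_id: str, variant: str) -> None:
    """Hypothetical tracking call: in practice this would write to
    your experimentation platform, not stdout."""
    print(f"exposure user={user_id} variant={variant}")

@app.route("/landing")
def landing():
    user_id = request.cookies.get("uid", "anonymous")
    variant = assign_variant(user_id)
    # The exposure is recorded while the server handles the request,
    # so every participant is counted -- nothing depends on
    # client-side JavaScript loading in time.
    log_exposure(user_id, variant)
    return f"<h1>Landing page ({variant} experience)</h1>"
```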

Learn more about the different types of tracking on Convert's blog. 

More useful resources

The Good Experimental Design toolkit

The Good Experimental Design toolkit templates and checklist level up your experimental design. As Ronald Fisher learned, experiment data is only as good as the design you put into it.


Lukas Vermeer’s manual sample ratio mismatch (SRM) checker

Randomization is the hidden power behind A/B testing. When your sample is randomized properly, confounds are spread evenly across the control and treatment groups, so their effects cancel out. This allows you to trust any cause-and-effect relationship you observe.


A/B testing tool comparison

Speero’s A/B testing tool comparison website helps you find the right experimentation tool quickly and easily. It includes a comprehensive list of options. If you’re […]
