Mutual Exclusion
Unlike other platforms, which require manually creating and managing exclusion rules between concurrent tests, Shoplift handles mutual exclusion out of the box. This means you can run multiple tests simultaneously with confidence, as any tests that might have significant interaction effects are mutually exclusive by default.
What is Mutual Exclusion?
Mutual exclusion ensures that each visitor is only enrolled in one of several potentially conflicting tests. This protects data integrity by preventing interaction effects and attribution errors that can occur when a single visitor is exposed to multiple experiments.
How Mutual Exclusion Works
Shoplift automatically enforces mutual exclusion between tests that could have significant interaction effects. The specific rules depend on which tests you run simultaneously.
When a visitor lands on your site, views a specific page, or takes an action that would qualify them for multiple conflicting tests, Shoplift randomly assigns them to just one.
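As a rough illustration of that assignment step, here is a minimal sketch in TypeScript of picking exactly one test from the set a visitor qualifies for. The `Test` shape and `assignExclusive` function are assumptions made for this example, not Shoplift's actual API.

```typescript
interface Test {
  id: string;
}

// Pick exactly one of the conflicting tests the visitor qualifies for,
// uniformly at random, so no visitor is enrolled in more than one.
function assignExclusive(qualifyingTests: Test[]): Test | null {
  if (qualifyingTests.length === 0) return null;
  const index = Math.floor(Math.random() * qualifyingTests.length);
  return qualifyingTests[index];
}
```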
When Mutual Exclusion Applies
Whether mutual exclusion applies depends on a test's entry criteria, which can be either global or conditional.
Global entry tests enroll visitors from any page, as with Theme tests and Automatic API tests.
Conditional entry tests enroll visitors only on specific pages or after specific actions, as with Template tests, URL tests, Manual API tests, and Price tests.
When a global test and a conditional test are both active, visitors who qualify for either are randomly assigned to one test as soon as they enter your site.
If assigned to the global test, they participate immediately.
If assigned to the conditional test, we create a "reservation" for them. This excludes them from the global test right away, but they won't actually enter the conditional test until they meet its entry conditions.
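Here is a hedged sketch of that assignment-plus-reservation flow, with the reservation modeled explicitly. The types and function names are illustrative assumptions, not Shoplift's implementation.

```typescript
type Entry = "global" | "conditional";

interface ActiveTest {
  id: string;
  entry: Entry;
}

interface VisitorState {
  participatingIn?: string; // test the visitor has actually entered
  reservedFor?: string;     // conditional test held for them, not yet entered
}

function onSiteEntry(tests: ActiveTest[], visitor: VisitorState): void {
  // Randomly pick one of the mutually exclusive tests when the visitor arrives.
  const chosen = tests[Math.floor(Math.random() * tests.length)];

  if (chosen.entry === "global") {
    visitor.participatingIn = chosen.id; // participates immediately
  } else {
    visitor.reservedFor = chosen.id;     // excluded from the other tests,
                                         // but not yet counted in this one
  }
}

function onEntryConditionMet(testId: string, visitor: VisitorState): void {
  // The reservation converts to participation only when the entry condition fires,
  // e.g. the visitor reaches a tested template or scrolls a tested price into view.
  if (visitor.reservedFor === testId) {
    visitor.participatingIn = testId;
    visitor.reservedFor = undefined;
  }
}
```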
Example: Theme Test and Price Test Exclusion
Imagine you're running two tests—a theme test and a price test. The theme test uses global entry, so visitors can be assigned from any page. The price test uses conditional entry, so visitors are only assigned once they scroll a tested price into view.
Here's what happens when a visitor lands on your site:
They're immediately and randomly assigned to one of the two tests.
If assigned to the theme test, they enter it right away and are excluded from the price test.
If assigned to the price test, we create a reservation for them. They're excluded from the theme test, but they won't actually enter the price test until they scroll a tested price into view during their visit.
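As one hedged sketch of how a "price scrolled into view" entry condition could be detected in the browser, the snippet below uses an IntersectionObserver. The `[data-product-price]` selector and `enterPriceTest` callback are assumptions for the example, not Shoplift's actual code.

```typescript
function watchPriceVisibility(enterPriceTest: () => void): void {
  const priceEl = document.querySelector("[data-product-price]");
  if (!priceEl) return;

  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      enterPriceTest();      // convert the reservation into actual participation
      observer.disconnect(); // only needs to fire once per visit
    }
  });

  observer.observe(priceEl);
}
```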
Below is a table with enrollment and participation criteria for each test:
| Test type | Entry | Participation | Entry criteria |
| --- | --- | --- | --- |
| Theme | Global | Immediate | Any page load |
| Automatic API | Global | Immediate | Any page load |
| Manual API | Conditional | Deferred | Any defined criteria |
| Template | Conditional | Deferred | Specific pages |
| URL | Conditional | Deferred | Specific page |
| Price | Conditional | Deferred | When price is viewed |
How many tests can I run at the same time?
There's no limit to how many tests you can run at the same time. However, when tests are mutually exclusive, your traffic is split between them—so the more tests you run, the smaller each test's sample size becomes. This means each test will take longer to reach statistical significance.
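As a back-of-the-envelope illustration (the visitor and variant counts below are made up for the example), here is how splitting traffic across mutually exclusive tests shrinks each variant's sample:

```typescript
// Traffic is first split across mutually exclusive tests, then across variants.
function visitorsPerVariant(
  monthlyVisitors: number,
  mutuallyExclusiveTests: number,
  variantsPerTest: number,
): number {
  return monthlyVisitors / mutuallyExclusiveTests / variantsPerTest;
}

visitorsPerVariant(30_000, 1, 2); // 15,000 visitors per variant
visitorsPerVariant(30_000, 3, 2); //  5,000 per variant -> slower to reach significance
```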
Why doesn't Shoplift mark same-funnel tests as conflicting?
Shoplift doesn't automatically flag tests in the same funnel as conflicting because standard statistical practices keep each test's results valid, even when experiments overlap.
Random distribution spreads interaction effects evenly. Traffic is randomly split for each experiment, so when multiple tests run along one funnel, all combinations of variations occur across users. Any interaction effect is distributed evenly rather than biasing one group—and with sufficient sample size, these effects cancel out.
Statistical significance filters out noise. Shoplift requires 95% confidence and sufficient sample size before declaring a winner. This high bar means small cross-test effects are treated as noise and won't trigger a false positive. Only genuinely significant improvements come through.
This approach is proven at scale. Companies like Meta run thousands of simultaneous experiments—even on the same user journey—by relying on large sample sizes and rigorous statistics to isolate each effect.
This means you can safely run more tests in parallel to speed up optimization.
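To illustrate the first point above, here is a rough simulation sketch: because assignment to an overlapping test B is independent of assignment to test A, B's effect lands evenly in both of A's groups, and A's measured lift stays close to its true lift. All rates and counts are invented for the example; this is not how Shoplift computes results.

```typescript
function simulateLiftOfTestA(visitors: number): number {
  let convVariant = 0, nVariant = 0, convControl = 0, nControl = 0;

  for (let i = 0; i < visitors; i++) {
    const inA = Math.random() < 0.5; // test A: the test we're measuring
    const inB = Math.random() < 0.5; // test B: another test on the same funnel

    let rate = 0.05;                 // baseline conversion rate
    if (inA) rate += 0.01;           // true effect of test A
    if (inB) rate += 0.02;           // effect of test B, independent of A

    const converted = Math.random() < rate;
    if (inA) { nVariant++; if (converted) convVariant++; }
    else     { nControl++; if (converted) convControl++; }
  }

  // Hovers around A's true +0.01 lift despite test B running alongside.
  return convVariant / nVariant - convControl / nControl;
}

console.log(simulateLiftOfTestA(1_000_000));
```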