A/B testing is a randomized experimentation process in which users are split into two groups and each group is exposed to a different app configuration. If you have a hypothesis that requires you to change one variable and measure the effect of that change, this is where you start.
You can use A/B testing to validate any hypothesis: adding ad units, segments (definition / behavior), ad networks, bidding vs. waterfall, open auction optimizations, timeouts, etc.
To help you measure the effect of a change, we use Bayesian inference to state the probability that any one variant is the best overall. As an experiment generates more data, we recommend the variant that is best overall and report how likely it is to be the best. This helps you make an informed and accurate decision quickly, without the mistake of acting on early data.
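The sketch below illustrates, in simplified form, how a "P(Beat All)" figure can be estimated with posterior sampling. It is not the platform's actual model: the variant names, the Normal posterior approximation, and the numbers are illustrative assumptions only.

```python
# Minimal sketch (an assumption, not the platform's model) of estimating
# P(Beat All) by sampling each variant's posterior and counting how often
# each one comes out on top.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-variant summaries: observed mean revenue per impression
# and its standard error (illustrative numbers only).
variants = {
    "control":   {"mean": 4.10, "se": 0.12},
    "variant_a": {"mean": 4.35, "se": 0.15},
    "variant_b": {"mean": 4.05, "se": 0.14},
}

draws = 100_000
# Approximate each variant's posterior over mean revenue/impression as Normal.
samples = np.column_stack([
    rng.normal(v["mean"], v["se"], draws) for v in variants.values()
])

# P(Beat All): fraction of posterior draws in which a variant is the best.
best = samples.argmax(axis=1)
for i, name in enumerate(variants):
    print(f"{name}: P(Beat All) = {np.mean(best == i):.3f}")
```

With more data, each standard error shrinks, the posteriors overlap less, and the P(Beat All) estimates move decisively toward one variant, which is why acting on early data is risky.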
We also help you understand how experiments impact other KPIs, such as retention rate and LTV, by making experiments and variants available in Cohort Analysis.
To create an experiment, first open the App Workspace: click All Apps, then select the app from the menu. Go to Reports / Experiments and click the create button. To create an A/B experiment, select A/B.
To define an experiment,
- Start with the hypothesis
- Create randomized user groups with uniform probability (Uniform Choice) or weighted probability (Weighted Random)
- Configure the control group, the size of the user group, and the targeting group
Size of user group defines what percentage of eligible users will be exposed to a particular treatment. Note that once users are mapped to a particular treatment, they are always exposed to the same treatment for as long as the experiment is Live (see the sketch below).
The targeting group selected for the control is the currently active targeting group that all eligible users are exposed to.
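As a rough illustration of sticky randomized assignment (an assumption, not the platform's implementation), the sketch below hashes the user ID to a stable position in [0, 1] and maps it to a treatment by cumulative weight. Uniform Choice is the special case of equal weights; the function name, IDs, and weights are hypothetical.

```python
# Minimal sketch of sticky randomized assignment: hashing the user ID gives
# every user a stable bucket, so the same user always lands in the same
# treatment for the lifetime of the experiment.
import hashlib

def assign_treatment(user_id: str, experiment_id: str, weights: dict) -> str:
    """Map a user to a treatment with probability proportional to its weight."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
    total = sum(weights.values())
    cumulative = 0.0
    for treatment, weight in weights.items():
        cumulative += weight / total
        if bucket <= cumulative:
            return treatment
    return treatment  # guard against floating-point rounding at the boundary

# Weighted Random: 80% of eligible users stay on control, 20% see the variant.
print(assign_treatment("user-123", "exp-42", {"control": 0.8, "variant_a": 0.2}))
```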
To define a variant,
- Configure the variant group, the size of the user group, and the targeting group
- Configure the variant targeting group, primarily networks, ad lines, and open auction
For accurate evaluation, we recommend that variant ad line credentials (ad unit ID, placement ID, zone ID, tag ID, etc.) are set up afresh with the respective networks.
To add ad lines for networks that are not yet enabled, first go to Partner / Networks and enable the network. Then create new ad lines for the variant by clicking Configure for each variant.
Experiments can be saved in the Draft state. To start an experiment, click the start button.
- An experiment cannot be started without a variant.
- Once an experiment is started (Live), neither the size of the user groups nor the targeting groups can be changed.
- It is recommended that networks, ad lines and open auction settings not be changed after an experiment is started.
- A Live experiment can be Paused or Concluded.
To view experiment results, go to Reports / Experiments and click the results icon for the respective experiment.
The following are reported for each variant:

| Metric | Description |
| --- | --- |
| DAU | Count of DAU that have been exposed to a treatment, either control or any variant. |
| Ad Requests | Count of ad requests that have been exposed to a treatment, either control or any variant. |
| Impressions | Count of impressions that have been shown to users exposed to a treatment, either control or any variant. |
| Revenue | Total revenue reported for a treatment, either control or any variant. |
| Fill Rate | Fill rate reported for a treatment, either control or any variant. |
| eCPM | eCPM reported for a treatment, either control or any variant. |
| Credible Interval | The range in which revenue per impression will fall with a probability of 0.95, for a treatment, either control or any variant. |
| P(Beat All) | The likelihood that a variant will beat all other variants. |
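As a companion to the Credible Interval row above, the sketch below shows one common way to read a 95% credible interval off posterior draws: take the central 95% of samples. The draws and parameters are illustrative assumptions, not the platform's exact method.

```python
# Minimal sketch of a 95% credible interval for revenue per impression,
# computed as the 2.5th and 97.5th percentiles of posterior draws.
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical posterior draws of mean revenue per impression for one variant,
# e.g. produced by the sampling approach sketched earlier.
posterior_draws = rng.normal(loc=4.35, scale=0.15, size=100_000)

low, high = np.percentile(posterior_draws, [2.5, 97.5])
print(f"95% credible interval: [{low:.2f}, {high:.2f}]")
```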
To conclude an experiment, select the variant that is delivering the best results, either on the basis of the aggregate metrics or on the basis of the highest P(Beat All), and click the conclude button.
To start a follow-up experiment from a concluded experiment, go to Reports / Experiments and click the copy button. This creates a copy of the Concluded experiment in the Draft state, where it can be edited.