# Experimenting with Placements

Experimenting with placements allows you to try out different placement configurations (variants) to achieve the best returns. Experiments run on a subset of your traffic so that you can understand the effect of a change before applying it to all of your traffic. After running an experiment and analyzing the results, you can either keep the experiment running and change the percentage of traffic the variant receives, or end the experiment in one of two ways: deprecate the original placement configuration in favor of a more profitable variant, or retain the original placement if the experiment showed that the change did not significantly improve your metrics.

{% hint style="info" %}
To run Experiments, use DT FairBid SDK 3.1.0 or later.
{% endhint %}

## Setting Up an Experiment

An Experiment includes the following components:

| Component             | Description                                                                                                                          |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| Control group         | The original active placement configuration. Use the control group as a benchmark to measure the effectiveness of the variant group. |
| Variant group         | The proposed placement with modified configurations that you are testing.                                                            |
| Allocation percentage | The percentage of traffic you want to allocate to the variant.                                                                       |

Before you begin configuring an experiment, identify the following:

* The placement configuration that you want to test.
* The key metric you want to optimize, and the metric value at which you plan to deprecate the control group in favor of the variant group.
* How long to run the experiment. For example, if your placement usually receives 20K impressions and 10K unique impressions per day, DT recommends running the experiment for about a week to gather sufficient data for analysis.
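As a rough illustration of the duration guidance above, the following sketch (a hypothetical helper, not part of the DT Console or SDK, with an illustrative impression target) estimates how many days an experiment should run so that the variant collects enough impressions to analyze:

```python
import math

def experiment_days_needed(daily_impressions: int,
                           variant_allocation: float,
                           target_variant_impressions: int = 10_000) -> int:
    """Estimate how many whole days the experiment should run so the
    variant group receives at least `target_variant_impressions`.

    `target_variant_impressions` is an illustrative threshold,
    not an official DT recommendation.
    """
    variant_per_day = daily_impressions * variant_allocation
    return math.ceil(target_variant_impressions / variant_per_day)

# 20K impressions/day with the default 10% variant allocation
# gives the variant 2K impressions/day:
print(experiment_days_needed(20_000, 0.10))  # 5
```

Larger allocations shorten the required runtime proportionally, which is one reason to revisit the allocation percentage while an experiment is running.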

To set up an experiment:

1. From the **App details** screen, click the placement that you want to test.\
   The **Placement details** screen displays.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/T7q2tCDq3TbfRDQ0PWep/15182816911004" alt="Placement details screen showing the Start Experiment button."></div>

2. Click **Set up experiment**.\
   The **Placement details** screen enters **SETTING UP EXPERIMENT** mode, where the following experiment items are prepared for you:

* Your current placement configuration becomes the control group (Test A) and is no longer editable.
* A copy of the control group (Test B) becomes your variant group, ready for you to modify for the experiment.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/sHy1idQ3YXhMbtAEMqIl/15182914685980" alt="Placement details screen in Experiment Setup mode"></div>

3. While in **SETTING UP EXPERIMENT** mode, complete the following tasks to configure your test:
   * [Name the variant group](#naming-a-variant)
   * [Allocate traffic for the variant group](#allocating-traffic-to-a-variant)
   * [Configure the variant for the properties that you want to test](#adding-a-variant)
4. If you want to begin the experiment later, click **Save**.\
   The experimental placement configurations are saved.

{% hint style="info" %}
Saved experiments do not run until you click the **Start experiment** button.
{% endhint %}

5. To begin the experiment right away, click **Start experiment**.\
   The **Placement details** screen exits **SETTING UP EXPERIMENT** mode. Variant configurations are no longer editable, and the **End experiment** button appears. Additionally, on the **App details** screen, the placement for which you are running the experiment now displays the status `Experiment running`.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/JfB6oh0oowSSnQsfhfxl/15611513266588" alt="Placement details screen while experiment is running"></div>

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/M2LV5Eb7gR5kH2OLwvmw/15611635712668" alt="App details screen while experiment is running"></div>

### Naming a Variant

To name a variant:

1. On the **Placement details** screen, ensure that the screen is in **SETTING UP EXPERIMENT** mode.
2. For the variant you want to name, hover over the default name, and click the **Edit** icon.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/m6FHIsaG5KW3chSyOt5x/15393904618652" alt="Variant Management"></div>

3. Name the variant and click **Update Experiment**.

{% hint style="success" %}
Consider appending the experiment start date to the variant name to help you later determine when to end the experiment.
{% endhint %}

### Allocating Traffic to a Variant

Experiments route a subset of your traffic to use the variant configuration. The default variant allocation is 10%. You can modify this allocation both before and during the experiment.

To adjust traffic allocation for a variant:

1. On the **Placement details** screen, for the variant whose traffic allocation you want to adjust, hover over the current traffic allocation setting, and click the **Edit** icon.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/wmkixG5QqbKpi5TRLnp7/15393904619804" alt="Traffic allocation setting for a variant"></div>

2. Specify an allocation percentage using one of the following methods:
   * Drag the slider to the desired allocation percentage. The allocation for the Control Group automatically adjusts to ensure that total traffic allocation equals 100%.
   * Enter the desired allocation percentage for the variant and control group.
3. Click **Update Experiment**.
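The automatic adjustment in step 2 can be sketched as follows (a hypothetical helper, not console code): when a variant's allocation changes, the control group's share is recomputed so the total stays at 100%.

```python
def rebalance_allocations(allocations: dict[str, float],
                          changed: str,
                          new_pct: float) -> dict[str, float]:
    """Set `changed` to `new_pct` percent and give the control group
    the remainder, so that allocations always total 100%."""
    if not 0 <= new_pct <= 100:
        raise ValueError("allocation must be between 0 and 100")
    updated = dict(allocations)
    updated[changed] = new_pct
    # Sum every non-control allocation, then hand the rest to control.
    others = sum(pct for name, pct in updated.items() if name != "control")
    if others > 100:
        raise ValueError("variant allocations exceed 100%")
    updated["control"] = 100 - others
    return updated

print(rebalance_allocations({"control": 90.0, "Test B": 10.0}, "Test B", 25.0))
# {'control': 75.0, 'Test B': 25.0}
```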

### Modifying a Variant for Testing

Experimenting with placements allows you to modify placement properties and associated networks to analyze the effects on your monetization goal.

As a best practice, modify only one aspect at a time so that you can directly attribute outcomes to a particular change.

To analyze the effects of modifying placement properties such as prices, targeting, and display frequencies, modify the variant placement as you would an actual placement. For more information about the placement properties you can modify, see [Setting Up Placements](https://docs.digitalturbine.com/dt-console/app-management/setting-up-an-existing-app/setting-up-placements).

To analyze the effects of modifying the mediated network instances associated with a placement, add the instance to the variant placement as you would add an instance to an actual placement. For more information about adding a mediated network instance to a placement, see [Setting Up Instances](https://docs.digitalturbine.com/dt-console/app-management/setting-up-an-existing-app/setting-up-instances).

Make all variant modifications on the appropriate variant tab while in **SETTING UP EXPERIMENT** mode. You cannot modify a variant once an experiment is running. When you have finished entering the variant modifications, click **Save variant changes**.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/3BJ7iIHb176gtYm2Wk5k/15394607685404" alt="Variant Modifications"></div>

### Adding a Variant

You can run one experiment per placement, and each experiment can test up to two variants. You can add a second variant only while the **Placement details** screen is in **SETTING UP EXPERIMENT** mode. Base the new variant on either the control group or the existing variant. If needed, [name the new variant](#naming-a-variant).

To add a second variant:

1. On the **Placement details** screen, ensure that the placement is in **SETTING UP EXPERIMENT** mode.
2. Click **Duplicate** for the test group (Control group or existing variant group) upon which you want to base the new variant.\
   A new variant (`Test C`) appears in the experiment.
3. Modify `Test C`, as described in [Modifying a Variant for Testing](#modifying-a-variant-for-testing).

## Analyzing Results

Experimenting with placements allows you to optimize the following metrics:

| Term             | Definition                                                                                                                                   |
| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------- |
| ARPDEU           | Average Revenue (publisher payout) Per Daily Engaged User. Engaged users are users that see at least one ad from the experimental placement. |
| Publisher Payout | Amount the publisher earns from the placement. This metric depends on the traffic allocation of the impressions.                             |
| Fill Rate        | The number of times an ad request is filled by an ad network divided by the total number of ad requests received.                            |
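The definitions above reduce to simple ratios. The following sketch (illustrative formulas only; the function and parameter names are hypothetical, not a DT API) shows how the metrics are computed from raw counts:

```python
def fill_rate(filled_requests: int, total_requests: int) -> float:
    """Ad requests filled by an ad network divided by total ad requests."""
    return filled_requests / total_requests if total_requests else 0.0

def arpdeu(publisher_payout: float, daily_engaged_users: int) -> float:
    """Average revenue (publisher payout) per daily engaged user.
    Engaged users saw at least one ad from the experimental placement."""
    return publisher_payout / daily_engaged_users if daily_engaged_users else 0.0

print(fill_rate(8_500, 10_000))          # 0.85
print(round(arpdeu(120.0, 4_000), 4))    # 0.03
```

Because publisher payout scales with the impressions each group receives, compare groups on per-user or per-request metrics (ARPDEU, fill rate) rather than on raw payout when allocations are unequal.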

You can carry out a deeper analysis of the experiment data in the [Dynamic Reports](https://docs.digitalturbine.com/dt-console/reports/using-the-reports) module of the DT Console:

1. In the [DT Console](https://console.fyber.com/), go to **Dynamic Reports** → **App Performance**.
2. Apply the following filters in order:
   * Publisher
   * App
   * Placement
3. Split the data using the Variant Name as a dimension.
4. Add further dimensions relevant to your testing, such as Publisher Name, App Name, and Placement Name.
5. Select the metrics that you want to analyze.
6. Based on the gathered data, determine which test group to retain, and [end the experiment](#ending-an-experiment).
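Once you have per-variant metrics from Dynamic Reports, a simple relative-lift comparison can help decide which group to retain. The sketch below uses hypothetical exported ARPDEU values:

```python
def relative_lift(control_value: float, variant_value: float) -> float:
    """Percentage change of the variant relative to the control."""
    return (variant_value - control_value) / control_value * 100

# Hypothetical ARPDEU values split by Variant Name in Dynamic Reports:
results = {"Test A (control)": 0.030, "Test B": 0.033}

lift = relative_lift(results["Test A (control)"], results["Test B"])
print(f"Test B ARPDEU lift vs. control: {lift:.1f}%")  # 10.0%
```

Whether a given lift is actionable depends on the threshold you identified before starting the experiment and on having gathered enough data, per the duration guidance earlier in this article.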

## Ending an Experiment

Once you have gathered enough data, end the experiment.

To end an experiment:

1. From the **App details** screen, click the placement running the experiment you want to stop.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/fZlzd5fzYUsqchpjN99T/15182860776860" alt="App details screen while experiment is running"></div>

2. Click **End experiment**.\
   A dialog box appears, prompting you to select which experiment group to retain.

<div align="left" data-with-frame="true"><img src="https://content.gitbook.com/content/LbREhkP3WlLtP6TNVZ2Q/blobs/KUSKpYNuCBzmDg8ai2JI/15182860777756" alt="End Experiment Confirmation"></div>

3. To keep your current configuration, select **Control Group**, and click **End experiment**.
4. To switch to the new configuration, select the variant, and click **End experiment**.
