
...

The Experiments feature allows different line items to be targeted to distinct, non-overlapping groups of users, enabling performance comparisons between setups. Also known as A/B testing or Randomized Control Trials (RCTs), this feature creates isolated user subsets, ensuring each group is exposed to only one line item configuration at a time. Read more on API support for Experiments here.
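The platform's actual assignment algorithm is not documented here, but the mechanics of non-overlapping group assignment can be illustrated with a rough Python sketch: each user is hashed into one of 1000 stable buckets (mirroring the experiment_user_index log field), and bucket ranges are mapped to named test groups. The function names and the 50/50 split below are hypothetical.

```python
import hashlib
from typing import Dict, Optional


def assign_experiment_user_index(user_id: str) -> int:
    """Hash a user ID into one of 1000 stable buckets (1-1000).

    Hashing (rather than random draws per request) keeps a user in the
    same bucket every time, so groups never overlap across impressions.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 1000 + 1


def assign_test_group(user_id: str, split: Dict[str, range]) -> Optional[str]:
    """Map a user's bucket to a named, non-overlapping test group."""
    bucket = assign_experiment_user_index(user_id)
    for group, bucket_range in split.items():
        if bucket in bucket_range:
            return group
    return None  # user falls outside all defined groups


# Hypothetical 50/50 split across the 1000 buckets
split = {"group_a": range(1, 501), "group_b": range(501, 1001)}
group = assign_test_group("user-123", split)
```

Because the bucket is derived deterministically from the user ID, repeated calls for the same user always return the same group, which is the property that keeps each user exposed to only one line item configuration.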

Use Cases

  • Compare Bidding Strategies: Multiple bidding methods can be tested to determine which yields the best results. This can be done by creating line items with identical targeting that differ only in bidding strategy, then splitting the audience across them to compare performance.

  • Compare Multiple Creatives: Different sets of creatives or landing pages can be tested to identify which generates more conversions. This is achieved by creating a separate line item for each set of creatives and assigning a distinct test group to each.

  • Persistent Control Group: A control group of creatives (e.g., PSAs) can be set aside to measure incremental lift. This is done using the X Percent Hold-out Test Plans, where the PSA line item is assigned to the Control Group (a small portion of users) and all other line items are assigned to the Test Group (a larger portion of users).
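For the hold-out use case above, a small sketch shows how a control percentage could map onto a 1000-bucket user space. The helper name and split shape are illustrative assumptions, not the platform's API.

```python
from typing import Dict


def holdout_split(control_pct: int) -> Dict[str, range]:
    """Sketch of an X Percent Hold-out plan over buckets 1..1000.

    With 1000 buckets, each percentage point covers 10 buckets. The
    first control_pct percent of buckets form the Control Group (e.g.
    the PSA line item); the remainder form the Test Group.
    """
    cutoff = control_pct * 10  # 10 buckets per percentage point
    return {
        "control": range(1, cutoff + 1),
        "test": range(cutoff + 1, 1001),
    }


split = holdout_split(10)  # 10% hold-out control, 90% test
```

A user's bucket (see experiment_user_index in the log fields) would then be checked against these ranges to decide which line items they are eligible for.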

...

The following experiment fields are available in the Bid logs, Win logs, Conversion logs (attributed), and Streaming logs:

  • experiment_user_index: A value between 1 and 1000, randomly assigned to each user, used for test group assignment.
  • test_group_id: The test group assigned to the user who received the impression.
  • test_plan_id: The test plan associated with the campaign.
  • experiment_id_type: The ID type selected at the campaign level to run the experiment. One of Standard, IP, LIVERAMP_PERSON, or LIVERAMP_HOUSEHOLD.
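As a sketch of how these log fields might be used downstream, the snippet below compares conversion rates per test group, assuming log rows have already been parsed into dicts keyed by the field names above. The sample rows are fabricated for illustration.

```python
from collections import defaultdict
from typing import Dict, Iterable


def conversion_rate_by_group(
    impressions: Iterable[dict], conversions: Iterable[dict]
) -> Dict[str, float]:
    """Compute conversions / impressions per test_group_id."""
    imps: Dict[str, int] = defaultdict(int)
    convs: Dict[str, int] = defaultdict(int)
    for row in impressions:
        imps[row["test_group_id"]] += 1
    for row in conversions:
        convs[row["test_group_id"]] += 1
    return {group: convs[group] / imps[group] for group in imps}


# Fabricated sample rows: 200 test / 100 control impressions
impressions = (
    [{"test_group_id": "test"}] * 200 + [{"test_group_id": "control"}] * 100
)
conversions = (
    [{"test_group_id": "test"}] * 10 + [{"test_group_id": "control"}] * 2
)
rates = conversion_rate_by_group(impressions, conversions)
# rates["test"] -> 0.05, rates["control"] -> 0.02
```

Joining win logs and attributed conversion logs on test_group_id (or test_plan_id for multi-campaign plans) is enough for a simple lift comparison between groups.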

API

...


