When you are reaching out to your users or starting a new marketing campaign, you want to make sure that you get it right. A/B testing can help you find the optimal wording and presentation by testing message variants on selected portions of your user base. Whether your goal is better retention or conversion on an offer, A/B testing can perform statistical analysis to determine if a message variant is outperforming the baseline for your selected objective.
An experiment that uses the Notifications composer lets you evaluate multiple variants of a single notification message. To A/B test message variants against a baseline, do the following:
Sign in to the Firebase console and verify that Google Analytics is enabled in your project so that the experiment has access to Analytics data.
If you did not enable Google Analytics when creating your project, you can enable it on the Integrations tab, which you can access using settings > Project settings in the Firebase console.
In the Engage section of the Firebase console navigation bar, click A/B Testing.
Click Create experiment, and then select Notifications when prompted for the service you want to experiment with.
Enter a Name and optional Description for your experiment, and click Next.
Fill out the Targeting fields, first choosing the app that uses your experiment. You can also target a subset of your users to participate in your experiment by choosing options that include the following:
Set the Percentage of target users: Select the percentage of your app's user base matching the criteria set under Target users that you want to evenly divide between the baseline and one or more variants in your experiment. This can be any percentage between 0.01% and 100%. Percentages are randomly reassigned to users for each experiment, including duplicated experiments.
In the Variants section, type a message to send to the baseline group in the Enter message text field. To send no message to the baseline group, leave this field blank.
(optional) To add more than one variant to your experiment, click Add Variant. By default, experiments have one baseline and one variant.
(optional) Enter a name for each variant in your experiment to replace the names Variant A, Variant B, etc.
Define a goal metric for your experiment to use when evaluating experiment variants along with any desired additional metrics from the dropdown list. These metrics include built-in objectives (engagement, purchases, revenue, retention, etc.), Analytics conversion events, and other Analytics events.
Choose options for your message:
Click Review to save your experiment.
You are allowed up to 300 experiments per project, which could consist of up to 24 running experiments, with the rest as draft or completed.
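The targeting step above evenly divides the selected percentage of users between the baseline and variants, and re-randomizes assignments for each experiment. A minimal Python sketch of that idea (illustrative only: the function name and hashing scheme are hypothetical, and this is not Firebase's actual server-side assignment algorithm):

```python
import hashlib

def assign_variant(user_id: str, experiment_id: int,
                   exposure_pct: float, n_arms: int):
    """Hypothetical sketch: hash a user into the experiment population and,
    if included, divide included users evenly among baseline + variants.
    Hashing on (experiment_id, user_id) re-randomizes per experiment."""
    digest = hashlib.sha256(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 10_000   # 0..9999, i.e. 0.01% granularity
    if bucket >= exposure_pct * 100:    # e.g. 5% exposure -> buckets 0..499
        return None                     # user is not in the experiment
    return bucket % n_arms              # 0 = baseline, 1..n-1 = variants

# Example: 5% exposure, one baseline plus two variants
arm = assign_variant("user-123", 25, 5.0, 3)
```

Because the hash includes the experiment ID, duplicating an experiment under a new ID reassigns users, matching the behavior described above.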
Validate your experiment on a test device
For each Firebase installation, you can retrieve the FCM registration token associated with it. You can use this token to test specific experiment variants on a test device with your app installed. To validate your experiment on a test device, do the following:
Swift
Messaging.messaging().token { token, error in
  if let error = error {
    print("Error fetching FCM registration token: \(error)")
  } else if let token = token {
    print("FCM registration token: \(token)")
    self.fcmRegTokenMessage.text = "Remote FCM registration token: \(token)"
  }
}

Objective-C
[[FIRMessaging messaging] tokenWithCompletion:^(NSString *token, NSError *error) {
  if (error != nil) {
    NSLog(@"Error getting FCM registration token: %@", error);
  } else {
    NSLog(@"FCM registration token: %@", token);
    self.fcmRegTokenMessage.text = token;
  }
}];

Java
FirebaseMessaging.getInstance().getToken()
    .addOnCompleteListener(new OnCompleteListener<String>() {
      @Override
      public void onComplete(@NonNull Task<String> task) {
        if (!task.isSuccessful()) {
          Log.w(TAG, "Fetching FCM registration token failed", task.getException());
          return;
        }

        // Get new FCM registration token
        String token = task.getResult();

        // Log and toast
        String msg = getString(R.string.msg_token_fmt, token);
        Log.d(TAG, msg);
        Toast.makeText(MainActivity.this, msg, Toast.LENGTH_SHORT).show();
      }
    });

Kotlin
FirebaseMessaging.getInstance().token.addOnCompleteListener(OnCompleteListener { task ->
  if (!task.isSuccessful) {
    Log.w(TAG, "Fetching FCM registration token failed", task.exception)
    return@OnCompleteListener
  }

  // Get new FCM registration token
  val token = task.result

  // Log and toast
  val msg = getString(R.string.msg_token_fmt, token)
  Log.d(TAG, msg)
  Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
})

C++
firebase::InitResult init_result;
auto* installations_object = firebase::installations::Installations::GetInstance(
    firebase::App::GetInstance(), &init_result);
installations_object->GetToken().OnCompletion(
    [](const firebase::Future<std::string>& future) {
      if (future.status() == kFutureStatusComplete &&
          future.error() == firebase::installations::kErrorNone) {
        printf("Installations Auth Token %s\n", future.result()->c_str());
      }
    });

Unity
Firebase.Messaging.FirebaseMessaging.DefaultInstance.GetTokenAsync().ContinueWith(
    task => {
      if (!(task.IsCanceled || task.IsFaulted) && task.IsCompleted) {
        UnityEngine.Debug.Log(System.String.Format("FCM registration token {0}", task.Result));
      }
    });
Whether you create an experiment with Remote Config, the Notifications composer, or Firebase In-App Messaging, you can then validate and start your experiment, monitor your experiment while it is running, and increase the number of users included in your running experiment.
When your experiment is done, you can take note of the settings used by the winning variant, and then roll out those settings to all users. Or, you can run another experiment.
Monitor your experiment
Once an experiment has been running for a while, you can check in on its progress and see what your results look like for the users who have participated in your experiment so far.
Click Running, and then click on, or search for, the title of your experiment. On this page, you can view various observed and modeled statistics about your running experiment, including the following:
After your experiment has run for a while (at least 7 days for FCM and In-App Messaging or 14 days for Remote Config), data on this page indicates which variant, if any, is the "leader." Some measurements are accompanied by a bar chart that presents the data in a visual format.
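The minimum runtimes above (7 days for FCM and In-App Messaging, 14 days for Remote Config) can be expressed as a simple lookup; the function name and service keys in this sketch are hypothetical:

```python
from datetime import date, timedelta

# Minimum days an experiment must run before leader data appears,
# per the guidance above. Keys are hypothetical identifiers.
MIN_DAYS = {"fcm": 7, "in_app_messaging": 7, "remote_config": 14}

def earliest_leader_date(start: date, service: str) -> date:
    """Return the first date on which leader data may appear for a service."""
    return start + timedelta(days=MIN_DAYS[service])

earliest_leader_date(date(2024, 2, 2), "remote_config")  # date(2024, 2, 16)
```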
After an experiment has run long enough that you have a "leader," or winning variant, for your goal metric, you can release the experiment to 100% of users. This lets you select a variant to publish to all users moving forward. Even if your experiment has not created a clear winner, you can still choose to release a variant to all of your users.
Roll out your experiment to all users by doing one of the following:
If you find that an experiment isn't bringing in enough users for A/B Testing to declare a leader, you can increase distribution of your experiment to reach a larger percentage of the app's user base.
You can target the users to include in your experiment using the following user-targeting criteria.
Targeting criterion: Version
Operator(s): contains, does not contain, matches exactly, contains regex
Value(s): A value for the app version you want to target.
Note: When using any of the contains, does not contain, or matches exactly operators, you can provide a comma-separated list of values. When using the contains regex operator, you can create regular expressions in RE2 format. Your regular expression can match all or part of the target version string. You can also use the ^ and $ anchors to match the beginning, end, or entirety of a target string.

Targeting criterion: User audience(s)
Operator(s): includes all of, includes at least one of, does not include all of, does not include at least one of
Value(s): One or more Analytics audiences used to select users who might be included in the experiment.

Targeting criterion: User property
Operator(s): For numbers: <, <=, =, >=, >. For strings: contains, does not contain, matches exactly, contains regex
Value(s): One or more Analytics user properties used to select users who might be included in the experiment.
Note: On the client, you can set only string values for user properties. For conditions that use numeric operators, the Remote Config service converts the value of the corresponding user property into an integer/float. When using the contains regex operator, you can create regular expressions in RE2 format. Your regular expression can match all or part of the target string. You can also use the ^ and $ anchors to match the beginning, end, or entirety of a target string.

Targeting criterion: Country/Region
Operator(s): N/A
Value(s): One or more countries or regions used to select users who might be included in the experiment.

Targeting criterion: Languages
Operator(s): N/A
Value(s): One or more languages and locales used to select users who might be included in the experiment.

Targeting criterion: First open
Operator(s): More than, Less than, Between
Value(s): Time, in days, since the user first opened the targeted app.

When you create your experiment, you choose a primary, or goal, metric that is used to determine the winning variant. You should also track other metrics to help you better understand each experiment variant's performance and track important trends that may differ for each variant, like user retention, app stability, and in-app purchase revenue. You can track up to five non-goal metrics in your experiment.
For example, say you've added new in-app purchases to your app and want to compare the effectiveness of two different "nudge" messages. In this case, you might set Purchase revenue as your goal metric, because you want the winning variant to represent the notification that resulted in the highest in-app purchase revenue. And because you also want to track which variant resulted in more future conversions and retained users, you might add the following in Other metrics to track:
The following tables provide details on how goal metrics and other metrics are calculated.
Goal metrics

Crash-free users: The percentage of users who have not encountered errors in your app that were detected by the Firebase Crashlytics SDK during the experiment.
Estimated ad revenue: Estimated ad earnings.
Estimated total revenue: Combined value for purchase and estimated ad revenues.
Purchase revenue: Combined value for all purchase and in_app_purchase events.
Retention (1 day): The number of users who return to your app on a daily basis.
Retention (2-3 days): The number of users who return to your app within 2-3 days.
Retention (4-7 days): The number of users who return to your app within 4-7 days.
Retention (8-14 days): The number of users who return to your app within 8-14 days.
Retention (15+ days): The number of users who return to your app 15 or more days after they last used it.
first_open: An Analytics event that triggers when a user first opens an app after installing or reinstalling it. Used as part of a conversion funnel.

Other metrics

notification_dismiss: An Analytics event that triggers when a notification sent by the Notifications composer is dismissed (Android only).
notification_receive: An Analytics event that triggers when a notification sent by the Notifications composer is received while the app is in the background (Android only).
os_update: An Analytics event that tracks when the device operating system is updated to a new version. To learn more, see Automatically collected events.
screen_view: An Analytics event that tracks screens viewed within your app. To learn more, see Track Screenviews.
session_start: An Analytics event that counts user sessions in your app. To learn more, see Automatically collected events.
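The retention metrics above partition returning users by how many days pass before they come back. A small illustrative Python sketch of that bucketing (the function name is hypothetical; Firebase computes these metrics server-side):

```python
def retention_bucket(days_since_last_use: int) -> str:
    """Classify a returning user into the retention windows listed above.
    Illustrative only; not Firebase's implementation."""
    if days_since_last_use <= 1:
        return "Retention (1 day)"
    if days_since_last_use <= 3:
        return "Retention (2-3 days)"
    if days_since_last_use <= 7:
        return "Retention (4-7 days)"
    if days_since_last_use <= 14:
        return "Retention (8-14 days)"
    return "Retention (15+ days)"

retention_bucket(5)  # "Retention (4-7 days)"
```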
BigQuery data export
In addition to viewing A/B Testing experiment data in the Firebase console, you can inspect and analyze experiment data in BigQuery. While A/B Testing does not have a separate BigQuery table, experiment and variant memberships are stored on every Google Analytics event within the Analytics event tables.
The user properties that contain experiment information are of the form userProperty.key like "firebase_exp_%" or userProperty.key = "firebase_exp_01", where 01 is the experiment ID, and userProperty.value.string_value contains the (zero-based) index of the experiment variant.
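As a sketch of how you might interpret these user properties outside of SQL, the following Python snippet (a hypothetical helper, assuming user properties have been flattened into a key-to-value dict) maps experiment IDs to zero-based variant indices:

```python
def extract_experiments(user_properties: dict) -> dict:
    """Map experiment ID -> zero-based variant index from Analytics
    user properties such as {"firebase_exp_25": "1"}.
    Hypothetical helper for illustration."""
    prefix = "firebase_exp_"
    return {
        int(key[len(prefix):]): int(value)
        for key, value in user_properties.items()
        if key.startswith(prefix)
    }

extract_experiments({"firebase_exp_25": "1", "first_open_time": "1700000000"})
# -> {25: 1}: the user is in experiment 25, variant index 1
```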
You can use these experiment user properties to extract experiment data. This gives you the power to slice your experiment results in many different ways and independently verify the results of A/B Testing.
To get started, complete the following as described in this guide:
If you're on the Spark plan, you can use the BigQuery sandbox to access BigQuery at no cost, subject to Sandbox limits. See Pricing and the BigQuery sandbox for more information.
First, make sure that you're exporting your Analytics data to BigQuery:
Select a region and choose export settings.
Note: For more information about Google Analytics for Firebase settings, see Data collection.
Click Link to BigQuery.
Depending on how you chose to export data, it may take up to a day for the tables to become available. For more information about exporting project data to BigQuery, see Export project data to BigQuery.
Access A/B Testing data in BigQuery
Before querying for data for a specific experiment, you'll want to obtain some or all of the following to use in your query:

- The experiment ID. It appears in the URL of the Experiment overview page. For example, in https://console.firebase.google.com/project/my_firebase_project/config/experiment/results/25, the experiment ID is 25.
- The name of your Analytics events table (for example, project_name.analytics_000000000.events).
- The experiment date range. Analytics event tables are partitioned daily with a YYYYMMDD suffix. So, if your experiment ran from February 2, 2024 through May 2, 2024, you'd specify a _TABLE_SUFFIX between '20240202' AND '20240502'. For an example, see Select a specific experiment's values.
- The names of the events you want data for, such as in_app_purchase, ad_impression, or user_retention events.

After you gather the information you need to generate your query:
If you're using the Blaze plan, the Experiment overview page provides a sample query that returns the experiment name, variants, event names, and the number of events for the experiment you're viewing.
To obtain and run the auto-generated query:
The following example shows a generated query for an experiment with three variants (including the baseline) named "Winter welcome experiment." It returns the active experiment name, variant name, unique event, and event count for each event. Note that the query builder doesn't specify your project name in the table name, as it opens directly within your project.
/*
This query is auto-generated by Firebase A/B Testing for your
experiment "Winter welcome experiment".
It demonstrates how you can get event counts for all Analytics
events logged by each variant of this experiment's population.
*/
SELECT
'Winter welcome experiment' AS experimentName,
CASE userProperty.value.string_value
WHEN '0' THEN 'Baseline'
WHEN '1' THEN 'Welcome message (1)'
WHEN '2' THEN 'Welcome message (2)'
END AS experimentVariant,
event_name AS eventName,
COUNT(*) AS count
FROM
`analytics_000000000.events_*`,
UNNEST(user_properties) AS userProperty
WHERE
(_TABLE_SUFFIX BETWEEN '20240202' AND '20240502')
AND userProperty.key = 'firebase_exp_25'
GROUP BY
experimentVariant, eventName
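The _TABLE_SUFFIX bounds in the query above are simply dates formatted as YYYYMMDD. A quick sketch, assuming you track experiment start and end dates as Python date objects:

```python
from datetime import date

def table_suffix(d: date) -> str:
    """Format a date as the YYYYMMDD _TABLE_SUFFIX used by
    Analytics daily event tables in BigQuery."""
    return d.strftime("%Y%m%d")

# The experiment date range from the example above
start, end = table_suffix(date(2024, 2, 2)), table_suffix(date(2024, 5, 2))
# start == '20240202', end == '20240502'
```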
For additional query examples, proceed to Explore example queries.
Explore example queries
The following sections provide examples of queries you can use to extract A/B Testing experiment data from Google Analytics event tables.
Extract purchase and experiment standard deviation values from all experiments
You can use experiment results data to independently verify Firebase A/B Testing results. The following BigQuery SQL statement extracts experiment variants, the number of unique users in each variant, the total revenue summed from in_app_purchase and ecommerce_purchase events, and the standard deviations for all experiments within the time range specified as the _TABLE_SUFFIX begin and end dates. You can use the data you obtain from this query with a statistical significance generator for one-tailed t-tests to verify that the results Firebase provides match your own analysis.
For more information about how A/B Testing calculates inference, see Interpret test results.
/*
This query returns all experiment variants, number of unique users,
the average USD spent per user, and the standard deviation for all
experiments within the date range specified for _TABLE_SUFFIX.
*/
SELECT
experimentNumber,
experimentVariant,
COUNT(*) AS unique_users,
AVG(usd_value) AS usd_value_per_user,
STDDEV(usd_value) AS std_dev
FROM
(
SELECT
userProperty.key AS experimentNumber,
userProperty.value.string_value AS experimentVariant,
user_pseudo_id,
SUM(
CASE
WHEN event_name IN ('in_app_purchase', 'ecommerce_purchase')
THEN event_value_in_usd
ELSE 0
END) AS usd_value
FROM `PROJECT_NAME.analytics_ANALYTICS_ID.events_*`
CROSS JOIN UNNEST(user_properties) AS userProperty
WHERE
userProperty.key LIKE 'firebase_exp_%'
AND event_name IN ('in_app_purchase', 'ecommerce_purchase')
AND (_TABLE_SUFFIX BETWEEN 'YYYYMMDD' AND 'YYYYMMDD')
GROUP BY 1, 2, 3
)
GROUP BY 1, 2
ORDER BY 1, 2;
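To independently check significance from this query's output, you can plug each variant's mean revenue per user, standard deviation, and user count into a one-tailed Welch-style test. A sketch with hypothetical numbers, using a normal approximation to the t-distribution (reasonable for the large per-variant user counts typical of experiments; this is not necessarily the exact computation A/B Testing performs):

```python
from math import sqrt
from statistics import NormalDist

def one_tailed_welch(mean_v, sd_v, n_v, mean_b, sd_b, n_b):
    """One-tailed Welch-style test that the variant's per-user revenue
    exceeds the baseline's, from the per-variant aggregates the query
    above returns (usd_value_per_user, std_dev, unique_users).
    Uses a normal approximation instead of the t-distribution."""
    se = sqrt(sd_v**2 / n_v + sd_b**2 / n_b)   # Welch standard error
    t = (mean_v - mean_b) / se                 # test statistic
    p = 1 - NormalDist().cdf(t)                # one-tailed p-value
    return t, p

# Hypothetical aggregates: variant vs. baseline, 10,000 users each
t, p = one_tailed_welch(1.45, 6.2, 10_000, 1.30, 5.9, 10_000)
```

A small p-value suggests the variant's revenue lift is unlikely to be chance; compare against your chosen significance threshold.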
Select a specific experiment's values
The following example query illustrates how to obtain data for a specific experiment in BigQuery. This sample query returns the experiment name, variant names (including Baseline), event names, and event counts.
SELECT
'EXPERIMENT_NAME' AS experimentName,
CASE userProperty.value.string_value
WHEN '0' THEN 'Baseline'
WHEN '1' THEN 'VARIANT_1_NAME'
WHEN '2' THEN 'VARIANT_2_NAME'
END AS experimentVariant,
event_name AS eventName,
COUNT(*) AS count
FROM
`analytics_ANALYTICS_PROPERTY.events_*`,
UNNEST(user_properties) AS userProperty
WHERE
(_TABLE_SUFFIX BETWEEN 'YYYYMMDD' AND 'YYYYMMDD')
AND userProperty.key = 'firebase_exp_EXPERIMENT_NUMBER'
GROUP BY
experimentVariant, eventName
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-05-07 UTC.