Adding experiment code

Once you've created your experiment in PostHog, the next step is to add your code:

Step 1: Fetch the feature flag

In your experiment, each user is randomly assigned to a variant (usually either 'control' or 'test'). To check which variant a user has been assigned to, fetch the experiment feature flag. You can then customize their experience based on the value in the feature flag:

// Ensure flags are loaded before usage.
// You only need to call this once, on the first page load for a user.
// See this doc for more details: /docs/feature-flags/manual#ensuring-flags-are-loaded-before-usage
posthog.onFeatureFlags(function () {
    // Feature flags are available at this point
    if (posthog.getFeatureFlag('experiment-feature-flag-key') === 'variant-name') {
        // Do something differently for this user
    }
})

// Otherwise, you can just do:
if (posthog.getFeatureFlag('experiment-feature-flag-key') === 'variant-name') {
    // Do something differently for this user
}

// You can also test your code by overriding the feature flag:
// e.g., posthog.featureFlags.overrideFeatureFlags({ flags: { 'experiment-feature-flag-key': 'test' } })
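
For example, if your experiment tests new copy on a signup button, you can branch on the variant once flags are loaded. This is a minimal sketch; the flag key ('signup-cta-experiment') and the button element are hypothetical:

posthog.onFeatureFlags(function () {
    // Hypothetical flag key and element, shown for illustration only
    const variant = posthog.getFeatureFlag('signup-cta-experiment')
    const button = document.getElementById('signup-button')
    if (variant === 'test') {
        button.textContent = 'Start your free trial'
    } else {
        // 'control' (or flag unavailable): keep the default copy
        button.textContent = 'Sign up'
    }
})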

Feature flags are not yet supported in our Java and Rust SDKs. To run an experiment using these SDKs, see our docs on how to run experiments without feature flags. The same applies to running experiments using our API.

Step 2 (server-side only): Add the feature flag to your events

This step is not required for events that are submitted via our client-side SDKs (e.g., JavaScript web, iOS, Android, React, React Native).

For our backend SDKs, with the exception of the Go library, this step is not required if you have local evaluation enabled and the flag in question has no property filters. In these cases, flag information is automatically appended to every event sent to PostHog.

For any server-side events that are also goal metrics for your experiment, you need to include feature flag information when capturing those events. This ensures that the event is attributed to the correct experiment variant (e.g., test or control).

There are two methods to do this:

Method 1: Include the property $feature/experiment-feature-flag-key with the variant name when capturing events:

client.capture({
    distinctId: 'distinct_id',
    event: 'event_name_of_your_goal_metric',
    properties: {
        '$feature/experiment-feature-flag-key': 'variant-name',
    },
})

Method 2: Set sendFeatureFlags to true

The capture() method has an optional argument sendFeatureFlags, which is set to false by default. This parameter controls whether feature flag information is sent with the event.

Basic usage

Setting sendFeatureFlags to true will include feature flag information with the event:

Node.js
client.capture({
    distinctId: 'distinct_id_of_your_user',
    event: 'event_name',
    sendFeatureFlags: true,
})

Advanced usage (v5.5.0+)

As of version 5.5.0, sendFeatureFlags can also accept an options object for more granular control:

Node.js
client.capture({
    distinctId: 'distinct_id_of_your_user',
    event: 'event_name',
    sendFeatureFlags: {
        // Only use locally cached flags; don't fall back to a server request
        onlyEvaluateLocally: true,
        // Person and group properties used to evaluate property-based flag conditions
        personProperties: { plan: 'premium' },
        groupProperties: { org: { tier: 'enterprise' } },
    },
})

Performance considerations

  • With local evaluation: When local evaluation is configured, setting sendFeatureFlags: true will not make additional server requests. Instead, it uses the locally cached feature flags, and it provides an interface for including any person and/or group properties needed to evaluate the flags in the context of the event (see the sketch after this list).

  • Without local evaluation: PostHog will make an additional request to fetch feature flag information before capturing the event, which adds delay.
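
If you haven't set up local evaluation yet, here is a minimal sketch of enabling it in the Node.js SDK by passing a personal API key when constructing the client (the keys and host below are placeholders):

import { PostHog } from 'posthog-node'

const client = new PostHog('<ph_project_api_key>', {
    host: 'https://us.i.posthog.com',
    // A personal API key enables local flag evaluation, so
    // sendFeatureFlags: true adds no extra server requests
    personalApiKey: '<ph_personal_api_key>',
})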

Breaking change in v5.5.0

Prior to version 5.5.0, feature flags were automatically sent with events when local evaluation was in use, even when sendFeatureFlags was not explicitly set. This behavior was removed in v5.5.0 to make flag sending more predictable and explicit.

If you were relying on this automatic behavior, you must now explicitly set sendFeatureFlags: true to continue sending feature flags with your events.
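
For example, if a capture call previously relied on local evaluation to attach flags automatically, it now needs the explicit parameter. A minimal sketch, with a placeholder event name:

// Before v5.5.0, with local evaluation configured, flags were attached automatically:
// client.capture({ distinctId: 'distinct_id_of_your_user', event: 'purchase' })

// From v5.5.0 onwards, opt in explicitly:
client.capture({
    distinctId: 'distinct_id_of_your_user',
    event: 'purchase', // placeholder event name
    sendFeatureFlags: true,
})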
