Dashboard Settings
The dashboard Settings page is where you configure the building blocks of your experiments, from applications and API keys to goals, metrics and units.
Applications
Here you can declare all of the platforms where your experiments may run. When creating an experiment, you will be asked to select the applications in which the experiment will run.
API Keys
API keys allow your applications to access the API.
Each key can be granted any combination of three permissions:
- Making requests to the SDK
- Accessing the SDK context endpoint
- Fetching experiment data from the API
API keys are independent of environments. The same key can be used in both production and development environments.
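As a minimal sketch of how these settings fit together (the endpoint subdomain, key and names below are placeholders, and the exact options may differ per SDK), the JavaScript SDK is typically initialized with an API key, an environment and an application:

```js
const absmartly = require("@absmartly/javascript-sdk");

// Placeholder values for illustration only.
const sdk = new absmartly.SDK({
  endpoint: "https://your-subdomain.absmartly.io/v1", // collector endpoint
  apiKey: "YOUR_API_KEY",    // an API key created in Settings
  environment: "production", // one of your environments
  application: "website",    // an application declared in Settings
});
```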
CORS Origins
In CORS Origins, you can declare the domains that are allowed to access the API. Each entry can be a specific domain, or a regular expression pattern that matches multiple domains.
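For example, you might allow only https://www.example.com, or use a pattern such as ^https://.*\.example\.com$ to match every subdomain of a hypothetical example.com.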
Environments
Environments can be of type production or development. You can have as many environments of each type as necessary.
Production environments only run production experiments. Development environments run both production and development experiments, but development experiments only collect data from development environments, and production experiments only collect data from production environments.
Goals
Goals are the names given to processes that you will later track in your code.
The goal name will be used in your code, so we recommend using a keyword-like or snake_case name. For example, newsletter_subscription, cpu_load_time or bookings.
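As a rough sketch (assuming a context has already been created with the JavaScript SDK), a goal declared here is later tracked by name, optionally with properties:

```js
// `context` is assumed to have been created earlier with sdk.createContext(...).
// The goal name must match the one declared in the Web Console.
context.track("newsletter_subscription");

// Goal properties can optionally be attached; the property names here are illustrative.
context.track("cpu_load_time", { load_time_ms: 372 });
```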
Metrics
Metrics are the parameters that will be used to track your goals. They can give you insights about your business, your users' behavior, the performance of your system and more!
For more information on creating a Metric, see the Creating a Metric section of the Getting Started guide!
Roles
The Roles section allows you to customize and give specific permissions to the users of your Web Console.
Segments
Segments are used to slice up your metrics based on different attributes. For example, you may be passing the following as attributes in an experiment:
context.attributes({
  customer_age: 'new_customer',
  url: location.toString(),
  user_agent: navigator.userAgent,
  referrer: document.referrer,
  screenName,
  pageName,
  country: headers['HTTP_CF_IPCOUNTRY'],
  language: headers['Accept-Language'],
  channel: query_params.utm_medium,
})
If the utm_medium is something that would be interesting to segment the data on, you could create a Channel segment with the Source attribute set to channel.
Additionally, if you have the User-Agent parser turned on, ABsmartly will parse the UA string passed in the user_agent attribute and automatically create the following attributes for you:
- user_agent/browser: Chrome, Edge, Firefox, Safari...
- user_agent/browser_type: Browser, Email, Crawler...
- user_agent/device_type: Desktop, Mobile, Tablet...
- user_agent/platform: Android, iOS, Mac OS X, Windows...
- user_agent/crawler: yes, no
Pass one of those keys in the Source attribute field to automatically create a segment based on the respective information from the User-Agent string.
Tags
Both experiments and goals can be given tags. This can be useful for searching through your goals and experiments to find a specific one.
We recommend prefixing each tag with the tag's type. For example, location:Header, stack:Backend or psychological:Trust.
Teams
In the Teams section, you can declare the teams that will be running experiments. This allows you to assign experiments to a specific team and to search for experiments by team.
Units
Units are the unique identifiers that will be used to generate a variant.
For experiments running across different platforms (iOS, Android, web, email, etc.), the unit should be known across all of the platforms, most likely the authenticated user's user-id. For experiments running in apps or parts of the website where the user is not authenticated, you might want to create a unit per device; anonymous-id or device-id are common names for those.
For experiments running in email newsletters, you could even have a unit based on the user's email address, but if you are interested in collecting metrics tracked on the website that the newsletter links to, it's better to use the same unit as the experiments running on the website.
Have a look at the Tracking Unit section of the Creating an Experiment docs for more information.
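As an illustrative sketch (the unit names must match the ones declared here, and the values are placeholders for your own identifiers), units are passed when creating an SDK context:

```js
// `sdk` is assumed to be an initialized ABsmartly SDK instance.
// Unit names must match those declared in Settings > Units; values are placeholders.
const context = sdk.createContext({
  units: {
    "user-id": "123456",             // authenticated user's identifier
    "anonymous-id": "device-7f3a9c", // device-scoped identifier for anonymous traffic
  },
});
```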
Users
In Users, you can create and give permissions to the users who should have access to the platform.
Webhooks
Webhooks can be used for custom integrations. For example:
- Invalidating the CDN’s cache of the data file whenever an experiment is started or stopped.
- Sending all actions, notifications, and comments to a Slack channel.
- Synchronizing the status of a Jira ticket with the status of the respective experiment.
Have a look at our Slack integration guide for an example of how to use Webhooks, or check out our example GitHub repository for a more comprehensive code example!
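As a minimal, hypothetical sketch of such an integration (the endpoint path and routing logic are assumptions, not part of ABsmartly), a small Node.js/Express service could dispatch on the event_name field of the payloads documented below:

```js
const express = require("express");

const app = express();
app.use(express.json());

// Hypothetical endpoint registered as a Webhook URL in the Web Console.
app.post("/absmartly-webhook", (req, res) => {
  const event = req.body;

  switch (event.event_name) {
    case "ExperimentStarted":
    case "ExperimentStopped":
      // e.g. invalidate the CDN cache of the data file here
      console.log(`${event.event_name}: ${event.name} (id ${event.id})`);
      break;
    case "ExperimentCommented":
      // e.g. forward event.comment to a Slack channel here
      break;
    default:
      // other events (goal, metric and alert changes) can be handled as needed
      break;
  }

  res.sendStatus(200);
});

app.listen(3000);
```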
Webhooks are available for all experiment, goal and metric events on the Web Console, namely:
- Experiment Created
- Experiment Started in Development
- Experiment Started
- Experiment Stopped
- Experiment Restarted
- Experiment Put Full On
- Experiment Edited
- Experiment Commented On
- Experiment Archived
- Experiment Unarchived
- Experiment Alert Created
- Goal Archived
- Goal Created
- Goal Edited
- Metric Archived
- Metric Created
- Metric Edited
Payloads
The JSON payloads of each Webhook event are as follows:
Experiment Created
{
  "event_name": "ExperimentCreated",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note"
}
Experiment Started in Development
{
  "event_name": "ExperimentDevelopment",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note"
}
Experiment Started
{
  "event_name": "ExperimentStarted",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note"
}
Experiment Stopped
{
  "event_name": "ExperimentStopped",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note"
}
Experiment Restarted
{
  "event_name": "ExperimentRestarted",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note",
  "new_experiment_id": 8,
  "new_experiment_state": "development"
}
Experiment Put Full On
{
  "event_name": "ExperimentFullOn",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note",
  "full_on_variant": 2
}
Experiment Edited
{
  "event_name": "ExperimentEdited",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note",
  "changes": {
    "description": "example description updated",
    "applications": [1, 2],
    "other": "example other updated",
    "prediction": "example prediction updated",
    "percentages": "50/50",
    "percentage_of_traffic": 95,
    "implementation_details": "example implementation details updated",
    "hypothesis": "example hypothesis updated",
    "action_points": "example action points updated",
    "owners": [1, 19],
    "secondary_metrics": [
      {
        "goal_id": 1,
        "goal_value_id": 9
      },
      {
        "goal_id": 5,
        "goal_value_id": 17
      }
    ],
    "tags": [1, 2],
    "teams": [1, 5],
    "variants": [
      {
        "variant": 0,
        "name": "control",
        "config": null
      },
      {
        "variant": 1,
        "name": "variant",
        "config": "{\"a\": \"b\"}"
      }
    ]
  }
}
Experiment Commented On
{
  "event_name": "ExperimentCommented",
  "event_at": 1608560579651,
  "id": 626,
  "experiment_id": 7,
  "user_id": 1,
  "reply_to_note_id": 5,
  "comment": "example comment"
}
Experiment Archived
{
  "event_name": "ExperimentArchived",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note"
}
Experiment Unarchived
{
  "event_name": "ExperimentUnarchived",
  "event_at": 1608560579651,
  "id": 7,
  "name": "example_experiment",
  "user_id": 1,
  "comment": "example note"
}
Experiment Alert Created
{
  "event_name": "ExperimentAlertCreated",
  "event_at": 1608560579651,
  "id": 102,
  "type": "audience_mismatch",
  "experiment_id": 288,
  "created_at": 1661958869000,
  "updated_at": 1663243653000,
  "metadata": "{ \"ramala\": \"jamala\" }"
}
Goal Archived
{
  "archived": true,
  "name": "Example Goal",
  "event_name": "GoalArchived",
  "event_at": 1674470115161,
  "id": 95,
  "user_id": 89
}
Goal Created
{
  "name": "Example Goal",
  "description": "Example goal description",
  "archived": false,
  "event_name": "GoalCreated",
  "event_at": 1674465526780,
  "id": 95,
  "user_id": 89
}
Goal Edited
{
  "name": "Example Goal",
  "description": "Example goal description",
  "goal_tags": [17],
  "owners": [64],
  "teams": [1],
  "event_name": "GoalEdited",
  "event_at": 1674468687624,
  "id": 95,
  "user_id": 89
}
Metric Archived
{
  "archived": true,
  "name": "Example Metric",
  "event_name": "MetricArchived",
  "event_at": 1674205693525,
  "id": 158,
  "user_id": 89
}
Metric Created
{
  "property_filter": "{\"filter\":{\"and\":[]}}",
  "outlier_limit_upper_arg": null,
  "outlier_limit_method": "unlimited",
  "precision": 1,
  "description": "Example metric description",
  "scale": 1,
  "type": "goal_unique_count",
  "archived": false,
  "goal_id": 69,
  "value_source_property": "",
  "effect": "positive",
  "name": "Example Metric",
  "format_str": "{}",
  "outlier_limit_lower_arg": null,
  "event_name": "MetricCreated",
  "event_at": 1674204454990,
  "id": 158,
  "user_id": 89
}
Metric Edited
{
  "goal_id": 78,
  "outlier_limit_upper_arg": {
    "scale": 0,
    "value": "Fg=="
  },
  "outlier_limit_method": "quantile",
  "effect": "negative",
  "name": "Example Metric",
  "description": "Example metric description",
  "type": "goal_count",
  "outlier_limit_lower_arg": {
    "scale": 0,
    "value": "AQ=="
  },
  "metric_tags": [1, 7],
  "owners": [81],
  "teams": [1],
  "event_name": "MetricEdited",
  "event_at": 1674236301978,
  "id": 158,
  "user_id": 89
}
Platform Settings
The Platform settings page allows you to choose whether your experiments' description fields are optional, required or hidden.