Whether the sport is basketball, soccer, or baseball, players constantly seek ways to refine their game ahead of a championship.
The same can be said of marketers, who use testing to improve their marketing content. Much of that need for refinement stems from programmatic campaigns. Already essential to paid search, programmatic strategies are now being deployed more frequently in email and other channels as well. Programmatic campaigns are the championship rounds of transforming consumer interest into action.
One of the latest testing tools for marketers is Google Optimize. Google has offered Optimize as part of Google Analytics 360, the enterprise measurement suite that competes with alternatives like Adobe Analytics. Google now offers Optimize with its basic Google Analytics service as well, along with integration with Google Data Studio (I discussed details of Data Studio in this post). The move pairs a testing application with a tag manager and an analytics solution. Google's ultimate objective is to have marketers view Google as a complete measurement solution, and Optimize brings the right features to accomplish that task.
Setting up Optimize
To start, users manage Google Optimize through an online user interface. Users add a Google Optimize tag to a website in much the same way as a tag manager container. If you are familiar with containers in web development, you get the general idea: a framework to hold a script. An Optimize container can be labeled in the user interface with a name that matches your campaign, up to 255 characters long.
To set up test pages, Optimize includes a visual editor. The visual editor lets users create a test page variant without recoding the site each time a test runs.
Once the Optimize tag is installed, users can create an experiment: a test designed to determine which variation of a website or web app element is most effective at achieving a specific objective. There are three experiment types: A/B, Redirect, and Multivariate.
- A/B is a randomized test of two or more variants of the same web page. Each variant is served at the same time so that performance can be observed and measured independent of external factors.
- Redirect is a split test that permits testing separate web pages against each other, be it two different landing pages or a page redesign with different URLs.
- A multivariate test examines two or more elements simultaneously to see which combination produces the best outcome.
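To make the A/B mechanics concrete, here is a minimal Python sketch of how a testing tool might bucket visitors into variants. This is an illustrative assumption, not Optimize's actual implementation: the function name `assign_variant` and the MD5-based split are hypothetical, but the idea of hashing a stable visitor ID so each visitor always sees the same variant is standard practice.

```python
import hashlib

def assign_variant(visitor_id: str, variants: list[str]) -> str:
    # Hash a stable visitor ID so the same visitor always gets the same
    # variant, and traffic splits roughly evenly across variants.
    # (Hypothetical sketch; real tools use their own bucketing logic.)
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["original", "variant_b"]
counts = {v: 0 for v in variants}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}", variants)] += 1
```

Because the assignment is deterministic, a returning visitor is never flipped between variants mid-experiment, which would contaminate the measurement.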
Objectives
Once users have decided on an experiment, the next step is to consider what results are anticipated. That's where Google Optimize "objectives" come in.
Objectives play a similar role in Optimize to the one goals play in Google Analytics reports: they are the results expected from the website or web app elements being updated. In testing parlance, an objective expresses a hypothesis: an educated guess that the experiment validates or invalidates.
Google has made the choice of objectives a simple selection. Up to three pre-selected objectives can be chosen for a given experiment.
Targeting
Targeting lets users define the audience for the experiment. The percentage of visitors who will see an experiment can be planned based on coding elements, such as a cookie or a JavaScript variable, or set according to a URL or path rule. This allows control over the path a visitor follows to reach a tested page or tested element.
Participant attributes are the most interesting feature because they can incorporate key assumptions about user behavior. Time of day, demographics, and user devices can all be set as factors for when the experiment is shown.
Ultimately, targeting offers a range of ways to limit the experiment to a specific intended audience.
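The targeting options above can be sketched as a simple rule check. This is a hypothetical Python illustration, not Optimize's API: the `Visitor` fields and the specific rules (a `/landing/` path, mobile devices, business hours) are invented for the example, but they mirror the URL/path, device, and time-of-day conditions described in this section.

```python
from dataclasses import dataclass

@dataclass
class Visitor:
    url_path: str   # the page the visitor requested
    device: str     # e.g. "mobile" or "desktop"
    hour: int       # local hour of day, 0-23

def matches_targeting(v: Visitor) -> bool:
    # Hypothetical rule set: a URL/path rule, a device rule, and a
    # time-of-day rule must all pass before the experiment is shown.
    return (
        v.url_path.startswith("/landing/")
        and v.device == "mobile"
        and 9 <= v.hour < 17
    )
```

Visitors who fail any rule simply see the original page, so the experiment's results only reflect the intended audience.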
How better testing improves campaigns
The targeting features in Optimize reflect the machine learning capability deployed in Google Optimize A/B testing. Adobe applied a similar machine learning upgrade to its Test and Target solution. Using machine learning to conduct A/B tests increases test accuracy: tests can be run automatically with an appropriate sample size, balanced against conditions that reasonably influence choice.
Moreover, that balance allows the introduction of Bayesian probability into testing. Bayesian probability works by updating the likelihood that a hypothesis is valid as more evidence or information becomes available during a test.
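A small worked example helps show what Bayesian updating looks like for an A/B test. The sketch below is an assumption of mine, not Optimize's actual model: it uses the common Beta-Binomial approach, where each variant's conversion rate gets a Beta posterior under a uniform prior, and the probability that variant B beats variant A is estimated by Monte Carlo sampling with Python's standard-library `random.betavariate`.

```python
import random

def prob_b_beats_a(conv_a: int, n_a: int, conv_b: int, n_b: int,
                   draws: int = 100_000, seed: int = 42) -> float:
    # Posterior for each variant: Beta(1 + conversions, 1 + non-conversions),
    # i.e. a uniform prior updated by the observed data.
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# Illustrative numbers: 120/1000 conversions for A vs 150/1000 for B.
p = prob_b_beats_a(120, 1000, 150, 1000)
```

As more visitors arrive, the counts grow, the posteriors narrow, and the probability estimate firms up, which is exactly the "adjust as evidence accumulates" behavior described above.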
That perspective is essential as more marketing campaigns rely on automation, where conditions are not solely defined by web technicalities or code. Direct email campaigns are a great example. Because more emails are opened on mobile devices, marketers must consider what real-world actions, like arriving at a store, are in play on top of code-related page influences like button size, images, or text.
Accounting for real-world conditions enhances frequency-based test questions. Instead of focusing only on how many test runs are needed, the machine learning capacity in the latest analytics test solutions can interpret real-time decisions associated with testing a call to action, or the time of day an email is sent. That is an essential distinction for emails opened on smartphones, where users are on the go and subject to real-world influences at the moment they engage with the email.
With Optimize, the benefits of better predictability, test accuracy, and ease of implementation can lead to better campaigns. It's one solution that can raise marketers' games to championship level.