When marketers like us create landing pages, write email copy, or design call-to-action buttons, it can be tempting to use our intuition to predict what will make people click and connect.
However, you're much better off conducting A/B testing than basing marketing decisions on a "feeling," as that can be detrimental to your results.
Keep reading to learn how to conduct the entire A/B testing process before, during, and after data collection so you can make the best decisions from your results.
What’s A/B testing?
A/B testing, also known as split testing, is a marketing experiment in which you split your audience to test variations of a campaign and determine which performs better. In other words, you can show version A of a piece of marketing content to one half of your audience and version B to another.
A/B testing can be valuable because different audiences behave, well, differently. Something that works for one company won't necessarily work for another.
In fact, conversion rate optimization (CRO) experts hate the term "best practices" because it may not actually be the best practice for you. However, this type of testing can be complex if you're not careful.
Let's review how A/B testing works so you don't make incorrect assumptions about what your audience likes.
How does A/B testing work?
To run an A/B test, you need to create two different versions of one piece of content, with changes to a single variable.
Then, you'll show these two versions to two similarly sized audiences and analyze which one performed better over a specific period (long enough to make accurate conclusions about your results).
Image Source
A/B testing helps marketers observe how one version of a piece of marketing content performs alongside another. Here are two types of A/B tests you might conduct to increase your website's conversion rate.
Example 1: User Experience Test
Perhaps you want to see if moving a certain call-to-action (CTA) button to the top of your homepage instead of keeping it in the sidebar will improve its click-through rate.
To A/B test this theory, you'd create another, alternative web page that uses the new CTA placement.
The existing design with the sidebar CTA — or the "control" — is version A. Version B with the CTA at the top is the "challenger." Then, you'd test these two versions by showing each to a predetermined percentage of site visitors.
Ideally, the percentage of visitors seeing either version is the same.
Learn how to easily A/B test an element of your website with HubSpot's Marketing Hub.
Example 2: Design Test
Perhaps you want to find out whether changing the color of your CTA button can increase its click-through rate.
To A/B test this theory, you'd design an alternative CTA button with a different color that leads to the same landing page as the control.
If you normally use a red CTA button in your marketing content and the green variation receives more clicks after your A/B test, this could merit changing the default color of your CTA buttons to green from now on.
To learn more about A/B testing, download our free introductory guide here.
A/B Testing in Marketing
A/B testing has many benefits for a marketing team, depending on what you choose to test. There's a nearly limitless list of things you can test to determine the overall impact on your bottom line.
Here are some elements you might decide to test in your campaigns:
- Subject lines.
- CTAs.
- Headers.
- Titles.
- Fonts and colours.
- Product images.
- Blog graphics.
- Body copy.
- Navigation.
- Opt-in forms.
Of course, this list isn't exhaustive. Your options are countless. Above all, though, these tests are valuable to a business because they're low in cost but high in reward.
Let's say you employ a content creator with a $50,000/year salary. This content creator publishes five articles weekly for the company blog, totaling 260 articles per year.
If the average post on the company's blog generates 10 leads, you could say it costs just over $192 to generate 10 leads for the business ($50,000 salary ÷ 260 articles = $192 per article). That's a solid chunk of change.
Now, if you ask this content creator to spend two days developing an A/B test on one article instead of writing two posts in that time, you might burn $192, since you're publishing fewer articles.
But if that A/B test finds you can increase conversion rates from 10 to 20 leads, you just spent $192 to potentially double the number of customers your business gets from your blog.
If the test fails, of course, you've lost $192 — but now you can make your next A/B test even more educated. If that second test succeeds, you've ultimately spent $384 to double your company's revenue.
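If it helps to see that back-of-the-envelope math spelled out, here's a quick sketch in Python using the hypothetical salary, article count, and lead numbers from the example above:

```python
# Back-of-the-envelope math from the example above (hypothetical numbers).
salary = 50_000          # annual salary of the content creator
articles_per_year = 260  # 5 articles per week * 52 weeks
leads_per_article = 10   # average leads generated per post

cost_per_article = salary / articles_per_year
cost_per_lead = cost_per_article / leads_per_article

print(f"Cost per article: ${cost_per_article:,.2f}")  # ~$192.31
print(f"Cost per lead:    ${cost_per_lead:,.2f}")     # ~$19.23

# If one $192 A/B test doubles leads per article from 10 to 20:
improved_cost_per_lead = cost_per_article / 20
print(f"Cost per lead after a winning test: ${improved_cost_per_lead:,.2f}")  # ~$9.62
```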
No matter how many times your A/B test fails, its eventual success will almost always outweigh the cost of conducting it.
You can run many types of split tests to make the experiment worth it in the long run.
A/B Testing Goals
A/B testing can tell you a lot about how your intended audience behaves and interacts with your marketing campaign.
Not only does A/B testing help determine your audience's behavior, but the results of the tests can also help determine your next marketing goals.
Here are some common goals marketers have for their business when A/B testing.
Increased Website Traffic
You'll want to use A/B testing to help you find the right wording for your website titles so you can catch your audience's attention.
Testing different blog or web page titles can change the number of people who click on that hyperlinked title to get to your website. This can increase website traffic.
An increase in web traffic is a good thing! More traffic usually means more sales.
Higher Conversion Rate
Not only does A/B testing help drive traffic to your website, it can also help boost conversion rates.
Testing different locations, colors, or even anchor text for your CTAs can change the number of people who click those CTAs to get to a landing page.
This can increase the number of people who fill out forms on your website, submit their contact info to you, and "convert" into a lead.
Lower Bounce Rate
A/B testing can also help determine what's driving traffic away from your website. Perhaps the feel of your website doesn't vibe with your audience. Or perhaps the colors clash, leaving a bad taste in your audience's mouth.
If your website visitors leave (or "bounce") quickly after arriving, testing different blog post introductions, fonts, or featured images can help retain visitors.
Perfect Product Images
You know you have the right product or service to offer your audience. But how do you know you've picked the right product image to convey what you have to offer?
Use A/B testing to determine which product image best catches the attention of your intended audience. Compare the images against one another and pick the one with the highest sales rate.
Lower Cart Abandonment
Ecommerce businesses see an average of 70% of customers leave their website with items in their shopping cart. This is known as "shopping cart abandonment" and is, of course, detrimental to any online store.
Testing different product photos, checkout page designs, or even where shipping costs are displayed can lower this abandonment rate.
Now, let's examine a checklist for setting up, running, and measuring an A/B test.
How to Design an A/B Test
Designing an A/B test can seem like a complicated task at first. But, trust us — it's simple.
The key to designing a successful A/B test is determining which elements of your blog, website, or ad campaign can be compared and contrasted against a new or different version.
Before you jump into testing all the elements of your marketing campaign, check out these A/B testing best practices.
Test appropriate items.
List elements that could influence how your audience interacts with your ads or website. Specifically, consider which elements of your website or ad campaign influence a sale or conversion.
Make sure the elements you choose are appropriate and can be modified for testing purposes.
For example, you might test which fonts or images best grab your audience's attention in a Facebook ad campaign. Or you might pilot two pages to determine which keeps visitors on your website longer.
Pro tip: Choose appropriate test items by listing elements that affect your overall sales or lead conversion, and then prioritize them.
Determine the right sample size.
The sample size of your A/B test can have a big impact on its results — and sometimes, that isn't a good thing. A sample size that is too small will skew the results.
Make sure your sample size is large enough to yield accurate results. Use tools like a sample size calculator to help you figure out the right number of interactions or visitors to your website or campaign you need to obtain the best result.
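If you'd like to sanity-check what a sample size calculator gives you, the standard two-proportion formula below produces a rough per-variation estimate. This is a minimal sketch, and the baseline conversion rate and minimum detectable lift are hypothetical placeholders — swap in your own numbers.

```python
# Rough per-variation sample size for an A/B test (two-proportion comparison).
# Assumes a 95% confidence level and 80% power; the example numbers are placeholders.
import math
from scipy.stats import norm

def sample_size_per_variation(baseline_rate, minimum_effect, alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to detect an absolute lift of `minimum_effect`."""
    p1 = baseline_rate
    p2 = baseline_rate + minimum_effect
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
    z_beta = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2 * variance) / (minimum_effect ** 2))

# Example: 5% baseline conversion rate, hoping to detect a 1-point lift (5% -> 6%).
print(sample_size_per_variation(0.05, 0.01))  # about 8,155 visitors per variation
```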
Check your data.
A sound split test will yield statistically significant and reliable results. In other words, the results of your A/B test aren't influenced by randomness or chance. But how can you make sure your results are statistically significant and reliable?
Just like determining sample size, tools are available to help confirm your data.
Tools such as Convertize's AB Test Significance Calculator allow users to plug in the traffic data and conversion rates of their variables and select the desired level of confidence.
The higher the statistical significance achieved, the less you can expect the data to have occurred by chance.
Pro tip: Make sure your data is statistically significant and reliable by using tools like A/B test significance calculators.
Schedule your tests.
When comparing variables, it's important to keep the rest of your controls the same — including when you schedule your tests to run.
If you're in the ecommerce space, you'll need to take holiday sales into account.
For example, if you run an A/B test on the control during a peak sales time, the traffic to your website and the sales you make may be higher than for the variable you tested in an "off week."
To ensure the accuracy of your split tests, pick a comparable timeframe for both tested elements. Be sure to run your campaigns for the same length of time, too, to get the best, most accurate results.
Pro tip: Choose a timeframe when you can expect similar traffic to both portions of your split test.
Test only one element.
Each variable of your website or ad campaign can significantly impact your intended audience's behavior. That's why it's important to look at only one element at a time when conducting A/B tests.
Attempting to test multiple elements in the same A/B test will yield unreliable results. With unreliable results, you won't know which element had the biggest impact on consumer behavior.
Be sure to design your split test around only one element of your ad campaign or website.
Pro tip: Don't try to test multiple elements at once. A good A/B test is designed to test only one element at a time.
Analyze the data.
As a marketer, you might have an idea of how your audience behaves with your campaigns and web pages. A/B testing can give you a better indication of how consumers are really interacting with your sites.
After testing is complete, take some time to thoroughly analyze the data. You might be surprised to find that what you thought was working for your campaigns is less effective than you initially thought.
Pro tip: Accurate and reliable data may tell a different story than you first imagined. Use the data to help plan or make changes to your campaigns.
How to Conduct A/B Testing
Follow along with our free A/B testing kit, which has everything you need to run A/B testing, including a test tracking template, a how-to guide for instruction and inspiration, and a statistical significance calculator to see whether your tests were wins, losses, or inconclusive.
Before the A/B Test
Let’s cover the steps to take before you begin your A/B test.
1. Pick one variable to test.
As you optimize your web pages and emails, you'll find there are many variables you want to test. But to evaluate effectiveness, you'll want to isolate one independent variable and measure its performance.
Otherwise, you can't be sure which variable was responsible for changes in performance.
You can test more than one variable for a single web page or email — just be sure you're testing them one at a time.
To determine your variable, look at the elements in your marketing resources and their possible alternatives for design, wording, and layout. You could also test email subject lines, sender names, and different ways to personalize your emails.
Keep in mind that even simple changes, like changing the image in your email or the words on your call-to-action button, can drive big improvements. In fact, these kinds of changes are often easier to measure than bigger ones.
Note: Sometimes it makes more sense to test multiple variables rather than a single variable. This is called multivariate testing.
If you're wondering whether you should run an A/B test versus a multivariate test, here's a helpful article from Optimizely that compares the processes.
2. Identify your goal.
Although you'll measure several metrics during any one test, choose a primary metric to focus on before you run the test. In fact, do it before you even set up the second variation.
This is your dependent variable, which changes based on how you manipulate the independent variable.
Think about where you want this dependent variable to be at the end of the split test. You might even state an official hypothesis and examine your results based on this prediction.
If you wait until afterward to think about which metrics are important to you, what your goals are, and how the changes you're proposing might affect user behavior, then you may not set up the test in the most effective way.
3. Create a ‘control’ and a ‘challenger.’
You now have your independent variable, your dependent variable, and your desired outcome. Use this information to set up the unaltered version of whatever you're testing as your control scenario.
If you're testing a web page, this is the unaltered page as it already exists. If you're testing a landing page, this would be the landing page design and copy you'd normally use.
From there, build a challenger — the altered website, landing page, or email that you'll test against your control.
For example, if you're wondering whether adding a testimonial to a landing page would make a difference in conversions, set up your control page with no testimonials. Then, create your challenger with a testimonial.
4. Split your sample groups equally and randomly.
For tests where you have more control over the audience — like with emails — you need to test with two or more equal audiences to get conclusive results.
How you do this will vary depending on the A/B testing tool you use. Suppose you're a HubSpot Enterprise customer conducting an A/B test on an email, for example.
HubSpot will automatically split traffic to your variations so that each variation gets a random sampling of visitors.
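If you're rolling your own test rather than relying on a tool, one common approach is to assign each visitor to a variation deterministically by hashing a stable identifier (such as a user or cookie ID). The split stays roughly even across users but stays consistent for any one user. A minimal sketch, with hypothetical IDs and test name:

```python
# Deterministic 50/50 assignment by hashing a stable user identifier.
# The same user always lands in the same bucket; across users the split is ~even.
import hashlib

def assign_variation(user_id: str, test_name: str = "homepage_cta_test") -> str:
    digest = hashlib.md5(f"{test_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "A (control)" if bucket < 50 else "B (challenger)"

# Hypothetical usage:
for uid in ["user_001", "user_002", "user_003"]:
    print(uid, "->", assign_variation(uid))
```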
5. Determine your sample size (if applicable).
How you determine your sample size will also vary depending on your A/B testing tool, as well as the type of A/B test you're running.
If you're A/B testing an email, you'll likely want to send the test to a subset of your list that is large enough to achieve statistically significant results.
Eventually, you'll pick a winner to send to the rest of the list. (See "The Science of Split Testing" ebook at the end of this article for more.)
If you're a HubSpot Enterprise customer, you'll have some help determining the size of your sample group using a slider.
It'll let you do a 50/50 A/B test of any sample size — although all other sample splits require a list of at least 1,000 recipients.
Image Source
If you're testing something that doesn't have a finite audience, like a web page, then how long you keep your test running will directly affect your sample size.
You'll have to let your test run long enough to obtain a substantial number of views. Otherwise, it will be hard to tell whether there was a statistically significant difference between variations.
6. Decide how significant your results need to be.
Once you've picked your goal metric, think about how significant your results need to be to justify choosing one variation over another.
Statistical significance is a super important part of the A/B testing process that's often misunderstood. If you need a refresher, I recommend reading this blog post on statistical significance from a marketing standpoint.
The higher the percentage of your confidence level, the more sure you can be about your results. In most cases, you'll want a confidence level of 95% minimum, especially if the experiment was time-intensive.
However, sometimes it makes sense to use a lower confidence level if you don't need the test to be as stringent.
Matt Rheault, a senior software engineer at HubSpot, thinks of statistical significance like placing a bet.
What odds are you comfortable placing a bet on? Saying, "I'm 80% sure this is the right design, and I'm willing to bet everything on it" is similar to running an A/B test to 80% significance and then declaring a winner.
Rheault also says you'll likely want a higher confidence threshold when testing for something that only slightly improves conversion rate. Why? Because random variance is more likely to play a bigger role.
"An example where we could feel safer lowering our confidence threshold is an experiment that will likely improve conversion rate by 10% or more, such as a redesigned hero section," he explained.
"The takeaway here is that the more radical the change, the less scientific we need to be process-wise. The more specific the change (button color, microcopy, etc.), the more scientific we should be, because the change is less likely to have a large and noticeable impact on conversion rate."
7. Make sure you're only running one test at a time on any campaign.
Testing more than one thing for a single campaign can complicate results.
For example, if you A/B test an email campaign that directs to a landing page at the same time as you're A/B testing that landing page, how can you know which change caused the increase in leads?
During the A/B Test
Let’s cover the steps to take during your A/B test.
8. Use an A/B testing tool.
To do an A/B test on your website or in an email, you'll need to use an A/B testing tool.
If you're a HubSpot Enterprise customer, the HubSpot software has features that let you A/B test emails (learn how here), CTAs (learn how here), and landing pages (learn how here).
For non-HubSpot Enterprise customers, other options include Google Analytics, which lets you A/B test up to 10 full versions of a single web page and compare their performance using a random sample of users.
9. Test both variations simultaneously.
Timing plays a significant role in your marketing campaign's results, whether it's the time of day, day of the week, or month of the year.
If you were to run version A during one month and version B a month later, how would you know whether the performance change was caused by the different design or the different month?
When running A/B tests, you must run the two variations simultaneously. Otherwise, you may be left second-guessing your results.
The only exception is if you're testing timing itself, like finding the optimal times for sending emails.
Depending on what your business offers and who your subscribers are, the optimal time for subscriber engagement can vary significantly by industry and target market.
10. Give the A/B test enough time to produce useful data.
Again, you'll want to make sure that you let your test run long enough to obtain a substantial sample size. Otherwise, it will be hard to tell whether there was a statistically significant difference between the two variations.
How long is long enough? Depending on your company and how you execute the A/B test, getting statistically significant results could happen in hours … or days … or weeks.
A big part of how long it takes to get statistically significant results is how much traffic you get — so if your business doesn't get a lot of traffic to your website, it will take much longer to run an A/B test.
Read this blog post to learn more about sample size and timing.
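As a rough rule of thumb, you can estimate the duration by dividing the total sample size you need (per-variation sample size times the number of variations) by the daily traffic entering the experiment. The numbers below are hypothetical placeholders:

```python
# Rough estimate of how long an A/B test needs to run, given daily traffic.
# All numbers are hypothetical placeholders.
import math

visitors_needed_per_variation = 8_155  # e.g., from a sample size calculator
num_variations = 2                     # control + one challenger
daily_visitors_in_test = 1_000         # visitors entering the experiment per day

days_needed = math.ceil(visitors_needed_per_variation * num_variations / daily_visitors_in_test)
print(f"Run the test for at least ~{days_needed} days")  # ~17 days with these numbers
```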
11. Ask for feedback from real users.
A/B testing has a lot to do with quantitative data … but that won't necessarily help you understand why people take certain actions over others. While you're running your A/B test, why not collect qualitative feedback from real users?
A survey or poll is one of the best ways to ask people for their opinions.
You might add an exit survey on your site that asks visitors why they didn't click on a certain CTA, or one on your thank-you pages that asks visitors why they clicked a button or filled out a form.
For example, you might find that many people clicked on a CTA leading them to an ebook, but once they saw the price, they didn't convert.
That kind of information gives you a lot of insight into why your users behave the way they do.
After the A/B Test
Finally, let’s cover the steps to take after your A/B test.
12. Focus on your goal metric.
Again, although you'll be measuring multiple metrics, focus on that primary goal metric when you do your analysis.
For example, if you tested two variations of an email and chose leads as your primary metric, don't get caught up on click-through rates.
You might see a high click-through rate and poor conversions, in which case you might end up choosing the variation with the lower click-through rate.
13. Measure the significance of your results using our A/B testing calculator.
Now that you've determined which variation performed the best, it's time to determine whether your results are statistically significant. In other words, are they enough to justify a change?
To find out, you'll need to conduct a test of statistical significance. You could do this manually … or you can just plug the results from your experiment into our free A/B testing calculator.
For each variation you tested, you'll be prompted to input the total number of tries, like emails sent or impressions seen. Then, enter the number of goals it completed — generally, you'll look at clicks, but this could be other types of conversions.
Image Source
The calculator will spit out the confidence level of your data for the winning variation. Then, measure that number against the value you chose to determine statistical significance.
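If you're curious what a calculator like this typically does under the hood, many of them run a two-proportion z-test (or something similar): feed it the "tries" and "goals" for each variation and it returns a p-value you can read as a confidence level. Here's a minimal sketch with hypothetical counts, in case you want to sanity-check results yourself:

```python
# Two-proportion z-test: the kind of calculation an A/B significance calculator runs.
# The counts below are hypothetical placeholders.
import math
from scipy.stats import norm

def ab_confidence(tries_a, goals_a, tries_b, goals_b):
    p_a, p_b = goals_a / tries_a, goals_b / tries_b
    pooled = (goals_a + goals_b) / (tries_a + tries_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / tries_a + 1 / tries_b))
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))   # two-sided p-value
    return p_a, p_b, 1 - p_value    # confidence, loosely read as 1 - p-value

p_a, p_b, confidence = ab_confidence(tries_a=5_000, goals_a=400, tries_b=5_000, goals_b=460)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  confidence: {confidence:.1%}")
# With these made-up numbers, confidence lands a bit above 95%, so B would edge out A.
```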
14. Take action based on your results.
If one variation is statistically better than the other, you have a winner. Complete your test by disabling the losing variation in your A/B testing tool.
If neither variation is statistically significant, the variable you tested didn't impact results, and you'll have to mark the test as inconclusive. In this case, stick with the original variation, or run another test. You can use the failed data to help you figure out a new iteration for your next test.
While A/B tests help you impact results on a case-by-case basis, you can also apply the lessons you learn from each test to future efforts.
For example, suppose you've conducted A/B tests in your email marketing and have repeatedly found that using numbers in email subject lines generates better clickthrough rates. In that case, consider using that tactic in more of your emails.
15. Plan your next A/B test.
The A/B test you just finished may have helped you discover a new way to make your marketing content more effective — but don't stop there. There's always room for more optimization.
You can even try conducting an A/B test on another feature of the same web page or email you just tested.
For example, if you just tested a headline on a landing page, why not do a new test on body copy? Or a color scheme? Or images? Always keep an eye out for opportunities to increase conversion rates and leads.
You can use HubSpot's A/B Test Tracking Kit to plan and organize your experiments.
Image Source
How to Read A/B Testing Results
As a marketer, you know the value of automation. Given this, you likely use software that handles the A/B test calculations for you — a huge help. But after the calculations are done, you need to know how to read your results. Let's go over how.
1. Check your goal metric.
The first step in reading your A/B test results is looking at your goal metric, which is usually conversion rate.
After you've plugged your results into your A/B testing calculator, you'll get two results for each version you're testing. You'll also see whether the result for each of your variations is significant.
2. Compare your conversion rates.
By looking at your results, you'll likely be able to tell whether one of your variations performed better than the other. However, the true test of success is whether your results are statistically significant.
For example, say variation A had a 16.04% conversion rate and variation B had a 16.02% conversion rate, with a confidence level of statistical significance at 95%. Variation A has a higher conversion rate, but the results aren't statistically significant, meaning that variation A won't significantly improve your overall conversion rate.
3. Segment your audiences for further insights.
Regardless of significance, it's valuable to break down your results by audience segment to understand how each key area responded to your variations. Common variables for segmenting audiences are (see the sketch after this list):
- Visitor type, or which version performed best for new visitors versus repeat visitors.
- Device type, or which version performed best on mobile versus desktop.
- Traffic source, or which version performed best based on where traffic to your two variations originated.
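As one way to do this kind of breakdown — a minimal sketch using pandas and made-up event data, with hypothetical column names — you can group your raw test results by segment and compare conversion rates for each variation within each segment:

```python
# Break A/B test results down by segment (device type here) using pandas.
# The DataFrame columns and values are hypothetical.
import pandas as pd

events = pd.DataFrame({
    "variation": ["A", "A", "B", "B", "A", "B", "A", "B"],
    "device":    ["mobile", "desktop", "mobile", "desktop", "mobile", "mobile", "desktop", "desktop"],
    "converted": [0, 1, 1, 1, 0, 1, 0, 0],
})

# Conversion rate per variation within each segment.
segment_report = (
    events.groupby(["device", "variation"])["converted"]
          .agg(visitors="count", conversions="sum", conversion_rate="mean")
          .reset_index()
)
print(segment_report)
```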
Let's go over some examples of A/B experiments you can run for your business.
A/B Testing Examples
We've discussed how A/B tests are used in marketing and how to conduct one — but how do they actually look in practice?
As you might guess, we run many A/B tests to increase engagement and drive conversions across our platform. Here are five examples of A/B tests to inspire your own experiments.
1. Site Search
Site search bars help users quickly find what they're after on a particular website. HubSpot found from previous analysis that visitors who interacted with its site search bar were more likely to convert on a blog post. So, we ran an A/B test to increase engagement with the search bar.
In this test, search bar functionality was the independent variable, and views on the content offer thank-you page was the dependent variable. We used one control condition and three challenger conditions in the experiment.
The search bar remained unchanged in the control condition (variant A).
Image Source
In variant B, the search bar was larger and more visually prominent, and the placeholder text was set to "search by topic."
Image Source
Variant C appeared similar to variant B but only searched the HubSpot Blog rather than the entire website.
In variant D, the search bar was larger, but the placeholder text was set to "search the blog." This variant also searched only the HubSpot Blog.
Image Source
We found variant D to be the most effective: It increased conversions by 3.4% over the control and increased the percentage of users who used the search bar by 6.5%.
2. Mobile CTAs
HubSpot uses several CTAs for content offers in our blog posts, including ones in the body of posts as well as at the bottom of the page. We test these CTAs extensively to optimize their performance.
We ran an A/B test for our mobile users to see which type of bottom-of-page CTA converted best.
For our independent variable, we altered the design of the CTA bar. Specifically, we used one control and three challengers in our test. For our dependent variables, we used pageviews on the CTA thank-you page and CTA clicks.
The control condition included our normal placement of CTAs at the bottom of posts. In variant B, the CTA had no close or minimize option.
Image Source
In variant C, mobile readers could close the CTA by tapping an X icon. Once it was closed out, it wouldn’t reappear.
Image Source
In variant D, we included an option to minimize the CTA with an up/down caret.
Image Source
Our tests found all variants to be successful. Variant D was the most successful, with a 14.6% increase in conversions over the control. This was followed by variant C with an 11.4% increase and variant B with a 7.9% increase.
3. Author CTAs
In another CTA experiment, HubSpot tested whether adding the word "free" and other descriptive language to author CTAs at the top of blog posts would increase content leads.
Past research suggested that using "free" in CTA text would drive more conversions and that text specifying the type of content offered would help search engine optimization. In the test, the independent variable was the CTA text, and the principal dependent variable was the conversion rate on content offer forms.
In the control condition, the author CTA text was unchanged (see the orange button in the image below).
Image Source
In variant B, the word “free” was added to the CTA text.
Image Source
In variant C, descriptive wording was added to the CTA text in addition to "free."
Image Source
Interestingly, variant B saw a loss in form submissions, down 14% compared to the control. This was unexpected, as including "free" in content offer text is widely considered a best practice.
Meanwhile, form submissions in variant C outperformed the control by 4%. It was concluded that adding descriptive text to the author CTA helped users understand the offer and thus made them more likely to download.
4. Blog Table of Contents
To help users better navigate the blog, HubSpot tested a new Table of Contents (TOC) module. The goal was to improve user experience by presenting readers with their desired content more quickly. We also tested whether adding a CTA to this TOC module would increase conversions.
The independent variable of this A/B test was the inclusion and type of TOC module in blog posts. The dependent variables were the conversion rate on content offer form submissions and clicks on the CTA inside the TOC module.
The control condition didn't include the new TOC module — control posts either had no table of contents or a simple bulleted list of anchor links within the body of the post, near the top of the article (pictured below).
Image Source
In variant B, the new TOC module was added to blog posts. This module was sticky, meaning it remained onscreen as users scrolled down the page. Variant B also included a content offer CTA at the bottom of the module.
Image Source
Variant C included a module similar to variant B's but with the CTA removed.
Image Source
Neither variant B nor variant C increased the conversion rate on blog posts. The control condition outperformed variant B by 7% and performed equally to variant C. Also, few users interacted with the new TOC module or the CTA inside the module.
5. Review Notifications
To determine the best way of gathering customer reviews, we ran a split test of email notifications versus in-app notifications.
Here, the independent variable was the type of notification, and the dependent variable was the percentage of those who left a review out of all those who opened the notification.
In the control, HubSpot sent a plain-text email notification asking users to leave a review. In variant B, HubSpot sent an email with a certificate image that included the user's name.
Image Source
For variant C, HubSpot sent users an in-app notification.
Image Source
Ultimately, both emails performed similarly and outperformed the in-app notifications. About 25% of users who opened an email left a review, versus 10.3% of those who opened in-app notifications. Emails were also opened more often by users.
Start A/B Testing Today
A/B testing lets you get to the truth of what content and marketing your audience wants to see. Learn how to best perform some of the steps above using the free ebook below.
Editor’s note: This post was originally published in May 2016 and has been updated for comprehensiveness.