How to Run A/B Tests on Email Campaigns in UniLink (Test Subject Lines and Content)
Stop guessing what resonates with your audience. Split your list, test subject lines or full email content, set a winner criterion, and let UniLink automatically send the winning version to the rest of your list.
Most email marketers pick subject lines based on intuition. A/B testing replaces intuition with data. When you test a direct subject line against a curiosity-based one — or a plain-text email against an image-heavy version — you find out what your specific audience actually responds to, not what a marketing blog says should work. Even a modest improvement in open rate adds up significantly over time: a 5-point open rate lift across 12 campaigns per year is a meaningful difference in revenue and engagement.
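To make that concrete, here is the arithmetic for a hypothetical 10,000-contact list (the numbers are illustrative, not UniLink data):

```python
list_size = 10_000            # hypothetical list size
lift = 0.05                   # a 5-point open rate improvement
campaigns_per_year = 12

extra_opens = int(list_size * lift * campaigns_per_year)
print(f"{extra_opens:,} additional opens per year")   # 6,000 additional opens per year
```

That is 6,000 extra chances per year for a reader to see your offer, from a single improvement applied consistently.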
UniLink's A/B testing is built directly into the campaign builder, so you do not need a third-party tool or a statistics background to run a clean test. This guide covers the full A/B testing workflow from setup to interpreting results.
What A/B Testing Does
When you enable A/B testing on a campaign in UniLink, you create two variants: Variant A and Variant B. UniLink splits a portion of your recipient list randomly — by default 20% goes to Variant A and 20% to Variant B. The remaining 60% of your list is held back. After the test duration ends (you choose between 1 hour and 72 hours), the system evaluates both variants against your chosen winner criterion: open rate or click rate. The variant with the better result is then sent to the remaining 60% of your list automatically.
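If it helps to see the logic spelled out, here is a minimal Python sketch of the split-and-evaluate flow described above. It is illustrative only; the function names and result dictionaries are hypothetical, not UniLink's internal implementation:

```python
import random

def split_list(contacts, test_fraction=0.20):
    """Illustrative 20/20/60 split: two test groups plus a held-back remainder."""
    shuffled = contacts[:]                 # copy so the original order is untouched
    random.shuffle(shuffled)               # random assignment avoids positional bias
    n_test = int(len(shuffled) * test_fraction)
    variant_a = shuffled[:n_test]
    variant_b = shuffled[n_test:2 * n_test]
    held_back = shuffled[2 * n_test:]      # receives the winning variant later
    return variant_a, variant_b, held_back

def pick_winner(results_a, results_b, criterion="open_rate"):
    """Compare variants on the chosen criterion once the test duration ends."""
    return "A" if results_a[criterion] >= results_b[criterion] else "B"

a, b, held = split_list([f"contact_{i}" for i in range(1_000)])
winner = pick_winner({"open_rate": 0.31}, {"open_rate": 0.27})
print(len(a), len(b), len(held), winner)   # 200 200 600 A
```

Note that ties go to Variant A in this sketch, which matches UniLink's documented tie-breaking behaviour (see the FAQ below).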
You can test two types of variables. Subject line testing keeps the email body identical across both variants and only changes the subject line — isolating the impact of your subject line on open rates. Content testing uses the same subject line but sends different email bodies to each group — useful for testing a promotional offer against a value-first email, or a text-heavy layout against a visual one. Testing one variable at a time is the correct methodology; testing both simultaneously makes it impossible to know which change drove the result.
Results are reported in the Campaign Reports section with per-variant breakdowns of sends, opens, clicks, and the winning margin. The winning variant, the margin of victory, and the time at which the winner was automatically sent to the remaining list are all recorded for future reference.
How to Get Started With A/B Testing
- Create a new campaign — Go to Email Marketing → Campaigns and click New Campaign. In the campaign setup, fill in the campaign name and select your recipients (segment or full list).
- Enable A/B testing — In the campaign settings panel, find the A/B Test toggle and turn it on. The composer expands to show two variant tabs: Variant A and Variant B.
- Choose what to test — At the top of the A/B section, select Subject Line Test or Content Test. If you select Subject Line, you will write the same body but enter two different subject lines. If you select Content, you write two separate email bodies with the same subject line.
- Compose both variants — Click Variant A and write your first version completely. Then click Variant B and write the second. For subject line tests, only the subject field differs. For content tests, keep the subject field identical across both tabs.
- Set the test split — The default 20/20/60 split (20% to A, 20% to B, 60% held for winner) works well for most list sizes. If your list is small (under 500), consider increasing the test split to 40/40/20 to get statistically meaningful results; the sketch after this list shows the per-variant numbers at each split.
- Choose the winner criterion — Select Open Rate (best for subject line tests) or Click Rate (best for content tests). The winner criterion should match the variable you are testing.
- Set the test duration — Choose how long the test runs before a winner is picked. Common choices are 4 hours (time-sensitive campaigns), 24 hours (standard campaigns), or 48 hours (weekend sends where behaviour patterns differ by day). Click Schedule or Send Now to start the test.
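To see why the split matters on a small list (step 5 above), this quick bit of arithmetic prints how many contacts each variant receives under the default and boosted splits. The 200–300 recipients-per-variant threshold is discussed further below:

```python
def variant_size(list_size, test_fraction):
    """Number of contacts each variant receives for a given test fraction."""
    return int(list_size * test_fraction)

for list_size in (300, 500, 1_000, 5_000):
    default = variant_size(list_size, 0.20)   # 20/20/60 split
    boosted = variant_size(list_size, 0.40)   # 40/40/20 split
    print(f"{list_size:>5} contacts: {default:>4} per variant at 20/20/60, "
          f"{boosted:>4} per variant at 40/40/20")
```

At 500 contacts, the default split gives each variant only 100 recipients; switching to 40/40/20 doubles that to 200, the lower bound for a meaningful read.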
How to Use A/B Testing
- Monitor results during the test period — Go to Email Marketing → Campaigns and open your active A/B test campaign. The results panel shows live open and click counts for both variants as they accumulate. Do not manually pick a winner early — let the test complete its full duration for a fair result.
- Wait for automatic winner send — When the test duration ends, UniLink compares both variants on your winner criterion and sends the winning variant to the remaining held list automatically. You will receive a notification confirming which variant won and when the winner send completed.
- Review the results report — Go to Campaign Reports and open the A/B test campaign. The full report shows variant-level metrics: sent count, open rate, click rate, unsubscribe rate, and the winning margin. Save or screenshot this for your testing log.
- Log your findings — Keep a simple spreadsheet tracking what you tested, which variant won, and by how much (a minimal CSV-based log is sketched after this list). Over 10–15 tests you will start to see patterns specific to your audience: perhaps your readers prefer question-based subject lines, open more when you personalise with their first name, or convert better from plain-text emails.
- Apply winning patterns to all future campaigns — If curiosity subject lines consistently beat direct ones by more than 5 points, use curiosity framing as your default going forward. A/B testing is only valuable if the learnings change how you write every campaign, not just the one you tested.
- Test one thing at a time — Resist the temptation to change multiple elements between variants. If Variant A has a different subject line, preview text, and opening paragraph compared to Variant B, you cannot know which change caused any difference in performance.
- Re-test important variables after list growth — A test result from when your list was 500 contacts may not hold at 5,000. As your audience changes, revisit key tests — subject line tone, plain vs HTML email — to see if conclusions still hold.
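If a spreadsheet feels too manual, the testing log mentioned above can be a plain CSV file appended to after each campaign. A minimal sketch; the field names are just one reasonable layout, not a UniLink export format:

```python
import csv
from datetime import date

LOG_FIELDS = ["date", "campaign", "variable_tested", "winner", "margin_pts", "notes"]

def log_test(path, campaign, variable_tested, winner, margin_pts, notes=""):
    """Append one A/B test result to a running CSV testing log."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if f.tell() == 0:                      # brand-new file: write the header row
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "campaign": campaign,
            "variable_tested": variable_tested,
            "winner": winner,
            "margin_pts": margin_pts,
            "notes": notes,
        })

log_test("ab_tests.csv", "March newsletter",
         "subject line: question vs direct", "A", 4.2,
         "question framing won on open rate")
```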
Key Settings Explained
| Setting | What it controls | Best practice |
|---|---|---|
| Test type (Subject Line / Content) | Whether the two variants differ in subject line only or in email body content | Run subject line tests more frequently — they are faster to set up and subject line is the biggest driver of open rate |
| Test split (A% / B% / remaining%) | What fraction of the list sees each variant and what fraction is held for the winner | Use 20/20/60 for lists over 1,000; use 40/40/20 for smaller lists to ensure each variant gets enough data |
| Winner criterion (Open Rate / Click Rate) | The metric used to determine which variant wins at the end of the test duration | Use Open Rate for subject line tests; use Click Rate for content tests where the goal is driving link clicks |
| Test duration | How long the system waits before comparing variants and sending the winner | 24 hours is the standard; use 4 hours for time-sensitive campaigns and 48 hours for weekend sends |
| Auto-send winner | Whether the system sends the winning variant to the remaining held contacts automatically | Keep enabled — the purpose of the held group is to receive the winning version; turning this off defeats the purpose of the test |
How to Get the Most Out of A/B Testing
The most common reason A/B testing fails to deliver useful insights is running tests on lists that are too small. With a 20/20 split on a list of 200 subscribers, each variant goes to just 40 people. A difference of 2 opens between the variants is statistically meaningless — it could be random noise. As a rule of thumb, each variant should receive at least 200–300 emails for the results to be meaningful. If your list is smaller, use a 40/40 split or wait until your list grows before investing heavily in formal testing.
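To put a number on "statistically meaningless", a standard two-proportion z-test does the job. This is a back-of-the-envelope check using only the Python standard library, not a UniLink feature:

```python
from math import erf, sqrt

def significant_difference(opens_a, sent_a, opens_b, sent_b, alpha=0.05):
    """Two-proportion z-test: is the open-rate gap more than random noise?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = abs(p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # two-tailed p-value
    return p_value < alpha, round(p_value, 3)

# 40 recipients per variant, 2 extra opens for A: pure noise territory
print(significant_difference(12, 40, 10, 40))     # (False, ~0.62)

# Same open rates at 300 per variant: still not conclusive, but far closer
print(significant_difference(90, 300, 75, 300))   # (False, ~0.17)
```

The same 5-point gap that means nothing at 40 recipients per variant starts to carry weight at 300, which is exactly why the 200–300 floor matters.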
Treat A/B testing as a long-term learning programme rather than a single experiment. One test tells you what worked once. Ten tests across different campaign types, subject line styles, content formats, and audience segments tell you what your audience consistently prefers. Create a testing calendar — for example, running one structured A/B test per month — and track all results in a single document. After six months you will have a data-backed style guide for your email programme that is specific to your audience, not borrowed from someone else's audience.
Do not stop at subject lines. Content A/B tests are underused but often more valuable. Test a long-form narrative email against a short punchy one. Test an email with one clear call-to-action against an email with three options. Test starting with the offer versus starting with the story. Click rate differences in content tests can reveal a great deal about how your audience prefers to be sold to — information that is hard to get any other way.
Use your A/B test results to personalise at scale. If you discover that one audience segment responds to curiosity subject lines while another responds to direct benefit statements, create separate campaigns for each segment with the appropriate framing. This is not additional work — it is applying what the data already told you, segment by segment.
Troubleshooting Common Issues
| Problem | Likely cause | Fix |
|---|---|---|
| Both variants show identical results | The two variants are not different enough to produce measurable differences | Redesign the test with structurally distinct variants — change subject line tone, personalisation, or full email format rather than minor wording tweaks |
| Winner send did not fire after test duration | The campaign was paused, or winner selection was set to manual rather than automatic | Check the campaign status in Campaign Reports; if Auto-send winner is off, manually select the winner and click Send to Remaining List |
| Test result is inconclusive (variants within 1–2% of each other) | Sample size too small or the tested variable has no meaningful impact on this audience | Accept the null result — this is valid data. Consider testing a larger structural difference in the next campaign |
| Variant B received fewer contacts than expected | List size reduced due to unsubscribes or bounces between test setup and send | This is normal — the split is calculated at send time on the active list. Review the actual split in Campaign Reports to confirm neither variant was severely under-sampled |
Pros
- Automatic winner send ensures the majority of your list receives the better-performing variant without manual follow-up
- Results are tracked per campaign so you can build an evidence-based style guide over time
- Flexible test split supports small and large lists with adjustable percentages
- Both open rate and click rate winner criteria are supported, matching the test type to the right metric
Cons
- Results are statistically weak on small lists (under 500 total subscribers) — tests on tiny lists can mislead
- Only two variants (A and B) are supported — multi-variant tests require multiple separate campaigns
- Test duration delays the full campaign send — time-sensitive campaigns may not benefit from a full 24-hour test window
Frequently Asked Questions
Can I run an A/B test on an automated email sequence, not just a one-off campaign?
A/B testing in UniLink is available for broadcast campaigns. Automated sequence emails (triggered by automation rules) do not currently support the built-in A/B test workflow. To test automated emails, create two versions of the sequence and split your audience manually using CRM segments.
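If you go the manual route, one reproducible way to split a segment in half is to hash each contact ID into a group, so the same contact always lands in the same version of the sequence. A sketch, assuming your CRM export gives you a stable contact ID; the salt and function name are hypothetical:

```python
import hashlib

def manual_split_group(contact_id: str, salt: str = "welcome-seq-test") -> str:
    """Deterministically assign a contact to group A or B by hashing its ID.

    Unlike a random shuffle, re-running this never reassigns anyone, which
    is what you want when maintaining two parallel CRM segments by hand.
    """
    digest = hashlib.sha256(f"{salt}:{contact_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(manual_split_group("contact_10492"))   # stable: same answer every run
```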
What happens if neither variant clearly wins — can I set a tie-breaker?
If both variants have identical performance at the test deadline, UniLink sends Variant A to the remaining list as the default. You can override this by going to Campaign Reports and manually selecting a winner before the deadline if you have a preference.
Is the audience split truly random?
Yes. UniLink randomises which contacts receive Variant A versus Variant B at send time. The randomisation uses contact ID shuffling, not sequential splitting, so there is no systematic bias from the order contacts were added to your list.
Can I see which individual contacts received which variant?
Yes. In Campaign Reports, open the Recipients tab and filter by Variant A or Variant B. Each contact in the list shows which variant they received, along with their individual open and click status.
How many A/B tests can I run per month?
There is no limit on the number of A/B tests you can run. Every campaign can be run as an A/B test at no additional cost. Running one structured test per campaign is a sustainable habit that produces meaningful data over time.
Key Takeaways
- A/B testing in UniLink sends two variants to a test split and automatically sends the winner to the remaining list.
- Test subject lines or full content — always change only one variable per test to get actionable results.
- Choose Open Rate as the winner criterion for subject line tests and Click Rate for content tests.
- Each variant needs at least 200–300 recipients for statistically meaningful results — adjust the test split on small lists.
- Track all test results over time to build an audience-specific style guide that improves every campaign you send.
Ready to send emails that actually convert?
Run your first A/B test in UniLink and let the data guide your email strategy — not guesswork.