Cold email A/B testing: What to test first for maximum impact
multichannel_m · 2026-02-27 · 1,340 views
Been A/B testing cold emails for 2 years. Wasted a lot of time testing the wrong things first. Here is the priority order that actually moves the needle, based on hundreds of tests.
Test #1: Subject lines. This has the biggest impact on reply rates, since an email that never gets opened can never get a reply. Test 2 subject lines per campaign. Run each for 200+ sends before declaring a winner. Test curiosity-based vs direct vs personalized subject lines.
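If you want the split itself to be clean, here is a minimal Python sketch of randomly assigning a prospect list 50/50 between two subject lines. Every name in it is hypothetical:

```python
import random

# Hypothetical sketch: unbiased 50/50 split of a prospect list across
# two subject-line variants. All names here are my own inventions.

def assign_variants(prospects, seed=42):
    """Shuffle the list, then alternate assignment between A and B."""
    rng = random.Random(seed)
    shuffled = prospects[:]
    rng.shuffle(shuffled)
    return {"A": shuffled[0::2], "B": shuffled[1::2]}

prospects = [f"prospect_{i}@example.com" for i in range(500)]
arms = assign_variants(prospects)
print(len(arms["A"]), len(arms["B"]))  # 250 each, above the 200-send floor
```

Randomizing before splitting matters: if you send variant A to the top of an alphabetized list and variant B to the bottom, you are testing list order, not subject lines.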
Test #2: Email length. Short (under 50 words) vs medium (50-100 words). In my experience, shorter almost always wins for cold email. But test it for your ICP (ideal customer profile) — some enterprise buyers prefer more context.
Test #3: CTA style. Question CTA ("Would this be relevant?") vs soft CTA ("Happy to share more if helpful") vs direct CTA ("Free for a 15-min call Thursday?"). Question CTAs typically win.
Test #4: Personalization depth. Generic (name + company) vs signal-based (reference funding, hiring, tech stack). Signal-based personalization usually lifts reply rates by 50-100% but costs more to produce.
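Whether that lift pays for the extra research is just arithmetic. A back-of-the-envelope sketch, with every rate and cost invented for illustration:

```python
# All numbers below are made up for illustration -- swap in your own.
sends = 1000
baseline_rate = 0.02           # generic: name + company
lifted_rate = 0.02 * 1.75      # midpoint of the 50-100% lift -> 3.5%
cost_generic = 0.05            # hypothetical cost per generic email
cost_signal = 0.50             # hypothetical cost per researched email

for label, rate, cost in [("generic", baseline_rate, cost_generic),
                          ("signal-based", lifted_rate, cost_signal)]:
    replies = sends * rate
    print(f"{label}: {replies:.0f} replies at ${sends * cost / replies:.2f} per reply")
```

With those made-up numbers, signal-based wins on reply volume but loses on cost per reply, which is exactly the tradeoff to check against your own deal sizes before scaling the research effort.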
Do NOT bother testing:
- Font styles or formatting — keep everything plain text
- Signature variations — minimal impact
- Send time within the 8-10 AM window — negligible difference
- From name variations — stick with first name + last name
Focus your testing energy on subject lines and CTAs first. In my results, these two variables accounted for roughly 80% of the performance variance across cold email campaigns.
Comments (4)
copycarl · 2026-02-28
subject lines and CTAs being the highest leverage tests is spot on. I have run over 200 A/B tests in the last 2 years and subject line changes regularly move reply rates by 1-3 percentage points. CTA changes move reply rates 30-50% in relative terms. nothing else comes close to that impact per effort
techsales22 · 2026-02-28
ngl the question CTA tip is gold. switched from 'let's hop on a call' to 'would this be relevant for your team?' and reply rate went from 2.1% to 3.8%. people are way more willing to answer a low-commitment question than commit to a meeting
curiouscathy · 2026-03-01
how many sends do you need before an A/B test is statistically significant? I've been switching after like 50 sends per variant and not sure if that's enough
dataderek · DataCo · 2026-03-02
@curiouscathy minimum 200 sends per variant to get reliable data. at 50 sends you are basically flipping a coin. use a significance calculator — I like the one from AB Testguide. most cold email A/B tests need 200-500 sends per variant depending on your baseline metrics
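if you'd rather sanity-check in code than in a calculator, the usual test here is a two-proportion z-test. minimal sketch, reply counts made up:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(replies_a, sends_a, replies_b, sends_b):
    """Two-sided p-value for a difference in reply rates (z-test)."""
    p_a, p_b = replies_a / sends_a, replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# hypothetical counts: 6% vs 12% reply rate in both comparisons
print(two_proportion_p_value(3, 50, 6, 50))      # ~0.29, noise at 50 sends
print(two_proportion_p_value(18, 300, 36, 300))  # ~0.01, significant at 300
```

same underlying reply rates in both calls, but only the 300-send arms clear p < 0.05, which is why 50 sends per variant feels like flipping a coin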