# Impact View
Impact View translates experiment results into business language. Instead of showing p-values and confidence intervals, it shows estimated revenue impact, grouped by business goals. This is what you present to leadership.
## Why Impact View matters
Executives don't care about statistical significance. They care about:
- "How much revenue did testing generate?"
- "Is our testing program worth the investment?"
- "What should we test next?"
Impact View answers these questions by connecting experiments to business outcomes.
## Setting up Impact View
### Step 1: Create business goals

Navigate to **Dashboard > Goals** and create goals that represent business outcomes:
- "Increase Signups"
- "Increase Revenue Per Visitor"
- "Reduce Cart Abandonment"
- "Improve Activation Rate"
Each goal should be a measurable business metric, not a test idea.
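Goal settings live in the dashboard UI rather than in code, but it helps to think of each goal as a metric paired with the baseline economics that later turn lift into dollars. Here is a minimal sketch in TypeScript; the field names and figures are illustrative assumptions, not the product's actual schema:

```typescript
// Hypothetical shape of a business goal. Field names and numbers are
// illustrative assumptions, not the product's actual schema.
interface BusinessGoal {
  name: string;                 // e.g., "Increase Signups"
  metric: string;               // the measurable business metric behind the goal
  baselineAnnualVolume: number; // e.g., visitors or sessions per year
  valuePerConversion: number;   // dollars attributed to one conversion
}

const increaseSignups: BusinessGoal = {
  name: "Increase Signups",
  metric: "signup rate",
  baselineAnnualVolume: 300_000, // annual visitors (example figure)
  valuePerConversion: 30,        // estimated value of one signup (example figure)
};
```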
### Step 2: Assign experiments to goals
When creating or editing an experiment, select a business goal from the dropdown. Multiple experiments can contribute to the same goal.
For example, three different experiments might all aim to "Increase Signups":
- Hero headline test
- CTA button color test
- Social proof placement test
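In data terms this is a one-to-many relationship: one goal, many contributing experiments. A sketch of that mapping, using the experiment names from the list above (the structure itself is an assumption for illustration):

```typescript
// One goal can have many contributing experiments (illustrative structure).
const experimentsByGoal: Record<string, string[]> = {
  "Increase Signups": [
    "Hero headline test",
    "CTA button color test",
    "Social proof placement test",
  ],
};
```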
### Step 3: View the Impact dashboard

Navigate to **Dashboard > Impact View**. You'll see:
- **Summary cards**: Total projected impact, completed tests, and winning tests
- **Goals with experiments**: Each goal shows its contributing experiments and their individual impact
- **Cumulative program value**: The total estimated impact across all goals and experiments
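The cumulative figure is conceptually just a sum of per-experiment projections, subtotaled by goal. A sketch of how such a rollup could be computed; the experiment records and dollar figures below are made up, and the dashboard's internal calculation may differ:

```typescript
// Illustrative rollup: sum projected impact per goal and across the program.
interface ExperimentImpact {
  goal: string;
  experiment: string;
  projectedAnnualImpact: number; // dollars (example figures below)
}

const completed: ExperimentImpact[] = [
  { goal: "Increase Signups", experiment: "Hero headline test", projectedAnnualImpact: 44_000 },
  { goal: "Increase Signups", experiment: "Social proof placement test", projectedAnnualImpact: 21_000 },
  { goal: "Reduce Cart Abandonment", experiment: "One-page checkout test", projectedAnnualImpact: 58_000 },
];

// Per-goal subtotals (the "Goals with experiments" view).
const byGoal = new Map<string, number>();
for (const e of completed) {
  byGoal.set(e.goal, (byGoal.get(e.goal) ?? 0) + e.projectedAnnualImpact);
}

// Cumulative program value (the summary card).
const total = completed.reduce((sum, e) => sum + e.projectedAnnualImpact, 0);
console.log(byGoal, total); // subtotals per goal, 123000 overall
```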
## Reading Impact View
Each completed experiment with a winning variant shows:
| Metric | Meaning |
|--------|---------|
| Relative lift | How much better the winner performed vs. control (e.g., +12.3%) |
| Projected annual impact | Estimated yearly value of implementing the winner (e.g., $45,000) |
| Confidence level | Statistical confidence in the result |
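The guide doesn't publish the exact formula, but a common way to derive these two numbers is to compute relative lift from the conversion rates, then apply that lift to the goal's baseline annual value. A hedged sketch using figures in the spirit of the table above; every input here is an assumption, and the dashboard's internal formula may differ:

```typescript
// Assumed inputs; the dashboard's internal formula may differ.
const controlRate = 0.04;    // control conversion rate
const variantRate = 0.04492; // winning variant conversion rate

// Relative lift: how much better the winner performed vs. control.
const relativeLift = (variantRate - controlRate) / controlRate; // 0.123 -> +12.3%

// Projected annual impact: lift applied to the goal's baseline economics.
const baselineAnnualVolume = 300_000; // annual visitors (example)
const valuePerConversion = 30;        // dollars per signup (example)
const baselineAnnualValue = baselineAnnualVolume * controlRate * valuePerConversion; // $360,000

const projectedAnnualImpact = baselineAnnualValue * relativeLift; // ~$44,000
```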
### Impact is estimated, not guaranteed
Projected impact is calculated from your experiment data and the goal metrics you set. Real-world impact may differ due to seasonality, market changes, or interaction effects between tests.
## Sharing with leadership
Impact View is designed for executive communication. Here's how to use it effectively:
**In monthly reviews:** Show the cumulative value card. "Our testing program has generated an estimated $X in annual impact this quarter."

**For individual tests:** Show the goal breakdown. "Our signup optimization goal has three completed tests. Two were winners, generating an estimated $Y combined."

**To justify resources:** Frame testing as an investment. "For every hour spent on testing, we've generated $Z in projected annual value."
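The "$Z per hour" framing is simple arithmetic: total projected annual value divided by the hours invested in the program. A worked example with made-up figures:

```typescript
// All figures are illustrative.
const projectedAnnualValue = 123_000; // cumulative program value
const hoursSpentTesting = 410;        // design, build, QA, analysis
const valuePerHour = projectedAnnualValue / hoursSpentTesting; // $300 per hour
```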
## Best practices
**Set conservative revenue estimates.** It's better to under-promise and over-deliver. If you're unsure about a value, use the lower bound.
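One way to apply this tip, assuming your stats engine gives you a confidence interval for the lift: project from the interval's lower bound rather than the point estimate. The figures below are illustrative:

```typescript
// Conservative projection from the low end of the lift's confidence
// interval (illustrative figures; intervals come from your stats engine).
const liftPointEstimate = 0.123; // +12.3% measured lift
const liftLowerBound = 0.051;    // e.g., lower end of a 95% interval
const baselineAnnualValue = 360_000;

const optimistic = baselineAnnualValue * liftPointEstimate; // ~$44,000
const conservative = baselineAnnualValue * liftLowerBound;  // ~$18,400
// Report the conservative figure and let results beat it.
```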
**Update goal metrics as you learn.** As you gather more data about customer lifetime value or conversion-to-revenue ratios, update your goal settings.

**Review monthly with stakeholders.** A regular cadence builds trust in the testing program and keeps leadership engaged.

**Document negative results too.** Tests that prevented bad changes have value. A test that stopped a -10% change can be worth as much as one that found a +10% improvement.

**Group tests strategically.** Don't create too many goals. 3–5 high-level business goals are easier to communicate than 20 granular ones.
## Next steps
- **Dashboard Guide**: Navigate the full dashboard
- **Reading Results**: Understand the statistics behind Impact View
- **Creating Tests**: Set up experiments linked to goals