Daily Active Users (DAU)

Unique users who engage with your product in a single day.

DAU = Unique users with ≥1 qualifying action per day

What it measures

Count of unique users with at least one qualifying action in a 24-hour period. What counts as "active" is product-specific: it might be logging in, viewing content, or completing a core action like sending a message.
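
As a minimal sketch, DAU falls out of a raw event log by counting distinct user IDs per calendar day. The event shape and the set of qualifying actions below are illustrative assumptions, not a prescribed schema:

```python
from collections import defaultdict
from datetime import datetime

# Illustrative event log: (user_id, ISO timestamp, action).
# Which actions "qualify" as active is a product decision.
events = [
    ("u1", "2024-05-01T08:12:00", "send_message"),
    ("u1", "2024-05-01T19:45:00", "send_message"),  # same user, same day: counts once
    ("u2", "2024-05-01T09:30:00", "view_feed"),
    ("u2", "2024-05-02T10:00:00", "send_message"),
]
QUALIFYING = {"send_message", "view_feed"}

def daily_active_users(events):
    """Count distinct users with >=1 qualifying action per calendar day."""
    users_by_day = defaultdict(set)
    for user_id, ts, action in events:
        if action in QUALIFYING:
            day = datetime.fromisoformat(ts).date()
            users_by_day[day].add(user_id)
    return {day: len(users) for day, users in sorted(users_by_day.items())}

print(daily_active_users(events))
# -> {datetime.date(2024, 5, 1): 2, datetime.date(2024, 5, 2): 1}
```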

What to watch

  • Rising: Indicates growing engagement, but verify sustainability. A viral spike that fades within 7-14 days signals temporary interest, not real growth. Pair with retention metrics.
  • Falling: Could signal product issues, seasonality, or a shift in user behavior. Segment by cohort to identify whether it's new user acquisition or existing user engagement that's declining.

In practice

After launching push notifications, a productivity app saw DAU jump 40% in week one. But sessions per user dropped from 3.2 to 1.8. Users opened the app more but did less each time. The team shifted to weekly digest notifications, which recovered session depth while maintaining the DAU gain.

Sessions Per User (SPU)

How often users return to your product in a given period.

SPU = Total sessions / Total unique users

What it measures

Average number of separate sessions per user over a given period. A session typically ends after 30 minutes of inactivity, though this varies by platform.
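
The 30-minute rule is easy to make concrete with a small sessionizer. This sketch assumes per-user event timestamps and keeps the timeout as a parameter, since the cutoff varies by platform:

```python
from datetime import datetime, timedelta

def count_sessions(timestamps, timeout=timedelta(minutes=30)):
    """Count sessions for one user: a new session starts whenever
    the gap since the previous event exceeds the timeout."""
    if not timestamps:
        return 0
    timestamps = sorted(timestamps)
    sessions = 1
    for prev, curr in zip(timestamps, timestamps[1:]):
        if curr - prev > timeout:
            sessions += 1
    return sessions

def sessions_per_user(events_by_user):
    """SPU = total sessions / total unique users."""
    total_sessions = sum(count_sessions(ts) for ts in events_by_user.values())
    return total_sessions / len(events_by_user)

# Hypothetical data: u1 has two sessions (45-minute gap), u2 has one.
t = datetime.fromisoformat
events_by_user = {
    "u1": [t("2024-05-01T08:00"), t("2024-05-01T08:10"), t("2024-05-01T08:55")],
    "u2": [t("2024-05-01T12:00")],
}
print(sessions_per_user(events_by_user))  # (2 + 1) sessions / 2 users = 1.5
```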

What to watch

  • Rising: Users are returning more frequently, a sign of habit formation. High-frequency products (messaging, social) should target 3+ daily sessions; lower-frequency products (finance, travel) may see 2-4 sessions per week.
  • Falling: Users may be consolidating activity into fewer, longer sessions (not necessarily bad) or losing interest. Cross-reference with session duration to distinguish these cases.

In practice

A news app saw SPU rise from 1.4 to 2.1 after launching personalized feeds, but average session duration dropped 30%. Users were snacking on headlines rather than reading articles. The team added a "deep read" mode and saw both metrics improve together.

Conversion Rate

Percentage of users who complete a goal action. Benchmarks: 2-3% e-commerce, 15-25% SaaS trial-to-paid.

Conversion Rate = (Users who completed action / Users who could have) × 100%

What it measures

Percentage of users who complete a specific goal action. "Conversion" is context-dependent: it might mean signing up, subscribing, purchasing, or completing any defined step. Always specify what you're measuring (visitor-to-signup, trial-to-paid).

What to watch

  • Rising: Your funnel is more effective. But check volume: a rising rate can also come from a smaller, higher-intent denominator rather than a better funnel, so confirm that absolute conversions are growing too.
  • Falling: Something is blocking users. Use funnel analysis to find the drop-off point. Also check for audience mix shifts, as different traffic sources convert at different rates.

In practice

An e-commerce site simplified checkout from 5 pages to 1 and saw conversion jump from 2.1% to 3.4%. But average order value dropped 15%. Users were impulse-buying smaller items. They added a "frequently bought together" prompt, which recovered AOV while keeping most of the conversion gain.

Feature Adoption Rate (FAR)

Percentage of eligible users who use a feature. Target: 50%+ for core features, 10-30% for secondary.

FAR = (Users who used the feature / Eligible users exposed to it) × 100%

What it measures

Percentage of eligible users who use a specific feature. "Eligible" is key: measure against users who could use the feature (had access, saw it), not your entire user base.

What to watch

  • Rising: The feature is finding its audience. Low adoption isn't always bad, as some features are for power users only.
  • Falling: Initial curiosity may be wearing off. Track whether users who try the feature continue using it (feature retention), not just first use.

In practice

A project management tool launched a time-tracking feature with 45% adoption in week one, dropping to 12% by week four. Users liked the idea but found manual time entry tedious. The team added automatic tracking, and adoption stabilized at 38%: lower than the spike but sustainable.

Customer Acquisition Cost (CAC)

Total cost to acquire a paying customer. The key ratio: LTV:CAC should be 3:1 or better.

CAC = (Sales + Marketing costs) / New customers acquired

What it measures

Total cost to acquire a new paying customer, including marketing spend, sales salaries, tools, and overhead allocated to acquisition. Decide whether to include only direct media costs or fully loaded costs including salaries, and whether to report blended CAC (all new customers across all channels) or paid CAC (paid channels only). Note: no standardized calculation method exists; companies calculate CAC differently, making benchmarking imprecise.

Industry benchmarks

  • SaaS (Overall): $702 average
  • Fintech: $1,450 average
  • E-commerce: $274 average

What to watch

  • Falling: Acquisition is more efficient, but verify quality. Cheaper customers may churn faster or spend less. Pair with LTV to ensure you're not sacrificing long-term value.
  • Rising: Competition is intensifying, or you've saturated easy-to-reach audiences. Segment by channel, as some channels scale poorly. If LTV rises faster than CAC, rising costs can still be profitable.

In practice

A SaaS company saw paid search CAC rise from $120 to $180 over six months as competition increased. Content marketing CAC was $95 but took 6 months to show results. They maintained paid search for immediate pipeline while investing in content for long-term CAC reduction. Blended CAC stabilized at $135.

Growth Accounting

Breaking down user growth into its components. Reveals whether growth is healthy or hollow.

MAU Growth = New Users + Resurrected Users − Churned Users

What it measures

A framework that decomposes user growth into New (first-time users), Retained (continued from last period), Resurrected (returned after absence), and Churned (stopped using). The User Quick Ratio ((New + Resurrected) / Churned) measures growth efficiency.
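
A minimal sketch of the decomposition, assuming you can produce sets of active user IDs for consecutive periods (monthly granularity here is an assumption):

```python
def growth_accounting(prev_active, curr_active, ever_active_before_prev):
    """Decompose period-over-period growth into its components.

    prev_active / curr_active: sets of user IDs active last/this period.
    ever_active_before_prev: user IDs seen in any period before prev,
    needed to tell brand-new users from resurrected ones.
    """
    retained = curr_active & prev_active
    new = curr_active - prev_active - ever_active_before_prev
    resurrected = (curr_active - prev_active) & ever_active_before_prev
    churned = prev_active - curr_active
    quick_ratio = (len(new) + len(resurrected)) / max(len(churned), 1)
    return {
        "new": len(new), "retained": len(retained),
        "resurrected": len(resurrected), "churned": len(churned),
        "quick_ratio": round(quick_ratio, 2),
    }

# Hypothetical month-over-month data
prev = {"a", "b", "c", "d"}
curr = {"a", "b", "e", "f"}          # c, d churned; e is new; f is back
history = {"a", "b", "c", "d", "f"}  # f was active before last month
print(growth_accounting(prev, curr, history))
# {'new': 1, 'retained': 2, 'resurrected': 1, 'churned': 2, 'quick_ratio': 1.0}
```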

What to watch

  • High resurrection, high churn: Users cycle in and out. You're not building a stable base. Investigate why users leave and what brings them back.
  • Low resurrection, low churn: Stable but may lack growth. Your existing users stay, but you're not winning back lapsed users.
  • User Quick Ratio above 4: Excellent for SaaS. Above 1.5 is very good for consumer apps. Below 1 means you're shrinking.

In practice

A mobile game showed 15% MAU growth, but growth accounting revealed 60% of "active" users each month were resurrected players who churned again within weeks. True retained users were declining. They shifted focus from re-engagement campaigns to fixing the core gameplay loop that caused churn in the first place.

Activation Rate

Percentage of new users who reach a meaningful first milestone. Target: 60-70%+, excellent is 80%+.

Activation Rate = (Users who activated / Total new users) × 100%

What it measures

Percentage of new users who complete a predefined action that correlates with long-term retention. The "activation event" varies by product. Famous examples: Facebook (adding 7 friends within 10 days), Dropbox (uploading first file), Slack (2,000 messages sent by team), Twitter (following 30 users). Activation is the only part of your product that 100% of users touch—poor activation cascades into poor retention regardless of product quality.

What to watch

  • Above 60%: Strong activation. You're converting most new users to engaged users. Exceptional products reach 80%+.
  • Below 40%: Significant friction exists. Check onboarding flow, value proposition clarity, and technical issues. Also verify your activation event still correlates with retention—it may need updating as your product evolves.

In practice

A project management tool defined activation as "create first project." After reducing the setup flow from 8 steps to 3, activation rose from 34% to 52%. But when they analyzed retention, users who also invited a teammate had 3× better retention, so they added teammate invitation to their activation definition.

Time-to-Value (TTV)

Time from signup to first "aha moment." Target: first session for consumer, hours/days for B2B.

TTV = Median time from signup to first value event

What it measures

Elapsed time from signup to a user's first meaningful value moment. You must define what "value" means for your product: completing a task, achieving a result, or reaching an "aha moment" feature. Use median to avoid outliers skewing results.
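
A small sketch using the median, as recommended above; the per-user signup and first-value timestamps are hypothetical inputs:

```python
from datetime import datetime
from statistics import median

def time_to_value(signup_times, first_value_times):
    """Median hours from signup to the first value event, ignoring
    users who never reached value (a choice worth making explicit)."""
    deltas = [
        (first_value_times[u] - signup_times[u]).total_seconds() / 3600
        for u in signup_times if u in first_value_times
    ]
    return median(deltas)

t = datetime.fromisoformat
signups = {"u1": t("2024-05-01T09:00"), "u2": t("2024-05-01T10:00"),
           "u3": t("2024-05-01T11:00")}
first_value = {"u1": t("2024-05-01T09:30"), "u2": t("2024-05-03T10:00")}  # u3 never activated
print(f"Median TTV: {time_to_value(signups, first_value):.1f} hours")
# Median TTV: 24.2 hours (median of 0.5h and 48h)
```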

What to watch

  • Shorter: Users reach value faster, which strongly correlates with retention. Best-in-class products aim for value within the first session.
  • Longer: Friction in onboarding, unclear value prop, or complex setup requirements. Map the user journey step-by-step to find where time is lost.

In practice

A budgeting app had a median TTV of 4 days because users signed up but didn't link accounts until later. They added a "quick demo mode" with sample data so users could explore immediately. TTV for demo users was 8 minutes, and those users linked real accounts at 2× the rate.

Trial-to-Paid Conversion Rate

Percentage of trial users who become paying customers. The moment of truth for your value proposition.

Trial-to-Paid = (Users who converted to paid / Total trial users) × 100%

What it measures

How effectively your trial experience demonstrates enough value to justify payment. This metric is highly sensitive to trial design: opt-out trials (requiring credit card upfront) convert 2-3× higher than opt-in trials, but attract different user types.

What to watch

  • Opt-in trials: Target 18-25% conversion. Below 15% suggests users aren't reaching value within the trial period. Above 25% is excellent, but verify you're not filtering out potentially valuable users with too-complex signup.
  • Opt-out trials: Target 49-60% conversion. Below 40% indicates poor activation or value mismatch. Above 60% is best-in-class. Watch for involuntary churn in Month 2 from users who forgot to cancel.

In practice

A project management tool offered 30-day trials with 22% conversion. When they analyzed user behavior, most converters decided within 7 days; non-converters rarely returned after Day 10. They switched to 14-day trials with more aggressive onboarding emails. Conversion rose to 31%: shorter timeline created urgency and focused the team on faster activation.

Product-Qualified Leads (PQLs)

Users whose product engagement signals conversion likelihood. Quality over quantity.

PQLs = Count of users meeting predefined engagement criteria
PQL Rate = (PQLs / Total trial or free users) × 100%

What it measures

Users who have demonstrated meaningful product engagement that correlates with conversion likelihood. Unlike marketing-qualified leads (based on content engagement), PQLs are qualified by actual product usage.
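
PQL qualification reduces to a predicate over usage data. A sketch with illustrative field names and thresholds (echoing the example below):

```python
# Hypothetical usage records per trial user
users = {
    "u1": {"projects": 4, "teammates_invited": 2, "used_integration": True},
    "u2": {"projects": 1, "teammates_invited": 0, "used_integration": False},
    "u3": {"projects": 5, "teammates_invited": 1, "used_integration": False},
}

def is_pql(usage):
    """Predefined engagement criteria; tune against PQL-to-paid conversion."""
    return usage["projects"] >= 3 and usage["teammates_invited"] >= 1

pqls = [u for u, usage in users.items() if is_pql(usage)]
print(pqls, f"PQL rate: {len(pqls) / len(users):.0%}")  # ['u1', 'u3'] PQL rate: 67%
```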

What to watch

  • Rising: More users are reaching meaningful engagement, which is good for pipeline. Track PQL-to-paid conversion to validate your criteria.
  • Falling: Fewer users are engaging deeply. Check if onboarding is broken, if traffic quality has declined, or if product changes have made the "aha moment" harder to reach.

In practice

A SaaS company defined PQLs as "created 3+ projects and invited 1+ teammates." Sales closed 35% of PQLs vs. 8% of all trial users. When they added "used integration feature" to the criteria, PQL volume dropped 40% but close rate jumped to 52%. Better targeting made their sales team more efficient.

Day 1/7/30 Retention (N-day Retention)

Percentage of users who return on specific days after signup. The clearest early signal of product-market fit.

Day N Retention = (Users active on Day N / Users who signed up on Day 0) × 100%

What it measures

Whether users come back after their first experience. Day 1 measures immediate onboarding success; Day 7 indicates early habit formation; Day 30 reflects sustained value delivery. These milestones reveal problems weeks before they show in revenue.
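
A minimal sketch of the calculation, assuming per-user signup dates and a set of (user, date) activity records, both hypothetical structures:

```python
from datetime import date, timedelta

def day_n_retention(signups, activity, n):
    """signups: {user_id: signup_date}; activity: set of (user_id, date).
    Returns the share of the Day-0 cohort active exactly on Day N."""
    returned = sum(
        1 for user, signup in signups.items()
        if (user, signup + timedelta(days=n)) in activity
    )
    return returned / len(signups)

signups = {"u1": date(2024, 5, 1), "u2": date(2024, 5, 1), "u3": date(2024, 5, 1)}
activity = {("u1", date(2024, 5, 2)), ("u2", date(2024, 5, 8)), ("u1", date(2024, 5, 8))}
print(f"Day 1: {day_n_retention(signups, activity, 1):.0%}")  # 33% (only u1)
print(f"Day 7: {day_n_retention(signups, activity, 7):.0%}")  # 67% (u1 and u2)
```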

Benchmarks by category (2024-2025)

  • All apps average: Day 1: 25-26% | Day 7: 11-13% | Day 30: 6-7%
  • Banking/Fintech: Day 1: 30% | Day 7: 18% | Day 30: 8-9%
  • Marketplace apps: Day 1: 34% | Day 7: 16% | Day 30: 9%
  • E-commerce: Day 1: 25% | Day 7: 11% | Day 30: 6%
  • Gaming: Day 1: 27-33% | Day 7: 13% | Day 30: 5%
  • Social: Day 1: 26% | Day 7: 9% | Day 30: 3%

In practice

A social app had strong Day 1 (32%) but crashed to 8% by Day 7. Users explored features once but didn't return. Analysis showed most users never added friends. They added a "Find friends from contacts" prompt on Day 2, and Day 7 retention jumped to 18%. The feature existed before, but users needed a nudge at the right moment.

Cohort Retention Curves

How retention changes over time for groups of users who joined together. The gold standard for retention analysis.

Plot % of cohort still active at Week 1, 2, 3... through Week N

What it measures

Retention tracked by user cohorts (typically grouped by signup week or month) over their entire lifecycle. The curve shape matters more than any single number: healthy products flatten into a horizontal line; struggling products decline continuously toward zero.
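
To see the curve shape, build a retention matrix: one row per signup cohort, one column per week since signup. A sketch assuming weekly cohorts and a hypothetical activity lookup:

```python
def cohort_retention_matrix(cohorts, was_active, max_week):
    """cohorts: {cohort_label: set of user_ids}
    was_active(user_id, week_index) -> bool (week 0 = signup week).
    Returns {cohort_label: [retention at week 0, 1, ..., max_week]}."""
    matrix = {}
    for label, users in cohorts.items():
        matrix[label] = [
            sum(was_active(u, w) for u in users) / len(users)
            for w in range(max_week + 1)
        ]
    return matrix

# Toy example: a cohort whose curve flattens (the healthy shape)
cohorts = {"2024-W18": {"u1", "u2", "u3", "u4"}}
active = {("u1", 0), ("u2", 0), ("u3", 0), ("u4", 0),
          ("u1", 1), ("u2", 1), ("u1", 2), ("u2", 2), ("u1", 3), ("u2", 3)}
print(cohort_retention_matrix(cohorts, lambda u, w: (u, w) in active, 3))
# {'2024-W18': [1.0, 0.5, 0.5, 0.5]}  <- flattens at 50% instead of declining to zero
```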

6-month retention benchmarks by business type

  • Consumer Social: Good: 25% | Great: 45%
  • Consumer Transactional: Good: 30% | Great: 50%
  • Consumer SaaS: Good: 40% | Great: 70%
  • SMB/Mid-market SaaS: Good: 60% | Great: 80%
  • Enterprise SaaS: Good: 75% | Great: 90%

In practice

A fitness app saw overall retention of 15% at 6 months but noticed cohort curves never flattened, just declined more slowly. When they segmented by workout type, users who tried strength training in week 1 had curves that flattened at 35%, while cardio-only users declined to 5%. They redesigned onboarding to introduce strength training earlier.

Customer Churn Rate

Percentage of customers who leave. Critical insight: 5% monthly churn compounds to 46% annual churn.

Logo Churn = Customers lost / Starting customers
Revenue Churn = Lost MRR / Starting MRR (more important)

What it measures

Percentage of customers who cancel or stop using your product over a period. Revenue churn matters more than logo churn: losing one $10K customer differs vastly from losing ten $100 customers. Small monthly numbers compound dangerously—5% monthly churn means losing 46% of customers annually.
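
The compounding claim is worth checking directly: retaining 95% of customers each month for twelve months keeps 0.95^12 ≈ 54% of them, so roughly 46% are lost over the year:

```python
monthly_churn = 0.05
annual_churn = 1 - (1 - monthly_churn) ** 12
print(f"{annual_churn:.1%}")  # 46.0% -- small monthly churn compounds fast
```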

What to watch

  • B2C SaaS: Target 3-5% monthly (good), <2% (great)
  • B2B SMB/Mid-Market: Target 2.5-5% monthly (good), <1.5% (great)
  • B2B Enterprise: Target 1-2% monthly (good), <0.5% (great)

In practice

An online learning platform saw monthly churn spike from 6% to 11% after a price increase. But when they segmented by engagement, high-engagement users actually churned less. The spike came from "zombie" subscribers who rarely used the product. The team let them churn and focused on converting engaged free users instead.

Customer Retention Rate (CRR)

Percentage of existing customers who stay. Target: 90%+ monthly for subscriptions.

CRR = ((Customers at end − New customers) / Customers at start) × 100%

What it measures

Percentage of existing customers who remain active over a period. The key is excluding new customers: you're measuring whether people who were already customers stayed.

What to watch

  • Rising: Your existing customers are stickier. Even small retention improvements compound dramatically over time.
  • Falling: Something is driving existing customers away. Segment by cohort, tenure, and usage patterns to find who's leaving and when. Early-tenure churn points to onboarding issues; late-tenure churn suggests value erosion.

In practice

A fitness app saw CRR drop from 85% to 78% after adding new workout types. The new content overwhelmed the home screen, and existing users couldn't find their saved workouts. Restoring a "My Workouts" quick-access tab recovered retention to 87%.

DAU/MAU Ratio (Stickiness)

How many days per month users engage with your product. The simplest measure of habit strength.

Stickiness = (Daily Active Users / Monthly Active Users) × 100%

What it measures

The percentage of monthly users who engage on any given day. A 20% ratio means the average user engages about 6 days per month (20% × 30 days). Higher stickiness indicates stronger habits and product-market fit.

Benchmarks

  • B2B SaaS: Average: 13% | Good: 20-25% | Great: 40%+
  • B2C Apps: Average: 20% | Good: 25-35% | Great: 50%+
  • Messaging Apps: Good: 50%+ | Great: 60%+
  • Facebook: 68.7% (exceptional benchmark)

When NOT to use

Products designed for infrequent use—Airbnb, TurboTax, travel booking apps—will naturally show low stickiness without indicating problems. If your product solves a periodic need, low DAU/MAU is expected and healthy.

In practice

A note-taking app had 15% stickiness, solid but not habit-forming. Analysis showed users only opened the app when they had something specific to capture. They added a daily review feature that surfaced old notes, raising stickiness to 28%. Users now had a reason to open the app even without new input.

Customer Effort Score (CES)

How easy it is for customers to get what they need. A better predictor of churn than CSAT.

CES = Average score on "How easy was it to [complete task]?" (1-7 scale)

What it measures

The effort customers expend to accomplish a goal: getting support, completing a purchase, or using a feature. Measured via survey after specific interactions. Research shows CES predicts repurchase and loyalty better than satisfaction scores.

What to watch

  • Low effort (6-7): Customers find interactions easy. The underlying research found that 96% of customers who have high-effort experiences become more disloyal, while low-effort experiences build loyalty.
  • High effort (1-4): Friction is driving customers away. Identify the specific touchpoints causing friction: complex forms, slow support, confusing navigation, or too many steps.

In practice

A SaaS company tracked CSAT at 85% but saw unexpected churn. When they added CES surveys after support tickets, they discovered resolution required an average of 2.3 contacts per issue. Customers were satisfied with agent interactions but exhausted by the process. They implemented first-contact resolution targets, and CES rose from 4.2 to 6.1 while churn dropped 18%.

Customer Satisfaction Rate (CSAT)

Satisfaction with a specific interaction. Target: 80%+ for B2C, 90%+ for B2B enterprise.

CSAT = (Satisfied responses / Total responses) × 100%

What it measures

Satisfaction score for a specific interaction or experience, typically via a survey ("How satisfied were you?" on a 1-5 scale). Unlike NPS, which gauges overall loyalty, CSAT is best for evaluating specific touchpoints.

What to watch

  • Rising: The specific experience you're measuring is improving. Watch response rates, as low participation can skew results toward extremes.
  • Falling: Something changed in that touchpoint. Because CSAT is context-specific, you can often pinpoint the issue. Compare before/after when you make changes.

In practice

An online education platform added video transcripts and saw course CSAT rise from 72% to 86%. But completion rates didn't improve. Learners were more satisfied but using transcripts to skim rather than engage. They redesigned transcripts as a supplement rather than an alternative to video.

Monthly Recurring Revenue (MRR)

Total predictable monthly revenue from subscriptions.

MRR = Sum of each customer's monthly subscription value

What it measures

Sum of all active subscription revenue, normalized to a monthly value. For annual plans, divide by 12. MRR is the heartbeat metric for subscription businesses.
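
A sketch of the normalization, assuming each subscription carries a price and a billing interval (field names are illustrative):

```python
# Normalize every subscription to a monthly value before summing.
# billing_months: 1 for monthly plans, 12 for annual, 3 for quarterly, etc.
subscriptions = [
    {"customer": "acme", "price": 1200.0, "billing_months": 12},  # $100/mo
    {"customer": "beta", "price": 49.0,   "billing_months": 1},
    {"customer": "corp", "price": 300.0,  "billing_months": 3},   # $100/mo
]

mrr = sum(s["price"] / s["billing_months"] for s in subscriptions)
print(f"MRR: ${mrr:,.2f}")  # MRR: $249.00
```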

What to watch

  • Rising: Growth is coming from new customers (New MRR), existing customers upgrading (Expansion MRR), or both. Break down MRR by source to understand what's driving growth.
  • Falling: Churn and downgrades are outpacing new business. Calculate Net MRR (New + Expansion − Churn − Contraction) to see the full picture.

In practice

A B2B SaaS company saw MRR grow 8% month-over-month, but when they decomposed it, 70% came from expansion revenue and only 30% from new sales. They doubled down on upsell features while rebuilding their top-of-funnel acquisition.

Average Revenue Per User (ARPU)

Revenue generated per user over a period.

ARPU = Total revenue / Total users

What it measures

Revenue per user over a specific period, typically monthly. Clarify whether you're measuring paying users only (ARPPU) or all users including free tier, as these are very different numbers.

What to watch

  • Rising: Users are paying more through upgrades, add-ons, or price increases. But watch for declining user count: ARPU can rise while total revenue falls if you're losing lower-value customers.
  • Falling: Could indicate successful expansion into a lower-price segment (growth dilution), increased discounting, or downgrades. Segment by customer tier to understand the cause.

In practice

A streaming service launched a lower-priced ad-supported tier, causing ARPU to drop from $14 to $11. But total revenue grew 40% because the subscriber base doubled. The ARPU decline was intentional, a trade-off for market expansion.

Lifetime Value (LTV)

Total projected revenue from a customer over their entire relationship.

LTV = ARPU × Gross Margin % × (1 / Churn Rate)
E-commerce: Avg. order value × Purchase frequency × Avg. customer lifespan

What it measures

Projected total revenue from a customer over their entire relationship with your product. LTV is an estimate based on historical patterns.
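
A worked example of the subscription formula with illustrative inputs; note that 1 / churn approximates average customer lifetime in months:

```python
arpu = 50.0           # $/month per customer
gross_margin = 0.80   # 80%
monthly_churn = 0.03  # 3% of customers cancel each month

lifetime_months = 1 / monthly_churn          # ~33.3 months on average
ltv = arpu * gross_margin * lifetime_months  # 50 * 0.8 * 33.3
print(f"Expected lifetime: {lifetime_months:.1f} months, LTV: ${ltv:,.0f}")
# Expected lifetime: 33.3 months, LTV: $1,333
```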

What to watch

  • Rising: Customers are staying longer, spending more, or both. The critical ratio is LTV:CAC. Most healthy businesses target at least 3:1 (every dollar spent acquiring a customer returns three dollars).
  • Falling: Investigate whether it's driven by shorter lifespans (retention problem), lower spending (engagement or pricing problem), or both. Segment by acquisition channel, as some sources may deliver lower-quality customers.

In practice

An e-commerce company launched a loyalty program and saw LTV rise from $180 to $245, a 36% increase. But CAC also rose 25% because the loyalty program's marketing costs weren't attributed. When they calculated LTV:CAC, it only improved from 2.4:1 to 2.6:1, prompting them to optimize the program's costs.

Net Revenue Retention (NRR)

Revenue retained from existing customers, including expansion and churn. The single best predictor of sustainable growth.

NRR = ((Starting MRR + Expansion − Contraction − Churn) / Starting MRR) × 100%

What it measures

How much revenue you retain and grow from your existing customer base, independent of new sales. NRR above 100% means existing customers generate more revenue over time, even without acquiring anyone new. Median private SaaS NRR dropped to 101% in 2024 (down from 105% in 2021)—the bar is getting harder to clear.
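
A minimal calculation following the formula, with hypothetical MRR movements for customers who existed at the start of the period:

```python
def net_revenue_retention(starting_mrr, expansion, contraction, churn):
    """NRR over a period, for revenue from customers who existed at the start.
    New-customer MRR is deliberately excluded."""
    return (starting_mrr + expansion - contraction - churn) / starting_mrr

# Hypothetical figures (annual period)
nrr = net_revenue_retention(starting_mrr=100_000, expansion=15_000,
                            contraction=4_000, churn=6_000)
print(f"NRR: {nrr:.0%}")  # NRR: 105% -- existing customers grew without any new sales
```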

Benchmarks

  • Below 100%: Losing money from existing customers—urgent problem
  • 100-110%: Solid retention with modest expansion
  • 110-120%: Strong expansion, good product-market fit
  • 120%+: Exceptional (usage-based or platform companies)

What to watch

Companies with NRR above 100% grow at twice the rate of those below. Best-in-class SaaS companies hit 120%+ through usage-based pricing or strong upsell motions. If you're below 100%, no amount of acquisition can outrun a leaky bucket at scale—prioritize reducing churn and contraction before investing in growth.

In practice

A B2B SaaS company had 95% NRR, meaning they lost 5% of revenue from existing customers each year. After analyzing cohorts, they found mid-market accounts churned at 2× the rate of enterprise. They rebuilt onboarding for mid-market and added a customer success tier, raising NRR to 108%. The same sales team now generated faster growth because each new customer compounded rather than leaked.

LTV:CAC Ratio

Return on acquisition investment. The fundamental unit economics equation.

LTV:CAC = Customer Lifetime Value / Customer Acquisition Cost

What it measures

How much value you get back for each dollar spent acquiring a customer. A 3:1 ratio means every $1 in acquisition generates $3 in lifetime value. This is the most fundamental measure of business model health.

What to watch

  • Below 1:1: Unsustainable. You're paying more to acquire customers than they're worth. Either reduce CAC or increase LTV immediately.
  • 3:1: The standard target. Enough margin to cover operations and generate profit. Most VCs expect at least this ratio.
  • Above 5:1: Potentially underinvesting in growth. You have room to spend more aggressively on acquisition without hurting unit economics.

In practice

A B2C subscription had 2.4:1 LTV:CAC, breakeven territory. They couldn't profitably scale paid acquisition. Instead of cutting CAC, they added a premium tier that increased average LTV by 40%, pushing the ratio to 3.4:1. The same acquisition spend now generated profitable returns.

CAC Payback Period

Months to recover customer acquisition costs. Determines how fast you can reinvest in growth.

CAC Payback = CAC / (Monthly ARPA × Gross Margin %)

What it measures

How many months of a customer's gross-margin-adjusted revenue (ARPA: average revenue per account) it takes to recover the cost of acquiring that customer. Shorter payback means faster reinvestment and more efficient growth. This metric directly impacts cash flow and fundraising needs.
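
A worked example with illustrative inputs, gross-margin-adjusting revenue before dividing:

```python
cac = 1_800.0         # fully loaded cost to acquire one customer
monthly_arpa = 200.0  # average revenue per account per month
gross_margin = 0.75   # only margin dollars repay acquisition cost

payback_months = cac / (monthly_arpa * gross_margin)
print(f"CAC payback: {payback_months:.1f} months")  # 12.0 -- at the healthy SMB line
```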

What to watch

  • Under 12 months: Healthy for SMB SaaS. You can reinvest acquisition costs within a year, enabling self-funding growth.
  • 12-18 months: Acceptable for mid-market. Requires more capital but still sustainable.
  • Over 24 months: Dangerous unless you have strong retention guarantees. Long payback strains cash and increases risk if churn accelerates.

In practice

A SaaS company had 18-month payback, limiting growth to what fundraising allowed. They analyzed segments and found enterprise deals had 24-month payback but SMB was 10 months. They launched a self-serve SMB tier with 8-month payback, using that cash flow to fund slower-burning enterprise sales. Blended payback dropped to 13 months.

Quick Ratio (SaaS)

Revenue growth efficiency: how much you gain versus lose. The health check for scaling.

Quick Ratio = (New MRR + Expansion MRR) / (Churned MRR + Contraction MRR)

What it measures

The ratio of revenue added to revenue lost in a period. A Quick Ratio of 4 means you add $4 for every $1 lost. This single number captures growth efficiency better than growth rate alone, which can mask underlying churn problems.
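
The calculation itself, with hypothetical monthly MRR movements:

```python
def quick_ratio(new_mrr, expansion_mrr, churned_mrr, contraction_mrr):
    """Revenue gained vs. revenue lost in a period."""
    return (new_mrr + expansion_mrr) / (churned_mrr + contraction_mrr)

# Hypothetical month: $40k added vs. $10k lost
print(quick_ratio(new_mrr=30_000, expansion_mrr=10_000,
                  churned_mrr=8_000, contraction_mrr=2_000))  # 4.0 -- the VC benchmark
```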

What to watch

  • Above 4: Excellent. This is the VC benchmark for healthy early-stage companies. You're adding revenue much faster than losing it.
  • 2-4: Healthy growth. Sustainable for most stages, though later-stage companies should trend higher.
  • Below 2: Barely sustainable. You're working hard just to replace lost revenue. Below 1 means you're shrinking.

In practice

A startup celebrated 40% YoY growth but had a Quick Ratio of 1.8. For every $1.80 they added, $1 churned out. When they cut churned revenue by 20%, the Quick Ratio rose to 2.25 with the same sales effort. They realized churn reduction was higher-leverage than acquisition.

Net Promoter Score (NPS)

How likely customers are to recommend you. Benchmark: 30+ good, 50+ excellent, 70+ world-class.

NPS = % Promoters − % Detractors (ranges from −100 to +100)

What it measures

A loyalty metric based on one question: "How likely are you to recommend us?" (0-10 scale). Respondents are grouped as Promoters (9-10), Passives (7-8), or Detractors (0-6).
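
A small sketch of the bucketing and the score, given a list of 0-10 responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical survey: 5 promoters, 3 passives, 2 detractors out of 10
scores = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(scores))  # (50% promoters - 20% detractors) -> 30.0
```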

What to watch

  • Rising: Improving loyalty. But NPS varies by industry, so compare to competitors rather than absolute benchmarks.
  • Falling: Investigate qualitative feedback from Detractors. A small NPS drop may reflect a vocal minority; a sustained decline signals systemic issues.

In practice

A hospitality app redesigned its booking flow and saw NPS jump from 32 to 48. But when they segmented by user type, power users' NPS actually dropped (they missed removed shortcuts). The team added back keyboard shortcuts for frequent bookers while keeping the simplified flow for casual users.

Product-Market Fit Score (Sean Ellis Test)

How disappointed users would be without your product. The earliest reliable signal of PMF.

PMF Score = % of users answering "Very disappointed" if they could no longer use the product

What it measures

A survey-based leading indicator of product-market fit. Ask users: "How would you feel if you could no longer use [product]?" Options: Very disappointed, Somewhat disappointed, Not disappointed. The percentage answering "Very disappointed" predicts growth potential.

What to watch

  • Above 40%: Strong product-market fit indicator. Slack scored 51% when validating PMF. Companies crossing this threshold typically see organic growth accelerate.
  • Below 40%: Keep iterating. Your product solves a problem but isn't yet a must-have. Focus on understanding what "very disappointed" users love and double down on that.

In practice

A productivity tool launched with 28% PMF score. The team analyzed the "very disappointed" segment and found they all used one specific feature: automated time blocking. They rebuilt the entire product around that feature, and PMF score rose to 47%. The pivot was informed by users who already loved them, not average users.

Strategic Framework

Avoid Vanity Metrics

Vanity metrics look impressive but don't drive decisions. The test: "If this metric goes up, down, or stays flat, what will you do differently?" If the answer is unclear, it's likely vanity.

Common vanity metrics to deprioritize: total registered users (without activity context), page views (without conversion data), total downloads (without activation data), social followers (without engagement context), time on site (without understanding why), and cumulative signups (the number only goes up).

Transform vanity into actionable: Replace total users with Monthly Active Users and retention rate. Replace page views with conversion rate. Replace downloads with activation rate. Replace total MQLs with MQL-to-SQL conversion rate.

Pick Your North Star

A North Star Metric (NSM) is the single metric that best captures core customer value. Products typically play one of three "games":

  • Attention games (time in product): North Star = watch time, time spent listening
  • Transaction games (number of transactions): North Star = bookings, purchases
  • Productivity games (efficiency of work): North Star = tasks completed, documents created

Famous examples: Airbnb uses nights booked. Spotify tracks time spent listening. Netflix measures median view hours per month. Slack focuses on messages sent. About 50% of growth-stage companies use revenue as their NSM, but Airbnb, Netflix, and Spotify explicitly avoid revenue as their primary metric—arguing it leads to suboptimal decisions.

Match Metrics to Your Stage

What matters shifts dramatically based on company stage:

  • Pre-product-market fit: Focus on Sean Ellis Test (target 40%+ "very disappointed"), cohort retention curves, and qualitative feedback. Avoid optimizing revenue, CAC/LTV, or scaling metrics—it's too early.
  • Growth stage (post-PMF): Define your North Star Metric. Focus on activation rate, funnel conversion, and acquisition by channel.
  • Scale/mature stage: Optimize LTV:CAC (target 3:1+), CAC payback (<18 months), Net Revenue Retention (>100%), and gross margin.

Conclusion

The most effective product teams don't track more metrics—they track fewer, better metrics tied directly to customer value. The research reveals consistent patterns: retention metrics predict success better than growth metrics, leading indicators beat lagging indicators, and ratios outperform absolute numbers.

Three principles emerge: Retention is foundational—without it, acquisition becomes waste. Net Revenue Retention is the closest thing to a universal success metric for subscription businesses. And the Sean Ellis Test and activation rate provide the earliest reliable signals of product-market fit, allowing intervention before lagging metrics reveal problems.

For teams building their metrics stack: start with a North Star Metric aligned to customer value, support it with 3-5 input metrics you can directly influence, and ruthlessly eliminate vanity metrics that feel good but change nothing. The goal isn't comprehensive measurement—it's actionable insight that drives better products.