Using Experiments to Validate your Startup Idea
What to track from 0 to early momentum—and how to build conviction before the numbers “look good.”
Figuring out what to track as an early-stage founder is hard.
You’re still finding your ideal customer. Revenue isn’t consistent or recurring yet.
I remember this firsthand.
People told me to report weekly KPIs and graph them for investors.
But before we had annual contracts or a clear revenue and user growth curve, I kept asking myself: What do I measure for myself and my team vs. for my investors?
This playbook is my attempt to answer that—both for the founder I was, and the founders I now back.
Here’s the truth: Early-stage metrics aren’t about scale. They’re about signal.
As a founder-turned-VC, I’ve raised money, built product, and now fund companies at the earliest stages through FoundersEdge. Whether you’re pre-product or just starting to scale, this guide will help you focus on what matters—at the right time.
🧭 Why Metrics Matter (Even Before You Have “Traction”)
Startups live or die on whether you’re solving a real problem and can reach the people who have it.
The key is understanding:
1. What assumptions need to be true for you to be successful? (That you can reach customers profitably, that your product works, etc.)
2. What have you gotten signal or validation on vs. not?
3. Which are the most important, still-unvalidated assumptions? (Hint: it's usually not that you can build the product.)
Metrics validate assumptions.
The best founders don’t wait for revenue to measure progress.
They treat discovery like an experiment, using early metrics to build conviction—not to impress investors, but to sharpen instincts and guide their next move.
🔁 How to Operationalize Metrics in Your Startup
Knowing what to track is one thing, but making it part of how you run the company is where the magic happens.
Here’s a simple monthly and weekly rhythm:
1. Start with your riskiest assumption.
Each month, pick 1–3 assumptions to focus on.
2. On Mondays, set a measurable weekly goal.
Each week, as a team, decide which experiments you'll run. Make sure to define up front:
Which assumption you're trying to validate
Scope of experiment
How you'll measure it
Definition of success & learning goals
3. Review every Friday.
What did we learn? What moved the needle? What’s next?
4. Track it in one place.
Keep it light—Notion, a slide, or a simple spreadsheet is enough (one possible shape for that log is sketched below).
This rhythm builds clarity and momentum. It’s not about doing more—it’s about learning faster.
🧪 Early Metrics to Track Across 5 Common Assumptions
Here are some of the most common assumptions, along with metrics to help you measure what’s working (and what’s not) at each stage of development.
Assumption: Customers Have X Pain Point
Conversion rate of outreach to discovery call
% who name problem X when asked what the #1 thing causing them pain is
% of discovery calls that convert into waitlist signups, or even better, early “symbolic” payment for solution (e.g., $20 to join priority beta)
% of waitlist signups that convert to a second meeting to give product mockup / demo feedback
% that pre-pay after seeing product mock-ups
Time from offering customer onboarding to them completing it (a sign they make it a priority)
Assumption: Product Solves Customer Pain Point
Which prototype concept gets “that’s exactly what I need” reactions
Activation rate (% of users who sign up and complete the key meaningful action, e.g., first file upload, first week tracked)
Engagement retention (% of users using core features as often as you'd expect them to)
Net Promoter Score (NPS) / Product-Market Fit score
Evidence of measurable ROI (time saved, customer conversion rate improvement, etc.)
Assumption: We Can Reach Customers Profitably and Repeatedly
Channel conversion rates (% of cold phone calls that book & show up for a meeting or % of cold emails / warm intros that book & show for a customer discovery call)
% of visitors who sign up (landing page → signup)
Time-to-first-conversion (how long from first touchpoint to signup and is it shortening?)
Customer acquisition cost (how much you've spent to acquire those customers vs. the revenue you expect from them)
Assumption: We Have Enough Resources to Hit Key Milestones (profitability / next raise)
Cash on hand
Monthly burn rate
Revenue (recurring vs. one-time, and % of revenue collected vs. booked)
Actual vs. projected costs per experiment or initiative
Cash flow of customer acquisition (customer acquisition cost vs. payback period)
Customer support costs per user (is this scalable?)
MRR / ARR trends
Assumption: Our Team Can Execute Well Together
Time from insight to action, or experiments run per week (and what was learned)
Weekly retros: What moved us forward? What did we learn? How can we operate better?
% of team time spent on highest-risk assumptions
% of roadmap shipped vs. planned (per sprint / month / quarter)
Bug-to-fix cycle time (how fast are we resolving issues?)
🌳 Example: From Idea to Traction — A Landscaping SaaS Startup
Let’s say you’re building a B2B SaaS platform that uses AI to automate backend operations for landscaping companies—scheduling, invoicing, routing, customer communications, etc.
You don’t have a product yet—but you’ve got the insight and conviction to start testing. Here’s how that journey might look:
1️⃣ Step 1: Validate the Pain
What to test: Do landscaping companies feel real pain around backend operations? Is that the most important problem they have?
Actions:
1) Cold outreach to landscaping businesses via email, calls, LinkedIn
Track: % who reply and book time (signal of interest)
Example: → Cold outreach to 100 businesses, 30 interviews booked with decision-makers (30% conversion)
2) Book customer discovery calls
Use interviews to uncover biggest operational time sinks and frustrations (see my discovery playbook for a deeper dive 👇)
Track: % who describe the same 1–2 problems in their own words
Example: → 80% mention invoicing and following up on payments, as well as time spent on estimates that don't convert, as their #1 or #2 problem
Track: % who ask to stay in the loop and be early users
Example: → 70% ask to stay in the loop on what you’re building and join waitlist
2️⃣ Step 2: Validate the Solution Direction
What to test: Do landscapers see your product vision as a solution worth paying for?
Actions:
1) Create clickable mockups or lo-fi demo of your platform
Track: % of customer discovery interviewees who book time to see product demo
Example: → 70% of those that asked to stay in the loop book a second meeting with you to see early product vision
2) Share product vision in follow-up discovery or demo calls and pre-sell (early deposit)
Track: % who see mockups that put down a deposit
Example: → 65% of customers put down a deposit for early product access
3) Gather direct feedback
Track: Top 1-3 features that seem most important / are missing
Example: → Clarity on the minimal lovable product you need to build
3️⃣ Step 3: Test Your Go-to-Market
What to test: Can you consistently reach and convert landscapers?
Actions:
1) Experiment with 3 outbound channels like cold email, walk-ins, and cold calls
Track: Outreach-to-demo conversion rate
Example: → Knowing your best channel and messaging: cold calls with message B convert to booked demos at 20%
2) Test 3 core messages across 20 customers per channel and see which value prop converts best
Track: Channel & message comparison: which source brings the most qualified leads?
Example: → 1 out of 3 demos converts to a paid customer
4️⃣ Step 4: Test Usage and Onboarding
What to test: Can you get landscapers live and using your product?
Actions:
1) Set up onboarding flows and define what "onboarded" means
Track: Time from signup → first active use
Example: → Avg. time from offering onboarding to a customer being live and using the product independently is 1 week, indicating they onboard right away and are able to get value quickly
2) Handhold customers through setup and observe friction
Track: Support tickets or confusion points during onboarding
3) Track time-to-value (first successful job scheduled, invoice generated, etc.) and product adoption
Track: % that use product daily / fully adopt the solution
Example: → 90% of users who complete onboarding use it for 90% of relevant use cases
5️⃣ Step 5: Early Revenue Traction
What to test: Are landscapers willing to pay—and is the product sticky?
Actions:
1) Convert deposits into fully paid plans
Track: % of deposits that convert to paid
Example: → 9 of 14 deposits convert to a monthly subscription (ideally, get an annual commitment!)
Track: % who sign up for a subscription and successfully process payment
Example: → $9,000 in monthly recurring revenue, with 100% of signed contracts paid
2) Measure product usage and success
Track: Monthly usage (are they still using in month 2, 3?) indicating they're getting value and fully adopting the product
Example: → 100% of customers are engaging with the 2 most important product features daily at month 2
Track: Monthly subscription retention
Example: → 100% retention from month 1 to month 2
Bonus: 3 customers referred another business in the first month!
💡 Investor POV: What We Actually Care About
At FoundersEdge, Greg and I invest in clarity of thought.
We're asking ourselves:
Do you know what needs to be true for your business to work?
Are you validating those things in a measurable, focused way?
We’re not looking for vanity metrics.
Clarity of communication = clarity of focus.
We’re looking for founders who can tell a story with their numbers—and build confidence in what’s next. Startups more often drown in opportunity than lack it, and that demands immense focus.
A few things we love to see:
A clear target customer and evidence of early signal
Users engaging weeks and months after sign-up
A clear plan of future experiments to double down on what's working and grow
❌ What to Avoid
These might look good in a pitch—but often signal a lack of clarity:
• “1,200 users” (How many are active? How did you get them? Over what time? Do you have momentum?)
• “$100K in revenue” (How much is recurring? If pilots, what’s the timeline for conversion?)
• “10 features launched” (Which ones are used? Which ones do your customers sign up for?)
Great metrics help you make decisions and validate your business model and path forward.
🎯 Final Thought
If you’re early and unsure, focus on running and measuring experiments to produce signal on what to test next. Remember, that might mean going back to the drawing board and pivoting.
Metrics don’t need to be impressive; they need to be honest. That’s how you build something real.