Most MVPs fail before the first line of code is written. The failure is usually in the decisions that happen before development starts — what the MVP is supposed to prove, what’s in scope, and what success looks like.

This checklist is for founders who are about to hire a development team or start building. It’s deliberately pre-code. Getting these 12 steps right means your MVP will answer the question it’s supposed to answer.

Before You Touch the Tech Stack

1. State the single hypothesis you’re testing

An MVP is an experiment, not a product. Write one sentence: “We believe that [customer segment] will [take this action] because [this reason]. We’ll know we’re right if [this measurable outcome] happens within [this timeframe].”

If you can’t write that sentence clearly, you don’t have an MVP hypothesis — you have a product idea. That’s different.

2. Identify the riskiest assumption

Your business model has multiple assumptions. The MVP should test the one that, if wrong, kills the business.

Common riskiest assumptions:

  • Will anyone pay for this? (demand risk)
  • Can we actually build this? (technical risk)
  • Can we reach the customers at reasonable acquisition cost? (distribution risk)

The MVP should attack the riskiest assumption first. Not the most fun-to-build assumption.

3. Define what “validated” looks like

Before you build, define the threshold that counts as evidence. Not “users like it” — that’s vague. Something like: “20 users complete the core workflow without dropping off” or “5 customers pre-pay for the annual plan.”

This number exists before you build. If you set it after you see the results, you’re rationalising, not validating.
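Once events are captured, the threshold check itself is trivial to automate. A minimal sketch, assuming hypothetical event names and an illustrative threshold (yours would be the number you committed to before building):

```python
# Did enough unique users complete the core workflow?
# Event names and THRESHOLD are illustrative assumptions, not a real schema.
events = [
    {"user": "u1", "event": "workflow_completed"},
    {"user": "u2", "event": "workflow_completed"},
    {"user": "u1", "event": "workflow_completed"},  # repeat user counts once
    {"user": "u3", "event": "signed_up"},
]

THRESHOLD = 2  # set BEFORE launch, e.g. "20 users complete the core workflow"

# Count unique users, not raw events, so one power user can't validate alone.
completers = {e["user"] for e in events if e["event"] == "workflow_completed"}
validated = len(completers) >= THRESHOLD
print(f"{len(completers)} unique completers -> validated: {validated}")
```

The point is that the check is mechanical: the only judgment call is the threshold, and that call was made before launch.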

4. Check if you need to build at all

Some hypotheses can be tested without code. A landing page with a waitlist, a manual service delivered to 10 users, a Notion doc shared with your network — these are MVPs too.

If a no-code or manual approach can test your hypothesis, do that first. Build only when you’ve confirmed demand that requires a real product.


Scoping the Build

5. List features, then cut by half

Write down every feature you think the MVP needs. Then cut the list in half. Then cut it again.

What remains is probably still too much. An MVP has one core workflow that tests the hypothesis. Everything else — analytics dashboards, admin panels, user settings, email notifications — is nice to have. Ship it after you validate.

A useful frame: “What is the minimum a user needs to experience the core value we’re offering?” Only build that.

6. Decide on data requirements up front

What data does the MVP need to function? What data does it need to generate for you to evaluate the hypothesis?

This matters for architecture. If your hypothesis requires tracking specific user actions, the logging infrastructure needs to exist from day one. Retrofitting analytics is expensive and you’ll lose early data.
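Day-one logging doesn't need to be elaborate. A sketch of the bare minimum, appending events to a local JSONL file (the file path, field names, and events are illustrative; you'd likely swap this for PostHog or Mixpanel later, but the habit of logging from day one is what matters):

```python
import json
import time
from pathlib import Path

LOG = Path("events.jsonl")  # placeholder sink; swap for a real analytics tool
LOG.unlink(missing_ok=True)  # start fresh for this demo

def track(user_id: str, event: str, **props):
    """Append one analytics event as a JSON line with a timestamp."""
    record = {"ts": time.time(), "user": user_id, "event": event, **props}
    with LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Instrument the actions your hypothesis depends on, from the first deploy.
track("u1", "workflow_started")
track("u1", "workflow_completed", duration_s=42)

# Reading the log back gives you raw data to evaluate the hypothesis later.
lines = LOG.read_text().splitlines()
print(f"{len(lines)} events logged")
```

Even this crude version means that when you later ask "how many users completed the workflow?", the data exists.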

7. Define the user journey, not the feature list

Instead of a feature list, map the specific steps a user takes from landing on your product to experiencing the core value. Every step in that journey is scope. Everything that doesn’t appear in that journey is out of scope for the MVP.

This is a more disciplined way to define MVP scope than a feature list.

8. Decide your tech stack with migration in mind

For an MVP, the technology choice matters less than most founders think. Speed of development matters most.

The question isn’t “what’s the best stack” — it’s “what can our team ship fastest, and what won’t prevent us from rebuilding properly if we validate?”

AI-assisted development has changed this: a senior engineer using Cursor or Claude can build a production-quality MVP in technologies outside their primary expertise. This expands the practical options. But still: pick boring, proven technology over novel frameworks when possible.


Before You Start Development

9. Hire or contract the right profile

For an MVP, you want a small team of senior generalists over a large team of specialists. A single senior full-stack engineer who can make decisions independently will out-ship a five-person team bogged down in coordination overhead.

The MVP phase is not the time for process-heavy development. You want engineers who can hold the whole system in their head and move fast without breaking things that matter.

10. Agree on what “done” looks like for launch

Before development starts, document the acceptance criteria for the MVP launch. What does the product need to do for you to consider it shippable? This is not the same as the validation criteria — it’s the minimum quality bar.

Common MVP launch criteria:

  • Core user journey works end-to-end without errors
  • Basic security implemented (auth, no exposed API keys, HTTPS)
  • Data is persisted correctly
  • Works on the target device/browser

If your team and any external developers don’t have a shared written definition of done, scope will drift.

11. Plan for data and feedback collection

The MVP should generate signal. At minimum: basic usage analytics (PostHog, Mixpanel), capacity to run user interviews, and a way to contact early users directly.

Many founders build the MVP, launch it, and realise they can’t tell whether users are using the product correctly because they didn’t build in visibility. This is much easier to fix before launch than after.
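The "visibility" point is concrete: with events logged, seeing where users get stuck is a few lines of analysis. A sketch of a simple funnel over logged events (step names and data are illustrative assumptions):

```python
# Compute step-by-step drop-off from logged (user, event) pairs.
# Funnel steps and event names are hypothetical, not a real product's schema.
events = [
    ("u1", "signed_up"), ("u1", "created_project"), ("u1", "workflow_completed"),
    ("u2", "signed_up"), ("u2", "created_project"),
    ("u3", "signed_up"),
]

FUNNEL = ["signed_up", "created_project", "workflow_completed"]

# Unique users who reached each step of the funnel.
users_at_step = [{u for u, e in events if e == step} for step in FUNNEL]

for step, users in zip(FUNNEL, users_at_step):
    print(f"{step}: {len(users)} users")
```

The step with the steepest drop is where your early user interviews should focus. Hosted tools give you this out of the box, but only if the events were tracked from launch.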

12. Set a timeline with a hard cutoff

MVPs have a tendency to expand. The feature list grows, the launch date slips, and months later you’re still “almost ready.”

Set a hard launch date — 6–10 weeks out for a lean MVP — and don’t move it. If you haven’t shipped in that window, you either have scope creep or the wrong team. Both are important to find out early.


The Common Mistakes

Building for an imagined user. Everything above assumes you’ve talked to real potential users before writing code. If you haven’t done customer discovery, the checklist still applies — but start there first.

Optimising for edge cases. The MVP doesn’t need to handle every scenario. It needs to work for the core case. Spending a week on edge cases that 2% of users will encounter is misallocated time.

Building infrastructure for a product that doesn’t exist yet. Microservices, elaborate CI/CD pipelines, multi-region deployment — these solve problems you don’t have yet. An MVP should run on a simple monolith. Solve the scale problems when you have the scale.

Treating the MVP as version 1.0. An MVP is throwaway code designed to generate learning. If it validates, you’ll rebuild the product properly. If you try to make the MVP production-quality, you’ll either over-engineer it or be stuck with technical debt that slows down the real product.


Building Your MVP with Kodework

We specialise in lean, fast MVP builds for startups. Our approach is senior engineers using AI tooling — which means faster development with fewer people and lower cost than traditional agencies.

A typical Kodework MVP engagement is 6–10 weeks from brief to live product. We work through the scoping decisions with you before touching code, and we push for the simplest version that tests the hypothesis.

If you’re at the planning stage or ready to start, get in touch to discuss your project. Or review our MVP development pricing to understand the options.