Tags: case study, MVP development, Norwegian startup, AI development, startup

Case Study: How We Helped a Norwegian Startup Ship an MVP in 3 Weeks

How Kodework helped a Norwegian early-stage startup go from validated idea to working software in 21 days — and what the project looked like in practice.

Kodework

6 min read

In February 2026, a Norwegian founder came to us with a clear problem and a tight constraint: he needed a working MVP of his logistics coordination tool within four weeks, before a key investor meeting. His previous development agency had quoted six months and a budget he didn’t have.

We shipped in three weeks.

Here’s what the project looked like, what we did, and what the founder said about the process.

Note: client details are presented with permission; identifying information has been generalised at the client’s request.


The brief

The product: a coordination tool for small logistics operators to manage vehicle assignments, route tracking, and driver communication from a single dashboard. Think of it as a lightweight operations hub — not a full TMS (transportation management system), but something to replace the existing workflow of WhatsApp messages, phone calls, and spreadsheets.

The target user: operations managers at small logistics companies (5–30 vehicles). The problem: real-time coordination was happening across three different communication channels with no central source of truth. Assignments got lost. Status updates were missed. Managers were spending 2–3 hours daily just chasing information.

The MVP requirement: a web dashboard that showed vehicle locations on a map, allowed assignment of routes to drivers, and gave drivers a mobile-friendly interface to see their current assignment and update status.

Scope of v1 (agreed in spec document):

  • Web dashboard for operations managers
  • Real-time map view with vehicle locations (using GPS from drivers’ phones)
  • Route assignment interface
  • Driver mobile web app (not native — browser-based)
  • Driver status updates (available, en route, delivered)
  • Basic authentication (manager accounts, driver accounts)

Out of scope for v1 (explicitly deferred):

  • Native mobile apps
  • Automated route optimisation
  • Invoice generation
  • Customer tracking portal
  • Historical reporting

The explicit out-of-scope list was as important as the scope list. It protected the timeline.


Week 1: Specification and architecture

Days 1–2: Discovery

We ran a two-day spec process: a three-hour call on day one to map the user journeys in detail, followed by a written spec document shared by end of day two.

The spec included:

  • User journey maps for both manager and driver flows
  • Data model (vehicles, drivers, routes, assignments, status events)
  • Integration requirements (Google Maps API for mapping, GPS from browser geolocation)
  • Authentication approach (JWT, simple role model)
  • Infrastructure decisions (React frontend, Node.js backend, PostgreSQL, Railway for hosting)
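The five entities in the spec’s data model can be sketched as TypeScript types. These names and fields are illustrative assumptions, not the actual schema from the spec document; the `note` field reflects the free-text note the founder requested during review.

```typescript
// Illustrative types for the five core entities (vehicles, drivers, routes,
// assignments, status events). Field names are assumptions, not the real schema.

interface Driver {
  id: string;
  name: string;
  phone: string;
}

interface Vehicle {
  id: string;
  registration: string;
  driverId: string | null; // currently assigned driver, if any
}

interface Route {
  id: string;
  name: string;
  stops: { lat: number; lng: number; label: string }[];
}

type AssignmentStatus = "available" | "en_route" | "delivered";

interface Assignment {
  id: string;
  routeId: string;
  driverId: string;
  vehicleId: string;
  note?: string; // free-text note from the manager (added in the spec revision)
  status: AssignmentStatus;
}

interface StatusEvent {
  id: string;
  assignmentId: string;
  status: AssignmentStatus;
  at: string; // ISO-8601 timestamp
}
```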

Day 3: Architecture review and sign-off

The founder reviewed the spec and had one revision: add the ability for managers to send a free-text note to drivers alongside an assignment. Twenty minutes to update the spec, then sign-off.

Days 4–5: Project scaffolding

With AI-assisted tooling: authentication system, database schema and migrations, project structure, core API endpoints stubbed, React app initialised with routing. By end of week one, a developer could log in, see a blank dashboard, and the data model was in place.
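The “JWT, simple role model” decision from the spec can be illustrated with a minimal HS256 sign/verify pair and a role check. This is a sketch using only Node’s crypto module, not the actual Kodework implementation; a real system would use a maintained library such as jsonwebtoken.

```typescript
// Minimal HS256 JWT sign/verify using only node:crypto, illustrating the
// "JWT + simple role model" approach. Sketch only; use a real library in production.
import { createHmac, timingSafeEqual } from "node:crypto";

const b64url = (buf: Buffer): string => buf.toString("base64url");

function sign(payload: object, secret: string): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const body = b64url(Buffer.from(JSON.stringify(payload)));
  const sig = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  return `${header}.${body}.${sig}`;
}

function verify(token: string, secret: string): { role: "manager" | "driver" } | null {
  const [header, body, sig] = token.split(".");
  if (!header || !body || !sig) return null;
  const expected = createHmac("sha256", secret).update(`${header}.${body}`).digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison; lengths must match before timingSafeEqual.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

// Role gate, e.g. only managers may create assignments.
function requireRole(token: string, secret: string, role: "manager" | "driver"): boolean {
  const claims = verify(token, secret);
  return claims !== null && claims.role === role;
}
```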


Week 2: Core feature development

This is where AI-assisted development shows its clearest advantage over traditional development.

Days 6–8: Backend API development

Vehicle management, route management, assignment API, status update endpoints, driver location reporting. Each endpoint: generate from spec, review for correctness, test against database.

Driver location reporting was the one piece that required careful engineering — real-time updates from potentially many drivers needed to be efficient and not create database load issues at scale. The engineer made the architecture decision here; AI handled the implementation once the approach was defined.
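One plausible shape for that decision (an assumption on our part, not the shipped code) is to coalesce frequent GPS pings in memory, keeping only the latest fix per vehicle, and flush them to the database on a timer. That turns many writes per second into one batched upsert per interval:

```typescript
// Sketch of a coalescing write path for driver location pings. Frequent
// reports overwrite each other in memory; a periodic flush yields one
// batched upsert instead of a database write per ping.

interface Fix { vehicleId: string; lat: number; lng: number; at: number }

class LocationBuffer {
  private latest = new Map<string, Fix>();

  // Called on every incoming ping; keeps only the newest fix per vehicle.
  report(fix: Fix): void {
    const prev = this.latest.get(fix.vehicleId);
    if (!prev || fix.at >= prev.at) this.latest.set(fix.vehicleId, fix);
  }

  // Called on a timer (e.g. every 5 s); returns the batch to upsert and clears it.
  flush(): Fix[] {
    const batch = [...this.latest.values()];
    this.latest.clear();
    return batch;
  }
}
```

The flushed batch maps naturally onto a single `INSERT … ON CONFLICT (vehicle_id) DO UPDATE` in PostgreSQL, which keeps the per-vehicle “latest location” row hot without unbounded table growth.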

Days 9–11: Frontend dashboard

React components: map view (Leaflet.js), vehicle list, route management UI, assignment workflow. Again, AI generated substantial portions of the component code; the engineer reviewed for correctness and UI quality.
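The pure part of a Leaflet map view like this is deciding how a vehicle should look on the map. A hedged sketch (assumed, not the actual UI code): derive marker colour from status and dim markers whose last GPS fix is stale, then render with something like `L.circleMarker([lat, lng], markerStyle(v, Date.now())).addTo(map)`.

```typescript
// Sketch of the presentation logic behind the map view: vehicle status and
// GPS-fix freshness mapped to Leaflet marker options. Colours and the 5-minute
// staleness threshold are illustrative assumptions.

type Status = "available" | "en_route" | "delivered";

interface VehicleView { status: Status; lastFixAt: number /* epoch ms */ }

const COLORS: Record<Status, string> = {
  available: "#2e7d32", // green
  en_route: "#1565c0",  // blue
  delivered: "#757575", // grey
};

function markerStyle(v: VehicleView, now: number): { color: string; opacity: number } {
  const stale = now - v.lastFixAt > 5 * 60_000; // no fix for 5 minutes
  return { color: COLORS[v.status], opacity: stale ? 0.4 : 1 };
}
```

Keeping this logic out of the React components makes it trivially unit-testable, which matters when a three-week timeline leaves little room for manual regression passes.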

Days 11–12: Driver mobile interface

Mobile-optimised web app: current assignment display, status update buttons, location permission request. Designed to work on any modern smartphone browser without installation.
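On the driver side, browser geolocation delivers fixes via `navigator.geolocation.watchPosition`; a sensible refinement (assumed here, not confirmed from the project) is a small filter that only uploads a fix when the driver has moved meaningfully or enough time has passed, to spare mobile data and battery:

```typescript
// Sketch of a send-or-skip filter for driver GPS fixes. Thresholds (50 m,
// 30 s) are illustrative assumptions.

interface Sent { lat: number; lng: number; at: number }

// Great-circle distance in metres (haversine).
function metresBetween(aLat: number, aLng: number, bLat: number, bLng: number): number {
  const R = 6_371_000;
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(bLat - aLat);
  const dLng = rad(bLng - aLng);
  const h = Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(aLat)) * Math.cos(rad(bLat)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Send if we have never sent, >30 s elapsed, or the driver moved >50 m.
function shouldSend(last: Sent | null, lat: number, lng: number, now: number): boolean {
  if (!last) return true;
  if (now - last.at > 30_000) return true;
  return metresBetween(last.lat, last.lng, lat, lng) > 50;
}

// In the browser this would be wired up roughly as (postLocation is a
// hypothetical upload helper):
// navigator.geolocation.watchPosition(pos => {
//   const { latitude, longitude } = pos.coords;
//   if (shouldSend(lastSent, latitude, longitude, Date.now())) postLocation(latitude, longitude);
// });
```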


Week 3: Integration, testing, and deployment

Days 13–15: Integration testing

End-to-end test scenarios covering all user journeys. AI-generated test cases covered the happy paths; engineers wrote the edge case tests for the status logic (what happens when a driver goes offline mid-route? When an assignment is cancelled after a driver accepts it?).
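Edge cases like these are cheapest to test when the status rules live in a pure transition function. A sketch of what those rules might look like (assumed, not the shipped logic; a terminal `cancelled` state is added for illustration):

```typescript
// Illustrative assignment status transitions, written as a pure function so
// the edge-case rules can be asserted in isolation.

type Status = "available" | "en_route" | "delivered" | "cancelled";
type Event = "accept" | "deliver" | "cancel" | "driver_offline";

function next(status: Status, event: Event): Status {
  switch (event) {
    case "accept":
      return status === "available" ? "en_route" : status;
    case "deliver":
      return status === "en_route" ? "delivered" : status;
    case "cancel":
      // A manager may cancel before delivery, even after the driver accepted.
      return status === "delivered" ? status : "cancelled";
    case "driver_offline":
      // Going offline mid-route does not change the assignment's status;
      // the dashboard flags the vehicle's GPS fix as stale instead.
      return status;
  }
}
```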

Days 16–17: Staging deployment and client review

Deployed to a staging environment, shared with the founder for review. Two rounds of feedback: minor UI adjustments and one logic change to how “available” status was displayed on the map.

Days 18–19: Production setup

Railway environment, domain setup, basic monitoring. Database backups configured.

Day 20: Handoff

Code repository transferred to client. Deployment documentation provided. One-hour walkthrough call covering the architecture, how to make common changes, and how to onboard new developers if needed.

Day 21: Production deployment

Live. The founder had a working product to demo at his investor meeting.


The outcome

The investor meeting went well. The founder raised his seed round in April 2026.

More immediately: he used the MVP with three pilot logistics operators in Norway during March and April. The feedback was directional rather than validating — the core coordination workflow worked; the operators wanted automated route optimisation (deferred to v2) and better reporting.

Those are the right problems to have after three weeks of development.

What the founder said:

“I had a previous agency tell me this product would take six months. Kodework shipped in three weeks and I had something I could put in front of real users. The spec process they run at the beginning made the speed possible — because we knew exactly what we were building before any code was written.”


What made this project work

Looking back, four things mattered most:

1. Clear requirements before development started. The two-day spec process was not overhead — it was what made three-week delivery possible. Every hour spent in spec saves three hours in rework.

2. Explicit out-of-scope decisions. The founder’s first instinct was to include native mobile apps in v1. He had good reasons. We pushed back because native mobile development would have doubled the timeline. Deferring it saved the whole project.

3. Fast feedback loops. The founder was available to give feedback quickly. We ran staging demos at end-of-week, not end-of-project. Issues got caught early.

4. AI-assisted development at every layer. Scaffolding took days, not weeks. Standard CRUD endpoints took hours, not days. Test generation was automated. Documentation was produced alongside code. None of these was transformative on its own; together they compressed a 12-week project into three.


If you’re a founder evaluating development partners for an MVP, see our pricing or get in touch to discuss your project. We’ll tell you honestly whether the timeline you need is achievable and what the project would involve.

