What to Ask a Developer Before You Sign Anything
22 April 2026 Urban Lightbulb

Most business owners walk into a software developer meeting with one question prepared: "how much?" It's the question a good developer can answer. It's also the question a bad one can fake — round number, confident delivery, and the surprise scope change shows up six weeks in.

The questions that actually filter vendors are different. They aren't about price. They're about whether the person across the table can build and maintain software for a business like yours over a multi-year horizon. We covered pricing in the cost article. This one's about everything else.

Nine questions. Ask all of them. For each, we'll say what a good answer sounds like and what the red-flag answer sounds like. The red flags are usually louder than the good answers, which turns out to be useful.

A business owner at a desk reviewing several proposal documents spread out, pencil in hand, visibly comparing — the decision moment

1. "What's your ballpark range?"

Just a number. You don't need the final quote yet. You need to know if this is a $20k project or a $150k project, so you can walk away before you've wasted anyone's time.

Good answer: "Based on what you've described, this is probably in the $30k to $60k band. There's too much unknown in [X] for me to tighten that yet, but we're not in six-figure territory."

Red-flag answer: "It depends on scope. Let's book a paid discovery session and we'll come back with a number."

Anyone who's built ten of these knows whether your project is a $20k thing or a $150k thing within twenty minutes of hearing about it. Refusing to commit means they can't, or they don't want you comparing them. Both are reasons to walk.

2. "Who specifically will write my code?"

Names, and a clear picture of who does what.

Good answer: "We'll tell you who architects it, who writes it, and who reviews it. The architect is usually the same person across our projects; the developer depends on who's free when you book. We'll name them during spec once the start date is locked, and you'll be on email with them directly from kickoff."

Red-flag answer: "We'll assign it to whoever's available." Or: "We have a team." And no names, ever.

A studio with a real roster can tell you who writes, who reviews, and who you'll email. A studio that can't is often a sales-first shop that outsources the build once the contract lands, sometimes offshore, sometimes to freelancers they haven't worked with before. You find out in month three, when a bug lands and nobody can answer a question about that part of the code because the person who wrote it has moved on.

A note on when names actually firm up: at the two-page ballpark stage there's no booked timeline yet, so no honest developer can guarantee which engineer will be free in four months. What you want is a commitment to name them during spec, once a real start date is on the table — and for the same roster to still be in the room when the work actually begins.

At Urban Lightbulb the structure doesn't change. Nick architects every project. Ben or Sheila builds. Nick reviews before anything ships. Clients get names and are on the email thread from kickoff. That's a deliberate call, not standard industry practice.

3. "Can I see something you built three-plus years ago that's still running?"

Not the latest portfolio case study. Something old. Something that's survived.

Good answer: "Yeah, we built that for [client] in 2022, they still use it daily, we've done a framework upgrade and a couple of feature rounds since. Happy to put you in touch with them."

Red-flag answer: "We have lots of recent projects I can show you." Or a portfolio that's all from the last 18 months.

A three-year-old project is proof the team can maintain software, not just ship it. Plenty of agencies demo a recent build. Fewer can point to one they still work on years later — because the work dried up, the client moved on, or the codebase was too fragile to keep running. Survival is a better signal than novelty. We still maintain systems we wrote in 2021. Bugs have come up, features have been added, frameworks have been upgraded — that's normal for software this old. What matters is we're still the team running them.

4. "What's your default stack, and why?"

They should have an answer. A strong one.

Good answer: "We default to Laravel on the backend and Vue on the frontend, with TypeScript. Laravel has the richest ecosystem for business apps at our scale, Vue pairs cleanly with it for the kind of interfaces we build most often, and TypeScript catches a class of bugs before production. When a project needs React's wider component ecosystem or a specific library we won't find elsewhere, we'll switch. The default isn't dogma; it's where we're fastest when nothing else overrides it."

Red-flag answer: "We're tech-agnostic — whatever the client wants." Or: "React and Node, because that's what everyone uses."

"Tech-agnostic" usually means no deep experience in any one stack. The Stack Overflow 2024 Developer Survey shows the top stacks have genuinely different trade-off profiles — they're not interchangeable. A developer who can't defend their default hasn't shipped enough production code to have a preference. That's a problem.

A single contract document on a desk with a magnifying glass hovering over one clause, sketchy question marks above it — scrutiny and interrogation

5. "How do you use AI in your workflow, and how do you keep it from shipping to production?"

This one's new in the shortlist. It's also the question most likely to separate teams right now.

Good answer: "We use AI assistants heavily. Claude Code is part of our daily workflow, and the majority of our modern code starts as AI output. What makes that work is the pipeline around it: every line is read by the engineer, reviewed by a separate AI pass, run through tests and CI, and refactored to fit the codebase before it merges. We don't ship prompt output directly."

Red-flag answer: Either extreme. "We don't use AI at all, we handcraft everything" (behind the curve). Or "We just prompt it and ship what comes out, it's way faster now" (no review pipeline — that's the actual danger).

The useful distinction is developer-led AI versus prompt-only builds. The volume of AI-written code isn't the signal; the review pipeline around it is. Developer-led teams can run 80–90% AI-assisted and still ship well-architected code, because engineers read, test, and refactor every line before it goes anywhere near production. Prompt-only is a non-developer asking an AI to "build me a SaaS" and shipping the output. It produces code that demos well and falls over in production. Security holes, missing edge cases, architectural dead-ends. These show up in months two through six, after the money's spent and the team's moved on.

You want a builder who uses AI with engineering judgment. Not one who outsources judgment to it.

6. "What happens when scope changes?"

Scope will change. Nobody hits the first-brief target exactly. The question is what happens when it does.

Good answer: "Change requests are scoped and priced separately. Small ones come out of a buffer we build into the project. Anything bigger gets a written change order with hours, cost, and timeline impact before we touch it. You approve it or we don't do it."

Red-flag answer: "We're flexible, we just roll with it." Or: "Any change reopens the whole contract."

The first sounds accommodating but isn't — "rolling with it" is how projects blow budget without anyone having to sign off. The second is the fight version. You want the boring middle: a documented, priced change process that respects both sides.

7. "What exclusions are in your quote?"

Not what's included — what's out. A trustworthy quote spells out the boundaries.

Good answer: A written list. "Not included: content creation, hosting after month 12, SSO integration, third-party licence fees, additional user roles beyond the three specified, native mobile apps."

Red-flag answer: "It's all in there." Or a single round number with no breakdown.

Exclusions are the inverse of scope creep. A developer who can't write down what isn't included is either hoping you won't ask for it (and will charge you when you do) or hasn't thought about it yet. Either way your budget's exposed.

8. "What does the maintenance model look like after launch?"

The build is the visible part. The five-year bill is the rest — Gartner puts maintenance at 55 to 80 percent of total IT spend over a system's life.

Good answer: "Most of our clients sit on a monthly retainer, typically between $2k and $8k a month depending on the system — covering hosting, security patching, dependency updates, monitoring, and incremental improvements. No retainer is fine too; you'd pay hourly as things come up. Either way we'll still be here in three years if you need us."

Red-flag answer: "Once we hand over, it's yours to manage." Or: "Just flick us an email if you need us."

The first is abandonment dressed as handover. The second is abandonment without admitting it — the email goes unanswered in eighteen months because the team's pivoted, shut down, or reassigned the person who knew your system. Custom software needs someone who still knows it five years from now. That's a deliberate commitment, not an afterthought.
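To see how quickly the maintenance side dominates, here's a back-of-envelope sketch using a build cost and a retainer from inside the bands quoted above. The specific numbers are illustrative, not a quote; the point is that a mid-band retainer lands squarely inside Gartner's 55–80% range.

```typescript
// Back-of-envelope five-year cost: build once, then retainer every month.
// Numbers are illustrative examples, not pricing.
function fiveYearCost(buildCost: number, monthlyRetainer: number) {
  const maintenance = monthlyRetainer * 12 * 5; // five years of retainer
  const total = buildCost + maintenance;
  return { total, maintenanceShare: maintenance / total };
}

// A $60k build with a $3k/month retainer:
const { total, maintenanceShare } = fiveYearCost(60_000, 3_000);
console.log(total);            // 240000 — $240k over five years
console.log(maintenanceShare); // 0.75 — 75% of spend happens after launch
```

Three-quarters of the five-year bill arrives after launch, which is why "who answers the email in year three" matters more than the build quote.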

Two hands shaking firmly across a desk over a signed open contract — sketch style, the 'good sign' moment after due diligence

9. "Who owns the code, and can I leave if I need to?"

Code ownership. Exit rights. What you can get back, and how fast.

Good answer: "You own the code and the data. We run the infrastructure day to day, but the contract gives you the right to request the full codebase, database, and files at any time. If you want to leave, we hand it all over."

Red-flag answer: "We own the code and license it to you." Or: "We host it, and there's nothing in writing about you getting the code or database back."

The first is the single worst clause in this industry. It means you don't own what you paid to build. You rent it, and leaving costs you the entire build. The second sounds like a managed-hosting arrangement but isn't — the difference is the export clause in the contract. A good managed host runs your infrastructure for convenience and hands it back on request. A hostage host runs it because control is the business model. Ask it plainly: "can I get the full codebase, database, and files any time I want, spelled out in the contract?" If yes, managed. If no, walk.

One more layer: owning the code only matters if another team can actually pick it up. Code written like a black box, with everything in one engineer's head and no spec or tests, is technically yours but practically unusable by anyone else. We build for handover from day one. Readable structure, the spec as the source of truth, tests and documentation that let any competent studio keep going. If a relationship ever turned sour, another team could take over without a rewrite. It's never happened to us, but it's the test we write code against.

What to do with the answers

These questions aren't tricks. They're the baseline. A competent developer answers all nine without flinching — not because they've rehearsed, but because they've built and maintained enough software that the answers are just true.

A shaky answer on one question is a data point. Two or three and you're looking at the wrong shop.

Ask all nine. Write down the answers. Compare across three quotes. The shop that gives you clean, specific answers is almost always the one worth choosing, even when they aren't the cheapest. The other eight factors dwarf the price difference on a five-year horizon.