Field notes · 8 min read

You're not buying software. You're hiring an employee.

The fastest way to waste six figures on an AI project in a portfolio company is to scope it the way your team scopes a software purchase.

You know that mental model. A software buy starts with the problem ("our CRM is clunky"), becomes a list of desired features, becomes a vendor comparison grid, ends with a two-year contract and a seat count. It is a procurement motion. The buyer evaluates the product on functionality, price, implementation speed, and the reviews on G2.

That model fails catastrophically when you're hiring an AI agent. Not because agents can't run a CRM. They can. It fails because scoping an agent against a feature checklist produces a tool that can do many things, none of them particularly well, and nobody on the team is actually responsible for any of its outputs.

Here is the reframe. When you're scoping an AI agent, you are not buying a product. You are hiring an employee. And the quality of the hire depends entirely on how well you scope the role before you start.

The scoping mistake most operators make

Walk into any portfolio company that has tried an AI project and failed. Ask what happened. You will hear some version of this:

  • "We tried ChatGPT Enterprise. Nobody really used it."
  • "We hired a vendor to build us a chatbot. It kind of worked, but nobody owns it now."
  • "The CEO wanted AI, so we paid for Copilot licenses. The team is mixed on it."

What all three have in common: nobody at any point asked the questions you would ask if you were hiring a person. What role does this fill? What is on the offer letter? What does day one look like? Week four? Quarter one?

Without that scoping, you get a powerful tool with no job description. It runs meetings nobody asked it to run, produces reports nobody reads, and becomes the thing the team complains about in the next employee survey. That is not the AI's fault. It is the scoping.

AI agents are roles, not tools

The mental shift that fixes this is small but hard to internalize: treat the agent as a role on the org chart.

When you hire a human for a new role, you do specific things. You write a job description. You define outcome metrics. You pick a manager who is accountable for their performance. You decide what the first 30, 60, and 90 days look like. You set up a feedback loop. You understand the conditions under which you would let them go.

An AI agent deserves every one of those decisions. Before the agent gets built. Before the contract is signed. Before anyone touches a keyboard.

When we scope an engagement at Dune Creek AI, we literally walk through the org chart. Who does this agent report to? Whose work does it take off the plate? What does it own? What does it not own? Who reviews its output in the first 30 days? When does that review loosen? If we cannot answer those questions in discovery, the agent is not ready to be built. Half the value we provide in the first call is forcing those answers.

Apply the interview filter

Here is a test to run before you commit engineering budget. Imagine the agent is a person you are about to hire. Can you answer:

  • What is their title?
  • What is the very first thing they do each morning?
  • Who is their manager?
  • What does great look like at 30, 60, 90 days?
  • How would you know they were underperforming?
  • Under what conditions would you fire them?
  • Who picks up their work if they are sick for a week?

If these questions feel unanswerable for the workflow you had in mind, that is useful information. The workflow probably is not scoped enough to be a job. And if it is not a job, an agent should not be owning it yet. The fix is not to buy better software. The fix is to tighten the role definition, or to pick a different workflow entirely.

When the questions have clear answers, the build gets fast. You know exactly what to write prompts for, what metrics to instrument, what guardrails to enforce, and what the accountability chain looks like after deployment.

Why "AI can do so many things" is actually a trap

The single most common objection we hear when we propose tight scoping: "but the agent could also do X, Y, and Z."

Yes. It could. That is the problem, not the feature.

A human you hire for a role could also technically do many other jobs in the company. A Chief of Staff could cover reception, do bookkeeping, run the social media account, and clean the kitchen. You don't ask them to. Because if you did, none of the high-value work would get done, and you would eventually lose a good hire to scope creep.

AI agents have the same property, only more so. Unlike a human, an agent will genuinely attempt every task you give it, silently, without complaint, at 3 AM. The scope creep that takes a year to ruin a human role takes about three weeks to ruin an agent's effectiveness. Suddenly the inbox-triage agent is also drafting client newsletters, also doing ad-hoc research, also summarizing meetings. Quality of each drops. Nobody can tell what the agent is actually for anymore. The thing that was earning its keep now feels like a liability.

The discipline is the same as it is with human hires. One role. Clear outcomes. Measurable performance. The agent can do other things, eventually. Not in the first version.

How we scope an AI hire in discovery

Our week-one discovery call is intentionally a hiring conversation, not a technical one. The operator walks us through the portfolio company. We are listening for the role an agent should fill. Questions we ask:

  • Who on the team is currently doing the job we are talking about automating? What percentage of their week is it?
  • If they took a month off, what would fall apart first? That is usually the highest-leverage agent role.
  • What does "done" look like for one unit of that work? An email replied to? A document produced? A CRM field updated? Specificity here is everything.
  • What does success look like over a quarter? Time saved is a weak answer. Better answers: "inbound response time drops from 18 hours to under 1," or "pipeline conversion rate moves from 12 to 18 percent."
  • Who on the team should own this hire's output and manage it? If the answer is "nobody, that is why we want AI," that is a warning sign, not a scope.

By the end of the call we usually have a one-page role definition. Who the agent reports to, what it owns, how we measure it, what the first quarter looks like. That is the proposal. The build happens after.

You can run this conversation without us. The questions above are free. But most operators we talk to had never thought to ask them, because they were mentally scoping a software purchase. Once they start running a hiring process, the scope clarifies fast.
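
If it helps to see the one-pager as a concrete artifact, here is a minimal sketch in Python. The field names are our shorthand, not a product spec, and the readiness check is just the interview filter from above in code form.

    from dataclasses import dataclass, field

    @dataclass
    class AgentRoleDefinition:
        """A one-page role definition, written before any agent is built."""
        title: str = ""                  # what goes on the org chart
        manager: str = ""                # the human accountable for its output
        owns: list[str] = field(default_factory=list)          # what it owns
        does_not_own: list[str] = field(default_factory=list)  # explicit boundaries
        unit_of_work: str = ""           # what "done" looks like for one unit
        outcome_metrics: list[str] = field(default_factory=list)  # quarter-one measures
        review_cadence: str = ""         # who reviews output, and how often
        termination_condition: str = ""  # the conditions under which you fire it

        def ready_to_build(self) -> bool:
            # If any of these are blank, the workflow is not scoped enough
            # to be a job yet, and an agent should not be owning it.
            return all([self.title, self.manager, self.owns,
                        self.unit_of_work, self.outcome_metrics,
                        self.termination_condition])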

A concrete example

Consider Drake, a VP of Deal Origination built for PE funds. The role definition fits on a card. He owns broker relationships, scores inbound CIMs against the fund's thesis, drafts LOIs, runs outbound origination, and delivers a morning brief to the partner. Manager: the partner. Outcome metrics: inbound response time, CIM review turnaround, pipeline accuracy. Termination condition: three consecutive weeks of the partner losing trust in the thesis scoring.
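
Drake's card, rendered in the same sketch. Everything here comes from the card above except unit_of_work and review_cadence, which the card does not specify; those two values are illustrative assumptions.

    drake = AgentRoleDefinition(
        title="VP of Deal Origination",
        manager="the partner",
        owns=["broker relationships",
              "scoring inbound CIMs against the fund's thesis",
              "drafting LOIs",
              "outbound origination",
              "the morning brief to the partner"],
        unit_of_work="one CIM scored against the thesis",  # assumed, not on the card
        outcome_metrics=["inbound response time",
                         "CIM review turnaround",
                         "pipeline accuracy"],
        review_cadence="daily, via the morning brief",     # assumed, not on the card
        termination_condition="three consecutive weeks of the partner "
                              "losing trust in the thesis scoring",
    )
    assert drake.ready_to_build()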

That level of specificity is why Drake works. When a PE partner sees Drake in action, they usually ask if they can have one at their own fund or their portco. The honest answer: your agent will not look exactly like Drake. Your role definition is different. The scoping process is the same. The agent itself is the easy part.

Frequently Asked Questions

What is the difference between an AI agent and AI software?

AI software is a tool that runs when someone uses it. An AI agent is an always-on operator with a defined role. Same underlying technology, very different scoping and accountability model. You buy software. You hire an agent.

How do I know if AI is ready for a workflow in my portco?

Run the interview filter above. If you can write a job description for the work, specify the outcomes, and name a human who will manage the output, the workflow is ready. If any of those pieces are missing, fix them first. Hiring into an undefined role goes badly whether the hire is human or not.

Can one AI agent own several roles at once?

In principle, yes. In practice, no, at least not initially. The scope creep that ruins human hires ruins agents faster. Start with one role, prove the impact, then expand. Do not let "the agent could also" conversations push you into broad mandates before narrow ones are proven.

Who supervises an AI agent day to day?

A human on the operator team. Same as any hire. For most engagements, the manager is the person whose work the agent is taking on, because they know what good looks like and have the incentive to make sure the agent produces it.

How long before an AI agent is really productive?

Figure six weeks from discovery to live, then another four to six weeks of supervised operation before the human manager can trust the agent at arm's length. Not unlike onboarding a junior hire. The first 90 days are hands-on. After that, the manager steps back.

The takeaway

If you take one thing from this piece, take this: when you are scoping AI in a portfolio company, you are not evaluating a product. You are writing an offer letter.

If the offer letter does not make sense (no role, no manager, no measurable outcomes), do not write the check. Fix the scoping first. The tech part is genuinely easy. The hiring part is what operators get wrong, and it is what separates AI projects that move the number from AI projects that become a line item nobody defends at the next budget meeting.

Related reading on what a scoped AI agent actually does day to day: How Drake runs 24/7 lead generation for PE funds

Work with us

Trying to scope the hire for your portco?

20 minutes. You describe how the team actually operates. We tell you which workflow is ready for an AI hire, which one isn't, and what the offer letter should look like.

Book a discovery call