Impulse tier · Prompt pack
Stop deciding by gut feel.
Score the pursuit on 8 dimensions — incumbent status, capability match, past performance fit, teaming need, price-to-win signal, agency relationship strength, SOW clarity, schedule feasibility — each rated 0–5. Total out of 40. Recommendation: Go (≥25), Go-with-teaming-pivot (16–24), or No-Go (below 16). Documented rationale per dimension. $49 one-time, lifetime updates.
Gut-feel pursuit decisions
- no documented rationale
- no repeatable discipline
- free, and costly

Generic ChatGPT pursuit advice
- no scoring structure
- no dimension-level sourcing
- free, but unsourced

Capture.kit Pursuit Decision Matrix
- 8 dimensions scored 0–5
- verbatim-sourced rationale per dimension
- [VERIFY] tags on inferred claims
- $49 + 15 min [VERIFY]
What it is
A prompt pack — a zip with a master prompt, a fillable input template, sample input, sample output, and supporting docs. You fill INPUT-TEMPLATE.md with free-form notes on the 8 pursuit dimensions, paste it into Claude.ai alongside the master prompt, and get back a scored matrix: one row per dimension, each with a score (0–5) and a one-sentence rationale citing verbatim text you supplied. The matrix then totals the scores and returns a Go / Go-with-teaming-pivot / No-Go recommendation, a confidence statement tied to how many dimensions you filled, and the top three reasons.
If 4 or more dimensions are blank, the prompt refuses to score — it returns a list of what's missing and stops. That's not a bug. A thin matrix produces a thin recommendation, and the pack is designed to surface that rather than fake confidence on partial data.
How it works
Fill INPUT-TEMPLATE.md with your opportunity facts
Open INPUT-TEMPLATE.md. Fill in the opportunity basics (title, notice ID, agency, NAICS, PSC, set-aside, estimated value, due date) and then write free-form notes for each of the 8 dimensions. Public data only — SAM.gov posting text, your own company facts, publicly available contract history. See ETHICS.md for what not to include.
Paste both into Claude.ai
Open a fresh Claude.ai chat. Paste the master prompt from PROMPT.md, then paste your filled INPUT-TEMPLATE.md immediately after. Send.
Read the scored matrix
The matrix scores each of the 8 dimensions 0–5, with a one-sentence rationale per row citing verbatim text from what you provided — no padding. Inferred claims (competitor-presence guesses, relationship assumptions) are tagged [VERIFY].
Check the recommendation and confidence statement
Below the matrix: total score out of 40, a Go / Go-with-teaming-pivot / No-Go recommendation, a confidence statement that names how many dimensions were filled vs. blank, and the top three reasons. A score of 24/40 with one blank dimension is a Go-with-teaming-pivot — not a Go. That distinction matters before you commit proposal hours.
Take the output into your teaming conversation or pursuit decision
The matrix is decision support, not the decision. Bring the output into your capture review, your teaming call, or your BD pipeline stage gate. The documented rationale gives your team something concrete to push back on.
What you get
- PROMPT.md — the master prompt: role, input expectations, output shape, guardrails, and banned-phrase list. Paste into Claude.ai alongside your filled input template.
- INPUT-TEMPLATE.md — fillable input template: opportunity basics at the top, then one free-form section per dimension. Fill with your public SAM.gov data and company facts.
- SAMPLE-INPUT.md — a realistic DoD endpoint-protection pursuit: 7 of 8 dimensions filled, schedule feasibility intentionally blank to show the confidence-calibration behavior.
- SAMPLE-OUTPUT.md — the scored matrix and recommendation from the sample input: a reference for what a correct output shape looks like before relying on the pack.
Also included
- README.md — what the pack does, what it does not do, and 4-step usage flow
- ETHICS.md — public-data-only constraint, CUI/SSI prohibition, decision-boundary statement
- CHANGELOG.md — versioned for the lifetime-updates promise
- loom-script.md — recording outline for a walkthrough if you want to demo this to your team
See a scored example
Loom: 1:30
Fill INPUT-TEMPLATE.md for a real DoD endpoint-protection pursuit, paste into Claude.ai, read the 8-dimension matrix and Go-with-teaming-pivot recommendation. Under 2 minutes start to finish.
Want to see the output before buying? Read the sample scored matrix — same input as the Loom, full matrix and recommendation, no email gate.
Pricing
$49 one-time, lifetime updates. If the matrix doesn't change how you make pursuit decisions, refund it inside 14 days. Run it on one real opportunity, compare the output to your gut call, decide. That's the bar.
How the matrix keeps itself honest
The matrix surfaces tradeoffs — it doesn't make the decision
The output is structured decision support for a human capture lead. A 24/40 Go-with-teaming-pivot means you have a conversation to have, not a bid to submit. The rationale per dimension is the input to that conversation.
Every score quotes verbatim text from your input
The rationale for each dimension is one sentence citing verbatim text from your filled INPUT-TEMPLATE.md. Scores aren't generated from nothing — they trace back to the facts you provided.
Inferred claims are tagged [VERIFY]
Competitor presence guesses, relationship-strength inferences, and price-to-win estimates that go beyond what you supplied are tagged [VERIFY]. The matrix is honest about the difference between what you told it and what it inferred.
AUP-aware — no fabrication of past performance or capability claims
The prompt does not generate, invent, or embellish past performance, certifications, or company capabilities. It scores the dimensions you fill. Buyers are responsible for the accuracy of what they paste in — see ETHICS.md and the Capture.kit AUP.
Who buys this
A solo BD consultant or capture lead at a small shop running 1–3 federal pursuit decisions per month. You're already doing a bid/no-bid process — reviewing the solicitation, talking to your team, checking your past performance — but those decisions live in your head or in a one-line email. This pack makes the decision explicit, scored, and documented. Decision under $50 on a personal credit card.
Who shouldn't buy this
- Anyone running 10+ pursuits per quarter who needs a CRM-integrated pipeline tracker. This is a one-decision-at-a-time prompt pack. It doesn't store state, doesn't track pursuit history, and doesn't integrate with GovWin or Salesforce.
- Anyone whose pursuit decisions already go through a formal scoring process with a pursuit review board. If you have a disciplined pipeline process, you don't need this — you need the process to be faster, which is a different problem.
- Anyone expecting the matrix to predict win probability. It scores pursuit fit on 8 dimensions. Win probability depends on factors (proposal quality, orals, final pricing) that happen after this decision and aren't in scope.
- Anyone whose opportunity data is CUI, SSI, or FOUO. The pack requires public SAM.gov data and your own company facts. If the solicitation is classified or the requirement involves controlled technical data, stop — see ETHICS.md.
Frequently asked
What are the 8 dimensions?
Per the master prompt: (1) incumbent status — is there one, who, and what is their relationship to the agency; (2) capability match — do your NAICS, certifications, and past performance align with the SOW; (3) past performance fit — have you done similar work, same agency, same NAICS; (4) teaming need — can you prime, or is this a teaming play; (5) price-to-win signal — any public indicator of expected value or competitive range; (6) agency relationship strength — prior dialogue with the contracting officer or agency; (7) SOW clarity — is the SOW well-scoped or ambiguous and mid-pivot; (8) schedule feasibility — given the proposal due date, can you realistically write a competitive response.
Why 0–5 per dimension and not a different scale?
0–5 per dimension gives a max of 40 total. The thresholds (≥25, 16–24, <16) split that range into three actionable bands with enough resolution to distinguish a marginal Go from a strong one. A score of 0 is valid — it means no signal or an explicit negative on that dimension. Leaving a dimension blank also results in 0, which is intentional: a blank is a data gap, and data gaps hurt the score to reflect real uncertainty.
Why those threshold splits — ≥25 Go, 16–24 Go-with-teaming-pivot, <16 No-Go?
≥25/40 (62.5%) means most dimensions have positive signal — a credible prime bid. 16–24 means the pursuit has real gaps that teaming can potentially address: weak past performance, thin agency relationship, or a teaming need for a specialized sub. Below 16/40 the gaps are structural, not patchable by teaming — the recommendation is No-Go. [VERIFY] — these thresholds are calibrated to the sample set used during pack development; apply judgment if your pursuit profile is unusual.
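The band arithmetic above can be sketched in a few lines of Python. This is a hypothetical illustration of the threshold math, not code shipped in the pack (the pack is a prompt, not software):

```python
def recommend(scores):
    """Map 8 dimension scores (0-5 each; a blank dimension counts as 0) to a band."""
    assert len(scores) == 8 and all(0 <= s <= 5 for s in scores)
    total = sum(scores)  # out of 40
    if total >= 25:
        return total, "Go"
    if total >= 16:
        return total, "Go-with-teaming-pivot"
    return total, "No-Go"

# A 24/40 pursuit lands in the teaming band, one point short of Go:
print(recommend([4, 4, 3, 3, 4, 3, 3, 0]))  # (24, 'Go-with-teaming-pivot')
```

The same arithmetic is why a blank dimension hurts the total: it scores 0, pulling the pursuit toward No-Go instead of letting a data gap pass silently.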
What does Go-with-teaming-pivot mean in practice?
It means the pursuit is viable with a specific teaming action — identify a partner who covers the gap the matrix surfaced. The matrix will name the gap (past performance fit, specialized technical capability, incumbent relationship) in the top three reasons. Your next step is a teaming conversation, not a proposal kickoff.
Can I customize the 8 dimensions for my firm's specific criteria?
Not through the pack as shipped. The 8 dimensions are fixed in PROMPT.md — the prompt is designed around this specific set, and the threshold math assumes all 8 are scored on the same 0–5 scale. If you need a different dimension set, you'd modify PROMPT.md directly, which you can do since it's a plain text file you own. That's outside the pack's supported use case, but the file is yours.
What if I leave 4 or more dimensions blank?
The prompt refuses to score. It returns a list of the blank dimensions and stops. This is intentional — a matrix with 4+ blanks produces a recommendation with too little signal to act on. Fill the missing dimensions (even partial notes are better than nothing) and re-run.
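The gate is mechanical: count blanks, refuse at 4 or more. A hypothetical Python sketch of the rule the prompt enforces (an illustration, not pack code; the dimension names follow the pack's list):

```python
DIMENSIONS = [
    "incumbent status", "capability match", "past performance fit",
    "teaming need", "price-to-win signal", "agency relationship strength",
    "SOW clarity", "schedule feasibility",
]

def precheck(notes):
    """notes maps dimension name -> free-form text; missing or empty means blank."""
    blanks = [d for d in DIMENSIONS if not notes.get(d, "").strip()]
    if len(blanks) >= 4:
        return {"scored": False, "missing": blanks}  # refuse: list the gaps and stop
    return {"scored": True, "missing": blanks}       # score, naming gaps in the confidence statement
```

Three blanks still score (with a weaker confidence statement); a fourth blank trips the refusal.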
What's the refund policy?
14-day no-questions refund. Open the zip, run it on one real pursuit, decide. If it doesn't change how you make that decision, refund it.
Does this integrate with my existing pursuit tools?
It's standalone. You paste your filled INPUT-TEMPLATE.md into a Claude.ai chat alongside the master prompt. Output is markdown — copy it into your CRM note, your pursuit file, or your team channel. There's no API, no webhook, no native integration with GovWin, Salesforce, or your BD pipeline tool. That's by design at this price point.
Bundle
Solo Operator Kit
$149
1 seat
What's in it
- SAM.gov Daily Triage Pack
- Capability Statement Pack
- LinkedIn Networking Email Pack
- Pursuit Decision Matrix (Bid/No-Bid) (this product)
- FAR Clause Lookup Skill
- *+ bonus: Capability Match Score Lite*
Standalone total is $255. The kit is $149 — save $106. If you're running the full capture cycle (triage → bid/no-bid → capability statement → networking → clause lookup), the kit covers it.
Still deciding whether to bid?
14-day refund. Lifetime updates. Single-buyer license.