ADR-0025 — Combined M1+M2 as Primary Product (Unfair Advantage Audit)
Context
R27 run on 2026-04-20 achieved Judge score 8.0/10 on AI-esoterics niche analysis (cost $2.68, duration 24 min). This crossed the quality threshold originally defined in ADR-0013 as a gate for M2 development. With that gate reached, M2 moved from “aspirational future” to “actionable next product.”
During R27 review, Denis articulated a fundamental insight that reshapes M2 entirely.
The insight: M1 Navigator answers “is this niche attractive?” but does not answer “can this specific proposal succeed in it?” Users walk away with a treasure map but no validation of whether they have the equipment to extract value. Denis called this the “gold mine without a shovel” problem.
M2 was originally (ADR-0013) conceived as “Team Implementation Navigator” evaluating teams via CVs, track records, and digital presence. On review, Denis rejected this approach for three reasons:
- Teams cannot be reliably evaluated by LLM from credentials. Degrees, CVs, and past companies are either unverifiable, fabricatable, or weakly predictive of specific niche success.
- Budgets and timelines are user-declared and lie-prone. A user can claim any budget without consequence. Validation is impossible.
- The right question is not “are you good enough?” but “what do you uniquely have that this niche rewards?”
This shifts M2 scope from team-competency assessment to Unfair Advantage audit — verification that specific defensible assets exist and match the moat requirements of the chosen niche.
Decision
Primary decision
Synth Nova’s flagship product becomes Combined M1+M2 — an integrated two-stage audit:
- Stage 1 (M1): Niche Analysis (existing, Judge ≥ 8.0 validated on R27)
- Stage 2 (M2): Implementation Audit — verification of user’s claimed Unfair Advantages against the niche’s moat requirements
User flow: M2 is offered automatically after an M1 GO or CONDITIONAL GO verdict. The user can opt in or decline. M1 alone remains available as a standalone product for users who only want niche research.
M2 does NOT exist as standalone. It requires M1 output as context (the niche’s moat requirements come from M1 analysis).
Fundamental scope change vs ADR-0013
ADR-0013 scope (4 agents: ProfileHarvester, ClaimVerifier, DigitalPresenceAnalyzer, TrackRecordResearcher) is superseded. The team-competency assessment approach is abandoned.
Core design principles
1. No team competency assessment. M2 does not evaluate “is this team good enough.” Not reliably verifiable from LLM-accessible data.
2. No budget or timeline input. Unverifiable, lie-prone, not useful for audit.
3. Unfair Advantages verification is the core work. Per Ash Maurya’s Lean Canvas: Unfair Advantage = “something that can’t be easily copied or bought.” This IS verifiable — patents exist or don’t, partnerships are announced or aren’t, data sources are unique or public.
4. Swiss-neutrality tone. No empathy, no consolation, no softening. Facts + evaluation + reasoning. Negative verdicts include actionable gap analysis but never emotional support. Synth Nova’s brand identity = independent source of truth.
5. Entry threshold is numeric and explicit. UA-Niche alignment score (0.00-1.00). Default threshold 0.60. Below threshold → clear verdict “your UAs do not match what this niche rewards.”
Combined M1+M2 user flow
```text
User query (raw brief)
  ↓
Agent-Intake — structure the brief
  ↓
M1 Navigator pipeline (24 min, $2.68) — CEO + Intel Director + 11 executors + Judge
  ↓
M1 Output: niche analysis with GO / CONDITIONAL GO / PASS verdict
  ↓
[Automatic trigger if verdict = CONDITIONAL GO or GO]
User offered: "Run M2 Implementation Audit on your proposed solution? [Yes] [Skip]"
  ↓
If Yes:
  M2 Intake — collect:
    - Solution description (product/service)
    - Claimed Unfair Advantages (list with evidence URLs/uploads)
  ↓
M2 pipeline (target: 15-20 min, $2-4) — M2 Director + 3 new agents + Judge
  ↓
M2 Output: UA verification report + niche moat requirements + alignment score + gap analysis
  ↓
Combined Final Report: niche attractiveness × UA-Niche alignment
```
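The branch logic in the flow above can be sketched as follows. `run_m1_pipeline`, `run_m2_pipeline`, and `offer_m2` are hypothetical stand-ins for the real pipeline entry points, not the actual Synth Nova API:

```python
# Sketch of the Combined M1+M2 flow: M2 fires only on GO / CONDITIONAL GO,
# only with user opt-in, and always consumes M1 output as niche context.
M2_TRIGGER_VERDICTS = {"GO", "CONDITIONAL GO"}

def combined_flow(raw_brief: str, run_m1_pipeline, run_m2_pipeline, offer_m2) -> dict:
    """Run M1; offer M2 only when the M1 verdict clears the trigger set."""
    m1_output = run_m1_pipeline(raw_brief)
    report = {"m1": m1_output, "m2": None}
    if m1_output["verdict"] in M2_TRIGGER_VERDICTS and offer_m2():
        # M2 is never standalone: it requires the M1 niche analysis as input.
        report["m2"] = run_m2_pipeline(m1_context=m1_output)
    return report
```

A PASS verdict (or a declined offer) leaves the M2 slot empty, matching the opt-in step in the flow.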
User input to M2 — what’s collected, what’s not
Collected:
- Solution description (free text or structured — what product/service you plan to build)
- Claimed Unfair Advantages — list where each item has:
- Category (patent / license / partnership / data / distribution / regulatory / brand / network / process)
- Description (specific claim)
- Evidence (URL, document upload, specific reference — REQUIRED for verification)
Deliberately NOT collected:
- Team composition, CVs, roles — cannot be reliably validated
- Budget figures — unverifiable, lie-prone
- Timeline claims — unverifiable
- Self-reported experience years — cannot validate
Rationale: if we collect it, we look like we’re using it. Collecting unverifiable inputs creates false-confidence output. Better to explicitly ignore.
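The collected/not-collected split above can be expressed as an intake schema. This is an illustrative sketch — field and class names are assumptions, not the production data model:

```python
# Illustrative M2 intake schema. Note what is deliberately absent:
# no team, budget, timeline, or experience fields — they are never collected.
from dataclasses import dataclass
from enum import Enum

class UACategory(Enum):
    PATENT = "patent"
    LICENSE = "license"
    PARTNERSHIP = "partnership"
    DATA = "data"
    DISTRIBUTION = "distribution"
    REGULATORY = "regulatory"
    BRAND = "brand"
    NETWORK = "network"
    PROCESS = "process"

@dataclass
class UAClaim:
    category: UACategory
    description: str      # the specific claim
    evidence: list[str]   # URLs / upload references — REQUIRED for verification

    def __post_init__(self):
        # Evidence is mandatory: claims without evidence are rejected at intake,
        # not silently accepted and scored.
        if not self.evidence:
            raise ValueError("Evidence is required for every UA claim")
```

Rejecting evidence-free claims at intake enforces the rationale above: nothing unverifiable enters the audit.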
M2 agent architecture
Four new agents. The M2 Director orchestrates; the Unfair Advantage Verifier and the Niche Moat Requirements Agent run in parallel; the UA-Market Fit Scorer synthesizes their outputs.
1. M2 Director
Orchestrates M2 pipeline. Receives M1 output + user’s M2 input (solution description + claimed UAs). Coordinates the three specialized agents. Writes final M2 section of Combined report.
Position: same orchestration tier as Intel Director (per ADR-0003 Three-Tier Hierarchy).
Model: Sonnet 4.5.
2. Unfair Advantage Verifier
For each claimed UA, attempt verification through public data and deterministic checks.
Verification strategies by UA category:
| Category | Verification method |
|---|---|
| Patent | Search USPTO / EPO / Rospatent / WIPO via public APIs |
| License / regulatory | Search relevant public registry per jurisdiction |
| Partnership / distribution | Scrape target company site, announcements, LinkedIn company page, press releases |
| Proprietary data | Evaluate uniqueness — do public alternatives exist? Search for equivalent datasets |
| Distribution channel | Verify channel exists, user has access (e.g., owned audience: social media account existence + metrics) |
| Brand / audience | Search for mentions, audience size via public profile data |
| Regulatory approval | Check relevant public registries (FDA, CE, Rospatent, industry bodies) |
| Network / relationships | Generally unverifiable → mark as “Unverifiable Claim” |
| Proprietary process / know-how | Generally unverifiable → mark as “Unverifiable Claim” |
Output per UA: Verdict ∈ {VERIFIED, PARTIALLY_VERIFIED, UNVERIFIED, UNVERIFIABLE, FABRICATED}
- VERIFIED: evidence matches claim
- PARTIALLY_VERIFIED: some claim components verified, others not
- UNVERIFIED: claim looks plausible but evidence insufficient
- UNVERIFIABLE: claim cannot be verified by available methods (not necessarily false, but cannot be used for scoring)
- FABRICATED: evidence contradicts claim (e.g., claimed patent number does not exist)
Model: Sonnet 4.5 with web search + potentially specialized APIs (requires ADR-0011 Integration Triage per API).
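One plausible way to map deterministic check results onto the verdict taxonomy above — the `classify` heuristic is an assumption about how check outcomes could translate to verdicts, not the shipped logic:

```python
# Per-UA verdict taxonomy from the text, plus an assumed mapping from
# deterministic check results (passed / contradicted / total) to a verdict.
from enum import Enum

class UAVerdict(Enum):
    VERIFIED = "VERIFIED"
    PARTIALLY_VERIFIED = "PARTIALLY_VERIFIED"
    UNVERIFIED = "UNVERIFIED"
    UNVERIFIABLE = "UNVERIFIABLE"
    FABRICATED = "FABRICATED"

def classify(checkable: bool, passed: int, contradicted: int, total: int) -> UAVerdict:
    """Map evidence-check results to a verdict."""
    if not checkable:
        return UAVerdict.UNVERIFIABLE        # e.g. network / know-how claims
    if contradicted > 0:
        return UAVerdict.FABRICATED          # evidence contradicts the claim
    if total > 0 and passed == total:
        return UAVerdict.VERIFIED            # every check confirms the claim
    if passed > 0:
        return UAVerdict.PARTIALLY_VERIFIED  # some components confirmed
    return UAVerdict.UNVERIFIED              # plausible, evidence insufficient
```

The key design point mirrored here: FABRICATED requires a contradiction (a claimed patent number that does not exist), not merely absent evidence.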
3. Niche Moat Requirements Agent
Analyzes M1 output to determine what categories of UAs historically matter for this niche.
Input: M1 output (niche analysis, competitor landscape, financial model)
Process:
- Examine what moat types are present in existing niche winners (from competitor analysis)
- Apply heuristic library covering common niche patterns:
- Consumer apps with network effects → audience/brand UAs critical
- Regulated industries (health, finance, education) → license/patent UAs critical
- B2B SaaS with enterprise sales → distribution + technical moat critical
- Commodity markets → cost/scale UAs critical
- Content platforms → proprietary content library + audience critical
- Can leverage Pipeline Memory (ADR-0018) for accumulated niche-pattern learnings from past runs
Output: Weighted UA category requirements summing to 1.0.
Example for the AI-esoterics niche:

```json
{
  "audience_trust_and_brand": 0.40,
  "proprietary_content_library": 0.25,
  "platform_distribution": 0.20,
  "tech_differentiation": 0.15
}
```

Model: Sonnet 4.5.
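A minimal sketch of the heuristic library described above. The niche keys and weights are illustrative assumptions mirroring the listed patterns; the real agent derives weights from M1 competitor analysis plus Pipeline Memory:

```python
# Hypothetical heuristic library: niche pattern -> raw UA-category weights.
HEURISTIC_LIBRARY = {
    "consumer_network_effects": {"audience_brand": 0.5, "platform_distribution": 0.3, "tech_differentiation": 0.2},
    "regulated_industry":       {"license_patent": 0.5, "platform_distribution": 0.3, "audience_brand": 0.2},
    "b2b_saas_enterprise":      {"platform_distribution": 0.4, "tech_differentiation": 0.4, "audience_brand": 0.2},
}

def normalized_requirements(niche_pattern: str) -> dict[str, float]:
    """Return UA-category weights normalized to sum to 1.0,
    as the agent's output contract requires."""
    weights = HEURISTIC_LIBRARY[niche_pattern]
    total = sum(weights.values())
    return {category: w / total for category, w in weights.items()}
```

Normalizing defensively keeps the downstream alignment score bounded at 1.00 even if a heuristic entry is edited carelessly.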
4. UA-Market Fit Scorer
Compares verified UAs (from agent 2) against niche requirements (from agent 3). Produces final alignment score and gap analysis.
Scoring logic (pseudocode):

```text
for each niche requirement category (with weight W):
    find verified UAs matching that category
    if match: contribution = W × verification_strength
              (VERIFIED = 1.0, PARTIALLY_VERIFIED = 0.5, others = 0)
    else:     contribution = 0, category marked as GAP
sum contributions → overall UA-Niche alignment score [0.00-1.00]
```
Output:
- Overall alignment score
- Per-UA match breakdown (which niche requirement it satisfies)
- Gap list (unsatisfied requirements, ordered by weight)
- Entry verdict per threshold:
- ≥ 0.80: GO — clear alignment
- 0.60-0.79: CONDITIONAL — enterable with specific gap closures
- < 0.60: NO-GO — alignment too weak for this niche
Model: Sonnet 4.5.
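The scoring and threshold logic above as a runnable sketch. Verification strengths and thresholds follow the text; the data shapes (category-keyed dicts) are assumptions:

```python
# Alignment scoring sketch: verified UAs vs weighted niche requirements.
STRENGTH = {"VERIFIED": 1.0, "PARTIALLY_VERIFIED": 0.5}  # others contribute 0

def score_alignment(requirements: dict[str, float],
                    verified_uas: dict[str, str]) -> tuple[float, list[str]]:
    """requirements: category -> weight (sums to 1.0).
    verified_uas: category -> verdict string from the UA Verifier."""
    score, gaps = 0.0, []
    for category, weight in requirements.items():
        strength = STRENGTH.get(verified_uas.get(category, ""), 0.0)
        if strength > 0:
            score += weight * strength
        else:
            gaps.append(category)  # unsatisfied requirement
    gaps.sort(key=lambda c: requirements[c], reverse=True)  # heaviest gap first
    return round(score, 2), gaps

def entry_verdict(score: float) -> str:
    if score >= 0.80:
        return "GO"
    if score >= 0.60:
        return "CONDITIONAL"
    return "NO-GO"
```

Run against the AI-esoterics weights above with only platform distribution VERIFIED, this yields a 0.20 score and three gaps ordered by weight, consistent with the report structure.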
M2 Judge
Same Judge infrastructure as M1 — GPT-4o via ADR-0019. Evaluates quality of UA verification, requirement assignment, and alignment scoring.
Shipped infrastructure, no new work.
Combined final report structure
Single deliverable with two parts:
Part 1 — Niche Analysis (M1, unchanged): 13 sections from existing M1 pipeline including executive summary, market, competitors, financial model, audience segments, funnels, content strategy, sales scripts, scorecards, go/no-go recommendation.
Part 2 — Implementation Audit (M2):
- Solution summary — how user described their proposal
- Niche moat requirements — what this specific niche rewards (weighted list with rationale)
- Your claimed Unfair Advantages — verification results (per-UA table with verdict + evidence assessment)
- UA-Niche alignment score — overall numeric result
- Gap analysis — unsatisfied niche requirements with weight impact
- Final verdict (Swiss tone — see example below)
- Gap closing recommendations (only if verdict ≠ GO) — specific UAs to develop, validate, or acquire
Example Swiss-tone verdict (negative):
UA-Niche alignment score: 0.20 / 1.00
Entry threshold: 0.60
Verdict: NO-GO
You can enter this niche IF AND ONLY IF you close the following gaps:
- Category “proprietary content library” (0.25 weight) — no verified UA. Your claimed tarot image library is UNVERIFIED (no evidence provided, no URL, no sample access).
- Category “tech differentiation” (0.15 weight) — no UA claimed at all. You have not articulated what is technically unique about your approach versus competitors Co-Star, Sanctuary, or The Pattern.
- Category “audience trust and brand” (0.40 weight) — your strongest claim (“12 years experience in occult community”) is UNVERIFIABLE. This is not disqualifying but cannot be used for scoring.
Your “Synergia RK distribution partnership” UA is VERIFIED and contributes to “platform distribution” category (0.20 weight). This alone provides 0.20 alignment.
Without the three gap closures above, alignment remains below entry threshold. We do not recommend proceeding.
If gaps are closed, expected alignment: 0.60-0.75 depending on strength of validation.
No softening. No “you’ve done great work here.” No consolation. The verdict IS the product.
Positioning implications
Brand identity shift
Before: “AI-powered niche analysis for entrepreneurs”
After: “Swiss-neutral implementation audit for market entry decisions. AI-operated. Independent source of truth.”
Competitive landscape shift
Before (M1-only competition):
- Perplexity Pro + manual synthesis (~$20/month)
- Individual consultants ($200-500/hour)
- Basic market research reports ($500-2000/report)
After (Combined M1+M2 competition):
- Big Four DD services (Deloitte, PwC, KPMG, EY) — $50K-500K, 1-4 weeks timeline
- Venture-scout platforms (mostly VC-facing, not founder-accessible)
- Private DD consultancies ($25K-100K, network-based access)
Synth Nova’s unique position: Big Four-grade structured output at AI cost structure. $5-20K per audit. 45-60 minutes runtime. Self-serve after intake.
Target customer shift
Primary (new):
- Founders with specific solution in mind, seeking validation
- Angel investors evaluating one deal at a time
- Small VC funds lacking in-house DD capability
- Corporate innovation teams validating new-market entry
- Accelerator programs pre-screening cohort applicants
Secondary (existing):
- Individual founders exploring niches — M1 standalone serves them
Pricing tier proposal
| Tier | Product | Price | Target |
|---|---|---|---|
| Basic | M1 standalone | $500-1500 per analysis | Individual founders |
| Combined | M1 + M2 audit | $5000-20000 per audit | Angels, small VCs, serious founders |
| Enterprise | Combined + custom | $25000+ negotiated | Corporate innovation, accelerator cohorts |
Combined pricing represents roughly a 1000x-4000x multiple over compute cost (~$5 per run all-in) — value-based pricing typical of expert-level DD services, where the benchmark is the consultant alternative, not the compute bill.
Fundraise narrative implications
Evolved Monopreneur story
Before: “Solo founder builds vertically integrated agency with AI workforce”
After: “Solo founder builds the first Swiss-neutral, AI-operated due diligence firm. Output quality approaches Big Four DD. Cost structure is ~100x lower. Human overhead is 1 person.”
This is a cleaner, more investable positioning than vertically-integrated-agency. DaaS (Due diligence as a Service) is a recognized market with clear willingness-to-pay benchmarks.
Moat depth vs Chamber-alone positioning
Chamber (3 LLM panel + arbiter) is rapidly commoditizing — Manus, Dify, CrewAI can replicate in 3-6 months.
Combined M1+M2 requires:
- M1 high-quality niche analysis (Judge 8.0+ consistently — R20-R27 iteration work, not trivially replicable)
- UA verification infrastructure (patent APIs per jurisdiction, partnership verification scraping, data uniqueness evaluation)
- Niche moat requirement heuristics (domain knowledge about what each market rewards)
- Swiss-neutral editorial voice calibrated across niches
- Pipeline Memory + Cross-Model Validation + Learning Loops infrastructure (already shipped)
Equivalent build time estimate: 12-24 months. This is the actual moat.
Consequences
Positive
- Unique differentiated positioning (“Swiss-neutral AI DD”) in a large market with clear pricing benchmarks
- Higher willingness-to-pay per audit ($5,000-20,000 for Combined vs $500-1,500 for M1 alone)
- Cleaner fundraise story (DaaS market, not agency with multiple product lines)
- Aligned with Monopreneur Principle (still solo-operable)
- M1 R27 validation provides real quality basis for the claim
- Forces articulation of what each niche rewards — a genuine research contribution
Negative / tradeoffs
- Significant build work — 4 new agents + UA verification infrastructure + heuristic library for niche moats
- Increased user input friction (UA claims with evidence required before value delivery)
- Negative verdicts may hurt commercial feedback short-term (“you told me NO, I want refund”)
- UA verification depends on third-party data (APIs, public registries) — ongoing maintenance
- Brand positioning shift requires new marketing (landing, copy, case studies, example outputs)
- Monetization decision needed: does user pay same price for NO verdict as for GO?
Neutral
- Dependencies on Pipeline Memory + Cross-Model Validation + Learning Loops — all shipped, low integration risk
- Existing M1 code remains valuable with no rewrite needed
- Chamber (ADR-0014-0016) still useful for high-criticality internal decisions within M2 agents (per ADR-0016 — Go/No-Go verdicts are L3)
Status: proposed
Acceptance gates
- Denis reviews this draft and confirms it matches his vision
- Denis resolves the Open Questions below
- ADR-0013 M2 scope explicitly marked as superseded
- Pricing tier structure directionally approved
Implementation sprint plan (after acceptance)
Sprint 1 (1 week): M2 Intake
- Structured form OR conversational intake agent for UA claims
- Evidence upload handling (URLs, files)
- Storage model for UA claims + M1 context linkage
Sprint 2 (2 weeks): Unfair Advantage Verifier
- Verification strategies per UA category
- Patent search integration (USPTO at minimum — triggers ADR-0011 Integration Triage)
- Partnership/distribution verification via web search + LinkedIn scraping
- Data claim uniqueness evaluation
- Verdict classification logic + tests
Sprint 3 (1 week): Niche Moat Requirements Agent
- Heuristic library for common niche types
- Pipeline Memory integration for past-niche learnings
- Weight assignment logic
Sprint 4 (1 week): UA-Market Fit Scorer + M2 Director synthesis
- Alignment scoring algorithm
- Gap analysis logic
- Swiss-tone final report generation
- Combined M1+M2 report rendering
Sprint 5 (1 week): Integration + end-to-end testing
- M1 → M2 flow orchestration
- Streamlit UI for M2 input + output display
- Testing on 3+ distinct niches from past M1 runs
Total: ~6 weeks of focused development across five shippable sprints.
Alternatives considered
Alt 1 — Keep M2 as ADR-0013 team assessment. Rejected. Team competency from CVs is unreliable signal. Cannot be validated by LLM. Produces fake-confidence output. Denis explicitly rejected this path.
Alt 2 — M2 as standalone product without M1 integration. Rejected. UA verification is only meaningful against niche moat requirements. M1 provides those. Separate product loses the combined moat.
Alt 3 — M2 includes both team AND UA verification. Rejected. Team assessment reintroduces the reliability problem. Better to explicitly ignore team and focus on what’s verifiable.
Alt 4 — Softer-tone verdicts (empathetic). Rejected. Swiss neutrality IS the brand. Softening dilutes positioning. If users want empathy, they can hire a coach — not a DD firm.
Alt 5 — Delay until 3 external testers validate M1. Considered. External testers remain a gate before scaling, but M2 design work can happen in parallel. No reason to gate design on testers — only implementation.
Alt 6 — Include budget/timeline as informational, not scoring input. Rejected. If we collect it, we look like we use it. Cleaner to explicitly exclude.
Open questions — resolved 2026-04-20

- Q1: Does this ADR fully supersede ADR-0013?
  Decision: YES — full supersede. The team-competency assessment approach is abandoned. ADR-0013 is marked “superseded in scope by ADR-0025”. No coexistence — the single M2 concept going forward is the Unfair Advantage audit.
- Q2: Is $1-3K enough to build case studies?
- Q3: Budget model for UA verification APIs (patent API, partnership data)?
  Decision: PARKED — requires a separate business decision. Implementation gate: the budget model must be defined before Sprint 2 (UA Verifier) begins.
- Q4: Does M2 Final Verdict synthesis trigger Chamber (ADR-0014-0016)?
  Decision: YES. Go/No-Go verdicts are L3 criticality per the ADR-0016 CriticalityPolicy. Chamber auto-fires on L3. Consistent with existing policy.
- Q5: Niches where moat requirements are unclear (very new markets) — fallback strategy?
  Decision: LLM best-guess with a low-confidence flag. When the heuristic library has no match, the Niche Moat Requirements Agent reasons from first principles and marks overall confidence ≤ 0.5. Surfacing unknown-niche risk to the user is acceptable; hard-failing is not.
- Q6: First external test customer for Combined M1+M2?
  Decision: PARKED — pilot identification needed. Implementation gate: the first pilot customer must be identified before Sprint 1 ends.
- Q7: Legal disclaimer required?
  Decision: YES. Standard DD disclaimer applies: “This is not legal or financial advice. Decisions are at the user’s own risk. Verdicts are based on AI analysis of public data at the time of audit. Past performance is not indicative of future outcomes.” Exact wording to be drafted in a separate legal template document.
- Q8: Evidence upload file size/format limits?
  Decision: 10MB per file. Accepted formats: PDF, DOCX, DOC, MD, TXT. URLs are also accepted for web-referencable evidence.
- Q9: M2 input UX — structured form vs conversational agent?
  Decision: Structured form for the Sprint 1 MVP. Conversational upgrade (similar to the Agent-Intake pattern) in a later iteration, after learning from the first 10-20 audits.
- Q10: Can M2 be re-run on the same M1 with updated UA claims?
  Decision: YES. Each re-run is a new artifact; old artifacts are retained. Common case: the user closes gaps identified by the first M2 and re-submits with new evidence.
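The Q8 upload limits can be enforced with a small intake check. A minimal sketch — the function name and violation messages are illustrative, only the limits come from the decision above:

```python
# Enforce Q8 evidence-upload limits: 10MB per file; PDF/DOCX/DOC/MD/TXT only.
from pathlib import Path

MAX_BYTES = 10 * 1024 * 1024
ALLOWED_EXTENSIONS = {".pdf", ".docx", ".doc", ".md", ".txt"}

def validate_evidence_file(filename: str, size_bytes: int) -> list[str]:
    """Return a list of violations; an empty list means the upload is accepted."""
    problems = []
    if Path(filename).suffix.lower() not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported format: {filename}")
    if size_bytes > MAX_BYTES:
        problems.append(f"file exceeds 10MB limit: {size_bytes} bytes")
    return problems
```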
Remaining parked items (re-surfacing later)
- Q3: UA verification API budget model — revisit before Sprint 2
- Q6: First external test customer — revisit before Sprint 1 completion
- Q7: Legal disclaimer exact wording — requires legal template doc (separate deliverable)
References
- ADR-0013 — M2 Team Implementation Navigator — superseded in scope by ADR-0025 (team-competency assessment abandoned; the M2 concept is redefined as the Unfair Advantage audit)
- ADR-0018 — M1 RAG Memory Layer — M2 leverages same memory infrastructure
- ADR-0019 — Cross-Model Validation — M2 Judge reuses GPT-4o
- ADR-0022 — Learning Loops — outcome labeling extends to M2 verdicts
- ADR-0023 — Monopreneur Principle — design constraint maintained
- ADR-0016 — Chamber v2 Vision — L3 criticality applies to M2 Final Verdict
- Ash Maurya — Lean Canvas / Unfair Advantage concept
Denis-confirmed direction (2026-04-20 session)
Denis reviewed this ADR draft and confirmed the following:
- Swiss neutrality tone is the brand voice. Negative verdicts must not include softening language, consolation, or emotional support. Verdicts are the product. Synth Nova = independent source of truth.
- Unfair Advantages (not team/budget/timeline) are the correct focus. Team competency from CVs cannot be reliably validated by LLM. Budgets are user-declared and lie-prone. UAs (patents, partnerships, proprietary data, distribution, regulatory) are verifiable through public data and defensible.
- Combined M1+M2 becomes the primary product. Standalone M1 remains available for users wanting research only. M2 alone does not exist — it requires M1 niche context.
- Pricing tier structure accepted directionally: $5,000-20,000 (Combined), $25,000+ (Enterprise). Validate with the first 3 external customers.
- Positioning shift approved: “AI agency” → “Swiss-neutral AI-operated due diligence firm.” Independent source of truth. Big Four-grade output at AI cost structure.
- ADR-0013 fully superseded in scope. Team assessment approach abandoned, not kept as an alternative path.
Next actions
- Immediate: Denis reviews this draft. Either approves direction or proposes edits.
- If approved: Convert status to “accepted”. Add row to Decision-Log. Update ADR-0013 header with “superseded in scope by ADR-0025.”
- Resolve the Open Questions (some in this session, others later).
- Update Active-Plan.md to reference Combined M1+M2 as next major sprint target (after current R27 validation stabilizes and after at least 2 more clean M1 runs on different niches).
- Begin Sprint 1 (M2 Intake) when all acceptance gates closed.