Module M2: Team Implementation Navigator

One-liner: Can this team implement this project in this niche?

Governance: Operates under Constitution. All 10 Laws apply, with Law 3 (Reputation Over Speed) particularly relevant: the reputation of assessed team members is protected by the Evidence Verification Policy.

Product Vision

M2 complements M1 (Investment Navigator) with the second critical question of investment evaluation:

  • M1 answers: “Is this niche worth entering?” (market, competition, financials)
  • M2 answers: “Can this team pull it off?” (capabilities, credibility, fit)
  • Combined M1+M2 answers: “Is this team + this niche a good bet?” — the complete investment picture

Target users

  • Founders — self-assessment before pitching to investors (“how do I look in an investor’s eyes”)
  • Angel investors — due diligence on pitched teams
  • VCs — screening large deal flows
  • Combined use — founder uploads team info alongside niche query, gets integrated fit assessment

Differentiation

  • Perplexity / ChatGPT: answer questions, no structured team assessment
  • Consulting due diligence: ~$50K, 2-4 weeks
  • Existing team assessment SaaS: checklist-based, no verification layer
  • M2: evidence-based verification + structured scorecards + integration with niche analysis (when combined with M1), in minutes, for cents

Core Philosophy: Evidence-Based Team Assessment

Principle: Trust but Verify.

We do not assess teams based on self-description. We verify claims against public sources.

Claim types handled:

  • Professional background (employment, roles, duration)
  • Educational background
  • Previous ventures (founded/joined, outcomes)
  • Social reach (followers, engagement, audience quality)
  • Technical output (GitHub activity, open source contributions)
  • Public recognition (press, speaking, awards)

Delta-based discrepancy detection (from founder consultation; see the sketch after this list):

  • <10% delta from claim → Normal variance, no flag
  • 10-30% delta → Yellow flag, requires clarification in report
  • >30% delta → Red flag, significant discrepancy noted (never called a “lie”)
  • Unverifiable → Grey flag, “not verifiable from public sources” (not a penalty)
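
A minimal sketch of this mapping in Python, assuming the claim and the publicly observed value are comparable numbers (e.g. a follower count); the function name and flag labels are illustrative, not a fixed M2 interface:

```python
from typing import Optional

def discrepancy_flag(claimed: float, observed: Optional[float]) -> str:
    """Map the relative delta between a claim and public evidence to a flag."""
    if observed is None or claimed == 0:
        return "grey"    # not verifiable / no baseline: not a penalty
    delta = abs(claimed - observed) / abs(claimed)
    if delta < 0.10:
        return "none"    # normal variance, no flag
    if delta <= 0.30:
        return "yellow"  # requires clarification in the report
    return "red"         # significant discrepancy noted, never called a "lie"
```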

Reputation safeguard (Constitution Law 3): Report language always neutral. “Discrepancy detected” not “founder lied.” Tone is diagnostic, not accusatory.

Consent Model

v1 operates on explicit consent only.

Team members provide:

  • Public profile URLs they authorize us to analyze (LinkedIn, GitHub, Crunchbase, social media)
  • Optional: PDF exports (LinkedIn profile export, CV) for richer analysis
  • Explicit checkbox: “I authorize Synth Nova to analyze my public profiles for team assessment purposes”

We do NOT:

  • Scrape profiles without consent
  • Access private/locked accounts
  • Infer team composition from indirect sources
  • Store personal data beyond the assessment session (see Retention policy)

Rationale (legal + ethical):

  • GDPR Article 6 requires lawful basis — consent is cleanest
  • Russian 152-ФЗ and UAE PDPL align
  • This is a self-assessment tool, not a doxxing service
  • Teams WANT to be verified — it enhances credibility with investors
  • Constitution Law 5 (Human Veto) and Law 9 (Default to Safety) forbid acting without consent in uncertain situations

Data Source Hierarchy (v1 scope)

Tier 1 (primary sources)

  • LinkedIn — employment history, education, endorsements, connections (via user-provided public URL or PDF export; LinkedIn API too restrictive for scraping)
  • GitHub — for technical founders: commit history, repos, stars, languages, collaboration patterns (free GitHub API, generous limits; see the sketch after this list)
  • Crunchbase — track record: founded companies, funding rounds, exits (paid API at $299/mo for the Basic tier, or a limited free tier)
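
A minimal sketch of a per-platform adapter, using GitHub’s public REST API (the only Tier 1 source with a freely callable API); the metric selection and function name are assumptions, not the final adapter interface:

```python
import requests

def github_public_signals(username: str) -> dict:
    """Summarize a user's public repos: counts, star total, languages."""
    resp = requests.get(
        f"https://api.github.com/users/{username}/repos",
        params={"per_page": 100, "type": "owner"},
        timeout=10,
    )
    resp.raise_for_status()
    repos = resp.json()
    own = [r for r in repos if not r["fork"]]  # original work, not forks
    return {
        "public_repos": len(repos),
        "original_repos": len(own),
        "total_stars": sum(r["stargazers_count"] for r in own),
        "languages": sorted({r["language"] for r in own if r["language"]}),
    }
```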

Tier 2 (v1.1, Russian market)

  • Telegram — channel statistics via TGStat API (free tier available) — critical for D2C/courses niches in the Russian market

Tier 3 (future: v2+)

  • Instagram / TikTok / YouTube — engagement metrics, audience authenticity
    • Requires official Business API partnerships
    • User-provided insights screenshots as interim
  • AngelList / ProductHunt — community track record
  • Public press mentions — via web search with verification
  • Patent / publication records — for technical/scientific founders

Report Structure (10 sections)

  1. Team Snapshot — who is in the team, roles, summary bio (from provided profiles)
  2. Background Verification — verified employment/education/ventures with evidence links and confidence scores
  3. Red Flags / Discrepancies — any deltas >10%, unverifiable claims, suspicious patterns (inflated follower counts, gaps in history, conflicting data)
  4. Digital Presence — reach and engagement per member per platform, with authenticity indicators (engagement rate, comment quality, follower quality)
  5. Track Record — previous ventures: founded, joined, outcomes (exits, failures, current status)
  6. Domain Fit — (only in Combined M1+M2 mode) how team background/skills align with target niche requirements
  7. Skill Coverage Matrix — what technical/business skills are represented, what’s missing (hiring gaps)
  8. Visual Scorecards — Execution Capability, Credibility, Coverage, Domain Fit (0-10 each), Overall Implementation Score
  9. Gaps & Hiring Recommendations — prioritized list of roles to hire first, with rationale
  10. Overall Verdict — GO / GO WITH CONDITIONS / CAUTION / NO-GO with reasoning

Same rendering quality as M1: Unicode visual bars for scorecards, proper markdown tables for matrix, no JSON dumps.
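
For illustration, a minimal sketch of the Unicode bar rendering, assuming the same 0-10 scale as the scorecards in section 8; the glyphs and bar width are assumptions about M1’s renderer:

```python
def score_bar(score: float, width: int = 10) -> str:
    """Render a 0-10 score as a Unicode bar, e.g. 7.0 -> '███████░░░ 7.0/10'."""
    filled = round(max(0.0, min(10.0, score)) / 10 * width)
    return "█" * filled + "░" * (width - filled) + f" {score:.1f}/10"
```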

Under the Hood (Architecture Plan)

Reuse where possible from M1:

  • CEO agent — routes team assessment queries
  • Director — decomposes assessment into verification tasks per platform/claim
  • python_calc — engagement rate calculations, delta percentages
  • Policy Engine — enforces Evidence Verification Policy
  • Judge — quality gate on M2 outputs
  • Compressor — for aggregation
  • Aggregate — builds final M2 report
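
As a sketch of the Director’s decomposition output, one possible shape for a per-claim verification task; every field and class name here is an assumption, not an existing M1 interface:

```python
from dataclasses import dataclass

@dataclass
class VerificationTask:
    member: str        # team member the claim belongs to
    claim_type: str    # "employment", "education", "social_reach", ...
    claim_text: str    # the self-described claim to check
    platform: str      # Tier 1/2 source to consult: "linkedin", "github", ...
    source_url: str    # consented public profile URL to analyze
```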

New agents required:

(Individual agent ADRs to be created during the implementation sprint, not now. The new agents named in the effort estimate below are ProfileHarvester with its per-platform adapters, ClaimVerifier, DigitalPresenceAnalyzer, and TrackRecordResearcher.)

Integration with M1

  • Standalone M2: “Assess this team” — no niche attached
  • Combined M1+M2: “Assess this niche + this team” — integrated fit assessment

Architecturally: Standalone M2 = subset of Combined. Implementing Combined gives Standalone for free (just skip M1 context passing).
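
A minimal sketch of that subset relationship, with the M1 niche context as an optional argument; all names are illustrative and the helpers are placeholders:

```python
from typing import Optional

def verify_member(member: dict) -> dict:
    return {"member": member["name"], "flags": []}  # placeholder pipeline

def domain_fit(team: list, niche: dict) -> float:
    return 0.0  # placeholder for M1-context-aware fit scoring (section 6)

def assess(team: list, niche_context: Optional[dict] = None) -> dict:
    """Standalone M2 is Combined with niche_context omitted."""
    report = {
        "mode": "combined" if niche_context else "standalone",
        "verification": [verify_member(m) for m in team],
    }
    if niche_context is not None:  # Combined M1+M2 mode only
        report["domain_fit"] = domain_fit(team, niche_context)
    return report
```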

Priority: Combined first. This is the killer feature and the moat. Standalone M2 is commodity — many competitors exist. M1+M2 Combined is unique.

Inputs

Required:

  • List of team members (names, primary roles)
  • Per member: consented profile URLs (LinkedIn, GitHub, Crunchbase, etc.) OR PDF exports
  • Explicit consent checkbox

Optional:

  • Target niche (triggers Combined M1+M2 mode with niche context from M1)
  • Self-described claims to verify (e.g., “we built X product that reached Y users”)
  • Preferred output language
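
A minimal sketch of validating these inputs before any analysis runs, assuming a simple dict-shaped request; all field names are illustrative, not the actual schema:

```python
def validate_request(req: dict) -> list:
    """Return blocking errors; an empty list means the request may proceed."""
    errors = []
    if not req.get("consent"):
        errors.append("explicit consent checkbox is required (Laws 5 and 9)")
    if not req.get("team"):
        errors.append("at least one team member is required")
    for member in req.get("team", []):
        if not (member.get("profile_urls") or member.get("pdf_exports")):
            name = member.get("name", "?")
            errors.append(f"{name}: consented profile URLs or PDF exports required")
    return errors
```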

Timeline Placeholder

Implementation sprint: after M1 is tested by 3+ external users and stabilized based on feedback.

Estimated effort:

  • ProfileHarvester + per-platform adapters: ~3-5 days
  • ClaimVerifier + discrepancy logic: ~2 days
  • DigitalPresenceAnalyzer (auth metrics): ~2 days
  • TrackRecordResearcher: ~2 days
  • M2 Director + Aggregate + Renderer: ~2 days
  • Integration with M1 (Combined mode): ~1-2 days
  • Testing + Judge calibration for M2: ~2 days

Total: ~2-3 weeks of work once the sprint begins.

Success Metrics (for M2 rollout)

  • Coverage: ≥ 8/10 sections covered on real team assessment
  • Cost: ≤ $2.00 per team assessment (smaller scope than M1)
  • Duration: ≤ 10 minutes (fewer LLM calls than M1 — less research, more verification)
  • Founder self-assessment score: ≥ 7/10 (founders try M2 on own teams, rate usefulness)
  • Combined M1+M2 coherence: niche assessment + team assessment → integrated verdict without contradictions

Cross-References