Evidence Verification Policy

Scope: All M2 (Team Implementation) agents. May also apply to M1 agents when handling founder-provided claims about their team.

Governance: Operates under Constitution. Enforces Law 3 (Reputation Over Speed), Law 6 (No Important Decision Without Trace), Law 7 (Verify Instead of Guessing).

Core Principle

Every claim about a team or team member must be either:

  1. Verified — corroborated by at least one independent public source
  2. Unverified — explicitly labeled as “not verifiable from public sources”

No claim is accepted at face value.

Consent Requirements

Before any verification activity:

  • Team member must provide explicit consent for analysis
  • Consent is per-platform (user can authorize LinkedIn but deny social media)
  • Consent is per-assessment-session (not stored beyond retention period)
  • Revocation is possible at any time before report generation

Without explicit consent: agent MUST refuse to verify (Constitution Law 5, Law 9).
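The per-platform, per-session consent model above can be sketched in code. This is an illustrative sketch, not a real API: the class name, fields, and `allows` method are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical per-session consent record (names are illustrative)."""
    session_id: str
    granted_platforms: set[str] = field(default_factory=set)
    revoked: bool = False

    def allows(self, platform: str) -> bool:
        # Verification is permitted only for a platform that was
        # explicitly granted, and only while consent is unrevoked.
        return not self.revoked and platform in self.granted_platforms

consent = ConsentRecord(session_id="s-001", granted_platforms={"linkedin"})
assert consent.allows("linkedin")
assert not consent.allows("twitter")   # never granted: agent must refuse

consent.revoked = True                 # revocation before report generation
assert not consent.allows("linkedin")  # now the agent must refuse entirely
```

Note that consent defaults to denied: a platform absent from the granted set is treated exactly like an explicit denial.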

Discrepancy Classification

When a claim and an observation differ:

  Delta          Classification     Action
  <10%           Normal variance    No flag; log but don’t report as an issue
  10-30%         Yellow flag        Report with neutral language: “Claim states X, observed Y, requires clarification”
  >30%           Red flag           Report prominently as a significant discrepancy; impacts overall score
  Unverifiable   Grey flag          Report as “Not verifiable from public sources” — not a penalty, just noted

Language rules:

  • NEVER “the founder lied”
  • NEVER “fake claim” or “false”
  • ALWAYS “discrepancy detected” or “observed value differs from stated”
  • ALWAYS provide the specific numbers and sources so user can judge

Constitution Law 3: the assessed team’s reputation is protected too. Our credibility depends on neutral framing.

Evidence Hierarchy

When verifying a claim, prefer sources in this order:

  1. Primary authoritative — official company pages, SEC/regulatory filings, published peer-reviewed papers
  2. Platform-native verified data — LinkedIn profile (user-provided), GitHub API, Crunchbase entries with founder-updated data
  3. Third-party aggregators — journalist articles, industry reports, verified news
  4. Social media / forums — only for engagement/reach metrics, not for fact claims
  5. Inference from patterns — used only as supporting evidence, never as primary verification
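The five-tier hierarchy can be encoded as a simple preference ordering. The tier keys below are shortened labels of my own; the ranking itself follows the list above.

```python
# Lower rank = more preferred source tier (labels are assumptions).
SOURCE_TIERS = {
    "primary_authoritative": 1,  # official pages, regulatory filings, papers
    "platform_native": 2,        # LinkedIn, GitHub API, Crunchbase
    "third_party": 3,            # journalism, industry reports, verified news
    "social_media": 4,           # engagement/reach metrics only
    "pattern_inference": 5,      # supporting evidence only, never primary
}

def best_source(available: list[str]) -> str:
    """Pick the highest-tier source available for verifying a claim."""
    return min(available, key=SOURCE_TIERS.__getitem__)

assert best_source(["social_media", "third_party"]) == "third_party"
```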

Handling Missing Data

When a claim cannot be verified:

  • Do NOT assume the claim is false
  • Do NOT use absence of evidence as evidence of absence
  • DO label as “not verifiable from public sources”
  • DO note the specific sources checked
  • DO rate overall credibility slightly lower, but not punitively

Example: “Claim: ‘Exited previous startup to Google in 2020.’ Not verifiable from Crunchbase, TechCrunch, or public news search. Note: may be NDA-protected exit; absence of public record ≠ false claim.”
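A grey-flag report entry like the example above could be built as follows. The field names and helper are hypothetical, chosen to show that the entry labels the claim neutrally and lists the sources checked.

```python
def unverified_entry(claim: str, sources_checked: list[str]) -> dict:
    """Build a neutral grey-flag entry: the claim is labeled, never
    judged, and the sources consulted are listed for transparency.
    Field names are illustrative, not a real schema."""
    return {
        "claim": claim,
        "status": "not verifiable from public sources",
        "sources_checked": sources_checked,
        "note": "absence of public record is not evidence the claim is false",
    }

entry = unverified_entry(
    "Exited previous startup to Google in 2020",
    ["Crunchbase", "TechCrunch", "public news search"],
)
assert entry["status"] == "not verifiable from public sources"
```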

Authenticity Metrics (Digital Presence)

When analyzing social media reach:

Red flags for inflated metrics:

  • Engagement rate <0.5% (authentic accounts typically run 1-5%)
  • Follower growth spikes without content events
  • Comment-to-like ratio <1:100 (too few comments relative to likes)
  • Follower geography mismatch with content language
  • High follower count with low view count on posts
  • Comments disabled on posts (a common way to mask inauthentic engagement)

Never automatically conclude fraud: flag the pattern, provide the evidence, and let the user decide.
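The metric-based red flags above can be computed mechanically. The 0.5% and 1:100 thresholds come from the policy; the low-view threshold (views under 1% of followers) is an assumption of mine, and the function only flags for human review, never concludes fraud.

```python
def engagement_flags(followers: int, likes: int,
                     comments: int, views: int) -> list[str]:
    """Return metric-based red flags for human review.
    Flag names and the view threshold are illustrative."""
    flags = []
    rate = (likes + comments) / followers if followers else 0.0
    if rate < 0.005:                           # below 0.5% engagement
        flags.append("low engagement rate")
    if likes and comments / likes < 0.01:      # under 1 comment per 100 likes
        flags.append("low comment-to-like ratio")
    if followers and views < followers * 0.01: # ASSUMED proxy threshold
        flags.append("low views relative to followers")
    return flags

assert engagement_flags(100_000, 300, 1, 500) == [
    "low engagement rate",
    "low comment-to-like ratio",
    "low views relative to followers",
]
```

A healthy profile (e.g. 1,000 followers, 30 likes, 5 comments, 200 views per post) produces no flags under these thresholds.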

Privacy and Data Handling

We do:

  • Analyze public data from consented sources
  • Store assessment results for session duration + retention period
  • Include data sources in reports for transparency

We do NOT:

  • Access private or locked accounts
  • Attempt to bypass platform privacy controls
  • Store personal data beyond assessment session (see Retention in ObservabilityContract)
  • Share data with third parties
  • Use data for purposes beyond the specific assessment

Retention (overrides default ObservabilityContract for M2)

  • Raw profile data (snapshots): 30 days, then deleted
  • Assessment reports: retained per ObservabilityContract (kept indefinitely in local storage and git; reviewed at Week 8 or when 10GB is reached)
  • Consent records: retained per legal requirement (GDPR Article 30 — as long as data is held)
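The retention schedule above could be captured as configuration so agents don't hard-code periods. The keys and string sentinels below are assumptions, not a real schema.

```python
# Illustrative retention configuration (keys/values are assumptions).
RETENTION = {
    "raw_profile_snapshots_days": 30,  # hard limit, then deleted
    "assessment_reports": "per ObservabilityContract",
    "consent_records": "while data is held (GDPR Article 30)",
}

assert RETENTION["raw_profile_snapshots_days"] == 30
```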

Agent Compliance

Every M2 agent MUST:

  1. Read this Policy before processing any team claim
  2. Verify consent exists before accessing any source
  3. Apply discrepancy classification to every claim-observation pair
  4. Use neutral language in all output
  5. Log evidence trail for every verified/unverified claim
  6. Escalate to human if consent scope unclear

Cross-References