πŸŽ“ MEDHAVY LLC β€” FOR DISCUSSION PURPOSES ONLY

WHITE LABEL ARCHITECTURE

Technical Specification, Pedagogical Framework & Business Model

How the Medhavy adaptive learning platform can be deployed under institutional branding for business schools, think tanks, curriculum organizations, K-12 networks, and professional education programs.

"You sell it. We build it. Learners benefit from it."

Version 1.0 | March 2026

Medhavy LLC, in association with Bear Brown, LLC and Humanitarians AI

Contact: bear@bearbrown.co | bear@humanitarians.ai

PART I: WHAT WHITE LABEL MEANS IN PLAIN TERMS

🎯 The One-Paragraph Version

White labeling means an institution β€” a business school, think tank, curriculum organization, or private K-12 network β€” presents the Medhavy platform to their clients under their own brand. Their logo. Their colors. Their domain. Their name on the certificate. The adaptive engine, the AI tutor, the expert-vetted content pipeline, and the pedagogical testing architecture are invisible infrastructure. The institution owns the client relationship. We own the technology. Students get a sophisticated adaptive learning tool. The institution looks like an AI thought leader. We get recurring licensing revenue.

Institutional Client Types

| Institution Type | Primary Use Case | Typical Audience | Content Focus |
|---|---|---|---|
| Business School Executive Education | Corporate AI literacy, leadership upskilling | C-suite, managers, HR leaders | AI strategy, analytics, organizational transformation |
| Think Tank / Policy Organization | Researcher training, evidence synthesis | Fellows, researchers, policy analysts | Research methodology, AI policy, data literacy |
| Curriculum Organization | Educator professional development | Teachers, instructional designers | AI pedagogy, curriculum frameworks, classroom integration |
| Private K-12 Network | Student AI literacy, teacher upskilling | Students (secondary), faculty | AI fundamentals, ethics, creative applications |
| Professional Association | Member credentialing, continuing education | Licensed professionals, members | Domain-specific AI applications, responsible use |

βœ… What the Institution Gets

  • Their own branded AI learning platform β€” no "powered by Medhavy" unless they want it
  • Custom domain β€” e.g., learn.[institution].edu
  • Logo, colors, persona name β€” fully theirs
  • Content tailored to their audience
  • A platform deployable in days, not months
  • Cohort analytics dashboard β€” their data, their reporting
  • A credential with actual learning evidence behind it

πŸ”§ What We Retain

  • Platform architecture β€” Next.js, Orama, OpenAI pipeline
  • Bandit-based adaptive engine β€” the research instrument
  • Content pipeline tools β€” Bookie, Popper, Figure Architect
  • AI persona system β€” swappable per deployment
  • Anonymized aggregated learning data β€” for research
  • All platform IP and code
  • Right to deploy to other institutions

πŸ“‹ What Changes Per Deployment

⏱️ Deployment Speed

PART II: THE FIVE LEARNING APPROACHES

ℹ️ How the Bandit Architecture Uses These

The multi-armed bandit engine continuously tests which of these five pedagogical approaches is most effective for a specific learner at a specific point in their learning journey. It selects the approach most likely to work, delivers it, observes whether learning occurred, and updates its model. When an approach stops working for a learner, it tries another. Over thousands of learners, this generates empirical evidence about which approaches work for which people in which contexts β€” turning the platform into a research instrument as well as a teaching tool.
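
The select / observe / update loop just described can be sketched as a simple epsilon-greedy bandit. This is an illustrative stand-in only β€” the production engine's actual algorithm and API are not specified in this document, and every name below is invented for the example.

```typescript
// Illustrative epsilon-greedy bandit over the five pedagogical
// approaches. All names are invented for this sketch.
type Approach =
  | "direct-instruction"
  | "socratic"
  | "case-based"
  | "spaced-retrieval"
  | "project-based";

interface ArmStats { pulls: number; rewards: number }

class PedagogyBandit {
  private stats = new Map<Approach, ArmStats>();

  constructor(private approaches: Approach[], private epsilon = 0.1) {
    for (const a of approaches) this.stats.set(a, { pulls: 0, rewards: 0 });
  }

  // Pick an approach: explore with probability epsilon, otherwise
  // exploit the best empirical success rate. Untried arms score
  // Infinity so every approach is attempted at least once.
  select(rand: () => number = Math.random): Approach {
    if (rand() < this.epsilon) {
      return this.approaches[Math.floor(rand() * this.approaches.length)];
    }
    let best = this.approaches[0];
    let bestMean = -Infinity;
    for (const a of this.approaches) {
      const s = this.stats.get(a)!;
      const mean = s.pulls === 0 ? Infinity : s.rewards / s.pulls;
      if (mean > bestMean) { bestMean = mean; best = a; }
    }
    return best;
  }

  // Record whether observable learning followed the chosen approach.
  update(approach: Approach, learned: boolean): void {
    const s = this.stats.get(approach)!;
    s.pulls += 1;
    if (learned) s.rewards += 1;
  }
}
```

In practice the reward signal would come from the interaction analytics described in Part III; here it is reduced to a boolean for clarity.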

The Five Approaches

1. Direct Instruction
Explicit Β· Structured Β· Sequential

What it is: Clear, structured explanation of concepts with explicit learning objectives, worked examples, and step-by-step guidance. The AI presents information directly rather than prompting the learner to discover it.

When the bandit chooses it: Early in a topic, when prior knowledge assessment suggests foundational gaps, when interaction patterns show confusion rather than engagement.

Platform implementation: System prompt instructs the persona to lead with definitions, provide sequenced explanations, use numbered steps, and confirm understanding before proceeding. Search pipeline prioritizes foundational content sections.

Content requirement: Textbook must include explicit definition blocks, worked examples, and clear section sequencing. Assertion density should be high to ground each instructional step.

Ideal for: New concepts, technical foundations, compliance training, credentialing programs where accuracy is non-negotiable.

2. Socratic Questioning
Dialogic Β· Inquiry-Based Β· Reflective

What it is: Rather than providing answers, the AI asks questions that guide the learner toward understanding through their own reasoning. The persona challenges assumptions, requests justification, and probes the depth of stated knowledge.

When the bandit chooses it: When the learner demonstrates surface-level confidence, when responses suggest pattern-matching without understanding, when the learner is mid-topic rather than beginning.

Platform implementation: SOCRIT tool informs the system prompt. The persona is instructed to respond to answers with follow-up questions, resist providing direct answers until the learner has reasoned aloud, and flag reasoning gaps rather than filling them. Powered by the Paul-Elder critical thinking framework.

Content requirement: Textbook must include discussion questions, case studies with open-ended elements, and scenarios that reward reasoning over recall. The AI needs retrievable "question anchors" in the source material.

Ideal for: Think tanks, policy training, graduate-level programs, leadership development, any context where analytical depth matters more than factual recall.

3. Case-Based Learning
Applied Β· Contextual Β· Decision-Focused

What it is: Learning is anchored in realistic scenarios or cases that mirror the professional contexts learners will actually face. The AI presents or references a case, then guides analysis through structured questions about what happened, why, what should have been done differently, and what principles apply.

When the bandit chooses it: When learner interaction patterns suggest stronger engagement with concrete examples than abstract explanations, when the institutional context is professional/executive, when the topic has high real-world stakes.

Platform implementation: CAZE tool informs case construction. The persona is prompted to anchor all responses to the case context, resist abstracting prematurely, and push the learner to apply concepts to the case before generalizing. The search pipeline prioritizes case study sections and real-world application examples.

Content requirement: Textbook must include domain-specific case studies with sufficient factual richness to support multi-turn analysis. Cases must be "assertion-dense" β€” grounded in verifiable claims the AI can cite. For business schools this means business cases; for policy organizations, policy scenarios; for K-12, age-appropriate applied scenarios.

Ideal for: Business school executive education, medical training, legal education, policy analysis, any program where professional judgment is the core outcome.

4. Spaced Retrieval Practice
Memory-Consolidating Β· Interval-Based Β· Cumulative

What it is: Deliberately returning to previously covered material at increasing intervals to strengthen long-term retention. The AI surfaces earlier concepts at strategic points in the learning sequence, prompting recall before re-explanation.

When the bandit chooses it: When session data shows that learners are completing chapters without retaining earlier material, when the topic is cumulative (each concept depends on prior ones), when the program spans multiple sessions or weeks.

Platform implementation: The analytics pipeline tracks chapter completion and interaction history per learner. At configurable intervals, the persona is prompted to test recall of earlier material before introducing new content. Mouse trace and dwell time data inform which sections need reinforcement.

Content requirement: Content must be structured with explicit dependencies β€” module 3 should reference module 1 concepts so the AI can surface retrieval prompts naturally. The search pipeline must support cross-chapter retrieval, not just within-chapter.

Ideal for: Credentialing programs, multi-week professional development, any curriculum where foundational concepts must be mastered before advanced ones, language learning, technical skills training.
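
As a toy illustration of the "increasing intervals" idea, a scheduler might double the review gap after each successful recall and reset it after a miss. The platform's actual scheduling policy is configurable and not specified here; the numbers below are illustrative only.

```typescript
// Toy expanding-interval scheduler: a successful recall doubles the
// gap before the concept resurfaces (capped), a miss resets it to one
// day. Interval values are illustrative, not platform defaults.
function nextReviewGapDays(previousGapDays: number, recalled: boolean): number {
  if (!recalled) return 1;                    // missed: resurface tomorrow
  return Math.min(previousGapDays * 2, 60);   // recalled: double, cap at 60 days
}
```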

5. Project-Based / Generative Learning
Creative Β· Applied Β· Self-Directed

What it is: The learner produces something β€” an analysis, a plan, a critique, a design β€” using the knowledge from the course. The AI acts as a coach and reviewer rather than a tutor, giving feedback on the learner's output rather than teaching concepts directly.

When the bandit chooses it: When direct instruction and Socratic approaches have established foundational knowledge, when the learner is in the final stage of a module, when the institutional context values applied skill over theoretical knowledge.

Platform implementation: The persona is shifted from "tutor" mode to "coach" mode. System prompt instructs the AI to respond to learner outputs with structured feedback: what works, what's missing, what the course material says about the gap, what to try next. CRITIQ peer review protocol informs feedback structure.

Content requirement: Textbook must include project prompts, rubrics or success criteria, and reference material the learner can consult while working. The AI needs content to ground its feedback β€” it cannot evaluate a business plan without verified frameworks to reference.

Ideal for: Executive education capstone projects, curriculum design programs, think tank policy memos, private school applied learning programs, any context where the credential is based on demonstrated output rather than test scores.

Pedagogy Requirements by Institution Type

| Institution Type | Primary Approach | Secondary Approach | Content Requirement |
|---|---|---|---|
| Business School Executive Ed | Case-Based | Project-Based | Business cases, ROI frameworks, decision scenarios |
| Think Tank / Policy Org | Socratic | Case-Based | Policy scenarios, debate prompts, evidence frameworks |
| Curriculum Organization | Direct Instruction | Project-Based | Framework documentation, lesson plan templates, rubrics |
| Private K-12 Network | Spaced Retrieval | Project-Based | Cumulative modules, age-appropriate cases, creative prompts |
| Professional Association | Direct Instruction | Spaced Retrieval | Standards-based content, compliance requirements, credentialing rubrics |

PART III: TECHNICAL ARCHITECTURE

Current Platform Stack

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚                     MEDHAVY HUB (hub.medhavy.com)                    β”‚
β”‚  Authentication Β· Registry Β· Analytics Β· Access Control Β· Dashboard  β”‚
β”‚  Stack: Next.js Β· Clerk Β· JWT (24hr sessions)                        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                        β”‚
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β–Ό             β–Ό              β–Ό
   [deployment-A]  [deployment-B]  [deployment-N]
   (Live: Science)  (Live: Physics)  (White Label Slot)
          β”‚              β”‚              β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚         AI CHAT PIPELINE (Per Deployment)    β”‚
          β”‚  Stage 1: /api/analyze-context               β”‚
          β”‚    β†’ question + page context + history       β”‚
          β”‚    β†’ keywords + tags + follow-ups + plan     β”‚
          β”‚    β†’ active pedagogy mode injected here      β”‚
          β”‚  Stage 2: /api/chat                          β”‚
          β”‚    β†’ Orama semantic search (isolated index)  β”‚
          β”‚    β†’ Sources + pedagogy prompt injected      β”‚
          β”‚    β†’ SSE stream: sources β†’ tokens β†’ done     β”‚
          β”‚  Rule: Not in textbook β†’ "Not in textbook."  β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚    BANDIT LAYER (Pedagogical Selection)      β”‚
          β”‚  Tracks: approach β†’ outcome β†’ update         β”‚
          β”‚  Selects from: Direct Β· Socratic Β· Case Β·   β”‚
          β”‚    Retrieval Β· Project                       β”‚
          β”‚  Updates: per learner, per session, per topicβ”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚    PERSONA LAYER (Swappable Per Client)      β”‚
          β”‚  Tone Β· Depth Β· Examples Β· Disclaimer        β”‚
          β”‚  Stored in tenant registry, not code         β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Multi-Tenant Registry β€” The Core Architecture

Each institutional client is a Clerk Organization (B2B pattern). Learners see only their institution's content. All API routes filter by orgId. Orama indexes are fully isolated per client.

// tenants.json β€” Minimum viable fix for TD-001
// Maps institutional domain β†’ content + credentials + persona + pedagogy

{
  "[client-a].medhavy.com": {
    "orgId":          "org_clienta_2026",
    "textbooks":      ["ai-literacy-executive-v1"],
    "oramaIndex":     "[ENV: ORAMA_INDEX_CLIENTA]",
    "oramaKey":       "[ENV: ORAMA_KEY_CLIENTA]",
    "persona":        "corporate-strategy-advisor",
    "pedagogy":       ["case-based", "project-based"],
    "badgeIssuer":    "[Institution Name]",
    "textbookOnly":   true
  },
  "[client-b].medhavy.com": {
    "orgId":          "org_clientb_2026",
    "textbooks":      ["ai-policy-research-v1"],
    "oramaIndex":     "[ENV: ORAMA_INDEX_CLIENTB]",
    "oramaKey":       "[ENV: ORAMA_KEY_CLIENTB]",
    "persona":        "policy-research-advisor",
    "pedagogy":       ["socratic", "case-based"],
    "badgeIssuer":    "[Institution Name]",
    "textbookOnly":   true
  }
}
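
A resolver keyed on the incoming Host header might look like the following sketch. It mirrors the registry shape above; field names follow the example, and the real registry schema, client domain, and badge issuer value are placeholders.

```typescript
// Sketch: resolve an incoming hostname to its tenant record, mirroring
// the tenants.json shape above. Unknown hosts resolve to null so API
// routes can reject the request before touching any Orama index.
interface TenantConfig {
  orgId: string;
  textbooks: string[];
  persona: string;
  pedagogy: string[];
  badgeIssuer: string;
  textbookOnly: boolean;
}

const tenants: Record<string, TenantConfig> = {
  "client-a.medhavy.com": {
    orgId: "org_clienta_2026",
    textbooks: ["ai-literacy-executive-v1"],
    persona: "corporate-strategy-advisor",
    pedagogy: ["case-based", "project-based"],
    badgeIssuer: "Example Institution", // placeholder value
    textbookOnly: true,
  },
};

function resolveTenant(hostHeader: string): TenantConfig | null {
  const hostname = hostHeader.split(":")[0].toLowerCase(); // strip port
  return tenants[hostname] ?? null;
}
```

In a Next.js deployment this lookup would typically run in middleware or at the top of each API route, before any orgId-filtered query executes.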

Theme Configuration Layer

// Client Theme Config β€” zero code changes per deployment
// CSS variables injected at root layout via next-themes + Tailwind

const clientTheme = {
  institutionName:  "[Institution Full Name]",
  platformName:     "[Institution] AI Learning Lab",
  primaryColor:     "[Brand Hex]",
  secondaryColor:   "[Brand Hex]",
  logoUrl:          "/logos/[client]-logo.png",
  faviconUrl:       "/favicons/[client]-favicon.ico",
  fontFamily:       "'Inter', sans-serif",
  domain:           "[client].medhavy.com",

  persona: {
    name:           "[Institution] AI Advisor",
    tone:           "[authoritative | empathetic | inquisitive | structured]",
    depth:          "[decision-support | conceptual | foundational | applied]",
    examples:       "[finance | policy | curriculum | clinical | general]",
    disclaimer:     "Responses sourced from [Institution]-verified materials only.",
    textbookOnly:   true
  },

  pedagogy: {
    active:         ["case-based", "project-based"],
    banditTest:      true,   // run bandit across all five if true
    defaultApproach: "direct-instruction"
  }
};
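
One way to realize the zero-code promise is to serialize the theme into CSS custom properties at the root layout; Tailwind utilities can then reference the variables and never need per-client stylesheets. A minimal sketch follows β€” the variable names are invented for the example.

```typescript
// Sketch: serialize tenant branding into CSS custom properties for
// injection at the root layout. Variable names are invented; Tailwind
// classes would reference them as e.g. `bg-[var(--brand-primary)]`.
interface ClientThemeVars {
  primaryColor: string;
  secondaryColor: string;
  fontFamily: string;
}

function themeToCssVars(theme: ClientThemeVars): string {
  return [
    `--brand-primary: ${theme.primaryColor};`,
    `--brand-secondary: ${theme.secondaryColor};`,
    `--brand-font: ${theme.fontFamily};`,
  ].join("\n");
}
```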

Analytics Architecture

// Sync-to-Database β€” bypasses 16KB Clerk ceiling (TD-008)
// Supabase webhook receiver triggered on all learner events

POST /api/analytics/ingest
{
  "orgId":        "org_clienta_2026",
  "userId":       "[anonymized-hash]",
  "event":        "ai_interaction",
  "pedagogy":     "case-based",           // which approach was active
  "chapter":      "module-3-application",
  "query":        "[learner question]",
  "response":     "grounded",             // or "not_in_textbook"
  "mouseTrace":   { "path": [...], "dwellMs": 4200 },
  "banditUpdate": { "approach": "case-based", "outcome": "positive" },
  "timestamp":    "2026-04-01T14:23:00Z"
}
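
Because the institution owns learner PII, the anonymized-hash userId should be derived before an event ever leaves the deployment. The salted-hash scheme below is an assumption of this example, not a documented platform detail; a per-tenant salt keeps hashes from being linkable across institutions.

```typescript
import { createHash } from "node:crypto";

// Sketch: derive the anonymized-hash userId with a per-tenant salt so
// raw learner IDs never reach the ingest endpoint. SHA-256 + salt is
// an assumption of this example, not a documented platform detail.
function anonymizeUserId(rawUserId: string, tenantSalt: string): string {
  return createHash("sha256").update(`${tenantSalt}:${rawUserId}`).digest("hex");
}
```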

Institutional Analytics Dashboard β€” What the Client Sees

| Metric | Source | Value to Institution |
|---|---|---|
| Cohort completion rate | Chapter open events | Program delivery proof for corporate client |
| Most effective pedagogy | Bandit outcome log | Evidence that adaptive approach works |
| AI interaction quality | grounded / not_in_textbook ratio | Content gap identification |
| Engagement heatmap | Mouse trace + dwell time | Friction point identification |
| Skill mastery by module | Interaction + retrieval patterns | Learning outcome evidence for credential |
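
The AI interaction quality metric is just a ratio over ingested events. A sketch, using the event shape from the ingest example (the helper name is invented):

```typescript
// Sketch: compute the grounded / not_in_textbook ratio from ingested
// analytics events. Event shape follows the ingest example above.
interface InteractionEvent {
  event: string;
  response: "grounded" | "not_in_textbook";
}

function groundedRatio(events: InteractionEvent[]): number | null {
  const ai = events.filter((e) => e.event === "ai_interaction");
  if (ai.length === 0) return null; // no data yet
  const grounded = ai.filter((e) => e.response === "grounded").length;
  return grounded / ai.length;
}
```

A low ratio flags content gaps: learners are asking questions the textbook cannot answer.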

Technical Gap Priority Matrix

| Gap | Description | Priority | Minimum Viable Fix | Sprint |
|---|---|---|---|---|
| TD-001 | Registry in single Clerk metadata | CRITICAL | tenants.json or Edge Config KV mapping domains β†’ Orama keys | Sprint 1 |
| TD-003 | Hard-coded Orama credentials | HIGH | Migrate to env vars per tenant | Sprint 1 |
| TD-008 | 16KB Clerk analytics ceiling | HIGH | Supabase webhook sync for interaction events | Sprint 1 |
| No theme config | Branding requires code changes | HIGH | CSS variable injection + tenant config object | Sprint 1 |
| No pedagogy selector | Bandit approach not yet wired to system prompt | HIGH | Pedagogy mode injected in Stage 1 context analysis | Sprint 1 |
| TD-005 | TEXTBOOK_ONLY gap in Science deployment | ⚠ Not a blocker | New deployments built on Template/Physics logic | Backlog |
| AI-003 | Hub verify on every page load | LOW | Session caching β€” defer until after first pilot | Sprint 2 |
| TD-009 | Sync user iteration on delete | LOW | Defer β€” not triggered at pilot scale | Sprint 3 |

PART IV: CONTENT PIPELINE

Production Timeline β€” 5 Modules

| Stage | Activity | Duration | Tools |
|---|---|---|---|
| Source Gathering | OER, case studies, institutional frameworks, existing syllabi | 2 days | Manual / CAZE |
| Content Generation | Chapter drafting and assertion writing | 2 days | Bookie the Bookmaker |
| Factual Verification | Assertion scanning + expert review | 3 days | Popper + human expert |
| Visual Asset Creation | Figures for assertion zones | 2 days | Figure Architect |
| Publishing & Audit | MDX formatting, Orama indexing, Hub integration | 1 day | Textbook Auditor / Fumadocs |
| QA & Persona Tuning | AI grounding tests, tone calibration, pedagogy testing | 2 days | Hub Dashboard |
| Total | | 12 days | |

Content Requirements by Pedagogy

| Approach | Required Content Elements | Bookie Prompt Focus |
|---|---|---|
| Direct Instruction | Definition blocks, worked examples, numbered steps, explicit objectives | Phenomenon-first, sequential, assertion-dense |
| Socratic | Discussion questions, open-ended cases, debate prompts, reasoning anchors | Question-rich, multi-perspective, evidence-based dispute |
| Case-Based | Domain-specific cases with factual richness, decision points, analysis frameworks | Scenario-first, failure-case mandatory, real-world grounding |
| Spaced Retrieval | Explicit concept dependencies, cross-module references, retrieval prompts | Cumulative structure, backward references, recall checkpoints |
| Project-Based | Project prompts, rubrics, success criteria, reference frameworks | Output-focused, feedback-structured, applied deliverables |

Persona Calibration by Institution Type

| Institution Type | Persona Tone | Example Depth | TEXTBOOK_ONLY |
|---|---|---|---|
| Business School Executive Ed | Authoritative, ROI-oriented, concise | Business cases, competitive analysis | βœ“ Always |
| Think Tank / Policy | Inquisitive, evidence-demanding, rigorous | Policy precedents, research findings | βœ“ Always |
| Curriculum Organization | Collaborative, framework-fluent, educator-aware | Classroom scenarios, curriculum examples | βœ“ Always |
| Private K-12 | Encouraging, age-appropriate, curiosity-first | Age-appropriate scenarios, creative applications | βœ“ Always |
| Professional Association | Precise, standards-aware, compliance-conscious | Professional standards, regulatory context | βœ“ Always |

PART V: BUSINESS MODEL

Pricing Structure

ℹ️ Note on Pricing

The figures below represent starting points for discussion, not fixed rates. All engagements begin with a conversation about scope, timeline, and institutional needs. Pricing is determined collaboratively based on what the partnership actually requires. In addition to the full white label platform model, we also build simpler and custom tools on a contract basis β€” not every institution needs the full stack, and not every problem requires a textbook.

Setup Fee (one-time): TBD
Branding, domain, persona, infrastructure fixes, initial content module set. Typically a 2–3 week turnaround.

Platform License: TBD
Hub access, AI engine, analytics dashboard, ongoing maintenance and updates. Recurring.

Content Module Fee: TBD
Per new module set ingested through the content pipeline. Scales with scope and verification requirements.

What Else We Build

Not every institutional need requires a full adaptive textbook platform. Alongside the white label offering, we also design and build simpler, custom tools on a contract basis.

The right structure depends on what the institution actually needs. We start with that conversation.

Competitive Landscape

| Platform | Model | Typical Cost | Adaptive Engine? | Source-Grounded? | White Label? |
|---|---|---|---|---|---|
| Sana Labs | Per license (min 300) | ~$47K/year | ⚠ Partial | βœ— | ⚠ Enterprise only |
| 360Learning | Per user/month | $8/user/month | βœ— | βœ— | ⚠ Limited |
| Docebo | Annual platform | $25K–$100K+/year | ⚠ AI tagging only | βœ— | βœ“ |
| Coursera for Business | Per user/year | $399/user/year | βœ— | βœ— | βœ— |
| Mindsmith | Authoring tool | Custom | βœ— | βœ— | βœ— |
| Medhavy White Label | Setup + monthly | $12.5K + $2K/mo | βœ“ Bandit-based | βœ“ TEXTBOOK_ONLY | βœ“ Core offering |

βœ… The Market Gap

No platform currently combines: source-grounded AI tutoring, bandit-based pedagogical adaptation across five approaches, expert-vetted content pipeline from source to live in under 14 days, and institutional white-labeling as a core offering. This is a genuine blue ocean. The pitch is not "cheaper than Docebo." It is a different product category.

Data Ownership Framework

| Data Type | Owner | Notes |
|---|---|---|
| Individual learner identities (PII) | Institution exclusively | Never retained by Medhavy beyond session tokens |
| Cohort-level learning outcomes | Institution + Medhavy (shared) | Institution gets dashboard; Medhavy retains anonymized data for research |
| Anonymized interaction + bandit data | Medhavy | Used to improve pedagogical model; no PII; aggregated across deployments |
| Content (textbook modules) | Medhavy pipeline + Institution domain content | Joint-use license β€” institution owns final product for their cohorts; Medhavy retains pipeline rights |
| Platform architecture and code | Medhavy exclusively | Licensed, not sold β€” not transferable |
| Research publications from deployment data | Joint β€” Medhavy + Institution + academic partners | Co-authorship by agreement; institution gets first review |

PART VI: IMPLEMENTATION ROADMAP

Week 1 β€” Infrastructure & Branding

WL-001: Fix TD-001 β€” Tenant Registry
CRITICAL | Architecture
Implement tenants.json or Edge Config KV store. Map institutional domain β†’ Orama index + credentials + persona + pedagogy config + content manifest. Foundation everything else depends on.
WL-002: Fix TD-003 β€” Credential Security
CRITICAL | Architecture
Migrate all hard-coded Orama keys to environment variables per tenant. Non-negotiable before any external institutional deployment.
WL-003: Theme Config System
HIGH | Architecture
CSS variable injection at root layout keyed to incoming hostname via next-themes + Tailwind. Enables zero-code branding per deployment. Without this, every new client requires a developer.
WL-004: Pedagogy Mode Injection
HIGH | Architecture Pedagogy
Wire active pedagogy mode from tenant config into Stage 1 context analysis system prompt. The bandit selects the approach; the Stage 1 prompt tells the AI how to behave in that mode. Five prompt variants required: Direct Β· Socratic Β· Case Β· Retrieval Β· Project.
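
The wiring can be sketched as a lookup from the bandit's chosen approach to a prompt fragment appended to the Stage 1 system prompt. The fragment wording below paraphrases the per-approach behaviors from Part II; the map and function names are invented for the example.

```typescript
// Sketch of WL-004: map the selected approach to a system prompt
// fragment for Stage 1 context analysis. Fragment wording paraphrases
// Part II; names are invented for this example.
const pedagogyPrompts: Record<string, string> = {
  "direct-instruction":
    "Lead with definitions, sequence explanations in numbered steps, and confirm understanding before proceeding.",
  "socratic":
    "Respond with guiding questions; withhold direct answers until the learner has reasoned aloud.",
  "case-based":
    "Anchor every response to the active case and require application before generalization.",
  "spaced-retrieval":
    "Before introducing new content, prompt recall of the earlier concepts flagged for reinforcement.",
  "project-based":
    "Act as a coach: give structured feedback on the learner's output, grounded in course material.",
};

// Unknown modes fall back to direct instruction, the safest default.
function buildStageOnePrompt(basePrompt: string, approach: string): string {
  const mode = pedagogyPrompts[approach] ?? pedagogyPrompts["direct-instruction"];
  return `${basePrompt}\n\nActive pedagogy mode: ${mode}`;
}
```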

Week 2 β€” Content & Persona

WL-005: First Client Content Sprint
HIGH | Architecture
Assign Fellows to 5-module content pipeline for first institutional client. Source gathering starts immediately. 12-day timeline is achievable only if content starts in Week 1.
WL-006: Persona Calibration
HIGH | Architecture
Write and test system prompt for client persona. Calibrate tone, depth, example types, disclaimer language. Test against 20 representative learner queries before launch.

Week 3 β€” Launch & Analytics

WL-007: Fix TD-008 β€” Analytics Sync
HIGH | Architecture
Supabase webhook receiver for interaction events including pedagogy mode and bandit outcome. Cohort-level dashboard for institutional client. This is what makes the credential meaningful and the platform worth the licensing fee.
WL-008: Launch First White Label Deployment
Business
Deploy first institutional client to production. Onboard first cohort. Document everything for subsequent deployments β€” the goal is for the second deployment to cost 10% of the first.

The Key Question for Every Institutional Conversation

Ask This Before Naming a Price

"Is this a one-time program or something you're planning to run with multiple client cohorts over time?"

If one-time β†’ workshop fee conversation. If repeatable β†’ platform licensing conversation. The answer determines everything about the structure of the agreement and the value proposition to present.