How the Medhavy adaptive learning platform can be deployed under institutional branding for business schools, think tanks, curriculum organizations, K-12 networks, and professional education programs.
White labeling means an institution (a business school, think tank, curriculum organization, or private K-12 network) presents the Medhavy platform to their clients under their own brand. Their logo. Their colors. Their domain. Their name on the certificate. The adaptive engine, the AI tutor, the expert-vetted content pipeline, and the pedagogical testing architecture are invisible infrastructure. The institution owns the client relationship. We own the technology. Students get a sophisticated adaptive learning tool. The institution looks like an AI thought leader. We get recurring licensing revenue.
| Institution Type | Primary Use Case | Typical Audience | Content Focus |
|---|---|---|---|
| Business School Executive Education | Corporate AI literacy, leadership upskilling | C-suite, managers, HR leaders | AI strategy, analytics, organizational transformation |
| Think Tank / Policy Organization | Researcher training, evidence synthesis | Fellows, researchers, policy analysts | Research methodology, AI policy, data literacy |
| Curriculum Organization | Educator professional development | Teachers, instructional designers | AI pedagogy, curriculum frameworks, classroom integration |
| Private K-12 Network | Student AI literacy, teacher upskilling | Students (secondary), faculty | AI fundamentals, ethics, creative applications |
| Professional Association | Member credentialing, continuing education | Licensed professionals, members | Domain-specific AI applications, responsible use |
What it is: Clear, structured explanation of concepts with explicit learning objectives, worked examples, and step-by-step guidance. The AI presents information directly rather than prompting the learner to discover it.
When the bandit chooses it: Early in a topic, when prior knowledge assessment suggests foundational gaps, when interaction patterns show confusion rather than engagement.
Platform implementation: System prompt instructs the persona to lead with definitions, provide sequenced explanations, use numbered steps, and confirm understanding before proceeding. Search pipeline prioritizes foundational content sections.
Content requirement: Textbook must include explicit definition blocks, worked examples, and clear section sequencing. Assertion density should be high to ground each instructional step.
Ideal for: New concepts, technical foundations, compliance training, credentialing programs where accuracy is non-negotiable.
What it is: Rather than providing answers, the AI asks questions that guide the learner toward understanding through their own reasoning. The persona challenges assumptions, requests justification, and probes the depth of stated knowledge.
When the bandit chooses it: When the learner demonstrates surface-level confidence, when responses suggest pattern-matching without understanding, when the learner is mid-topic rather than beginning.
Platform implementation: SOCRIT tool informs the system prompt. The persona is instructed to respond to answers with follow-up questions, resist providing direct answers until the learner has reasoned aloud, and flag reasoning gaps rather than filling them. Powered by the Paul-Elder critical thinking framework.
Content requirement: Textbook must include discussion questions, case studies with open-ended elements, and scenarios that reward reasoning over recall. The AI needs retrievable "question anchors" in the source material.
Ideal for: Think tanks, policy training, graduate-level programs, leadership development, any context where analytical depth matters more than factual recall.
What it is: Learning is anchored in realistic scenarios or cases that mirror the professional contexts learners will actually face. The AI presents or references a case, then guides analysis through structured questions about what happened, why, what should have been done differently, and what principles apply.
When the bandit chooses it: When learner interaction patterns suggest stronger engagement with concrete examples than abstract explanations, when the institutional context is professional/executive, when the topic has high real-world stakes.
Platform implementation: CAZE tool informs case construction. The persona is prompted to anchor all responses to the case context, resist abstracting prematurely, and push the learner to apply concepts to the case before generalizing. The search pipeline prioritizes case study sections and real-world application examples.
Content requirement: Textbook must include domain-specific case studies with sufficient factual richness to support multi-turn analysis. Cases must be "assertion-dense": grounded in verifiable claims the AI can cite. For business schools this means business cases; for policy organizations, policy scenarios; for K-12, age-appropriate applied scenarios.
Ideal for: Business school executive education, medical training, legal education, policy analysis, any program where professional judgment is the core outcome.
What it is: Deliberately returning to previously covered material at increasing intervals to strengthen long-term retention. The AI surfaces earlier concepts at strategic points in the learning sequence, prompting recall before re-explanation.
When the bandit chooses it: When session data shows that learners are completing chapters without retaining earlier material, when the topic is cumulative (each concept depends on prior ones), when the program spans multiple sessions or weeks.
Platform implementation: The analytics pipeline tracks chapter completion and interaction history per learner. At configurable intervals, the persona is prompted to test recall of earlier material before introducing new content. Mouse trace and dwell time data inform which sections need reinforcement.
Content requirement: Content must be structured with explicit dependencies: module 3 should reference module 1 concepts so the AI can surface retrieval prompts naturally. The search pipeline must support cross-chapter retrieval, not just within-chapter.
Ideal for: Credentialing programs, multi-week professional development, any curriculum where foundational concepts must be mastered before advanced ones, language learning, technical skills training.
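The expanding-interval idea behind spaced retrieval can be sketched in a few lines. The base interval and the doubling factor below are illustrative assumptions, not platform defaults:

```typescript
// Sketch of an expanding review schedule for spaced retrieval.
// baseDays and the doubling factor are illustrative, not platform defaults.
function reviewSchedule(baseDays: number, reviews: number): number[] {
  const offsets: number[] = [];
  let interval = baseDays;
  let day = 0;
  for (let i = 0; i < reviews; i++) {
    day += interval;
    offsets.push(day);
    interval *= 2; // each gap doubles, pushing reviews further apart
  }
  return offsets;
}
```

Under these assumptions, a concept covered on day 0 would be resurfaced on days 1, 3, 7, and 15, with the persona prompting recall before any re-explanation.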
What it is: The learner produces something (an analysis, a plan, a critique, a design) using the knowledge from the course. The AI acts as a coach and reviewer rather than a tutor, giving feedback on the learner's output rather than teaching concepts directly.
When the bandit chooses it: When direct instruction and Socratic approaches have established foundational knowledge, when the learner is in the final stage of a module, when the institutional context values applied skill over theoretical knowledge.
Platform implementation: The persona is shifted from "tutor" mode to "coach" mode. System prompt instructs the AI to respond to learner outputs with structured feedback: what works, what's missing, what the course material says about the gap, what to try next. CRITIQ peer review protocol informs feedback structure.
Content requirement: Textbook must include project prompts, rubrics or success criteria, and reference material the learner can consult while working. The AI needs content to ground its feedback; it cannot evaluate a business plan without verified frameworks to reference.
Ideal for: Executive education capstone projects, curriculum design programs, think tank policy memos, private school applied learning programs, any context where the credential is based on demonstrated output rather than test scores.
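Across all five approaches, the "Platform implementation" notes share one mechanism: the active pedagogy mode is translated into a system-prompt fragment. A minimal sketch of that wiring, where the mode names follow this document but the prompt text is illustrative rather than the production prompts:

```typescript
// Sketch: mapping the bandit's chosen pedagogy mode to a system-prompt
// fragment. Prompt text is illustrative, not the production wording.
type Pedagogy =
  | "direct-instruction"
  | "socratic"
  | "case-based"
  | "spaced-retrieval"
  | "project-based";

const PEDAGOGY_PROMPTS: Record<Pedagogy, string> = {
  "direct-instruction":
    "Lead with definitions, explain in numbered steps, confirm understanding before proceeding.",
  "socratic":
    "Respond to answers with follow-up questions; withhold direct answers until the learner reasons aloud.",
  "case-based":
    "Anchor every response to the active case; require application to the case before generalizing.",
  "spaced-retrieval":
    "Before new material, prompt recall of earlier concepts this section depends on.",
  "project-based":
    "Act as coach, not tutor: give structured feedback on the learner's output, grounded in course material.",
};

// Injected during Stage 1 context analysis, ahead of the Stage 2 chat call.
function buildSystemPrompt(basePersona: string, mode: Pedagogy): string {
  return `${basePersona}\n\nActive pedagogy: ${PEDAGOGY_PROMPTS[mode]}`;
}
```

Because the fragment is injected at Stage 1, swapping pedagogy per learner is a data change, not a prompt-engineering change per deployment.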
| Institution Type | Primary Approach | Secondary Approach | Content Requirement |
|---|---|---|---|
| Business School Executive Ed | Case-Based | Project-Based | Business cases, ROI frameworks, decision scenarios |
| Think Tank / Policy Org | Socratic | Case-Based | Policy scenarios, debate prompts, evidence frameworks |
| Curriculum Organization | Direct Instruction | Project-Based | Framework documentation, lesson plan templates, rubrics |
| Private K-12 Network | Spaced Retrieval | Project-Based | Cumulative modules, age-appropriate cases, creative prompts |
| Professional Association | Direct Instruction | Spaced Retrieval | Standards-based content, compliance requirements, credentialing rubrics |
┌─────────────────────────────────────────────────────────────────────┐
│                    MEDHAVI HUB (hub.medhavy.com)                    │
│  Authentication · Registry · Analytics · Access Control · Dashboard │
│            Stack: Next.js · Clerk · JWT (24hr sessions)             │
└─────────────────────────┬───────────────────────────────────────────┘
                          │
        ┌─────────────────┼─────────────────┐
        ▼                 ▼                 ▼
 [deployment-A]    [deployment-B]    [deployment-N]
 (Live: Science)   (Live: Physics)   (White Label Slot)
        │                 │                 │
        └─────────────────┼─────────────────┘
                          │
        ┌─────────────────┴────────────────────────────┐
        │ AI CHAT PIPELINE (Per Deployment)            │
        │ Stage 1: /api/analyze-context                │
        │   → question + page context + history        │
        │   → keywords + tags + follow-ups + plan      │
        │   → active pedagogy mode injected here       │
        │ Stage 2: /api/chat                           │
        │   → Orama semantic search (isolated index)   │
        │   → Sources + pedagogy prompt injected       │
        │   → SSE stream: sources → tokens → done      │
        │ Rule: Not in textbook → "Not in textbook."   │
        └─────────────────┬────────────────────────────┘
                          │
        ┌─────────────────┴────────────────────────────┐
        │ BANDIT LAYER (Pedagogical Selection)         │
        │ Tracks: approach → outcome → update          │
        │ Selects from: Direct · Socratic · Case ·     │
        │               Retrieval · Project            │
        │ Updates: per learner, per session, per topic │
        └─────────────────┬────────────────────────────┘
                          │
        ┌─────────────────┴────────────────────────────┐
        │ PERSONA LAYER (Swappable Per Client)         │
        │ Tone · Depth · Examples · Disclaimer         │
        │ Stored in tenant registry, not code          │
        └──────────────────────────────────────────────┘
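The bandit layer's tracks, selects, updates loop can be sketched as a simple epsilon-greedy policy over the five approaches. The platform's actual algorithm and reward signal are not specified here, so treat the arm statistics, the (learner, topic) keying, and the epsilon value as assumptions:

```typescript
// Minimal epsilon-greedy sketch of the bandit layer. The real algorithm and
// reward signal are unspecified; everything below is illustrative.
type Approach = "direct" | "socratic" | "case" | "retrieval" | "project";
const APPROACHES: Approach[] = ["direct", "socratic", "case", "retrieval", "project"];

interface ArmStats { pulls: number; reward: number; }
const stats = new Map<string, Map<Approach, ArmStats>>();

function armsFor(learnerId: string, topic: string): Map<Approach, ArmStats> {
  const k = `${learnerId}:${topic}`;
  let arms = stats.get(k);
  if (!arms) {
    arms = new Map<Approach, ArmStats>();
    for (const a of APPROACHES) arms.set(a, { pulls: 0, reward: 0 });
    stats.set(k, arms);
  }
  return arms;
}

// Explore with probability epsilon; otherwise exploit the best mean reward.
// Untried arms score +Infinity so every approach is attempted at least once.
function selectApproach(learnerId: string, topic: string, epsilon = 0.1): Approach {
  if (Math.random() < epsilon) {
    return APPROACHES[Math.floor(Math.random() * APPROACHES.length)];
  }
  let best: Approach = APPROACHES[0];
  let bestMean = -Infinity;
  for (const [a, s] of armsFor(learnerId, topic)) {
    const mean = s.pulls === 0 ? Infinity : s.reward / s.pulls;
    if (mean > bestMean) { bestMean = mean; best = a; }
  }
  return best;
}

// Called when an outcome signal arrives (e.g. grounded answer + continued engagement).
function recordOutcome(learnerId: string, topic: string, a: Approach, reward: number): void {
  const s = armsFor(learnerId, topic).get(a)!;
  s.pulls += 1;
  s.reward += reward;
}
```

The per-learner, per-topic keying matches the "Updates" line in the diagram: the same learner can get Socratic questioning on one topic and direct instruction on another.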
Each institutional client is a Clerk Organization (B2B pattern). Learners see only their institution's content. All API routes filter by orgId. Orama indexes are fully isolated per client.
// tenants.json: minimum viable fix for TD-001
// Maps institutional domain → content + credentials + persona + pedagogy
{
"[client-a].medhavy.com": {
"orgId": "org_clienta_2026",
"textbooks": ["ai-literacy-executive-v1"],
"oramaIndex": "[ENV: ORAMA_INDEX_CLIENTA]",
"oramaKey": "[ENV: ORAMA_KEY_CLIENTA]",
"persona": "corporate-strategy-advisor",
"pedagogy": ["case-based", "project-based"],
"badgeIssuer": "[Institution Name]",
"textbookOnly": true
},
"[client-b].medhavy.com": {
"orgId": "org_clientb_2026",
"textbooks": ["ai-policy-research-v1"],
"oramaIndex": "[ENV: ORAMA_INDEX_CLIENTB]",
"oramaKey": "[ENV: ORAMA_KEY_CLIENTB]",
"persona": "policy-research-advisor",
"pedagogy": ["socratic", "case-based"],
"badgeIssuer": "[Institution Name]",
"textbookOnly": true
}
}
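A lookup against this registry might look like the following sketch. The `Registry` type and `resolveTenant` are illustrative names; the key point is failing closed on unknown domains so no request ever falls through to another tenant's Orama index:

```typescript
// Sketch: resolving a request's Host header to its tenant entry.
// The shape mirrors tenants.json above; how the registry is loaded
// (file read, Edge Config KV) is left out.
interface TenantConfig {
  orgId: string;
  textbooks: string[];
  oramaIndex: string;   // env var reference, resolved at runtime
  oramaKey: string;     // env var reference, never the literal credential
  persona: string;
  pedagogy: string[];
  badgeIssuer: string;
  textbookOnly: boolean;
}

type Registry = Record<string, TenantConfig>;

function resolveTenant(registry: Registry, host: string): TenantConfig {
  const tenant = registry[host.toLowerCase()];
  if (!tenant) {
    // Fail closed: no fallback to a default index for unknown domains.
    throw new Error(`Unknown tenant domain: ${host}`);
  }
  return tenant;
}
```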
// Client Theme Config: zero code changes per deployment
// CSS variables injected at root layout via next-themes + Tailwind
const clientTheme = {
institutionName: "[Institution Full Name]",
platformName: "[Institution] AI Learning Lab",
primaryColor: "[Brand Hex]",
secondaryColor: "[Brand Hex]",
logoUrl: "/logos/[client]-logo.png",
faviconUrl: "/favicons/[client]-favicon.ico",
fontFamily: "'Inter', sans-serif",
domain: "[client].medhavy.com",
persona: {
name: "[Institution] AI Advisor",
tone: "[authoritative | empathetic | inquisitive | structured]",
depth: "[decision-support | conceptual | foundational | applied]",
examples: "[finance | policy | curriculum | clinical | general]",
disclaimer: "Responses sourced from [Institution]-verified materials only.",
textbookOnly: true
},
pedagogy: {
active: ["case-based", "project-based"],
    banditTest: true, // run bandit across all five approaches if true
defaultApproach: "direct-instruction"
}
};
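The CSS-variable injection mentioned above reduces to a pure mapping from theme config to custom properties. Variable names here are illustrative:

```typescript
// Sketch: turning tenant theme config into CSS custom properties for the
// root layout. Variable names are illustrative; Tailwind would reference
// them from its theme configuration.
interface ThemeColors {
  primaryColor: string;
  secondaryColor: string;
  fontFamily: string;
}

function themeToCssVars(theme: ThemeColors): Record<string, string> {
  return {
    "--brand-primary": theme.primaryColor,
    "--brand-secondary": theme.secondaryColor,
    "--brand-font": theme.fontFamily,
  };
}
```

Applied as an inline style on the root element, this keeps per-client branding a config change rather than a code change, which is the point of the theme object above.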
// Sync-to-Database: bypasses the 16KB Clerk metadata ceiling (TD-008)
// Supabase webhook receiver triggered on all learner events
POST /api/analytics/ingest
{
"orgId": "org_clienta_2026",
"userId": "[anonymized-hash]",
"event": "ai_interaction",
"pedagogy": "case-based", // which approach was active
"chapter": "module-3-application",
"query": "[learner question]",
"response": "grounded", // or "not_in_textbook"
"mouseTrace": { "path": [...], "dwellMs": 4200 },
  "banditUpdate": { "approach": "case-based", "outcome": "positive" },
"timestamp": "2026-04-01T14:23:00Z"
}
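Before the webhook receiver writes an event to Supabase, the payload should be validated. A minimal sketch follows; field names match the example event above, and a production deployment would use a full schema validator rather than this hand-rolled check:

```typescript
// Sketch: minimal payload validation for the ingest endpoint.
// Only the core fields are checked; production would validate the full schema.
interface IngestEvent {
  orgId: string;
  userId: string;   // anonymized hash, never raw PII
  event: string;
  timestamp: string;
}

function isValidIngestEvent(body: unknown): body is IngestEvent {
  if (typeof body !== "object" || body === null) return false;
  const e = body as Record<string, unknown>;
  return (
    typeof e.orgId === "string" && e.orgId.startsWith("org_") &&
    typeof e.userId === "string" &&
    typeof e.event === "string" &&
    typeof e.timestamp === "string" && !Number.isNaN(Date.parse(e.timestamp))
  );
}
```

Rejecting events without a valid `orgId` up front also preserves tenant isolation in the analytics store: nothing unattributed ever lands in a shared table.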
| Metric | Source | Value to Institution |
|---|---|---|
| Cohort completion rate | Chapter open events | Program delivery proof for corporate client |
| Most effective pedagogy | Bandit outcome log | Evidence that adaptive approach works |
| AI interaction quality | grounded / not_in_textbook ratio | Content gap identification |
| Engagement heatmap | Mouse trace + dwell time | Friction point identification |
| Skill mastery by module | Interaction + retrieval patterns | Learning outcome evidence for credential |
| Gap | Description | Priority | Minimum Viable Fix | Sprint |
|---|---|---|---|---|
| TD-001 | Registry in single Clerk metadata | CRITICAL | tenants.json or Edge Config KV mapping domains → Orama keys | Sprint 1 |
| TD-003 | Hard-coded Orama credentials | HIGH | Migrate to env vars per tenant | Sprint 1 |
| TD-008 | 16KB Clerk analytics ceiling | HIGH | Supabase webhook sync for interaction events | Sprint 1 |
| No theme config | Branding requires code changes | HIGH | CSS variable injection + tenant config object | Sprint 1 |
| No pedagogy selector | Bandit approach not yet wired to system prompt | HIGH | Pedagogy mode injected in Stage 1 context analysis | Sprint 1 |
| TD-005 | TEXTBOOK_ONLY gap in Science deployment | Not a blocker | New deployments built on Template/Physics logic | Backlog |
| AI-003 | Hub verify on every page load | LOW | Session caching; defer until after first pilot | Sprint 2 |
| TD-009 | Sync user iteration on delete | LOW | Defer; not triggered at pilot scale | Sprint 3 |
| Stage | Activity | Duration | Tools |
|---|---|---|---|
| Source Gathering | OER, case studies, institutional frameworks, existing syllabi | 2 days | Manual / CAZE |
| Content Generation | Chapter drafting and assertion writing | 2 days | Bookie the Bookmaker |
| Factual Verification | Assertion scanning + expert review | 3 days | Popper + human expert |
| Visual Asset Creation | Figures for assertion zones | 2 days | Figure Architect |
| Publishing & Audit | MDX formatting, Orama indexing, Hub integration | 1 day | Textbook Auditor / Fumadocs |
| QA & Persona Tuning | AI grounding tests, tone calibration, pedagogy testing | 2 days | Hub Dashboard |
| Total | | 12 days | |
| Approach | Required Content Elements | Bookie Prompt Focus |
|---|---|---|
| Direct Instruction | Definition blocks, worked examples, numbered steps, explicit objectives | Phenomenon-first, sequential, assertion-dense |
| Socratic | Discussion questions, open-ended cases, debate prompts, reasoning anchors | Question-rich, multi-perspective, evidence-based dispute |
| Case-Based | Domain-specific cases with factual richness, decision points, analysis frameworks | Scenario-first, failure-case mandatory, real-world grounding |
| Spaced Retrieval | Explicit concept dependencies, cross-module references, retrieval prompts | Cumulative structure, backward references, recall checkpoints |
| Project-Based | Project prompts, rubrics, success criteria, reference frameworks | Output-focused, feedback-structured, applied deliverables |
| Institution Type | Persona Tone | Example Depth | TEXTBOOK_ONLY |
|---|---|---|---|
| Business School Executive Ed | Authoritative, ROI-oriented, concise | Business cases, competitive analysis | ✅ Always |
| Think Tank / Policy | Inquisitive, evidence-demanding, rigorous | Policy precedents, research findings | ✅ Always |
| Curriculum Organization | Collaborative, framework-fluent, educator-aware | Classroom scenarios, curriculum examples | ✅ Always |
| Private K-12 | Encouraging, age-appropriate, curiosity-first | Age-appropriate scenarios, creative applications | ✅ Always |
| Professional Association | Precise, standards-aware, compliance-conscious | Professional standards, regulatory context | ✅ Always |
Not every institutional need requires a full adaptive textbook platform. As a result of partnership discussions, we also design and build:
The right structure depends on what the institution actually needs. We start with that conversation.
| Platform | Model | Typical Cost | Adaptive Engine? | Source-Grounded? | White Label? |
|---|---|---|---|---|---|
| Sana Labs | Per license (min 300) | ~$47K/year | ✅ Partial | ❌ | ✅ Enterprise only |
| 360Learning | Per user/month | $8/user/month | ❌ | ❌ | ✅ Limited |
| Docebo | Annual platform | $25K–$100K+/year | ❌ AI tagging only | ❌ | ✅ |
| Coursera for Business | Per user/year | $399/user/year | ❌ | ❌ | ❌ |
| Mindsmith | Authoring tool | Custom | ❌ | ❌ | ❌ |
| Medhavy White Label | Setup + monthly | $12.5K + $2K/mo | ✅ Bandit-based | ✅ TEXTBOOK_ONLY | ✅ Core offering |
| Data Type | Owner | Notes |
|---|---|---|
| Individual learner identities (PII) | Institution exclusively | Never retained by Medhavy beyond session tokens |
| Cohort-level learning outcomes | Institution + Medhavy (shared) | Institution gets dashboard; Medhavy retains anonymized for research |
| Anonymized interaction + bandit data | Medhavy | Used to improve pedagogical model; no PII; aggregated across deployments |
| Content (textbook modules) | Medhavy pipeline + Institution domain content | Joint-use license: institution owns the final product for its cohorts; Medhavy retains pipeline rights |
| Platform architecture and code | Medhavy exclusively | Licensed, not sold; not transferable |
| Research publications from deployment data | Joint: Medhavy + Institution + academic partners | Co-authorship by agreement; institution gets first review |
"Is this a one-time program or something you're planning to run with multiple client cohorts over time?"
If one-time → workshop fee conversation. If repeatable → platform licensing conversation. The answer determines everything about the structure of the agreement and the value proposition to present.