`vibn-frontend/prompts/foundation.md`
# Prompt to paste into Cursor for Sonnet
You are helping me implement a clean, provider-agnostic AI foundation for a product called Vibn.
Vibn's job:
- Pull a product idea out of a founder's head
- Turn it into a structured product model
- Derive an MVP plan and marketing model from that
Runtime requirement:
- The *runtime* “Vibn agent” will use **Gemini 2.5 Pro** as the LLM.
- The architecture must be **model-agnostic**, so we can later add GPT / Claude if we want.
- Right now, you (Claude Sonnet in Cursor) are only a **coding assistant**, not part of the runtime.
I want you to design and implement the following foundation inside this repo.
==================================================
🏛 1. HIGH-LEVEL ARCHITECTURE
==================================================
We want a simple, explicit architecture with four core concepts:
1. **Project**: a single product idea
2. **Knowledge base**: all raw materials for a project (notes, chats, docs, etc.)
3. **Canonical Product Model**: the structured representation of the product
4. **Derived artifacts**: the MVP plan and marketing model
Plus:
- A small pipeline that goes:
  1) knowledge → product insights
  2) product insights → canonical model
  3) canonical model → MVP plan
  4) canonical model → marketing model
- A provider-agnostic **LlmClient** that can talk to Gemini now and later GPT/Sonnet.
Use the existing stack / frameworks in this repo (TypeScript / Next / Prisma / etc. as appropriate).
==================================================
🧱 2. DATA MODEL TO IMPLEMENT
==================================================
Please implement or adapt the following data model (adjust names to match repo conventions).
### 2.1 Project
Represents one product idea.
TypeScript shape:
```ts
export type Phase =
  | "collector"        // collecting knowledge
  | "analyzed"         // product_insights written
  | "vision_ready"     // canonical product model written
  | "mvp_ready"        // mvp_plan written
  | "marketing_ready"; // marketing_model written

export interface Project {
  id: string;
  ownerId: string;
  name: string;
  description: string | null;
  currentPhase: Phase;
  createdAt: Date;
  updatedAt: Date;
}
```
Create a database table with this shape, or align an existing one (Prisma, SQL, or whatever ORM we use).
### 2.2 Knowledge base (all user raw materials)
All docs, past chats, notes, specs, etc. roll into a single table.
```ts
export type SourceType =
  | "user_chat"
  | "imported_chat"
  | "doc"
  | "note"
  | "spec"
  | "research"
  | "other";

export interface KnowledgeItem {
  id: string;
  projectId: string;
  sourceType: SourceType;
  title: string | null;
  content: string; // normalized text
  sourceMeta: {
    origin?: "chatgpt" | "notion" | "github" | "file_upload" | "manual";
    url?: string;
    filename?: string;
    createdAtOriginal?: string;
  };
  createdAt: Date;
  updatedAt: Date;
}
```
Implement a knowledge_items table (or equivalent) and any necessary migration.
### 2.3 Canonical Product Model (central brain)
This is the single source of truth for the product's understanding in the system.
```ts
export interface CanonicalProductModel {
  projectId: string; // PK/FK to Project.id
  basics: {
    productName: string | null;
    oneLiner: string | null;
  };
  users: {
    primaryUser: string | null;
    userSegments: Array<{
      id: string;
      description: string;
      jobsToBeDone: string[];
      context: string | null;
    }>;
  };
  problemSpace: {
    primaryProblem: string | null;
    secondaryProblems: string[];
    currentAlternatives: string[];
    pains: string[];
  };
  solutionSpace: {
    coreSolution: string | null;
    keyFeatures: string[];
    differentiators: string[];
    constraints: string[];
  };
  business: {
    valueProposition: string | null;
    whyNow: string | null;
    roughPricingIdeas: string[];
  };
  meta: {
    lastUpdatedFromExtraction: string | null; // ISO date
    lastManualEditAt: string | null; // ISO date
  };
}
```
Store this as a single row per project, with a JSON column if we're using a relational DB. The name could be `product_model` or `canonical_product_model`.
### 2.4 Product Insights (evidence layer)
This is the “extracted signals” from raw knowledge.
```ts
export interface ProductInsights {
  projectId: string;
  problems: Array<{ id: string; description: string; evidence: string[] }>;
  users: Array<{ id: string; description: string; evidence: string[] }>;
  desires: Array<{ id: string; description: string; evidence: string[] }>;
  features: Array<{ id: string; description: string; evidence: string[] }>;
  constraints: Array<{ id: string; description: string; evidence: string[] }>;
  competitors: Array<{ id: string; description: string; evidence: string[] }>;
  uncertainties: Array<{ id: string; description: string; relatedTo?: string }>;
  missingInformation: Array<{ id: string; question: string }>;
  lastRunAt: string; // ISO date
  modelUsed: "gemini" | "gpt" | "sonnet";
}
```
Again: one row per project, as JSON.
### 2.5 MVP Plan (derived from canonical model)
```ts
export interface MvpPlan {
  projectId: string;
  guidingPrinciples: string[];
  coreUserFlows: Array<{
    id: string;
    name: string;
    description: string;
    steps: string[];
    successCriteria: string[];
  }>;
  coreFeatures: Array<{
    id: string;
    name: string;
    description: string;
    userFlowIds: string[];
  }>;
  supportingFeatures: Array<{
    id: string;
    name: string;
    description: string;
  }>;
  outOfScope: Array<{
    id: string;
    description: string;
    reasonDeferred: string;
  }>;
  lastRunAt: string; // ISO date
  modelUsed: "gemini" | "gpt" | "sonnet";
}
```
### 2.6 Marketing Model (also derived from canonical model)
```ts
export interface MarketingModel {
  projectId: string;
  icp: {
    summary: string;
    segments: Array<{
      id: string;
      description: string;
      jobsToBeDone: string[];
      keyPains: string[];
      buyingTriggers: string[];
    }>;
  };
  positioning: {
    category: string;
    targetAudience: string;
    primaryBenefit: string;
    reasonsToBelieve: string[];
  };
  messaging: {
    hero: {
      headline: string;
      subheadline: string;
      primaryCta: string;
      bullets: string[];
    };
    sections: Array<{
      id: string;
      title: string;
      body: string;
      bullets: string[];
    }>;
  };
  launchIdeas: {
    initialChannels: string[];
    angles: string[];
    exampleCampaigns: string[];
  };
  lastRunAt: string; // ISO date
  modelUsed: "gemini" | "gpt" | "sonnet";
}
```
Please create appropriate types in something like `/lib/ai/types.ts` (or adapt an existing file) and the DB schema/migrations to persist these models.
==================================================
🤖 3. LLM CLIENT ABSTRACTION (WITH GEMINI RUNTIME)
==================================================
Implement a small, provider-agnostic LLM client.
### 3.1 Interface
Create something like `/lib/ai/llm-client.ts`:

```ts
export type LlmModel = "gemini" | "gpt" | "sonnet";

export interface LlmMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export interface StructuredCallArgs<TOutput> {
  model: LlmModel;
  systemPrompt: string;
  messages: LlmMessage[];
  schema: import("zod").ZodSchema<TOutput>;
  temperature?: number;
}

export interface LlmClient {
  structuredCall<TOutput>(args: StructuredCallArgs<TOutput>): Promise<TOutput>;
}
```
### 3.2 Gemini implementation (runtime)
Implement a `GeminiLlmClient` that:

- Ignores other models for now (only supports `model: "gemini"`).
- Uses the existing Gemini 2.5 Pro client in this repo.
- Requests JSON / structured responses if the API supports it (e.g., `response_mime_type: "application/json"`).
- Validates the result against the provided Zod schema.
- On parse/validation error:
  - Optionally retries once with a "fix your JSON" system hint.
  - Otherwise throws a well-typed error.

Wire this into where we currently call Gemini so that future GPT/Sonnet support only touches this layer.
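The parse-validate-retry loop above can be sketched independently of any provider. In this sketch, `generate` stands in for the raw Gemini text call and the schema is anything with Zod's `safeParse` shape; all names are illustrative, not existing repo APIs:

```typescript
// Anything with Zod's safeParse shape qualifies, so this stays dependency-free.
type SchemaLike<T> = {
  safeParse(value: unknown): { success: true; data: T } | { success: false; error: unknown };
};

export class LlmOutputError extends Error {}

// One retry with a "fix your JSON" hint, then a well-typed error.
export async function structuredCallWithRetry<T>(
  generate: (fixHint?: string) => Promise<string>,
  schema: SchemaLike<T>
): Promise<T> {
  let raw = await generate();
  for (let attempt = 0; attempt < 2; attempt++) {
    try {
      const parsed = schema.safeParse(JSON.parse(raw));
      if (parsed.success) return parsed.data;
    } catch {
      // JSON.parse failed; fall through to the retry below
    }
    if (attempt === 0) {
      raw = await generate(
        "Your previous reply was not valid JSON for the schema. Reply with corrected JSON only."
      );
    }
  }
  throw new LlmOutputError("LLM returned invalid JSON after one retry");
}
```

Keeping this loop outside the Gemini-specific code means a future GPT/Sonnet client reuses the exact same validation and retry behavior.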
==================================================
🧠 4. AGENT FUNCTIONS / PIPELINE
==================================================
Create a small pipeline that uses the LlmClient to transform data through four steps. Place them in something like `/lib/ai/agents/`:

- `extractorAgent`: knowledge → ProductInsights
- `productModelAgent`: ProductInsights → CanonicalProductModel
- `mvpAgent`: CanonicalProductModel → MvpPlan
- `marketingAgent`: CanonicalProductModel → MarketingModel
### 4.1 extractorAgent
Signature:
```ts
export async function extractorAgent(
  knowledge: KnowledgeItem[],
  llm: LlmClient
): Promise<ProductInsights>
```
Behavior:

- Serialize the knowledge array to JSON.
- Use a clear system prompt that says: "You are a Product Insight Extraction Agent. Read the knowledge items and output only JSON matching this schema: (ProductInsights)."
- Call `llm.structuredCall<ProductInsights>({ model: "gemini", ... })`.
- Return the validated result.
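A minimal sketch of that behavior follows, with the types trimmed to what the function actually touches. The `insightsSchema` parameter is only there to keep this sketch dependency-free; in the repo the agent would import its Zod schema directly and match the two-argument signature above:

```typescript
interface LlmMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LlmClient {
  structuredCall<TOutput>(args: {
    model: "gemini" | "gpt" | "sonnet";
    systemPrompt: string;
    messages: LlmMessage[];
    schema: unknown; // a Zod schema in the real implementation
  }): Promise<TOutput>;
}

interface KnowledgeItemLike {
  id: string;
  sourceType: string;
  title: string | null;
  content: string;
}

export async function extractorAgent<TInsights>(
  knowledge: KnowledgeItemLike[],
  llm: LlmClient,
  insightsSchema: unknown // injected only to keep the sketch self-contained
): Promise<TInsights> {
  const systemPrompt =
    "You are a Product Insight Extraction Agent. Read the knowledge items " +
    "and output only JSON matching the ProductInsights schema.";
  // Serialize the raw knowledge and hand validation off to structuredCall.
  return llm.structuredCall<TInsights>({
    model: "gemini",
    systemPrompt,
    messages: [{ role: "user", content: JSON.stringify(knowledge, null, 2) }],
    schema: insightsSchema,
  });
}
```

Because the agent only depends on the `LlmClient` interface, it can be unit-tested with a stubbed client and never touches Gemini directly.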
### 4.2 productModelAgent
Signature:
```ts
export async function productModelAgent(
  insights: ProductInsights,
  existingModel: CanonicalProductModel | null,
  llm: LlmClient
): Promise<CanonicalProductModel>
```
Behavior:

- Prompt: "You are a Product Model Builder. Given product insights, create or refine the canonical product model JSON matching this schema…"
- If `existingModel` is present, include it in the prompt and ask the model to refine rather than start from scratch.
- Use `structuredCall` with the CanonicalProductModel schema.
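The create-vs-refine branching can live entirely in prompt assembly. A sketch (function name and prompt wording are illustrative):

```typescript
// Builds the system prompt and user message for productModelAgent,
// switching between "create" and "refine" based on whether a model exists.
export function buildProductModelPrompt(
  insightsJson: string,
  existingModelJson: string | null
) {
  const systemPrompt =
    "You are a Product Model Builder. " +
    (existingModelJson
      ? "Refine the existing canonical product model with the new insights; do not start from scratch. "
      : "Create a canonical product model from the insights. ") +
    "Output only JSON matching the CanonicalProductModel schema.";

  const userContent = existingModelJson
    ? `Existing model:\n${existingModelJson}\n\nNew insights:\n${insightsJson}`
    : `Insights:\n${insightsJson}`;

  return { systemPrompt, userContent };
}
```

Keeping this as a pure function makes the refine-vs-create behavior trivially testable without any LLM call.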
### 4.3 mvpAgent
Signature:
```ts
export async function mvpAgent(
  model: CanonicalProductModel,
  llm: LlmClient
): Promise<MvpPlan>
```
Behavior:

- Prompt: "You are an MVP Planner. Given this canonical product model, produce the smallest sellable V1 plan as JSON matching the MvpPlan schema…"
- Uses `structuredCall` and returns `MvpPlan`.
### 4.4 marketingAgent
Signature:
```ts
export async function marketingAgent(
  model: CanonicalProductModel,
  llm: LlmClient
): Promise<MarketingModel>
```
Behavior:

- Prompt: "You are a Marketing Strategist. Given this canonical product model, produce an ICP + positioning + messaging + launch ideas as JSON matching the MarketingModel schema…"
- Uses `structuredCall` and returns `MarketingModel`.
==================================================
🔀 5. ORCHESTRATOR / PIPELINE RUNNER
==================================================
Create a small pipeline runner, e.g. `/lib/ai/pipeline.ts`:

```ts
export async function runFullPipeline(projectId: string, llm: LlmClient) {
  const project = await loadProject(projectId);
  const knowledge = await loadKnowledgeItems(projectId);
  const current = await loadAllModels(projectId); // insights, canonical, mvp, marketing

  // 1) knowledge -> insights
  const insights = await extractorAgent(knowledge, llm);
  await saveProductInsights(projectId, insights);
  await updateProjectPhase(projectId, "analyzed");

  // 2) insights -> canonical product model
  const canonical = await productModelAgent(
    insights,
    current.canonicalProductModel ?? null,
    llm
  );
  await saveCanonicalProductModel(projectId, canonical);
  await updateProjectPhase(projectId, "vision_ready");

  // 3) canonical -> MVP
  const mvp = await mvpAgent(canonical, llm);
  await saveMvpPlan(projectId, mvp);
  await updateProjectPhase(projectId, "mvp_ready");

  // 4) canonical -> marketing
  const marketing = await marketingAgent(canonical, llm);
  await saveMarketingModel(projectId, marketing);
  await updateProjectPhase(projectId, "marketing_ready");

  return { insights, canonical, mvp, marketing };
}
```
Please implement the helper functions (loadProject, loadKnowledgeItems, saveProductInsights, etc.) against whatever DB / ORM we use.
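One detail worth deciding in `updateProjectPhase`: phases are ordered, and re-running an early pipeline step should not demote a project that already has later artifacts. A possible guard (`nextPhase` is a hypothetical helper, not an existing repo function):

```typescript
// Ordered to match the Phase union from section 2.1.
const PHASE_ORDER = [
  "collector",
  "analyzed",
  "vision_ready",
  "mvp_ready",
  "marketing_ready",
] as const;

type Phase = (typeof PHASE_ORDER)[number];

// Only ever move forward; re-running an early step keeps the later phase.
export function nextPhase(current: Phase, candidate: Phase): Phase {
  return PHASE_ORDER.indexOf(candidate) > PHASE_ORDER.indexOf(current)
    ? candidate
    : current;
}
```

`updateProjectPhase` could call this before writing, so a partial re-run of the pipeline never makes a project look less complete than it is.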
==================================================
🌐 6. API ENDPOINTS
==================================================
Implement or refactor API routes so that:

- `POST /api/projects`
  - Creates a new project.
  - Sets `currentPhase = "collector"`.
- `GET /api/projects/:id`
  - Returns the project and which artifacts exist (`hasInsights`, `hasProductModel`, `hasMvp`, `hasMarketing`).
- `GET /api/projects/:id/knowledge` and `POST /api/projects/:id/knowledge`
  - Manage KnowledgeItems (list and add).
- `POST /api/projects/:id/run-pipeline`
  - Calls `runFullPipeline(projectId, llmClient)`.
  - Returns the updated artifacts + project phase.
- `GET /api/projects/:id/artifacts`
  - Returns `{ productInsights, canonicalProductModel, mvpPlan, marketingModel }`.
You may reuse existing routes where it makes sense; the important thing is that the data model and pipeline above exist and are wired to Gemini via the LlmClient abstraction.
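As a hypothetical sketch of the shape these routes could take, here is a project-creation handler in App Router style, returning a plain `Response`. `makeCreateProjectHandler` and `createProject` are illustrative names, with `createProject` standing in for the real Prisma helper:

```typescript
type NewProjectInput = { name: string; description: string | null; currentPhase: "collector" };
type ProjectRecord = NewProjectInput & { id: string };

// Factory form so the handler can be exercised without a real DB.
export function makeCreateProjectHandler(
  createProject: (data: NewProjectInput) => Promise<ProjectRecord>
) {
  return async function POST(req: Request): Promise<Response> {
    const body = await req.json().catch(() => null);
    if (!body || typeof body.name !== "string" || body.name.trim() === "") {
      return new Response(JSON.stringify({ error: "name is required" }), {
        status: 400,
        headers: { "content-type": "application/json" },
      });
    }
    // New projects always start in the knowledge-collection phase.
    const project = await createProject({
      name: body.name,
      description: typeof body.description === "string" ? body.description : null,
      currentPhase: "collector",
    });
    return new Response(JSON.stringify(project), {
      status: 201,
      headers: { "content-type": "application/json" },
    });
  };
}
```

Injecting the DB helper keeps the route handler unit-testable; in the route file itself this would just be `export const POST = makeCreateProjectHandler(createProject)`.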
==================================================
✅ 7. HOW TO PROCEED
==================================================
1. Scan the repo and see what already exists that overlaps with this design.
2. Propose a mapping (which existing files/types you'll reuse vs replace).
3. Implement the data model (types + DB schema) and the LlmClient abstraction.
4. Implement the four agents and the `runFullPipeline` function.
5. Hook up the API endpoints.
6. Keep changes incremental and explain major decisions as code comments.
Throughout, remember:

- At runtime, the Vibn agent uses Gemini 2.5 Pro via our LlmClient.
- You (Sonnet in Cursor) are here to design and write the code, not to be the runtime model.