vibn-frontend/GEMINI_SETUP.md

# Gemini AI Integration Setup

The Getting Started page uses Google's Gemini AI to provide an intelligent onboarding experience.

## 🔑 Get Your Gemini API Key

1. Go to [Google AI Studio](https://aistudio.google.com/)
2. Click "Get API Key" or "Create API Key"
3. Copy your API key

## 🔧 Add to Environment Variables

### Local Development

Add to your `.env.local` file:

```bash
GEMINI_API_KEY=your_gemini_api_key_here
```
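On the server, the route handler reads this key from `process.env`. A minimal sketch of a fail-fast guard (the `requireGeminiKey` helper is illustrative, not part of the codebase):

```typescript
// Returns the Gemini API key from the given env map, or throws a
// descriptive error so a missing key fails fast at request time.
// In a route handler you would call requireGeminiKey(process.env).
function requireGeminiKey(env: Record<string, string | undefined>): string {
  const key = env.GEMINI_API_KEY;
  if (!key) {
    throw new Error(
      "GEMINI_API_KEY is not set; add it to .env.local or your Vercel environment variables"
    );
  }
  return key;
}
```

Throwing here (rather than returning `undefined`) surfaces a misconfigured deployment immediately instead of producing confusing downstream API errors.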

### Vercel Production

1. Go to your Vercel project dashboard
2. Navigate to **Settings → Environment Variables**
3. Add:
   - **Key:** `GEMINI_API_KEY`
   - **Value:** your Gemini API key
   - **Environment:** Production (and Preview if needed)
4. Redeploy your application

## 🤖 How It Works

### Project Context

When a user opens the Getting Started page, the AI automatically:

1. **Checks the project creation method:**
   - Local workspace path
   - GitHub repository
   - ChatGPT conversation URL
2. **Analyzes existing activity:**
   - Counts coding sessions
   - Reviews recent work
   - Identifies what's been built
3. **Provides personalized guidance:**
   - Acknowledges existing progress
   - Suggests next steps
   - Answers questions about the project
   - Helps break down goals into tasks
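The context-gathering step can be sketched as a pure function. The `Project` field names below are assumptions about the data model; the returned flags mirror the `projectContext` object in the API response:

```typescript
// Hypothetical project record; field names are assumptions.
interface Project {
  workspacePath?: string;
  githubRepo?: string;
  chatgptUrl?: string;
  sessionCount: number;
}

// Derive the flags the AI receives from whichever creation method
// and activity data the project actually has.
function buildProjectContext(project: Project) {
  return {
    sessionCount: project.sessionCount,
    hasWorkspace: Boolean(project.workspacePath),
    hasGithub: Boolean(project.githubRepo),
    hasChatGPT: Boolean(project.chatgptUrl),
  };
}
```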

### System Prompt

The AI is instructed to:

- Welcome users warmly
- Reference their specific project details
- Check for existing sessions and code
- Provide actionable, step-by-step guidance
- Ask clarifying questions
- Help users make progress quickly
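As a rough illustration of how such instructions could be assembled (the actual wording lives in `/app/api/ai/chat/route.ts` and will differ):

```typescript
interface PromptContext {
  name: string;
  sessionCount: number;
}

// Build a system prompt that adapts to whether the project already
// has coding sessions. All wording here is illustrative only.
function buildSystemPrompt(ctx: PromptContext): string {
  const lines = [
    `You are a friendly onboarding assistant for the project "${ctx.name}".`,
    "Welcome the user warmly and reference their specific project details.",
    ctx.sessionCount > 0
      ? `The user already has ${ctx.sessionCount} coding session(s); acknowledge that progress and suggest next steps.`
      : "This is a brand-new project; help the user make progress quickly.",
    "Provide actionable, step-by-step guidance and ask clarifying questions.",
  ];
  return lines.join("\n");
}
```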

### Data Available to AI

The AI has access to:

- Project name and product vision
- Project type (manual, GitHub, ChatGPT, local)
- Workspace path (if a local folder was selected)
- GitHub repository (if connected)
- ChatGPT URL (if provided)
- Session count and recent activity
- Conversation history (during the chat session)

## 📊 API Endpoint

**Endpoint:** `POST /api/ai/chat`

**Request:**

```json
{
  "projectId": "string",
  "message": "string",
  "conversationHistory": [
    { "role": "user|assistant", "content": "string" }
  ]
}
```

**Response:**

```json
{
  "success": true,
  "message": "AI response here",
  "projectContext": {
    "sessionCount": 5,
    "hasWorkspace": true,
    "hasGithub": false,
    "hasChatGPT": false
  }
}
```
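On the client, the response should be validated before use. A sketch of the shapes above as TypeScript types plus a runtime type guard (the type and function names are illustrative, not from the codebase):

```typescript
interface ChatResponse {
  success: boolean;
  message: string;
  projectContext: {
    sessionCount: number;
    hasWorkspace: boolean;
    hasGithub: boolean;
    hasChatGPT: boolean;
  };
}

// Narrow an unknown JSON payload to ChatResponse before using it,
// so a malformed server response fails loudly instead of silently.
function isChatResponse(data: unknown): data is ChatResponse {
  const d = data as ChatResponse;
  return (
    typeof d === "object" && d !== null &&
    typeof d.success === "boolean" &&
    typeof d.message === "string" &&
    typeof d.projectContext === "object" && d.projectContext !== null &&
    typeof d.projectContext.sessionCount === "number"
  );
}
```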

## 🔒 Security

- API key is server-side only (never exposed to the client)
- User authentication required (Firebase ID token)
- Project ownership verified
- Rate limiting recommended (not yet implemented)
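Authentication means each request must carry a Firebase ID token. A sketch of pulling it out of the `Authorization` header (actual verification would then use `verifyIdToken` from `firebase-admin` on the server):

```typescript
// Extract the token from an "Authorization: Bearer <token>" header.
// Returns null for a missing or malformed header so the caller can
// respond with 401 before doing any work.
function extractBearerToken(authHeader: string | null): string | null {
  if (!authHeader) return null;
  const match = authHeader.match(/^Bearer\s+(\S+)$/);
  return match ? match[1] : null;
}
```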

## 💡 Tips

### Prompt Engineering

The system prompt can be modified in `/app/api/ai/chat/route.ts` to:

- Change the AI's personality
- Add specific instructions
- Include additional context
- Customize responses for different project types

### Fallback Behavior

If the Gemini API call fails, the endpoint:

- Shows a friendly default message
- Allows the user to continue chatting
- Logs errors for debugging
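A minimal sketch of that fallback pattern (the message text and function names are placeholders, not the actual strings in the app):

```typescript
const FALLBACK_MESSAGE =
  "I'm having trouble reaching the AI right now. Feel free to keep chatting and try again in a moment.";

// Wrap a model call so any failure degrades to a friendly default
// instead of breaking the chat UI; the error is logged for debugging.
async function chatWithFallback(call: () => Promise<string>): Promise<string> {
  try {
    return await call();
  } catch (err) {
    console.error("Gemini call failed:", err);
    return FALLBACK_MESSAGE;
  }
}
```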

### Cost Management

Gemini Pro is currently free within rate limits:

- 60 requests per minute
- 1,500 requests per day

For production, consider:

- Implementing rate limiting per user
- Caching common responses
- Using Gemini 1.5 Pro for longer context
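Per-user rate limiting could be as simple as a fixed-window counter. A sketch (in-memory, so it resets on redeploy and is not shared across serverless instances; a store like Redis would be needed for that):

```typescript
// Minimal fixed-window rate limiter keyed by user ID.
class RateLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed; `now` is injectable for tests.
  allow(userId: string, now: number = Date.now()): boolean {
    const w = this.windows.get(userId);
    if (!w || now - w.start >= this.windowMs) {
      // New user or the previous window expired: start a fresh window.
      this.windows.set(userId, { start: now, count: 1 });
      return true;
    }
    if (w.count >= this.limit) return false;
    w.count += 1;
    return true;
  }
}
```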

## 🧪 Testing

1. Start the local dev server: `npm run dev`
2. Navigate to any project's Getting Started page
3. The AI should automatically greet you with context about your project
4. Try asking questions like:
   - "What should I build first?"
   - "Help me understand my existing sessions"
   - "What's the best way to organize my code?"

## 📝 Customization

To customize the AI's behavior, edit the `systemPrompt` in `/app/api/ai/chat/route.ts`.

You can:

- Add more project context
- Change the tone and style
- Include specific frameworks or tools
- Add code examples and templates
- Integrate with other APIs or databases

Questions? Check the [Gemini API documentation](https://ai.google.dev/gemini-api/docs).