Gemini AI Integration Setup
The Getting Started page uses Google's Gemini AI to provide an intelligent onboarding experience.
🔑 Get Your Gemini API Key
- Go to Google AI Studio
- Click "Get API Key" or "Create API Key"
- Copy your API key
🔧 Add to Environment Variables
Local Development
Add to your .env.local file:
GEMINI_API_KEY=your_gemini_api_key_here
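Since the key must never reach the client, a small server-side helper can fail fast when it is missing. This is a sketch, not code from the repo; the helper name is illustrative:

```typescript
// Hypothetical helper: read the Gemini key server-side and fail fast if unset.
function getGeminiKey(
  env: Record<string, string | undefined> = process.env,
): string {
  const key = env.GEMINI_API_KEY;
  if (!key) {
    throw new Error("GEMINI_API_KEY is not set; add it to .env.local");
  }
  return key;
}
```

Calling this at the top of the route handler surfaces a misconfigured environment immediately instead of failing mid-request.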
Vercel Production
- Go to your Vercel project dashboard
- Navigate to Settings → Environment Variables
- Add a new variable:
  - Key: GEMINI_API_KEY
  - Value: your Gemini API key
  - Environment: Production (and Preview if needed)
- Redeploy your application
🤖 How It Works
Project Context
When a user opens the Getting Started page, the AI automatically:
- Checks project creation method:
- Local workspace path
- GitHub repository
- ChatGPT conversation URL
- Analyzes existing activity:
- Counts coding sessions
- Reviews recent work
- Identifies what's been built
- Provides personalized guidance:
- Acknowledges existing progress
- Suggests next steps
- Answers questions about the project
- Helps break down goals into tasks
System Prompt
The AI is instructed to:
- Welcome users warmly
- Reference their specific project details
- Check for existing sessions and code
- Provide actionable, step-by-step guidance
- Ask clarifying questions
- Help users make progress quickly
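The instructions above can be assembled into a prompt from the project context. The sketch below is illustrative; the field names and wording are assumptions, not the actual route.ts implementation:

```typescript
// Illustrative system-prompt builder; field names are assumptions.
interface PromptContext {
  projectName: string;
  sessionCount: number;
  hasWorkspace: boolean;
}

function buildSystemPrompt(ctx: PromptContext): string {
  const lines = [
    `You are a friendly onboarding assistant for the project "${ctx.projectName}".`,
    "Welcome the user warmly and reference their specific project details.",
    ctx.sessionCount > 0
      ? `The user already has ${ctx.sessionCount} coding session(s); acknowledge that progress.`
      : "The user has no sessions yet; help them take a first step quickly.",
    ctx.hasWorkspace
      ? "A local workspace is connected; you may refer to its files."
      : "No workspace is connected yet.",
    "Give actionable, step-by-step guidance and ask clarifying questions.",
  ];
  return lines.join("\n");
}
```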
Data Available to AI
The AI has access to:
- Project name and product vision
- Project type (manual, GitHub, ChatGPT, local)
- Workspace path (if local folder was selected)
- GitHub repository (if connected)
- ChatGPT URL (if provided)
- Session count and recent activity
- Conversation history (during the chat session)
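The context fields above might be typed roughly as follows; the interface and helper are a sketch, assuming names that are not taken from the codebase:

```typescript
// Assumed shape of the project context passed to the AI (names illustrative).
interface ProjectAIContext {
  projectName: string;
  productVision: string;
  projectType: "manual" | "github" | "chatgpt" | "local";
  workspacePath?: string; // set if a local folder was selected
  githubRepo?: string;    // set if a repository is connected
  chatgptUrl?: string;    // set if a conversation URL was provided
  sessionCount: number;
  recentActivity: string[];
}

// Helper that lists which optional sources are actually connected.
function connectedSources(ctx: ProjectAIContext): string[] {
  const sources: string[] = [];
  if (ctx.workspacePath) sources.push("local workspace");
  if (ctx.githubRepo) sources.push("GitHub");
  if (ctx.chatgptUrl) sources.push("ChatGPT");
  return sources;
}
```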
📊 API Endpoint
Endpoint: POST /api/ai/chat
Request:
```json
{
  "projectId": "string",
  "message": "string",
  "conversationHistory": [
    { "role": "user|assistant", "content": "string" }
  ]
}
```
Response:
```json
{
  "success": true,
  "message": "AI response here",
  "projectContext": {
    "sessionCount": 5,
    "hasWorkspace": true,
    "hasGithub": false,
    "hasChatGPT": false
  }
}
```
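A client-side caller would serialize a body matching the request schema above. The helper below is a hypothetical sketch of that step (the endpoint itself also expects a Firebase ID token, which is omitted here):

```typescript
// Hypothetical builder for the POST /api/ai/chat request body.
type ChatRole = "user" | "assistant";

interface ChatTurn {
  role: ChatRole;
  content: string;
}

function buildChatRequest(
  projectId: string,
  message: string,
  conversationHistory: ChatTurn[] = [],
): string {
  if (!projectId) throw new Error("projectId is required");
  if (!message) throw new Error("message is required");
  return JSON.stringify({ projectId, message, conversationHistory });
}
```

The resulting string can be passed as the `body` of a `fetch` call with a `Content-Type: application/json` header.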
🔒 Security
- API key is server-side only (never exposed to client)
- User authentication required (Firebase ID token)
- Project ownership verified
- Rate limiting recommended (not yet implemented)
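Since rate limiting is not yet implemented, a minimal per-user fixed-window limiter could look like this. It is a sketch only: in-memory state resets on redeploy and is not shared across serverless instances, so production would want a shared store instead. The limit value is illustrative:

```typescript
// Minimal in-memory fixed-window rate limiter (sketch, not production code).
const WINDOW_MS = 60_000; // one-minute window
const MAX_REQUESTS = 10;  // per user per window (illustrative limit)

const windows = new Map<string, { start: number; count: number }>();

function allowRequest(userId: string, now: number = Date.now()): boolean {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    // New user or expired window: start a fresh window.
    windows.set(userId, { start: now, count: 1 });
    return true;
  }
  if (w.count >= MAX_REQUESTS) return false; // over the limit
  w.count += 1;
  return true;
}
```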
💡 Tips
Prompt Engineering
The system prompt can be modified in /app/api/ai/chat/route.ts to:
- Change the AI's personality
- Add specific instructions
- Include additional context
- Customize responses for different project types
Fallback Behavior
If Gemini API fails:
- Shows a friendly default message
- Allows user to continue chatting
- Logs errors for debugging
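The fallback pattern described above amounts to wrapping the model call in a try/catch. A minimal sketch, assuming a generic `callModel` function rather than the actual Gemini client:

```typescript
// Sketch of the fallback: on failure, log the error and return a friendly
// default so the chat UI keeps working.
const FALLBACK_MESSAGE =
  "Sorry, I could not reach the AI service just now. Please try again.";

async function chatWithFallback(
  callModel: () => Promise<string>,
  log: (err: unknown) => void = console.error,
): Promise<string> {
  try {
    return await callModel();
  } catch (err) {
    log(err); // keep the error for debugging
    return FALLBACK_MESSAGE;
  }
}
```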
Cost Management
Gemini Pro is currently free with rate limits:
- 60 requests per minute
- 1,500 requests per day
For production, consider:
- Implementing rate limiting per user
- Caching common responses
- Using Gemini 1.5 Pro for longer context
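Caching common responses could be as simple as an in-memory map keyed on a normalized message. This is an illustrative sketch only; a real deployment would need eviction and a shared store:

```typescript
// Illustrative in-memory response cache keyed on the normalized message.
const responseCache = new Map<string, string>();

function cacheKey(message: string): string {
  return message.trim().toLowerCase();
}

function getCached(message: string): string | undefined {
  return responseCache.get(cacheKey(message));
}

function setCached(message: string, response: string): void {
  responseCache.set(cacheKey(message), response);
}
```

Checking the cache before calling Gemini saves quota on repeated questions like "What should I build first?".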
🧪 Testing
- Start the local dev server: npm run dev
- Navigate to any project's Getting Started page
- The AI should automatically greet you with context about your project
- Try asking questions like:
- "What should I build first?"
- "Help me understand my existing sessions"
- "What's the best way to organize my code?"
📝 Customization
To customize the AI behavior, edit the systemPrompt in:
/app/api/ai/chat/route.ts
You can:
- Add more project context
- Change the tone and style
- Include specific frameworks or tools
- Add code examples and templates
- Integrate with other APIs or databases
Questions? Check the Gemini API documentation