feat: persistent conversation + save_memory tool

- ToolContext gets memoryUpdates[] — accumulated by save_memory calls
- orchestratorChat accepts preloadedHistory and knowledgeContext opts
- History trimmed to last 40 messages per turn (cost control)
- Knowledge items injected into system prompt as ## Project Memory
- ChatResult returns history[] and memoryUpdates[] for frontend persistence
- server.ts accepts history/knowledge_context from POST body
- save_memory tool: lets AI persist facts (key, type, value) to long-term memory
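
The frontend side of this round trip is not in this commit; a minimal illustrative sketch (all names here are hypothetical, not the actual frontend code) of how a client would mirror the 40-message trim and merge returned `memoryUpdates` by key:

```typescript
// Hypothetical client-side helpers for persisting ChatResult — illustrative only.
interface ChatMessage { role: string; content: string | null }
interface MemoryUpdate { key: string; type: string; value: string }

// Mirror the server's cost-control rule: keep only the last 40 messages.
function trimHistory(history: ChatMessage[], max = 40): ChatMessage[] {
  return history.slice(-max);
}

// Merge memory updates by key so a re-saved fact overwrites the older value.
function mergeMemory(existing: MemoryUpdate[], updates: MemoryUpdate[]): MemoryUpdate[] {
  const byKey = new Map(existing.map(m => [m.key, m] as const));
  for (const u of updates) byKey.set(u.key, u);
  return [...byKey.values()];
}
```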

Made-with: Cursor
2026-02-27 18:55:33 -08:00
parent 5cb1e82169
commit 837b6e8b8d
3 changed files with 276 additions and 121 deletions

File: orchestrator.ts

@@ -1,15 +1,15 @@
import { createLLM, toOAITools, LLMMessage } from './llm';
import { ALL_TOOLS, executeTool, ToolContext } from './tools';

const MAX_TURNS = 20;

// ---------------------------------------------------------------------------
// Session store — one conversation history per session_id
// ---------------------------------------------------------------------------
interface Session {
  id: string;
  history: LLMMessage[]; // OpenAI message format
  createdAt: string;
  lastActiveAt: string;
}
@@ -44,166 +44,180 @@ export function clearSession(sessionId: string) {
}

// ---------------------------------------------------------------------------
// Orchestrator system prompt
// ---------------------------------------------------------------------------
const SYSTEM_PROMPT = `You are the Master Orchestrator for Vibn — an AI-powered cloud development platform.
You run continuously and have full awareness of the Vibn project. You can take autonomous action on behalf of the user.

## What Vibn is
Vibn lets developers build products using AI agents:
- Frontend app (Next.js) at vibnai.com
- Backend API at api.vibnai.com
- Agent runner (this system) at agents.vibnai.com
- Cloud IDE (Theia) at theia.vibnai.com
- Self-hosted Git at git.vibnai.com (user: mark)
- Deployments via Coolify at coolify.vibnai.com (server: 34.19.250.135, Montreal)

## Your tools

**Awareness** (understand current state first):
- list_repos — all Git repositories
- list_all_issues — open/in-progress work
- list_all_apps — deployed apps and their status
- get_app_status — health of a specific app
- read_repo_file — read any file from any repo without cloning

**Action** (get things done):
- spawn_agent — dispatch Coder, PM, or Marketing agent on a repo
- get_job_status — check a running agent job
- deploy_app — trigger a Coolify deployment
- gitea_create_issue — track work (label agent:coder/pm/marketing to auto-trigger)
- gitea_list_issues / gitea_close_issue — issue lifecycle

## Specialist agents you can spawn
- **Coder** — writes code, tests, commits, and pushes
- **PM** — docs, issues, sprint tracking
- **Marketing** — copy, release notes, blog posts

## How you work
1. Use awareness tools first if you need current state.
2. Break the task into concrete steps.
3. Spawn the right agent(s) with specific, detailed instructions.
4. Track and report on results.
5. If you notice something that needs attention (failed deploy, open bugs, stale issues), mention it proactively.

## Style
- Direct. No filler.
- Honest about uncertainty.
- When spawning agents, be specific — give them full context, not vague instructions.
- Keep responses concise unless the user needs detail.

## Security
- Never spawn agents on: mark/vibn-frontend, mark/theia-code-os, mark/vibn-agent-runner, mark/vibn-api, mark/master-ai
- Those are protected platform repos — read-only for you, not writable by agents.`;
// ---------------------------------------------------------------------------
// Chat types
// ---------------------------------------------------------------------------

export interface ChatMessage {
  role: 'user' | 'assistant';
  content: string;
}

export interface ChatResult {
  reply: string;
  reasoning: string | null;
  sessionId: string;
  turns: number;
  toolCalls: string[];
  model: string;
  /** Trailing window of the conversation (last 40 messages) for frontend persistence */
  history: LLMMessage[];
  /** Facts saved via the save_memory tool during this exchange */
  memoryUpdates: import('./tools').MemoryUpdate[];
}
// ---------------------------------------------------------------------------
// Main orchestrator chat — uses GLM-5 (Tier B) by default
// ---------------------------------------------------------------------------
export async function orchestratorChat(
  sessionId: string,
  userMessage: string,
  ctx: ToolContext,
  opts?: {
    /** Pre-load history from DB — replaces in-memory session history */
    preloadedHistory?: LLMMessage[];
    /** Knowledge items to inject as context at start of conversation */
    knowledgeContext?: string;
  }
): Promise<ChatResult> {
  const modelId = process.env.ORCHESTRATOR_MODEL ?? 'B'; // Tier B = GLM-5
  const llm = createLLM(modelId, { temperature: 0.3 });

  const session = getOrCreateSession(sessionId);

  // Seed session from DB history if provided and session is fresh
  if (opts?.preloadedHistory && opts.preloadedHistory.length > 0 && session.history.length === 0) {
    session.history = [...opts.preloadedHistory];
  }

  const oaiTools = toOAITools(ALL_TOOLS);

  // Append user message
  session.history.push({ role: 'user', content: userMessage });

  let turn = 0;
  let finalReply = '';
  let finalReasoning: string | null = null;
  const toolCallNames: string[] = [];

  // Inject knowledge items into the system prompt as "## Project Memory"
  const systemPrompt = opts?.knowledgeContext
    ? `${SYSTEM_PROMPT}\n\n## Project Memory\n${opts.knowledgeContext}`
    : SYSTEM_PROMPT;

  // Build messages with system prompt prepended
  const buildMessages = (): LLMMessage[] => [
    { role: 'system', content: systemPrompt },
    ...session.history
  ];
  while (turn < MAX_TURNS) {
    turn++;

    const response = await llm.chat(buildMessages(), oaiTools, 4096);

    // If GLM-5 is still reasoning (content null, finish_reason length), give it more tokens
    if (response.content === null && response.tool_calls.length === 0 && response.finish_reason === 'length') {
      // Retry with more tokens — model hit max_tokens during reasoning
      const retry = await llm.chat(buildMessages(), oaiTools, 8192);
      Object.assign(response, retry);
    }

    // Record reasoning for the final turn (informational, not stored in history)
    if (response.reasoning) finalReasoning = response.reasoning;

    // Build assistant message to add to history
    const assistantMsg: LLMMessage = {
      role: 'assistant',
      content: response.content,
      tool_calls: response.tool_calls.length > 0 ? response.tool_calls : undefined
    };
    session.history.push(assistantMsg);

    // No tool calls — we have the final answer
    if (response.tool_calls.length === 0) {
      finalReply = response.content ?? '';
      break;
    }

    // Execute each tool call and collect results
    for (const tc of response.tool_calls) {
      const fnName = tc.function.name;
      let fnArgs: Record<string, unknown> = {};
      try { fnArgs = JSON.parse(tc.function.arguments || '{}'); } catch { /* bad JSON */ }

      toolCallNames.push(fnName);

      let result: unknown;
      try {
        result = await executeTool(fnName, fnArgs, ctx);
      } catch (err) {
        result = { error: err instanceof Error ? err.message : String(err) };
      }

      // Add tool result to history
      session.history.push({
        role: 'tool',
        tool_call_id: tc.id,
        name: fnName,
        content: typeof result === 'string' ? result : JSON.stringify(result)
      });
    }
  }

  if (turn >= MAX_TURNS && !finalReply) {
    finalReply = 'Hit the turn limit. Try a more specific request.';
  }

  return {
    reply: finalReply,
    reasoning: finalReasoning,
    sessionId,
    turns: turn,
    toolCalls: toolCallNames,
    model: llm.modelId,
    history: session.history.slice(-40),
    memoryUpdates: ctx.memoryUpdates
  };
}
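The turn loop above can be exercised without a real model. A condensed, self-contained sketch with a scripted fake model (everything here is illustrative, not the actual `llm.chat` API) shows the mechanics: tool calls are recorded and would be fed back as `role: 'tool'` messages until the model answers in plain text:

```typescript
// Illustrative stand-ins for the real llm.chat / executeTool (names local to this sketch).
interface FakeToolCall { id: string; function: { name: string; arguments: string } }
interface FakeResponse { content: string | null; tool_calls: FakeToolCall[] }

// Scripted model turns: first asks for a tool, then answers.
const scripted: FakeResponse[] = [
  { content: null, tool_calls: [{ id: 'call_1', function: { name: 'list_repos', arguments: '{}' } }] },
  { content: 'You have 1 repo: mark/demo.', tool_calls: [] }
];

function fakeChat(): FakeResponse { return scripted.shift()!; }

function runLoop(maxTurns = 5): { reply: string; turns: number; toolCalls: string[] } {
  const toolCalls: string[] = [];
  let reply = '';
  let turn = 0;
  while (turn < maxTurns) {
    turn++;
    const res = fakeChat();
    if (res.tool_calls.length === 0) { reply = res.content ?? ''; break; }
    for (const tc of res.tool_calls) {
      toolCalls.push(tc.function.name);
      // In the real loop, the tool result is pushed back here as a role:'tool' message.
    }
  }
  return { reply, turns: turn, toolCalls };
}
```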

File: server.ts

@@ -10,6 +10,15 @@ import { AGENTS } from './agents';
import { ToolContext } from './tools';
import { orchestratorChat, listSessions, clearSession } from './orchestrator';
// Protected Vibn platform repos — agents cannot clone or work in these workspaces
const PROTECTED_GITEA_REPOS = new Set([
  'mark/vibn-frontend',
  'mark/theia-code-os',
  'mark/vibn-agent-runner',
  'mark/vibn-api',
  'mark/master-ai',
]);
const app = express();
app.use(cors());

@@ -33,6 +42,12 @@ function ensureWorkspace(repo?: string): string {
    fs.mkdirSync(dir, { recursive: true });
    return dir;
  }
  if (PROTECTED_GITEA_REPOS.has(repo)) {
    throw new Error(
      `SECURITY: Repo "${repo}" is a protected Vibn platform repo. ` +
      `Agents cannot clone or work in this workspace.`
    );
  }
  const dir = path.join(base, repo.replace('/', '_'));
const gitea = {
  apiUrl: process.env.GITEA_API_URL || '',

@@ -67,7 +82,8 @@ function buildContext(repo?: string): ToolContext {
    coolify: {
      apiUrl: process.env.COOLIFY_API_URL || '',
      apiToken: process.env.COOLIFY_API_TOKEN || ''
    },
    memoryUpdates: []
  };
}

File: tools.ts

@@ -6,10 +6,65 @@ import { Minimatch } from 'minimatch';
const execAsync = util.promisify(cp.exec);
// =============================================================================
// SECURITY GUARDRAILS — Protected VIBN Platform Resources
//
// These repos and Coolify resources belong to the Vibn platform itself.
// Agents must never be allowed to push code or trigger deployments here.
// Read-only operations (list, read file, get status) are still permitted
// so agents can observe the platform state, but all mutations are blocked.
// =============================================================================
/** Gitea repos that agents can NEVER push to, commit to, or write issues on. */
const PROTECTED_GITEA_REPOS = new Set([
  'mark/vibn-frontend',
  'mark/theia-code-os',
  'mark/vibn-agent-runner',
  'mark/vibn-api',
  'mark/master-ai',
]);
/** Coolify project UUID for the VIBN platform — agents cannot deploy here. */
const PROTECTED_COOLIFY_PROJECT = 'f4owwggokksgw0ogo0844os0';
/**
 * Specific Coolify app UUIDs that must never be deployed by an agent.
 * This is a belt-and-suspenders check in case the project UUID filter is bypassed.
 */
const PROTECTED_COOLIFY_APPS = new Set([
  'y4cscsc8s08c8808go0448s0', // vibn-frontend
  'kggs4ogckc0w8ggwkkk88kck', // vibn-postgres
  'o4wwck0g0c04wgoo4g4s0004', // gitea
]);
function assertGiteaWritable(repo: string): void {
  if (PROTECTED_GITEA_REPOS.has(repo)) {
    throw new Error(
      `SECURITY: Repo "${repo}" is a protected Vibn platform repo. ` +
      `Agents cannot push code or modify issues in this repository.`
    );
  }
}

function assertCoolifyDeployable(appUuid: string): void {
  if (PROTECTED_COOLIFY_APPS.has(appUuid)) {
    throw new Error(
      `SECURITY: App "${appUuid}" is a protected Vibn platform application. ` +
      `Agents cannot trigger deployments for this application.`
    );
  }
}
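These guards fail closed: any repo or app in the protected sets throws, everything else passes through. A condensed standalone copy (set contents abbreviated, names local to this sketch) makes the behavior easy to check in isolation:

```typescript
// Condensed standalone copy of the repo guard, for illustration only.
const PROTECTED = new Set(['mark/vibn-api', 'mark/vibn-frontend']);

function assertWritable(repo: string): void {
  if (PROTECTED.has(repo)) {
    throw new Error(`SECURITY: Repo "${repo}" is protected.`);
  }
}

// Helper so callers can branch instead of catching.
function isWritable(repo: string): boolean {
  try { assertWritable(repo); return true; } catch { return false; }
}
```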
// ---------------------------------------------------------------------------
// Context passed to every tool call — workspace root + credentials
// ---------------------------------------------------------------------------

export interface MemoryUpdate {
  key: string;
  type: string; // e.g. "tech_stack" | "decision" | "feature" | "goal" | "constraint" | "note"
  value: string;
}

export interface ToolContext {
  workspaceRoot: string;
  gitea: {

@@ -21,6 +76,8 @@ export interface ToolContext {
    apiUrl: string;
    apiToken: string;
  };
  /** Accumulated memory updates from save_memory tool calls in this turn */
  memoryUpdates: MemoryUpdate[];
}
// ---------------------------------------------------------------------------

@@ -289,6 +346,23 @@ export const ALL_TOOLS: ToolDefinition[] = [
      },
      required: ['app_name']
    }
  },
  {
    name: 'save_memory',
    description: 'Persist an important fact about this project to long-term memory. Use this to save decisions, tech stack choices, feature descriptions, constraints, or goals so they are remembered across conversations.',
    parameters: {
      type: 'object',
      properties: {
        key: { type: 'string', description: 'Short unique label (e.g. "primary_language", "auth_strategy", "deploy_target")' },
        type: {
          type: 'string',
          enum: ['tech_stack', 'decision', 'feature', 'goal', 'constraint', 'note'],
          description: 'Category of the memory item'
        },
        value: { type: 'string', description: 'The fact to remember (1-3 sentences)' }
      },
      required: ['key', 'type', 'value']
    }
  }
];
@@ -447,6 +521,19 @@ async function gitCommitAndPush(message: string, ctx: ToolContext): Promise<unkn
  const { apiUrl, apiToken, username } = ctx.gitea;

  try {
    // Check the remote URL before committing — block pushes to protected repos
    let remoteCheck = '';
    try { remoteCheck = (await execAsync('git remote get-url origin', { cwd })).stdout.trim(); } catch { /* ok */ }
    for (const protectedRepo of PROTECTED_GITEA_REPOS) {
      const repoPath = protectedRepo.replace('mark/', '');
      if (remoteCheck.includes(`/${repoPath}`) || remoteCheck.includes(`/${repoPath}.git`)) {
        return {
          error: `SECURITY: This workspace is linked to a protected Vibn platform repo (${protectedRepo}). ` +
            `Agents cannot push to platform repos. Only user project repos are writable.`
        };
      }
    }

    await execAsync('git add -A', { cwd });
    await execAsync(`git commit -m "${message.replace(/"/g, '\\"')}"`, { cwd });
@@ -493,7 +580,10 @@ async function coolifyFetch(path: string, ctx: ToolContext, method = 'GET', body
}

async function coolifyListProjects(ctx: ToolContext): Promise<unknown> {
  const projects = await coolifyFetch('/projects', ctx) as any[];
  if (!Array.isArray(projects)) return projects;
  // Filter out the protected VIBN project entirely — agents don't need to see it
  return projects.filter((p: any) => p.uuid !== PROTECTED_COOLIFY_PROJECT);
}
async function coolifyListApplications(projectUuid: string, ctx: ToolContext): Promise<unknown> {
@@ -503,6 +593,15 @@ async function coolifyListApplications(projectUuid: string, ctx: ToolContext): P
}

async function coolifyDeploy(appUuid: string, ctx: ToolContext): Promise<unknown> {
  assertCoolifyDeployable(appUuid);
  // Also check the app belongs to the right project
  const apps = await coolifyFetch('/applications', ctx) as any[];
  if (Array.isArray(apps)) {
    const app = apps.find((a: any) => a.uuid === appUuid);
    if (app?.project_uuid === PROTECTED_COOLIFY_PROJECT) {
      return { error: `SECURITY: App "${appUuid}" belongs to the protected Vibn project. Agents cannot deploy platform apps.` };
    }
  }
  return coolifyFetch(`/applications/${appUuid}/deploy`, ctx, 'POST');
}
@@ -529,6 +628,7 @@ async function giteaFetch(path: string, ctx: ToolContext, method = 'GET', body?:
}

async function giteaCreateIssue(repo: string, title: string, body: string, labels: string[] | undefined, ctx: ToolContext): Promise<unknown> {
  assertGiteaWritable(repo);
  return giteaFetch(`/repos/${repo}/issues`, ctx, 'POST', { title, body, labels });
}

@@ -537,6 +637,7 @@ async function giteaListIssues(repo: string, state: string, ctx: ToolContext): P
}

async function giteaCloseIssue(repo: string, issueNumber: number, ctx: ToolContext): Promise<unknown> {
  assertGiteaWritable(repo);
  return giteaFetch(`/repos/${repo}/issues/${issueNumber}`, ctx, 'PATCH', { state: 'closed' });
}
@@ -569,7 +670,10 @@ async function listRepos(ctx: ToolContext): Promise<unknown> {
    headers: { 'Authorization': `token ${ctx.gitea.apiToken}` }
  });
  const data = await res.json() as any;
  return (data.data || [])
    // Hide protected platform repos from agent's view entirely
    .filter((r: any) => !PROTECTED_GITEA_REPOS.has(r.full_name))
    .map((r: any) => ({
      name: r.full_name,
      description: r.description,
      default_branch: r.default_branch,
@@ -581,9 +685,12 @@ async function listRepos(ctx: ToolContext): Promise<unknown> {
async function listAllIssues(repo: string | undefined, state: string, ctx: ToolContext): Promise<unknown> {
  if (repo) {
    if (PROTECTED_GITEA_REPOS.has(repo)) {
      return { error: `SECURITY: "${repo}" is a protected Vibn platform repo. Agents cannot access its issues.` };
    }
    return giteaFetch(`/repos/${repo}/issues?state=${state}&limit=20`, ctx);
  }

  // Fetch across all non-protected repos
  const repos = await listRepos(ctx) as any[];
  const allIssues: unknown[] = [];
  for (const r of repos.slice(0, 10)) {
@@ -605,7 +712,10 @@ async function listAllIssues(repo: string | undefined, state: string, ctx: ToolC
async function listAllApps(ctx: ToolContext): Promise<unknown> {
  const apps = await coolifyFetch('/applications', ctx) as any[];
  if (!Array.isArray(apps)) return apps;
  return apps
    // Filter out apps that belong to the protected VIBN project
    .filter((a: any) => a.project_uuid !== PROTECTED_COOLIFY_PROJECT && !PROTECTED_COOLIFY_APPS.has(a.uuid))
    .map((a: any) => ({
      uuid: a.uuid,
      name: a.name,
      fqdn: a.fqdn,
@@ -622,6 +732,9 @@ async function getAppStatus(appName: string, ctx: ToolContext): Promise<unknown>
    a.name?.toLowerCase() === appName.toLowerCase() || a.uuid === appName
  );
  if (!app) return { error: `App "${appName}" not found` };
  if (PROTECTED_COOLIFY_APPS.has(app.uuid) || app.project_uuid === PROTECTED_COOLIFY_PROJECT) {
    return { error: `SECURITY: "${appName}" is a protected Vibn platform app. Status is not exposed to agents.` };
  }
  const logs = await coolifyFetch(`/applications/${app.uuid}/logs?limit=20`, ctx);
  return { name: app.name, uuid: app.uuid, status: app.status, fqdn: app.fqdn, logs };
}
@@ -659,6 +772,11 @@ async function getJobStatus(jobId: string): Promise<unknown> {
  }
}

function saveMemory(key: string, type: string, value: string, ctx: ToolContext): unknown {
  ctx.memoryUpdates.push({ key, type, value });
  return { saved: true, key, type };
}
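Because `saveMemory` only appends to `ctx.memoryUpdates`, several `save_memory` calls in one turn accumulate and are handed back to the caller for persistence. A minimal standalone sketch of that accumulation pattern (types redeclared locally so it runs on its own):

```typescript
// Standalone sketch mirroring the saveMemory accumulation pattern above.
interface MemoryUpdate { key: string; type: string; value: string }
interface Ctx { memoryUpdates: MemoryUpdate[] }

function saveMemorySketch(key: string, type: string, value: string, ctx: Ctx): unknown {
  ctx.memoryUpdates.push({ key, type, value });
  return { saved: true, key, type };
}

// Two calls within one turn accumulate in order.
const ctx: Ctx = { memoryUpdates: [] };
saveMemorySketch('primary_language', 'tech_stack', 'TypeScript on Node.js', ctx);
saveMemorySketch('deploy_target', 'decision', 'Coolify-managed deployment', ctx);
```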
async function deployApp(appName: string, ctx: ToolContext): Promise<unknown> {
  const apps = await coolifyFetch('/applications', ctx) as any[];
  if (!Array.isArray(apps)) return apps;

@@ -666,6 +784,13 @@ async function deployApp(appName: string, ctx: ToolContext): Promise<unknown> {
  const app = apps.find((a: any) =>
    a.name?.toLowerCase() === appName.toLowerCase() || a.uuid === appName
  );
  if (!app) return { error: `App "${appName}" not found` };
  // Block deployment to protected VIBN platform apps
  if (PROTECTED_COOLIFY_APPS.has(app.uuid) || app.project_uuid === PROTECTED_COOLIFY_PROJECT) {
    return {
      error: `SECURITY: "${appName}" is a protected Vibn platform application. ` +
        `Agents can only deploy user project apps, not platform infrastructure.`
    };
  }
  const result = await fetch(`${ctx.coolify.apiUrl}/api/v1/deploy?uuid=${app.uuid}&force=false`, {
    headers: { 'Authorization': `Bearer ${ctx.coolify.apiToken}` }
  });