refactor: implement three-layer agent architecture (agents / prompts / skills)

Layer 1 — src/agents/ (thin agent definitions, no prompt text)
  registry.ts   — AgentConfig, registerAgent(), getAgent(), AGENTS proxy, pick() (sketched below)
  orchestrator.ts, coder.ts, pm.ts, marketing.ts — one file each, just metadata + tool picks
  index.ts      — barrel: imports prompts then agents (correct registration order)
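  A minimal sketch of the registry — the names (AgentConfig, registerAgent, getAgent,
  AGENTS, pick) are from this commit, but the exact fields and signatures are assumptions:

    export interface AgentConfig {
      name: string;
      promptId: string;   // key into the Layer 2 prompt registry
      tools: string[];    // names of the tools this agent may call
    }

    const registry = new Map<string, AgentConfig>();

    export function registerAgent(config: AgentConfig): void {
      registry.set(config.name, config);
    }

    export function getAgent(name: string): AgentConfig {
      const config = registry.get(name);
      if (!config) throw new Error(`Unknown agent: ${name}`);
      return config;
    }

    // AGENTS.coder etc. resolve lazily, so lookups succeed as long as the
    // barrel has registered prompts and agents first.
    export const AGENTS = new Proxy({} as Record<string, AgentConfig>, {
      get: (_target, name) => getAgent(String(name)),
    });

    // pick() selects one agent's subset of the shared tool map.
    export function pick<T>(all: Record<string, T>, names: string[]): T[] {
      return names.map((n) => all[n]);
    }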

Layer 2 — src/prompts/ (prompt text separated from agent logic)
  loader.ts     — registerPrompt(), resolvePrompt() with {{variable}} substitution (sketched below)
  orchestrator.ts, coder.ts, pm.ts, marketing.ts — prompt templates as registered strings
  orchestrator.ts now uses resolvePrompt('orchestrator', { knowledge }) instead of an
  inline SYSTEM_PROMPT const; the {{knowledge}} variable injects project memory cleanly.
  agent-runner.ts now calls resolvePrompt(config.promptId) on each agent turn.
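  A sketch of the loader, assuming templates are plain strings keyed by id and that
  unknown {{variables}} resolve to the empty string:

    const prompts = new Map<string, string>();

    export function registerPrompt(id: string, template: string): void {
      prompts.set(id, template);
    }

    export function resolvePrompt(id: string, vars: Record<string, string> = {}): string {
      const template = prompts.get(id);
      if (!template) throw new Error(`Unknown prompt: ${id}`);
      // Substitute each {{name}} placeholder with its value.
      return template.replace(/\{\{(\w+)\}\}/g, (_m, name) => vars[name] ?? '');
    }

  With that shape, the orchestrator call is resolvePrompt('orchestrator', { knowledge })
  and the per-turn agent call is resolvePrompt(config.promptId) with no variables.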

Layer 3 — src/tools/skills.ts (new skills capability)
  list_skills(repo)      — lists the .skills/<name>/ directories (each holding a SKILL.md)
                           in a Gitea repo
  get_skill(repo, name)  — reads and returns the markdown body of one skill file
  The orchestrator and all agents now have get_skill in their tool sets; the orchestrator
  also has list_skills and references skills in its prompt. Both tools are sketched below.
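  A sketch of the two tools, assuming repo is "owner/name", a hypothetical giteaGet()
  helper over Gitea's contents endpoint (GET /api/v1/repos/{owner}/{repo}/contents/{path}),
  and GITEA_URL / GITEA_TOKEN environment variables:

    async function giteaGet(path: string): Promise<any> {
      const res = await fetch(`${process.env.GITEA_URL}/api/v1${path}`, {
        headers: { Authorization: `token ${process.env.GITEA_TOKEN}` },
      });
      if (!res.ok) throw new Error(`Gitea ${res.status} for ${path}`);
      return res.json();
    }

    // Skill names are the directories under .skills/.
    export async function list_skills(repo: string): Promise<string[]> {
      const entries = await giteaGet(`/repos/${repo}/contents/.skills`);
      return entries
        .filter((e: { type: string }) => e.type === 'dir')
        .map((e: { name: string }) => e.name);
    }

    // Gitea serves file content base64-encoded.
    export async function get_skill(repo: string, name: string): Promise<string> {
      const file = await giteaGet(`/repos/${repo}/contents/.skills/${name}/SKILL.md`);
      return Buffer.from(file.content, 'base64').toString('utf8');
    }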

Also fixed:
  - server.ts now passes history + knowledge_context from the request body through to
    orchestratorChat(); the frontend was already sending both fields, but they were
    silently dropped (see the sketch after this list)
  - server.ts now imports PROTECTED_GITEA_REPOS from tools/security.ts instead of
    keeping a duplicate copy of the list
  - Deleted src/agents.ts (replaced by the src/agents/ directory)
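  A sketch of the fixed handler (Express-style; the route path and the parameter order
  of orchestratorChat() are assumptions):

    app.post('/api/chat', async (req, res) => {
      const { message, history, knowledge_context } = req.body;
      // Previously only `message` was forwarded; the other two fields were dropped.
      const reply = await orchestratorChat(message, history ?? [], knowledge_context ?? '');
      res.json(reply);
    });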

Made-with: Cursor
2026-03-01 15:38:42 -08:00
parent e91e5e0e37
commit e29dccf745
46 changed files with 759 additions and 272 deletions

@@ -3,6 +3,7 @@ Object.defineProperty(exports, "__esModule", { value: true });
 exports.runAgent = runAgent;
 const llm_1 = require("./llm");
 const tools_1 = require("./tools");
+const loader_1 = require("./prompts/loader");
 const job_store_1 = require("./job-store");
 const MAX_TURNS = 40;
 /**
@@ -23,8 +24,9 @@ async function runAgent(job, config, task, ctx) {
     (0, job_store_1.updateJob)(job.id, { status: 'running', progress: `Starting ${config.name} (${llm.modelId})…` });
     while (turn < MAX_TURNS) {
         turn++;
+        const systemPrompt = (0, loader_1.resolvePrompt)(config.promptId);
         const messages = [
-            { role: 'system', content: config.systemPrompt },
+            { role: 'system', content: systemPrompt },
             ...history
         ];
         const response = await llm.chat(messages, oaiTools, 8192);