- Rule 3: replace 'get explicit confirmation' with 'summarize and keep moving'
- Checkpoint rule: append marker immediately, then continue to next phase in same message
- Brain dump edge case: save all phases, chain PHASE_COMPLETE markers, no pausing
Made-with: Cursor
When is_init=true, no user message was added to history before
calling the LLM. Gemini requires at least one user turn — without it
the API returned "contents are required" and Atlas never sent its
opening greeting. The init message is now added with an internal
marker, so it is sent to the LLM but filtered out of the
returned/stored history.
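A minimal sketch of the fix, assuming a simple message shape — the `internal` flag and function names here are illustrative, not the actual implementation:

```typescript
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  internal?: boolean; // true for the synthetic init turn
}

// Messages actually sent to the LLM: Gemini needs at least one user
// turn, so inject a synthetic one on the init call.
function buildLlmMessages(history: ChatMessage[], isInit: boolean): ChatMessage[] {
  if (isInit) {
    return [...history, { role: "user", content: "__init__", internal: true }];
  }
  return history;
}

// History returned/stored for the frontend: drop internal turns.
function visibleHistory(history: ChatMessage[]): ChatMessage[] {
  return history.filter((m) => !m.internal);
}
```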
Made-with: Cursor
- Rewrite system prompt to support dual-mode: COO for user projects
when knowledge_context provides project data, or platform orchestrator
when called without context
- Remove the "Project Memory" wrapper prefix so knowledge_context is
injected cleanly (the COO persona header in context is self-contained)
- Clarify tools, style, security rules
Made-with: Cursor
Accept optional github_token in POST /api/mirror and inject it into
the git clone URL so private repos can be cloned without interactive auth.
Made-with: Cursor
- POST /api/mirror: clones public GitHub repo and pushes to Gitea as-is
- ImportAnalyzer agent: reads unknown codebase, writes CODEBASE_MAP.md
and MIGRATION_PLAN.md in plain language for non-technical founders
- Register ImportAnalyzer agent and prompt in agents/index.ts
Made-with: Cursor
- theia-exec.ts: primary path is now HTTP sync (syncRepoToTheia) via
sync-server.js running inside Theia on port 3001 — no docker socket needed
- syncRepoToTheia(giteaRepo): POST /sync → Theia git-pulls latest committed code
- isTheiaSyncAvailable(): health check before attempting sync
- docker exec path preserved for future use when socket is mounted
- agent-session-runner: use syncRepoToTheia after auto-commit
- server.ts: log both docker exec + HTTP sync status at startup
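A rough sketch of the HTTP sync path. The /health and /sync routes match the description above, but the request payload and the helper split are assumptions about sync-server.js, not its exact API:

```typescript
// Base URL of the sync server running inside the Theia container.
const THEIA_SYNC_BASE = "http://localhost:3001";

// Pure request builder — kept separate so it is easy to unit test.
function buildSyncRequest(base: string, giteaRepo: string) {
  return {
    url: `${base}/sync`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ repo: giteaRepo }),
    },
  };
}

// Health check before attempting a sync.
async function isTheiaSyncAvailable(): Promise<boolean> {
  try {
    return (await fetch(`${THEIA_SYNC_BASE}/health`)).ok;
  } catch {
    return false;
  }
}

// Ask the sync server inside Theia to git-pull the latest committed code.
async function syncRepoToTheia(giteaRepo: string): Promise<boolean> {
  if (!(await isTheiaSyncAvailable())) return false;
  const { url, init } = buildSyncRequest(THEIA_SYNC_BASE, giteaRepo);
  return (await fetch(url, init)).ok;
}
```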
Made-with: Cursor
- Dockerfile: install docker-ce-cli, run as root for socket access
- theia-exec.ts: container discovery (env > label > name), theiaExec(),
syncToTheia() via docker cp
- agent-session-runner: execute_command → docker exec into Theia (fallback to local)
- agent-session-runner: syncToTheia() before auto-commit so "Open in Theia"
shows agent files immediately after session completes
- server.ts: compute theiaWorkspaceSubdir from giteaRepo slug, log bridge status
at startup
- custom_docker_run_options already set to mount /var/run/docker.sock
Made-with: Cursor
- SessionRunOptions: autoApprove, giteaRepo, repoRoot, coolifyAppUuid/Url/Token
- autoCommitAndDeploy(): git add -A, commit with "agent: <task>", push,
trigger Coolify deploy, PATCH session → approved
- Falls back to done status if commit fails so manual approve still works
- /agent/execute: captures repoRoot before appPath scoping, defaults
autoApprove to true, passes coolify params from env
- System prompt: "do NOT commit" → "platform handles committing"
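The commit-then-fallback flow can be sketched like this; the function name, the injected command runner, and the return values are illustrative, not the real signature:

```typescript
import { execSync } from "node:child_process";

type SessionStatus = "approved" | "done";

// Stage, commit, and push; on any git failure fall back to "done" so
// the session can still be approved manually.
function autoCommitAndDeploy(
  task: string,
  run: (cmd: string) => void = (cmd) => execSync(cmd, { stdio: "inherit" })
): SessionStatus {
  try {
    run("git add -A");
    run(`git commit -m "agent: ${task}"`);
    run("git push");
    // ...trigger the Coolify deploy and PATCH the session here...
    return "approved";
  } catch {
    // Commit failed (e.g. nothing to commit): leave for manual approve.
    return "done";
  }
}
```

Injecting the runner keeps the git side effects out of unit tests.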
Made-with: Cursor
- @anthropic-ai/vertex-sdk: proper Anthropic Messages API on Vertex
- AnthropicVertexClient: converts OAI message format ↔ Anthropic format,
handles tool_use blocks, retries 429/503 with backoff
- createLLM: routes anthropic/* and claude-* models through new client
- Tier B/C default: claude-sonnet-4-6 via us-east5 Vertex endpoint
- /generate endpoint: accepts region param for regional endpoint testing
Made-with: Cursor
- VertexOpenAIClient: retry on 429/503 up to 4 times with exponential
backoff (2s/4s/8s/16s + jitter), respects Retry-After header
- Tier B/C default: zai-org/glm-5-maas → claude-sonnet-4-6 (much higher
rate limits, still Vertex MaaS)
- /agent/execute: accept continueTask param to run a follow-up within
the original task context without starting a fresh session
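The retry schedule described above, sketched as a pure delay function plus a generic wrapper (names are illustrative; the real client inlines this in its request loop):

```typescript
const RETRYABLE = new Set([429, 503]);
const MAX_RETRIES = 4;

// Exponential backoff: attempt 0 → 2s, 1 → 4s, 2 → 8s, 3 → 16s, plus
// up to 1s of jitter. A Retry-After header, when present, wins.
function backoffDelayMs(attempt: number, retryAfterSec?: number): number {
  if (retryAfterSec !== undefined) return retryAfterSec * 1000;
  const base = 2000 * 2 ** attempt;
  return base + Math.floor(Math.random() * 1000);
}

// Retry a request up to MAX_RETRIES times on retryable status codes.
async function withRetry<T>(fn: () => Promise<T>): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      if (attempt >= MAX_RETRIES || !RETRYABLE.has(err?.status)) throw err;
      await new Promise((r) => setTimeout(r, backoffDelayMs(attempt)));
    }
  }
}
```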
Made-with: Cursor
This endpoint receives giteaRepo + commitMessage, stages all
workspace changes,
commits with the user-supplied message, pushes to Gitea, then
optionally calls Coolify /start to trigger a rolling redeploy.
Returns { committed, deployed, message } to the frontend.
Made-with: Cursor
- Add runSessionAgent: streaming variant of runAgent that PATCHes VIBN DB
after every LLM turn and tool call so frontend can poll live output
- Track changed files from write_file / replace_in_file tool calls
- Add /agent/execute: receives sessionId + giteaRepo + task, clones repo,
scopes workspace to appPath, runs Coder agent async (returns 202 immediately)
- Add /agent/stop: sets stopped flag; agent checks between turns and exits cleanly
- Agent does NOT commit on completion — leaves changes for user review/approval
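Changed-file tracking in miniature; the tool names match the ones above, the rest is illustrative:

```typescript
// Tools whose calls imply a file was modified.
const FILE_WRITE_TOOLS = new Set(["write_file", "replace_in_file"]);

interface ToolCall {
  name: string;
  args: { path?: string };
}

// Collect the distinct paths touched by file-writing tool calls.
function collectChangedFiles(calls: ToolCall[]): string[] {
  const changed = new Set<string>();
  for (const call of calls) {
    if (FILE_WRITE_TOOLS.has(call.name) && call.args.path) {
      changed.add(call.args.path);
    }
  }
  return [...changed];
}
```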
Made-with: Cursor
- Add web_search to ATLAS_TOOLS filter (was only finalize_prd)
- Add Tools Available section to atlas prompt so it knows when/how to use it
Made-with: Cursor
- Add opening message instruction to atlas prompt
- Handle isInit flag in atlasChat() to not store the greeting trigger
as a user turn in conversation history
- Update server.ts to pass is_init through to atlasChat()
Made-with: Cursor
Atlas can now search the internet during product discovery conversations
to research competitors, pricing models, and market context. Uses Jina
AI's free search endpoint — no API key required.
Made-with: Cursor
Layer 1 — src/agents/ (thin agent definitions, no prompt text)
registry.ts — AgentConfig, registerAgent(), getAgent(), AGENTS proxy, pick()
orchestrator.ts, coder.ts, pm.ts, marketing.ts — one file each, just metadata + tool picks
index.ts — barrel: imports prompts then agents (correct registration order)
Layer 2 — src/prompts/ (prompt text separated from agent logic)
loader.ts — registerPrompt(), resolvePrompt() with {{variable}} substitution
orchestrator.ts, coder.ts, pm.ts, marketing.ts — prompt templates as registered strings
orchestrator.ts now uses resolvePrompt('orchestrator', { knowledge }) instead of
inline SYSTEM_PROMPT const; {{knowledge}} variable injects project memory cleanly.
agent-runner.ts uses resolvePrompt(config.promptId) per agent turn.
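A minimal sketch of the loader's registerPrompt/resolvePrompt pair with {{variable}} substitution (validation and caching omitted; the exact error handling is an assumption):

```typescript
const PROMPTS = new Map<string, string>();

function registerPrompt(id: string, template: string): void {
  PROMPTS.set(id, template);
}

// Replace each {{name}} in the template with its value; unresolved
// variables collapse to the empty string.
function resolvePrompt(id: string, vars: Record<string, string> = {}): string {
  const template = PROMPTS.get(id);
  if (!template) throw new Error(`Unknown prompt: ${id}`);
  return template.replace(/\{\{(\w+)\}\}/g, (_, name) => vars[name] ?? "");
}
```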
Layer 3 — src/tools/skills.ts (new skills capability)
list_skills(repo) — lists .skills/<name>/SKILL.md directories from a Gitea repo
get_skill(repo, name) — reads and returns the markdown body of a skill file
Orchestrator and all agents now have get_skill in their tool sets.
Orchestrator also has list_skills and references skills in its prompt.
Also fixed:
- server.ts now passes history + knowledge_context from request body to orchestratorChat()
(these were being sent by the frontend but silently dropped)
- server.ts imports PROTECTED_GITEA_REPOS from tools/security.ts instead of
  keeping its own duplicate definition
- Deleted src/agents.ts (replaced by src/agents/ directory)
Made-with: Cursor
Replaces the single 800-line tools.ts and its switch dispatcher with a
Theia-inspired registry pattern — each tool domain is its own file, and
dispatch is a plain Map.get() call with no central routing function.
New structure in src/tools/:
registry.ts — ToolDefinition (with handler), registerTool(), executeTool(), ALL_TOOLS
context.ts — ToolContext, MemoryUpdate interfaces
security.ts — PROTECTED_* constants + assertGiteaWritable/assertCoolifyDeployable
utils.ts — safeResolve(), EXCLUDED set
file.ts — read_file, write_file, replace_in_file, list_directory, find_files, search_code
shell.ts — execute_command
git.ts — git_commit_and_push
coolify.ts — coolify_*, list_all_apps, get_app_status, deploy_app
gitea.ts — gitea_*, list_repos, list_all_issues, read_repo_file
agent.ts — spawn_agent, get_job_status
memory.ts — save_memory
index.ts — barrel with side-effect imports + re-exports
Adding a new tool now requires only a new file + registerTool() call.
No switch statement, no shared array to edit. External API unchanged.
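The registry pattern in miniature — the handler lives on the tool definition and dispatch is a plain Map lookup. The real ToolDefinition also carries a parameter schema and a ToolContext; this is a simplified sketch:

```typescript
interface ToolDefinition {
  name: string;
  description: string;
  handler: (args: Record<string, unknown>) => unknown;
}

const TOOL_REGISTRY = new Map<string, ToolDefinition>();

// Called once per tool, typically as a module side effect.
function registerTool(tool: ToolDefinition): void {
  TOOL_REGISTRY.set(tool.name, tool);
}

// Dispatch is Map.get() — no switch statement, no central routing.
function executeTool(name: string, args: Record<string, unknown>): unknown {
  const tool = TOOL_REGISTRY.get(name);
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}
```

Adding a tool is then just one registerTool() call in a new file.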
Made-with: Cursor
- Empty message fix: skip pushing assistant msg to history when both
content and tool_calls are absent (GLM-5 mid-reasoning token exhaustion).
Also filter preexisting empty assistant messages from returned history.
- System prompt now correctly injects knowledgeContext from opts into the
Tier-B system message (was missing in the loop's buildMessages).
- GITEA_API_TOKEN updated externally in Coolify (old token was invalid).
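The empty-message guard, sketched against an illustrative message shape:

```typescript
interface AssistantMsg {
  role: string;
  content?: string | null;
  tool_calls?: unknown[];
}

// An assistant message is "empty" when it carries neither text content
// nor tool calls (e.g. the model exhausted its tokens mid-reasoning).
function isEmptyAssistant(m: AssistantMsg): boolean {
  return (
    m.role === "assistant" &&
    !m.content &&
    (!m.tool_calls || m.tool_calls.length === 0)
  );
}

// Applied both before pushing a new turn and when returning history.
function cleanHistory(history: AssistantMsg[]): AssistantMsg[] {
  return history.filter((m) => !isEmptyAssistant(m));
}
```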
Made-with: Cursor
The VM's metadata server doesn't grant cloud-platform scope by default.
Read GOOGLE_APPLICATION_CREDENTIALS_JSON env var (service account key JSON)
and pass it directly to GoogleAuth. Falls back to metadata server if unset.
This restores GLM-5 access via Vertex AI.
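A sketch of the credential selection; the helper name is illustrative, and the parsed key is what would be handed to GoogleAuth's `credentials` option:

```typescript
interface ServiceAccountKey {
  client_email: string;
  private_key: string;
}

// Prefer an explicit service account key from the environment; return
// undefined to let google-auth-library fall back to the metadata server.
function parseServiceAccountKey(
  env: Record<string, string | undefined>
): ServiceAccountKey | undefined {
  const raw = env.GOOGLE_APPLICATION_CREDENTIALS_JSON;
  if (!raw) return undefined;
  return JSON.parse(raw) as ServiceAccountKey;
}

// Usage with google-auth-library:
// const auth = new GoogleAuth({
//   credentials: parseServiceAccountKey(process.env),
//   scopes: ["https://www.googleapis.com/auth/cloud-platform"],
// });
```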
Made-with: Cursor
gcloud is not available inside the Docker container. Use google-auth-library
instead, which reads credentials from the GCP metadata server (works on any
GCP VM) or GOOGLE_APPLICATION_CREDENTIALS env var. Also rebuilds dist/.
Made-with: Cursor
src/llm.ts was never committed — this caused the Docker build to fail
with "Cannot find module './llm'". Also commit updated agent-runner.ts,
agents.ts, and .env.example that reference the new LLM client.
Made-with: Cursor
- Dockerfile now runs tsc during build so committed dist/ is never stale
- ChatResult interface was missing history[] and memoryUpdates[] fields
- Re-add missing MemoryUpdate import in orchestrator.ts
- Rebuild dist/ with all new fields included
Made-with: Cursor
- ToolContext gets memoryUpdates[] — accumulated by save_memory calls
- orchestratorChat accepts preloadedHistory and knowledgeContext opts
- History trimmed to last 40 messages per turn (cost control)
- Knowledge items injected into system prompt as ## Project Memory
- ChatResult returns history[] and memoryUpdates[] for frontend persistence
- server.ts accepts history/knowledge_context from POST body
- save_memory tool: lets AI persist facts (key, type, value) to long-term memory
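The trimming and memory-injection steps can be sketched as follows (constant and helper names are illustrative):

```typescript
const MAX_HISTORY = 40;

interface KnowledgeItem {
  key: string;
  value: string;
}

// Append knowledge items to the base system prompt under a
// "## Project Memory" heading; skip the section when there are none.
function buildSystemPrompt(base: string, knowledge: KnowledgeItem[]): string {
  if (knowledge.length === 0) return base;
  const memory = knowledge.map((k) => `- ${k.key}: ${k.value}`).join("\n");
  return `${base}\n\n## Project Memory\n${memory}`;
}

// Keep only the most recent messages to bound per-turn token cost.
function trimHistory<T>(history: T[]): T[] {
  return history.slice(-MAX_HISTORY);
}
```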
Made-with: Cursor