fix(coolify): strip is_build_time from env writes; add reveal + GCS
Coolify v4's POST/PATCH /applications/{uuid}/envs only accepts key,
value, is_preview, is_literal, is_multiline, and is_shown_once.
Sending is_build_time triggers a 422 "This field is not allowed."
because the flag is now derived and read-only (is_buildtime), computed
from Dockerfile ARG usage. This broke agents trying to upsert env vars.
Three-layer fix so this can't regress:
- lib/coolify.ts: COOLIFY_ENV_WRITE_FIELDS whitelist enforced at the
network boundary, regardless of caller shape
- app/api/workspaces/[slug]/apps/[uuid]/envs: stops forwarding the
field; returns a deprecation warning when callers send it; GET
reads both is_buildtime and is_build_time for version parity
- app/api/mcp/route.ts: same treatment in the MCP dispatcher;
AI_CAPABILITIES.md doc corrected
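The network-boundary whitelist can be sketched like this. The constant
name COOLIFY_ENV_WRITE_FIELDS and the allowed fields come from this
commit; sanitizeEnvWrite is a hypothetical helper name, not necessarily
the one used in lib/coolify.ts:

```typescript
// Fields Coolify v4 accepts on env-var writes; everything else
// (notably is_build_time) is dropped before the request leaves us.
const COOLIFY_ENV_WRITE_FIELDS = [
  'key',
  'value',
  'is_preview',
  'is_literal',
  'is_multiline',
  'is_shown_once',
] as const;

export function sanitizeEnvWrite(
  body: Record<string, unknown>,
): Record<string, unknown> {
  const allowed = new Set<string>(COOLIFY_ENV_WRITE_FIELDS);
  return Object.fromEntries(
    Object.entries(body).filter(([field]) => allowed.has(field)),
  );
}

// sanitizeEnvWrite({ key: 'PORT', value: '3000', is_build_time: true })
// → { key: 'PORT', value: '3000' }
```

Filtering at the client boundary rather than in each route handler is
what makes the fix caller-shape independent.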
Also bundled (unrelated to the above):
- Workspace API keys are now revealable from settings. New
key_encrypted column stores AES-256-GCM(VIBN_SECRETS_KEY, token).
POST /api/workspaces/[slug]/keys/[keyId]/reveal returns plaintext
for session principals only; API-key principals cannot reveal
siblings. Legacy keys stay valid for auth but can't reveal.
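A minimal sketch of the AES-256-GCM round trip behind key_encrypted,
using Node's crypto module. The SHA-256 key derivation and the
iv‖tag‖ciphertext packing are illustrative assumptions; the commit only
states that the cipher is AES-256-GCM keyed from VIBN_SECRETS_KEY:

```typescript
import { Buffer } from 'node:buffer';
import {
  createCipheriv,
  createDecipheriv,
  createHash,
  randomBytes,
} from 'node:crypto';

// Illustrative: reduce the configured secret to a 32-byte key.
function deriveKey(secret: string): Buffer {
  return createHash('sha256').update(secret).digest();
}

export function encryptToken(secret: string, token: string): string {
  const iv = randomBytes(12); // 96-bit nonce, the standard GCM size
  const cipher = createCipheriv('aes-256-gcm', deriveKey(secret), iv);
  const ciphertext = Buffer.concat([cipher.update(token, 'utf8'), cipher.final()]);
  // Pack iv + auth tag + ciphertext into one base64 column value.
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
}

export function decryptToken(secret: string, encrypted: string): string {
  const raw = Buffer.from(encrypted, 'base64');
  const decipher = createDecipheriv('aes-256-gcm', deriveKey(secret), raw.subarray(0, 12));
  decipher.setAuthTag(raw.subarray(12, 28));
  return Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString('utf8');
}
```

GCM's auth tag means a tampered or wrongly-keyed value fails loudly at
decrypt time instead of yielding garbage plaintext.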
- P5.3 Object storage: lib/gcp/storage.ts + lib/workspace-gcs.ts
idempotently provision a per-workspace GCS bucket, service
account, IAM binding and HMAC key. New POST /api/workspaces/
[slug]/storage/buckets endpoint. Migration script + smoke test
included. Proven end-to-end against prod master-ai-484822.
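The "safe to re-run" property rests on each provisioning step
short-circuiting when its resource already exists. This is a
hypothetical sketch of that pattern, not the actual lib/workspace-gcs.ts
code:

```typescript
// Each provisioning step (SA, keyfile, bucket, IAM binding, HMAC key)
// pairs an existence check with a creator.
type Step<T> = {
  check: () => Promise<T | null>; // existing resource, or null
  create: () => Promise<T>;       // performs the mutation
};

export async function ensureStep<T>(step: Step<T>): Promise<T> {
  const existing = await step.check();
  if (existing !== null) return existing; // already provisioned: no-op
  return step.create();
}
```

Running the whole chain twice therefore mutates nothing the second time,
which is what lets POST return 200 for both first-run and re-run.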
Made-with: Cursor
app/api/workspaces/[slug]/storage/buckets/route.ts (new file, 98 lines)
@@ -0,0 +1,98 @@
/**
 * GET /api/workspaces/[slug]/storage/buckets — describe the workspace's
 * provisioned GCS state (default bucket name, SA email, HMAC accessId,
 * provision status). Does NOT return the HMAC secret.
 *
 * POST /api/workspaces/[slug]/storage/buckets — idempotently provisions
 * the per-workspace GCS substrate:
 *   1. dedicated GCP service account (vibn-ws-{slug}@…)
 *   2. SA JSON keyfile (encrypted at rest)
 *   3. default bucket vibn-ws-{slug}-{6char} in northamerica-northeast1
 *   4. roles/storage.objectAdmin binding for the SA on that bucket
 *   5. HMAC key on the SA so app code can use AWS S3 SDKs
 * Safe to re-run; each step short-circuits when already complete.
 *
 * Auth: session OR `Bearer vibn_sk_...`. Same workspace-scope rules as
 * every other /api/workspaces/[slug]/* endpoint.
 *
 * P5.3 — vertical slice. The full storage.* tool family (presign,
 * list_objects, delete_object, set_lifecycle) lands once this
 * provisioning step is verified end-to-end.
 */

import { NextResponse } from 'next/server';
import { requireWorkspacePrincipal } from '@/lib/auth/workspace-auth';
import {
  ensureWorkspaceGcsProvisioned,
  getWorkspaceGcsState,
} from '@/lib/workspace-gcs';

export async function GET(
  request: Request,
  { params }: { params: Promise<{ slug: string }> },
) {
  const { slug } = await params;
  const principal = await requireWorkspacePrincipal(request, { targetSlug: slug });
  if (principal instanceof NextResponse) return principal;

  const ws = await getWorkspaceGcsState(principal.workspace.id);
  if (!ws) {
    return NextResponse.json({ error: 'Workspace not found' }, { status: 404 });
  }

  return NextResponse.json({
    workspace: { slug: ws.slug },
    storage: {
      status: ws.gcp_provision_status ?? 'pending',
      error: ws.gcp_provision_error ?? null,
      serviceAccountEmail: ws.gcp_service_account_email ?? null,
      defaultBucketName: ws.gcs_default_bucket_name ?? null,
      hmacAccessId: ws.gcs_hmac_access_id ?? null,
      location: 'northamerica-northeast1',
    },
  });
}

export async function POST(
  request: Request,
  { params }: { params: Promise<{ slug: string }> },
) {
  const { slug } = await params;
  const principal = await requireWorkspacePrincipal(request, { targetSlug: slug });
  if (principal instanceof NextResponse) return principal;

  try {
    const result = await ensureWorkspaceGcsProvisioned(principal.workspace);
    return NextResponse.json(
      {
        workspace: { slug: principal.workspace.slug },
        storage: {
          status: result.status,
          serviceAccountEmail: result.serviceAccountEmail,
          bucket: result.bucket,
          hmacAccessId: result.hmac.accessId,
          location: result.bucket.location,
        },
      },
      { status: 200 },
    );
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    // Schema-not-applied detection: makes the failure mode obvious in
    // dev before the operator runs scripts/migrate-workspace-gcs.sql.
    if (/column .* does not exist/i.test(message)) {
      return NextResponse.json(
        {
          error:
            'GCS columns missing on vibn_workspaces. Run scripts/migrate-workspace-gcs.sql.',
          details: message,
        },
        { status: 503 },
      );
    }
    return NextResponse.json(
      { error: 'GCS provisioning failed', details: message },
      { status: 502 },
    );
  }
}