All endpoints below live under /:project/api/pipelines. Permissions are enforced with fine-grained strings such as ${project.slug}.pipelines.read or .pipelineRuns.execute.<id>.

List & create

  • GET /.../pipelines – optional query params limit and offset. Returns { id, name, description, version, created, updated } per pipeline, scoped by the caller’s permissions.
  • PUT /.../pipelines – body { name, description, source? }. When source is present it is validated with the pipeline compiler and stored alongside any compile errors. The response is the new pipeline ID (see the sketch after this list).
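
The following is a minimal sketch of calling these two endpoints from TypeScript with fetch. The base URL, project slug, bearer-token auth, and the assumption that the create response is a JSON-encoded ID are placeholders, not part of the API description above.

```ts
// Placeholder base URL and auth; the real project slug and auth scheme are
// deployment-specific assumptions, not taken from the API description above.
const BASE = "https://example.com/my-project/api/pipelines";
const headers = {
  "Content-Type": "application/json",
  Authorization: "Bearer <token>",
};

// PUT /.../pipelines – create a pipeline; the response body is the new
// pipeline ID (assumed JSON-encoded here).
async function createPipeline(name: string, description: string, source?: unknown): Promise<string> {
  const res = await fetch(BASE, {
    method: "PUT",
    headers,
    body: JSON.stringify({ name, description, source }),
  });
  if (!res.ok) throw new Error(`create failed: ${res.status}`);
  return res.json();
}

// GET /.../pipelines – list pipelines visible to the caller.
async function listPipelines(limit = 20, offset = 0) {
  const res = await fetch(`${BASE}?limit=${limit}&offset=${offset}`, { headers });
  return res.json() as Promise<
    { id: string; name: string; description: string; version?: string; created: string; updated: string }[]
  >;
}
```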

Inspect / update / delete

GET /.../pipelines/{pipelineId}

Requires pipelines.read. Returns the base pipeline fields plus:
  • version: current version ID (if any).
  • source, errors, and metadata when you can read the current version.
  • runs: each run contains { id, status: { pipeline, "pipeline-evaluation" }, version, session, resources[] }. Resources include VNC/remote URLs and status if you have permission to inspect their underlying session resources.
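
Taken together, the detail response above can be modeled roughly as the following TypeScript types. This is a sketch only; field optionality and the concrete value types are assumptions inferred from the prose, not a published schema.

```ts
// Rough shape of GET /.../pipelines/{pipelineId}; optionality and value types
// are assumptions, not guaranteed by the API.
interface RunResource {
  // VNC/remote URL and status appear only when the caller may inspect the
  // underlying session resource.
  url?: string;
  status?: string;
}

interface PipelineRun {
  id: string;
  status: { pipeline: string; "pipeline-evaluation": string };
  version: string;
  session: string;
  resources: RunResource[];
}

interface PipelineDetail {
  id: string;
  name: string;
  description: string;
  created: string;
  updated: string;
  version?: string;                 // current version ID, if any
  source?: unknown;                 // present when the current version is readable
  errors?: unknown[];
  metadata?: Record<string, unknown>;
  runs: PipelineRun[];
}
```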

PATCH /.../pipelines/{pipelineId}

Accepts name and/or description updates.

DELETE /.../pipelines/{pipelineId}

Requires write permission and removes the pipeline plus all versions/runs.

Editing pipeline source

PATCH /.../pipelines/{pipelineId}/source

Body may include source (pipeline graph) and/or metadata. Supplying new source triggers compilation; compiler errors are stored with the version. Each patch creates a new pipeline version and updates current_version_id. The response is { version: "<new version id>", errors: [...] }.
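
A hedged sketch of driving this endpoint from TypeScript follows; BASE and the auth header are the same placeholders as in the first sketch, and the response is assumed to be JSON shaped as described above.

```ts
const BASE = "https://example.com/my-project/api/pipelines"; // placeholder, as in the first sketch
const headers = { "Content-Type": "application/json", Authorization: "Bearer <token>" }; // assumed auth

// PATCH /.../pipelines/{pipelineId}/source – store a new graph; the compiler's
// errors come back alongside the new version ID.
async function updateSource(pipelineId: string, source: unknown, metadata?: Record<string, unknown>) {
  const res = await fetch(`${BASE}/${pipelineId}/source`, {
    method: "PATCH",
    headers,
    body: JSON.stringify({ source, metadata }),
  });
  const { version, errors } = (await res.json()) as { version: string; errors: unknown[] };
  if (errors.length > 0) {
    console.warn(`version ${version} was stored with ${errors.length} compile error(s)`);
  }
  return version;
}
```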

Versions

  • GET /.../pipelines/versions – supports limit, offset, and pipeline filters. Returns visible versions with shallow metadata.
  • GET /.../pipelines/versions/{versionId} – requires read access to the version and parent pipeline. Includes source, errors, metadata, and run references.
  • GET /.../pipelines/versions/{versionId}/history?depth=N – walks the linked list of previous versions (respecting access permissions) up to depth.
  • POST /.../pipelines/versions/{versionId}/fork – body { name, description, project }. Creates a new pipeline (optionally in another project) seeded with the version’s source.
  • POST /.../pipelines/versions/{versionId}/switch – body { pipeline }. If the target pipeline already owns the version, simply switches current_version_id; otherwise copies the version into the destination pipeline.
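
The fork and switch endpoints above might be called like this. A sketch under the same placeholder BASE/auth assumptions; the fork response shape is not specified here and is assumed to identify the new pipeline.

```ts
const BASE = "https://example.com/my-project/api/pipelines"; // placeholder, as in the first sketch
const headers = { "Content-Type": "application/json", Authorization: "Bearer <token>" }; // assumed auth

// POST /.../pipelines/versions/{versionId}/fork – seed a new pipeline
// (optionally in another project) from an existing version.
async function forkVersion(versionId: string, name: string, description: string, project: string) {
  const res = await fetch(`${BASE}/versions/${versionId}/fork`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name, description, project }),
  });
  return res.json(); // assumed to identify the newly created pipeline
}

// POST /.../pipelines/versions/{versionId}/switch – point a pipeline at this
// version (copying it in if the pipeline does not already own it).
async function switchToVersion(versionId: string, pipeline: string): Promise<void> {
  await fetch(`${BASE}/versions/${versionId}/switch`, {
    method: "POST",
    headers,
    body: JSON.stringify({ pipeline }),
  });
}
```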

Runs

  • POST /.../pipelines/{pipelineId}/start – body { session, version? }. Checks that the session and version belong to the project, then enqueues a pipeline run via pipelineRunner. Response is the run ID (see the sketch after this list).
  • GET /.../pipelines/{pipelineId}/{runId} – returns run status, pipeline metadata, session info, and the list of attached session resources (with URLs and status when permitted).
  • GET /.../pipelines/{pipelineId}/{runId}/logs – supports Accept: text/event-stream (SSE stream of live logs) or application/json (paged list via offset/limit). Sensitive strings are redacted before returning.
  • POST /.../pipelines/{pipelineId}/{runId}/stop – sets the pipeline-evaluation service status to stopped, effectively cancelling the run.
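
A Node-flavored TypeScript sketch of starting a run and tailing its logs over SSE, under the same placeholder BASE/auth assumptions; the run ID is assumed to be JSON-encoded.

```ts
const BASE = "https://example.com/my-project/api/pipelines"; // placeholder, as in the first sketch
const headers = { "Content-Type": "application/json", Authorization: "Bearer <token>" }; // assumed auth

// POST /.../pipelines/{pipelineId}/start – enqueue a run and return its ID.
async function startRun(pipelineId: string, session: string, version?: string): Promise<string> {
  const res = await fetch(`${BASE}/${pipelineId}/start`, {
    method: "POST",
    headers,
    body: JSON.stringify({ session, version }),
  });
  return res.json();
}

// GET /.../pipelines/{pipelineId}/{runId}/logs with Accept: text/event-stream –
// tail the redacted live logs. fetch streaming is used instead of EventSource so
// that the Accept and Authorization headers can be set explicitly.
async function tailRunLogs(pipelineId: string, runId: string): Promise<void> {
  const res = await fetch(`${BASE}/${pipelineId}/${runId}/logs`, {
    headers: { ...headers, Accept: "text/event-stream" },
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true })); // raw "data: ..." SSE lines
  }
}
```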

Analysis jobs

Analysis jobs pair a pipeline version with a session to generate model-guided insights before committing changes; a typical job lifecycle is sketched after the list below.
  • GET /.../pipelines/{pipelineId}/analysis – paginated list of jobs with { id, version, session, prompt, status, agent, created, updated }.
  • PUT /.../pipelines/{pipelineId}/analysis – body { session, model, prompt }. Validates that model belongs to OpenAiProvider.models.analyze, then enqueues a job through pipelineRunner.
  • GET /.../pipelines/{pipelineId}/analysis/{analysisId} – full detail including session resources used by the analysis, attached generations, and streamed messages.
  • GET /.../pipelines/{pipelineId}/analysis/{analysisId}/logs – JSON or SSE log stream (same semantics as run logs).
  • POST /.../pipelines/{pipelineId}/analysis/{analysisId}/accept – body { model } (generation model). Only allowed when the analysis has finished (agent === "stopped" and an output exists). Creates a pending generation record and triggers pipelineRunner.
  • POST /.../pipelines/{pipelineId}/analysis/{analysisId}/reject – marks the analysis as rejected.
  • POST /.../pipelines/{pipelineId}/analysis/{analysisId}/stop – stops the analysis agent mid-flight.
  • DELETE /.../pipelines/{pipelineId}/analysis/{analysisId} – removes the record entirely.
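
The sketch below strings these endpoints together: enqueue an analysis job, poll until its agent stops, then accept it with a generation model. The polling interval and the assumption that the enqueue response is the job ID are illustrative; BASE and auth are the usual placeholders.

```ts
const BASE = "https://example.com/my-project/api/pipelines"; // placeholder, as in the first sketch
const headers = { "Content-Type": "application/json", Authorization: "Bearer <token>" }; // assumed auth

async function analyzeAndAccept(
  pipelineId: string,
  session: string,
  analysisModel: string,
  generationModel: string,
  prompt: string,
): Promise<string> {
  // PUT /.../analysis – enqueue the job (response assumed to be the job ID).
  const res = await fetch(`${BASE}/${pipelineId}/analysis`, {
    method: "PUT",
    headers,
    body: JSON.stringify({ session, model: analysisModel, prompt }),
  });
  const analysisId: string = await res.json();

  // Poll until the analysis agent reports "stopped" (the precondition for /accept).
  for (;;) {
    const detail = await (
      await fetch(`${BASE}/${pipelineId}/analysis/${analysisId}`, { headers })
    ).json();
    if (detail.agent === "stopped") break;
    await new Promise((resolve) => setTimeout(resolve, 5_000));
  }

  // POST /.../analysis/{analysisId}/accept – body names the generation model.
  await fetch(`${BASE}/${pipelineId}/analysis/${analysisId}/accept`, {
    method: "POST",
    headers,
    body: JSON.stringify({ model: generationModel }),
  });
  return analysisId;
}
```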

Generation jobs

Generation jobs originate from accepted analyses and can apply their output back to the pipeline; accepting one is sketched after the list below.
  • GET /.../pipelines/{pipelineId}/generation – list with { id, version, analysis, status, agent }.
  • GET /.../pipelines/{pipelineId}/generation/{generationId} – includes the analysis prompt/output, generation messages, and final output graph.
  • GET /.../pipelines/{pipelineId}/generation/{generationId}/logs – JSON or SSE logs.
  • POST /.../pipelines/{pipelineId}/generation/{generationId}/accept – Validates the generation is stopped, compiles the output, creates a new pipeline version, and switches the pipeline to it. Response is the new version ID.
  • POST /.../pipelines/{pipelineId}/generation/{generationId}/reject – Marks the generation as rejected.
  • POST /.../pipelines/{pipelineId}/generation/{generationId}/stop – Stops the generation agent.
  • DELETE /.../pipelines/{pipelineId}/generation/{generationId} – Removes the generation record.
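
A sketch of accepting a finished generation so its output graph becomes the pipeline's current version. The client-side "stopped" check mirrors the server-side validation described above; BASE, auth, and the JSON-encoded response are assumptions.

```ts
const BASE = "https://example.com/my-project/api/pipelines"; // placeholder, as in the first sketch
const headers = { "Content-Type": "application/json", Authorization: "Bearer <token>" }; // assumed auth

// POST /.../generation/{generationId}/accept – compiles the generated graph,
// creates a new pipeline version, and switches the pipeline to it.
async function acceptGeneration(pipelineId: string, generationId: string): Promise<string> {
  const detail = await (
    await fetch(`${BASE}/${pipelineId}/generation/${generationId}`, { headers })
  ).json();
  if (detail.agent !== "stopped") {
    throw new Error("generation agent is still running; stop it or wait before accepting");
  }
  const res = await fetch(`${BASE}/${pipelineId}/generation/${generationId}/accept`, {
    method: "POST",
    headers,
  });
  return res.json(); // new version ID (assumed JSON-encoded)
}
```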

Helper endpoints

  • GET /.../pipelines/nodes – returns either the pipeline JSON schema (Accept: application/schema+json) or the compiler’s node catalog (default).
  • GET /.../pipelines/analysis/models – lists allowed analysis models from OpenAiProvider.models.analyze.
  • GET /.../pipelines/generation/models – lists allowed generation models from OpenAiProvider.models.generate.
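
For the nodes endpoint, the Accept header selects which document comes back; a brief sketch under the usual placeholder BASE/auth assumptions:

```ts
const BASE = "https://example.com/my-project/api/pipelines"; // placeholder, as in the first sketch
const headers = { Authorization: "Bearer <token>" }; // assumed auth

// Default Accept – the compiler's node catalog.
async function getNodeCatalog() {
  return (await fetch(`${BASE}/nodes`, { headers })).json();
}

// Accept: application/schema+json – the pipeline JSON schema.
async function getPipelineSchema() {
  return (
    await fetch(`${BASE}/nodes`, {
      headers: { ...headers, Accept: "application/schema+json" },
    })
  ).json();
}
```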
With these building blocks you can create pipelines programmatically, iterate on their source, accept or reject AI-generated changes, and stream execution telemetry in real time.