n8n-builder

from bsamiee/parametric_forge

What it does

Generates compliant n8n workflow JSON with dynamic nodes, connections, and settings for programmatic workflow automation and AI agent scaffolding.

Installation

Install skill:
npx skills add https://github.com/bsamiee/parametric_forge --skill n8n-builder
Added Jan 25, 2026

Skill Details

SKILL.md

Overview

# [H1][N8N-BUILDER]

>Dictum: Schema compliance enables n8n import without runtime validation errors.


Generate valid n8n workflow JSON.

Tasks:

  1. Read [schema.md](./references/schema.md) — Root structure, settings
  2. Read [nodes.md](./references/nodes.md) — Node definition, typeVersion
  3. Read [connections.md](./references/connections.md) — Graph topology, AI types
  4. (dynamic values) Read [expressions.md](./references/expressions.md) — Variables, functions
  5. (specific nodes) Read [integrations.md](./references/integrations.md) — Node parameters
  6. Generate JSON — Apply template from [workflow.template.md](./templates/workflow.template.md)
  7. Validate — Run uv run .claude/skills/n8n-builder/scripts/validate-workflow.py

[REFERENCE]: [index.md](./index.md) — File listing.

---

[0][N8N_2.0]

>Dictum: Breaking changes invalidate pre-2025 patterns.


Breaking Changes (December 2025):

  • Database — PostgreSQL required; MySQL/MariaDB support dropped.
  • Python — "language": "python" removed; use "pythonNative" with task runners.
  • Security — ExecuteCommand and LocalFileTrigger disabled by default.
  • Code Isolation — Environment variable access blocked in Code nodes (N8N_BLOCK_ENV_ACCESS_IN_NODE=true).
  • Agent Type — Agent type selection removed (v1.82+); all agents are Tools Agent.
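
Example (illustrative): a Code node fragment using the 2.0 Python setting. Only "language": "pythonNative" comes from the list above; the node name, id, typeVersion, code body, and the pythonCode field name follow the common Code node shape but are assumptions here, so confirm the exact fields in [integrations.md](./references/integrations.md).

```json
{
  "id": "7f3c1a2e-0b4d-4e9a-9c1f-2d5b6a7c8e90",
  "name": "Transform Records",
  "type": "n8n-nodes-base.code",
  "typeVersion": 2,
  "position": [440, 300],
  "parameters": {
    "language": "pythonNative",
    "pythonCode": "return [{'json': {'processed': True}}]"
  }
}
```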

---

[1][SCHEMA]

>Dictum: Root structure enables n8n parser recognition and execution.


Guidance:

  • AI Workflows — Require executionOrder: "v1" in settings; async node ordering fails without it.
  • Portability — Credential IDs and errorWorkflow UUIDs are instance-specific; expect reassignment post-import.
  • Optional Fields — Include empty objects ("pinData": {}) rather than omitting them; prevents import edge cases.
  • Sub-Workflow Typing — Use workflowInputs schema on trigger nodes to validate caller payloads before execution.
  • pinData Limits — Keep under 12MB; large payloads slow editor rendering and cannot contain binary data.

Best-Practices:

  • [ALWAYS] Set "active": false on generation; activation is a deployment decision.
  • [NEVER] Hardcode credential IDs; use placeholder names for cross-instance transfer.

[REFERENCE]: [→schema.md](./references/schema.md)
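
Example (illustrative): a minimal root skeleton consistent with the guidance above (generated inactive, executionOrder "v1", optional objects included as empty). The workflow name is a placeholder and field coverage is intentionally incomplete; the authoritative structure is [schema.md](./references/schema.md) and [workflow.template.md](./templates/workflow.template.md).

```json
{
  "name": "Example Workflow",
  "active": false,
  "nodes": [],
  "connections": {},
  "settings": {
    "executionOrder": "v1"
  },
  "pinData": {}
}
```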

---

[2][NODES]

>Dictum: Unique identity enables deterministic cross-node references.


Guidance:

  • Name Collisions — n8n auto-renames duplicates (Set→Set1); breaks $('NodeName') expressions silently.
  • Version Matching — typeVersion must match target n8n instance; newer versions may lack backward compatibility.
  • Error Strategy — Use onError: "continueErrorOutput" for fault-tolerant pipelines; default stops execution.
  • Node Documentation — Use notes field for inline documentation; notesInFlow: true displays on canvas.

Best-Practices:

  • [ALWAYS] Generate UUID per node before building connections; connections reference node.name.
  • [ALWAYS] Space nodes 200px horizontal, 150px vertical for canvas readability.

[REFERENCE]: [→nodes.md](./references/nodes.md)
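
Example (illustrative): a single node object combining the identity, error-handling, and documentation fields above. The UUID, coordinates, URL, and parameter values are placeholders, and the httpRequest typeVersion shown may not match your instance; verify against [nodes.md](./references/nodes.md).

```json
{
  "id": "c2b7a6f0-3d1e-4b8a-9f2c-1e4d5a6b7c8d",
  "name": "Fetch Orders",
  "type": "n8n-nodes-base.httpRequest",
  "typeVersion": 4.2,
  "position": [660, 300],
  "onError": "continueErrorOutput",
  "notes": "Illustrative placeholder node; replace URL and credentials per instance.",
  "notesInFlow": true,
  "parameters": {
    "method": "GET",
    "url": "https://example.com/api/orders"
  }
}
```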

---

[3][CONNECTIONS]

>Dictum: Connection types enable workflow mode distinction at parse time.


Guidance:

  • AI vs Main — AI nodes require specialized types (ai_tool, ai_languageModel); main causes silent tool invisibility.
  • Fan-out — Single output to multiple nodes executes in parallel; order within array is non-deterministic.
  • Multi-output — Array index maps to output port; IF node: index 0 = true branch, index 1 = false branch.
  • Single Model — Agent accepts exactly one ai_languageModel connection; multiple models conflict silently.
  • Memory Scope — ai_memory persists within single trigger execution only; no cross-session persistence.

Best-Practices:

  • [ALWAYS] Match connection key AND type property; mismatches cause silent failures.
  • [NEVER] Connect AI tools via main type; agent cannot discover them.
  • [NEVER] Connect multiple language models to single agent; use Model Selector node for dynamic selection.

[REFERENCE]: [→connections.md](./references/connections.md)
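
Example (illustrative): a connections object wiring one main data edge plus ai_languageModel and ai_tool edges into a single agent, with the connection key matching the type property on every entry. The node names ("Webhook", "AI Agent", "OpenAI Chat Model", "HTTP Request Tool") are placeholders; the full topology rules live in [connections.md](./references/connections.md).

```json
{
  "Webhook": {
    "main": [
      [{ "node": "AI Agent", "type": "main", "index": 0 }]
    ]
  },
  "OpenAI Chat Model": {
    "ai_languageModel": [
      [{ "node": "AI Agent", "type": "ai_languageModel", "index": 0 }]
    ]
  },
  "HTTP Request Tool": {
    "ai_tool": [
      [{ "node": "AI Agent", "type": "ai_tool", "index": 0 }]
    ]
  }
}
```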

---

[4][EXPRESSIONS]

>Dictum: Dynamic evaluation eliminates hardcoded parameters.


Guidance:

  • Static vs Dynamic — The = prefix signals evaluation; without it, the value is a literal string, including the {{ }}.
  • Pinned Data — Test-mode pins lack execution context; .item fails, use .first() or .all()[0] instead.
  • Complex Logic — IIFE pattern {{(function(){ return ... })()}} enables multi-statement evaluation.
  • Scope Confusion — $json accesses current node input only; use $('NodeName').item.json for other nodes.

Best-Practices:

  • [ALWAYS] Use $('NodeName') for cross-node data; $json only accesses current node input.
  • [ALWAYS] Escape quotes in JSON strings or use template literals to prevent invalid JSON.
  • [NEVER] Assume .item works in all contexts; pinned data testing requires explicit accessors.

[REFERENCE]: [→expressions.md](./references/expressions.md)
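
Example (illustrative): four parameter values contrasting literal and evaluated forms. The parameter names and referenced fields (orderId, items, 'Fetch Orders') are invented for this sketch; only the = prefix, $json, $('NodeName'), and IIFE patterns come from the guidance above.

```json
{
  "parameters": {
    "literalText": "{{ $json.orderId }}",
    "evaluated": "={{ $json.orderId }}",
    "crossNode": "={{ $('Fetch Orders').item.json.customer.email }}",
    "multiStatement": "={{ (function () { const total = $json.items.reduce((sum, i) => sum + i.price, 0); return total > 100 ? 'priority' : 'standard'; })() }}"
  }
}
```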

---

[5][INTEGRATIONS]

>Dictum: Node type selection determines integration capability.


Guidance:

  • Trigger Selection — Webhook for external calls, scheduleTrigger for periodic; choose based on initiation source.
  • AI Tool Visibility — Sub-workflow tools require description parameter; agent uses it for tool selection reasoning.
  • Code Language — Use "pythonNative" for Python; "python" is deprecated.
  • Error Propagation — Use stopAndError node for controlled failures; triggers designated error workflow.
  • 2025 Features — MCP nodes enable cross-agent interoperability; Guardrails nodes enforce AI output safety.
  • Output Parser — outputParserStructured jsonSchema must be static; expressions in schema are ignored silently.
  • Batch Processing — Use splitInBatches for large datasets to prevent memory exhaustion; process in chunks.

Best-Practices:

  • [ALWAYS] Set responseMode: "lastNode" for webhook→response patterns; ensures output reaches caller.
  • [ALWAYS] Include description on HTTP nodes used as AI tools; undocumented tools are invisible to agent.
  • [ALWAYS] Include unique webhookId per workflow to prevent path collisions across workflows.

[REFERENCE]: [→integrations.md](./references/integrations.md)
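
Example (illustrative): a webhook trigger node applying the responseMode and webhookId practices above. The UUIDs, path, typeVersion, and remaining parameter names are placeholders assumed from the standard webhook node shape; confirm exact parameters in [integrations.md](./references/integrations.md).

```json
{
  "id": "9a8b7c6d-5e4f-4a3b-8c2d-1f0e9d8c7b6a",
  "name": "Incoming Webhook",
  "type": "n8n-nodes-base.webhook",
  "typeVersion": 2,
  "position": [240, 300],
  "webhookId": "4b1b6f2e-8c3d-4e5f-9a0b-7c6d5e4f3a2b",
  "parameters": {
    "httpMethod": "POST",
    "path": "orders-intake",
    "responseMode": "lastNode"
  }
}
```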

---

[6][RAG]

>Dictum: RAG pipelines ground LLM responses in domain-specific knowledge.


Guidance:

  • Vector Store Selection — Simple for development; PGVector/Pinecone/Qdrant for production persistence.
  • Embedding Consistency — Same embedding model required for insert and query; mismatch causes semantic drift.
  • Chunk Strategy — Recursive Character splitter recommended; splits Markdown/HTML/code before character fallback.
  • Memory vs Chains — Only agents support memory; chains are stateless single-turn processors.
  • Retriever Modes — MultiQuery for complex questions; Contextual Compression for noise reduction.

Best-Practices:

  • [ALWAYS] Match embedding model between document insert and query operations.
  • [ALWAYS] Use ai_memory connection type for memory nodes; main silently fails.
  • [NEVER] Use Simple Vector Store in production; data lost on restart, global user access.

[REFERENCE]: [→rag.md](./references/rag.md)
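
Example (illustrative): connection entries for a document-insert pipeline feeding a production vector store. The node names are placeholders, and the ai_embedding, ai_document, and ai_textSplitter sub-types are assumptions beyond the connection types named in this file; verify them in [rag.md](./references/rag.md) and [connections.md](./references/connections.md). Reusing the same embeddings node on the matching query path keeps insert and query embeddings consistent.

```json
{
  "Embeddings OpenAI": {
    "ai_embedding": [
      [{ "node": "Pinecone Vector Store", "type": "ai_embedding", "index": 0 }]
    ]
  },
  "Default Data Loader": {
    "ai_document": [
      [{ "node": "Pinecone Vector Store", "type": "ai_document", "index": 0 }]
    ]
  },
  "Recursive Character Text Splitter": {
    "ai_textSplitter": [
      [{ "node": "Default Data Loader", "type": "ai_textSplitter", "index": 0 }]
    ]
  }
}
```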

---

[7][VALIDATION]

>Dictum: Pre-export validation prevents n8n import failures.


Script:

```bash
uv run .claude/skills/n8n-builder/scripts/validate-workflow.py workflow.json
uv run .claude/skills/n8n-builder/scripts/validate-workflow.py workflow.json --strict
```

Checks (12 automated):

  • root_required — name, nodes, connections present
  • node_id_unique / node_name_unique — no duplicates
  • node_id_uuid — valid UUID format
  • conn_targets_exist — connection targets reference existing nodes
  • conn_ai_type_match — AI connection key matches type property
  • settings_exec_order_ai — LangChain workflows require executionOrder: "v1"
  • settings_caller_policy / node_on_error — enum value validation

Guidance:

  • API Deployment — Use POST then PUT pattern; single POST may ignore settings due to API bug.
  • Performance — saveExecutionProgress: true triggers DB I/O per node; disable for high-throughput (>1000 RPM).
  • Source Control — Strip instanceId when sharing; credential files contain stubs only, not secrets.

[REFERENCE]: [→validation.md](./references/validation.md)
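
Example (illustrative): a settings fragment that lines up with the enum checks and performance guidance above. The callerPolicy value shown is one common enum option and the errorWorkflow value is a deliberate placeholder, since that ID is instance-specific; check [validation.md](./references/validation.md) and [schema.md](./references/schema.md) for the accepted values.

```json
{
  "settings": {
    "executionOrder": "v1",
    "saveExecutionProgress": false,
    "callerPolicy": "workflowsFromSameOwner",
    "errorWorkflow": "REPLACE_WITH_INSTANCE_WORKFLOW_ID"
  }
}
```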
