N8N Version and Community Nodes Management

Critical: How Our N8N Deployment Works

Our N8N runs in queue mode with two types of instances sharing the same Docker image:

  • Main instance (slidefactory-n8n): Serves the UI, API, webhooks, scheduling. Does NOT execute workflows.
  • Worker instances (slidefactory-n8n-worker): Execute workflows. Scale from 1 to 10 replicas.

Both instances must always run the exact same image with the exact same community nodes. If they diverge, workflows will fail with errors like:

Error: Unrecognized node type: @mendable/n8n-nodes-firecrawl.firecrawl

This happens because the main instance accepts a workflow trigger (e.g., form submission), but the worker that picks up the job cannot find the node type needed to execute it.

Never Install Community Nodes via the N8N UI

Do NOT use Settings → Community Nodes in the N8N UI.

Installing or updating community nodes through the N8N UI only affects the main instance. Workers run separate containers and will NOT receive the update. This puts main and workers out of sync and will break all workflows that use those nodes.

This applies to:

  • Installing new community nodes
  • Updating existing community nodes to a new version
  • Removing community nodes

All community node changes must go through the Docker image build process described below.

How Community Nodes Are Managed

Community nodes are baked into the custom Docker image at build time. The process:

  1. Nodes are installed to /opt/custom-nodes/ in the Docker image during build
  2. At container startup, an entrypoint wrapper script copies them into /home/node/.n8n/nodes/
  3. N8N discovers them on startup

This two-step approach exists because the main instance mounts a persistent Azure File Share at /home/node/.n8n (for N8N config/data persistence), which would shadow any nodes installed directly to that path during the Docker build.
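The copy step can be sketched as a small shell function (a sketch only; the real /docker-entrypoint-wrapper.sh is not shown in this document and may differ in detail):

```shell
# Sketch of the node-staging step the entrypoint wrapper performs at startup.
# stage_nodes SRC DEST: copy the baked-in community nodes into the runtime
# nodes dir, which on the main instance is the mounted Azure File Share.
stage_nodes() {
  src="$1"; dest="$2"
  if [ -d "$src" ]; then
    mkdir -p "$dest"
    cp -R "$src/." "$dest/"   # overwrite with the image's node versions
  fi
}

# Inside the container the wrapper would then run roughly:
#   stage_nodes /opt/custom-nodes /home/node/.n8n/nodes
#   exec "$@"    # hand off to "n8n" (main) or "n8n worker"
```

Because the copy happens at every startup, the mounted volume on the main instance always ends up with the node versions baked into the image, never stale ones.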

Current community nodes

  • n8n-nodes-slidefactory: S5 Slidefactory integration (presentations, templates, results)
  • @mendable/n8n-nodes-firecrawl: Web scraping and search via the Firecrawl API
  • @nexrender/n8n-nodes-nexrender: Video rendering via Nexrender

Update Procedures

Updating N8N Base Version

Two-step process: Build image, then deploy.

Step 1: Build Custom Image

  1. Go to GitHub Actions → "N8N - Build Custom N8N Image"
  2. Click "Run workflow"
  3. Set the new N8N version (e.g., 1.125.0)
  4. Keep community node versions the same (unless also updating those)
  5. Run the workflow — it builds and pushes to Azure Container Registry

Step 2: Deploy

  1. Go to GitHub Actions → "N8N - Deploy N8N Queue Mode to Azure"
  2. Click "Run workflow"
  3. Set the same N8N version you just built (e.g., 1.125.0)
  4. Action: deploy-all (updates both main and workers simultaneously)
  5. Run the workflow

Step 3: Verify

  1. Open N8N UI and check version in Settings → About
  2. Run a test workflow that uses community nodes
  3. Check worker logs for any "Unrecognized node type" errors:
az containerapp logs show --name slidefactory-n8n-worker \
  --resource-group rg-slidefactory-prod-001 --tail 50
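For step 3, a small helper can scan the pulled logs for the failure signature (a sketch; pipe the az command above into it):

```shell
# has_node_errors: read log text on stdin and fail if any
# "Unrecognized node type" lines are present -- the signature of
# main/worker community-node divergence described earlier.
has_node_errors() {
  if grep -qi "Unrecognized node type"; then
    echo "Community node errors found in worker logs" >&2
    return 1
  fi
  return 0
}

# Usage (assumes the az command shown above):
#   az containerapp logs show --name slidefactory-n8n-worker \
#     --resource-group rg-slidefactory-prod-001 --tail 50 \
#     | has_node_errors || echo "Rebuild the image and redeploy"
```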

Updating Community Node Versions

Same two-step process. The only difference is which inputs you change in Step 1.

Step 1: Build Custom Image (community nodes)

  1. Go to GitHub Actions → "N8N - Build Custom N8N Image"
  2. Keep the N8N base version the same
  3. Update the community node version(s) you want to change
  4. Run the workflow

Step 2: Deploy (community nodes)

  1. Go to GitHub Actions → "N8N - Deploy N8N Queue Mode to Azure"
  2. Action: deploy-all
  3. Run the workflow

Always use deploy-all when updating community nodes to update both main and workers at the same time. Never use deploy-main or deploy-workers separately — this creates a window where they run different node versions.

Adding a New Community Node

  1. Update the Dockerfile: add the new package to the npm install line
  2. Update the build workflow: add a new input for the version
  3. Commit and push
  4. Build and deploy as described above
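The Dockerfile change in step 1 might look roughly like this. This is a sketch only: the actual file layout, the existing ARG names, and the new package name @example/n8n-nodes-new are all assumptions, not the real file contents.

```dockerfile
# Sketch only -- ARG names and the new package are hypothetical.
ARG SLIDEFACTORY_NODES_VERSION=0.1.13
ARG NEW_NODE_VERSION=0.1.0   # hypothetical input for the new community node

# Install community nodes into the staging dir that the entrypoint
# wrapper copies into /home/node/.n8n/nodes/ at container startup.
RUN cd /opt/custom-nodes && \
    npm install \
      n8n-nodes-slidefactory@${SLIDEFACTORY_NODES_VERSION} \
      @example/n8n-nodes-new@${NEW_NODE_VERSION}
```

The matching build-workflow input (step 2) would pass the new version through as a build ARG, the same way the existing node versions are passed.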

Version Defaults and Alignment

Version defaults must be kept in sync across these files:

  • docker/n8n-custom/Dockerfile: Docker image build (ARG default). Update the N8N_VERSION ARG and node package versions.
  • .github/workflows/build-n8n-custom.yml: CI build workflow inputs. Update the n8n_version default and node version defaults.
  • .github/workflows/deploy-n8n-queue-mode.yml: CI deploy workflow input. Update the n8n_version default.
  • scripts/deploy-n8n-queue-mode.sh: Manual deploy script. Update the N8N_VERSION variable.

When bumping the default version, update all four files to prevent confusion. The build and deploy workflows accept version overrides at runtime, but keeping defaults aligned avoids accidental mismatches.
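A quick local sanity check before committing could extract each default and compare them. The comparison helper below is generic; the grep commands in the usage comment are assumptions about how each file spells its default and may need adjusting:

```shell
# all_equal V1 V2 ... : succeed only if every argument is identical.
all_equal() {
  first="$1"
  for v in "$@"; do
    [ "$v" = "$first" ] || return 1
  done
}

# Usage against the four files (extraction patterns are assumptions):
#   v1=$(grep -oE 'N8N_VERSION=[0-9.]+' docker/n8n-custom/Dockerfile | head -1 | cut -d= -f2)
#   v2=$(...)  # extract the n8n_version default from build-n8n-custom.yml
#   v3=$(...)  # extract the n8n_version default from deploy-n8n-queue-mode.yml
#   v4=$(...)  # extract N8N_VERSION from scripts/deploy-n8n-queue-mode.sh
#   all_equal "$v1" "$v2" "$v3" "$v4" || echo "Version defaults are out of sync"
```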

Rollback

Quick Rollback to Previous N8N Version

  1. Go to GitHub Actions → "N8N - Deploy N8N Queue Mode to Azure"
  2. Set N8N Version to the previous working version (the old image is still in ACR)
  3. Action: deploy-all
  4. Run the workflow

Full Rollback to Non-Queue Mode

  1. Go to GitHub Actions → "N8N - Deploy N8N Queue Mode to Azure"
  2. Action: rollback
  3. Run the workflow. This deletes the workers and reverts the main instance to single-instance (non-queue) mode.

Troubleshooting

"Unrecognized node type" Errors

Cause: Community nodes not found by N8N at runtime.

Check:

  1. Verify the image was built with the community nodes — check the build workflow run's "Verify image" step, which should list all nodes.

  2. Check if the entrypoint wrapper ran successfully:

az containerapp logs show --name slidefactory-n8n-worker \
  --resource-group rg-slidefactory-prod-001 --tail 100 \
  | grep -i "community nodes"
  3. If nodes are missing, rebuild the image and redeploy.

"Problem submitting response" on Forms

Cause: The worker crashed or cannot process the workflow. This is often a community node issue.

Check: Worker logs for startup errors or node loading failures.

Main and Workers Running Different Images

Check:

# Main instance image
az containerapp show --name slidefactory-n8n \
  --resource-group rg-slidefactory-prod-001 \
  --query "properties.template.containers[0].image" -o tsv

# Worker image
az containerapp show --name slidefactory-n8n-worker \
  --resource-group rg-slidefactory-prod-001 \
  --query "properties.template.containers[0].image" -o tsv

Both should show the exact same image tag. If they differ, run deploy-all to resync.
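The comparison can be scripted so it fails loudly on a mismatch (a sketch; the resource names come from the az commands above):

```shell
# check_sync MAIN_IMAGE WORKER_IMAGE: report and fail if the tags differ.
check_sync() {
  if [ "$1" != "$2" ]; then
    echo "MISMATCH: main=$1 worker=$2" >&2
    return 1
  fi
  echo "In sync: $1"
}

# Usage with the two az queries shown above:
#   main=$(az containerapp show --name slidefactory-n8n ... -o tsv)
#   worker=$(az containerapp show --name slidefactory-n8n-worker ... -o tsv)
#   check_sync "$main" "$worker" || echo "Run deploy-all to resync"
```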

Architecture Reference

┌─────────────────────────────────────────────────────┐
│  Docker Image (n8n-custom:1.123.20)                 │
│                                                     │
│  /home/node/.n8n/nodes/     (primary install)       │
│  /opt/custom-nodes/         (staged backup)         │
│    ├── n8n-nodes-slidefactory@0.1.13                │
│    ├── @mendable/n8n-nodes-firecrawl@2.0.6          │
│    └── @nexrender/n8n-nodes-nexrender@0.1.6         │
│                                                     │
│  ENTRYPOINT: tini -- /docker-entrypoint-wrapper.sh  │
│    → copies /opt/custom-nodes/* to                  │
│      /home/node/.n8n/nodes/ at startup              │
│    → then exec "$@" (n8n or n8n worker)             │
└──────────────┬──────────────────┬───────────────────┘
               │                  │
      ┌────────▼────────┐  ┌─────▼──────────────┐
      │  Main Instance  │  │  Worker Instances   │
      │                 │  │  (1-10 replicas)    │
      │  Volume mount:  │  │                     │
      │  Azure Files →  │  │  No volume mount    │
      │  /home/node/    │  │                     │
      │    .n8n/        │  │  args: [n8n,worker] │
      │                 │  │  (preserves wrapper  │
      │  Wrapper copies │  │   ENTRYPOINT)       │
      │  nodes into     │  │                     │
      │  mounted volume │  │  Wrapper copies     │
      │                 │  │  nodes at startup   │
      └─────────────────┘  └─────────────────────┘

CRITICAL: Workers must use "args" not "command" in Azure config.
"command" overrides the ENTRYPOINT, skipping the wrapper entirely.

Last Updated: 2026-03-19