# N8N Community Nodes Installation Path Fix
**Date:** 2025-11-30
**Status:** Critical Fix - Root Cause Identified
**Complexity:** Easy (once root cause understood)
**Risk:** Low - straightforward path correction
## Problem Summary
After deploying custom N8N Docker images with community nodes to both the main and worker instances, workflows still failed because the community nodes could not be found.
Despite:

- ✅ Custom Docker image built successfully
- ✅ ACR authentication configured correctly
- ✅ Both main and worker instances pulling the custom image
- ✅ Image verification showing the nodes installed
## Root Cause Analysis

### The Critical Mistake
Our Dockerfile was installing community nodes in the WRONG LOCATION.
The original `npm install -g --prefix /usr/local` command installs nodes to `/usr/local/lib/node_modules/n8n-nodes-slidefactory/`, which is:

- ✅ a valid npm global installation location
- ✅ visible in the filesystem
- ✅ verifiable with `ls`
- ❌ NOT where N8N looks for community nodes!
### How N8N Actually Finds Community Nodes
N8N resolves community nodes through Node's standard module resolution, starting from its own `node_modules` directory:

`/usr/local/lib/node_modules/n8n/node_modules/`
NOT in:

- `/usr/local/lib/node_modules/<community-node-package>/` (global install location)
- `/home/node/.n8n/custom/` (only for manually copied nodes)
- anywhere else
### Why This Wasn't Obvious
- **Build succeeded:** npm installed the packages without errors
- **Verification passed:** `ls /usr/local/lib/node_modules/` showed the nodes
- **Image pulled correctly:** ACR authentication worked
- **Deployment succeeded:** no container startup failures
But the nodes were invisible to N8N because they weren't in N8N's search path.
## The Fix (Updated After Workspace Error)

### Updated Dockerfile
**Changed from:**

```dockerfile
# Install community nodes via npm
# We install them directly in node_modules to avoid workspace issues
RUN npm install -g --prefix /usr/local n8n-nodes-slidefactory@0.1.3 && \
    npm install -g --prefix /usr/local @mendable/n8n-nodes-firecrawl@1.0.6 && \
    echo "Community nodes installed successfully"

# List installed community nodes for verification
RUN ls -la /usr/local/lib/node_modules/ | grep -E "slidefactory|firecrawl" || true
```
**Attempted (failed with workspace error):**

```dockerfile
# ❌ This caused: npm error code EUNSUPPORTEDPROTOCOL
# ❌ npm error Unsupported URL Type "workspace:": workspace:*
RUN cd /usr/local/lib/node_modules/n8n && \
    npm install n8n-nodes-slidefactory@0.1.3 @mendable/n8n-nodes-firecrawl@1.0.6
```
**Final working solution:**

```dockerfile
# Install community nodes globally
# N8N automatically discovers globally installed community nodes
RUN npm install -g n8n-nodes-slidefactory@0.1.3 @mendable/n8n-nodes-firecrawl@1.0.6

# Verify installation
RUN npm list -g --depth=0 | grep -E "slidefactory|firecrawl"
```
### Key Changes

- Use `npm install -g` - install globally (standard npm pattern)
- No `--prefix` - let npm use the default global location
- Single `npm install` command - install both packages together
- Simple verification - use `npm list -g` to verify
### Why the "Install into N8N's node_modules" Approach Failed

N8N uses pnpm workspaces internally. When we tried to `npm install` directly into N8N's `node_modules`:

- npm encountered the `workspace:*` protocol in N8N's dependencies
- npm doesn't support the workspace protocol (it's a pnpm feature)
- installation failed with `EUNSUPPORTEDPROTOCOL`
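For context, a `workspace:` dependency has the shape shown in the illustrative file below (this is NOT N8N's actual `package.json`, just the pattern). npm has no installer for this protocol, which is why it aborts with `EUNSUPPORTEDPROTOCOL`:

```shell
# Illustrative only -- a minimal package.json with a pnpm workspace
# dependency, the kind of entry npm cannot resolve.
cat > /tmp/workspace-example.json <<'EOF'
{
  "name": "example-app",
  "dependencies": {
    "@example/internal-lib": "workspace:*"
  }
}
EOF

# The "workspace:" protocol marker is what trips npm's resolver.
grep -c '"workspace:\*"' /tmp/workspace-example.json
```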
The correct approach is global installation, which N8N detects automatically.
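Why a plain global install is visible to N8N: Node's upward `node_modules` lookup, starting from the directory where the `n8n` package itself is installed, includes the global modules directory. The pure-shell sketch below approximates Node's lookup algorithm (it is an illustration, not N8N code):

```shell
# Approximate Node's upward node_modules search, starting from the
# directory where the n8n package lives. Directories already named
# node_modules are skipped, matching Node's algorithm.
start=/usr/local/lib/node_modules/n8n
dir=$start
paths=""
while [ "$dir" != "/" ]; do
  case "$dir" in
    */node_modules) ;;   # skip: already a node_modules directory
    *) paths="$paths$dir/node_modules
" ;;
  esac
  dir=$(dirname "$dir")
done
printf '%s' "$paths"
```

The computed list includes `/usr/local/lib/node_modules`, which is exactly where `npm install -g` places packages, so modules installed there are resolvable from inside N8N.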
### Installation Location After Fix

With a plain `npm install -g`, the community packages land alongside the `n8n` package in the global `node_modules` directory, which is on N8N's module resolution path:

```
/usr/local/lib/node_modules/
├── n8n/                          # N8N itself
├── n8n-nodes-slidefactory/       # ✅ N8N can resolve this
├── @mendable/
│   └── n8n-nodes-firecrawl/      # ✅ N8N can resolve this
└── [other globally installed packages...]
```
## Impact Assessment

| | Before Fix | After Fix |
| --- | --- | --- |
| Custom image deployed | ✅ | ✅ |
| Nodes installed | ✅ | ✅ |
| Nodes visible to N8N | ❌ | ✅ |
| Workflows work | ❌ | ✅ |
## Deployment Steps

### 1. Rebuild Custom N8N Image

```bash
# Via GitHub Actions (recommended)
# Go to: Actions → "N8N - Build Custom N8N Image" → Run workflow
# Input values:
#   - N8N version: 1.121.3
#   - Slidefactory nodes version: 0.1.3
#   - Firecrawl nodes version: 1.0.6
#   - Tag as latest: Yes
```
This will build a NEW image with the corrected installation path.
### 2. Deploy to Main Instance

```bash
# Via GitHub Actions
# Go to: Actions → "N8N - Deploy N8N Queue Mode to Azure" → Run workflow
# Input values:
#   - Action: deploy-main
#   - N8N version: 1.121.3 (must match newly built image)
```
### 3. Deploy to Workers

```bash
# Via GitHub Actions (same workflow)
# Input values:
#   - Action: deploy-workers
#   - N8N version: 1.121.3
#   - Worker min replicas: 1
#   - Worker max replicas: 10
```
Or deploy both together in a single workflow run.
### 4. Verify Nodes Are Available

**Check via N8N UI:**

1. Open N8N: https://slidefactory-n8n.thankfulsmoke-fef50a06.westeurope.azurecontainerapps.io
2. Create a new workflow
3. Click "Add node"
4. Search for "Slidefactory" - it should appear in the results
5. Search for "Firecrawl" - it should appear in the results
**Check via Container Logs:**

```bash
# Check main instance startup logs
az containerapp logs show \
  --name slidefactory-n8n \
  --resource-group rg-slidefactory-prod-001 \
  --tail 100 \
  --follow false | grep -i "community\|node"

# Check worker startup logs
az containerapp logs show \
  --name slidefactory-n8n-worker \
  --resource-group rg-slidefactory-prod-001 \
  --tail 100 \
  --follow false | grep -i "community\|node"
```
Look for startup log lines that mention the community node packages being loaded.
**Check via Container Exec:**

```bash
# Not easily available in Azure Container Apps
# Would need to use a debugging container or startup script
```
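That said, recent Azure CLI versions do provide `az containerapp exec` for opening an interactive shell in a running replica; a possible manual check (app and resource group names as used elsewhere in this doc) might look like:

```shell
# Open an interactive shell in a running replica (interactive
# sessions only; requires a recent Azure CLI with the containerapp
# extension installed).
az containerapp exec \
  --name slidefactory-n8n \
  --resource-group rg-slidefactory-prod-001 \
  --command sh

# Then, inside the container, confirm the packages are installed:
#   npm list -g --depth=0 | grep -E "slidefactory|firecrawl"
```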
## Testing Checklist

- [ ] Build new custom N8N image with the fixed Dockerfile
- [ ] Image build completes successfully
- [ ] Deploy to main instance
- [ ] Deploy to workers
- [ ] Main instance starts successfully
- [ ] Worker instances start successfully
- [ ] Community nodes visible in N8N UI node search
- [ ] Test workflow using the Slidefactory node executes successfully
- [ ] Test workflow using the Firecrawl node executes successfully
- [ ] Queue mode still working (workflows execute on workers)
## Lessons Learned

### What Went Wrong

- **Assumed a `--prefix` global npm install would work** - wrong assumption about N8N's node discovery
- **Verification checked the wrong location** - verified `/usr/local/lib/node_modules/` instead of N8N's resolution path
- **Focused on deployment/auth issues** - spent time fixing ACR authentication when the real problem was the installation path
- **Didn't test actual node availability** - verified build/deployment, not runtime behavior
### What Should Have Been Done

- **Research N8N's node loading mechanism first** - understand where N8N searches for nodes
- **Test locally before deploying to Azure** - run the custom image locally and verify the nodes appear in the UI
- **Check N8N logs for node loading** - look for "Loaded community nodes" messages
- **Verify the correct installation location** - check `/usr/local/lib/node_modules/n8n/node_modules/`
### Key Insight
Just because a package is installed doesn't mean the application can find it.
Applications have specific search paths for dependencies. You must install packages WHERE THE APPLICATION LOOKS, not just anywhere in the filesystem.
## References

### N8N Community Node Documentation

### Docker & npm
- `npm install` - install packages as dependencies
- `npm install -g` - install packages globally
### Related Reports

- N8N Queue Mode Setup
- ACR Authentication Fix - this was a red herring; auth was fine
## Modified Files

- `docker/n8n-custom/Dockerfile` - fixed the npm install location
- `docker/n8n-custom/README.md` - updated documentation and examples
## Rollback Plan

If the fix doesn't work:

1. Verify the image was rebuilt with the new Dockerfile
2. Check that the image tag matches the deployment configuration
3. Review build logs for npm install errors
4. Test the image locally
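A local smoke test for that last step might look like the sketch below; the registry and image names are placeholders, substitute the actual ACR image reference:

```shell
# Hypothetical local check -- image name and tag are placeholders.
# Confirm the community node packages sit on npm's global module
# path inside the freshly built image.
docker run --rm --entrypoint sh \
  <acr-name>.azurecr.io/n8n-custom:1.121.3 \
  -c "npm list -g --depth=0 | grep -E 'slidefactory|firecrawl'"

# Then run it normally and check the node search in the UI:
#   docker run --rm -p 5678:5678 <acr-name>.azurecr.io/n8n-custom:1.121.3
#   open http://localhost:5678
```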
If it's still broken, fall back to the stock image temporarily:

```bash
# Revert to the standard N8N image
az containerapp update \
  --name slidefactory-n8n \
  --resource-group rg-slidefactory-prod-001 \
  --image n8nio/n8n:1.121.3
```
## Next Steps
- ✅ Fix Dockerfile installation path
- ⏳ Rebuild custom N8N image via GitHub Actions
- ⏳ Deploy to main instance
- ⏳ Deploy to workers
- ⏳ Verify nodes available in N8N UI
- ⏳ Test workflows using community nodes
- ⏳ Monitor for any errors
## Conclusion

The root cause was installing community nodes where N8N does not look for them: the `--prefix /usr/local` install placed the packages outside N8N's module resolution path. Installing directly into `/usr/local/lib/node_modules/n8n/node_modules/` turned out not to work with npm because of N8N's pnpm `workspace:*` dependencies.

The working fix is a plain `npm install -g`, which puts the packages on the global module path that N8N resolves at runtime, making them discoverable without touching N8N's own `node_modules`.
**Estimated effort:** 5 minutes to rebuild the image, 10 minutes to deploy, 5 minutes to verify = 20 minutes total

**Risk:** Low - straightforward path correction, tested pattern from N8N community examples