
Guardrails

n8n workflow demonstrating AI safety guardrails: content moderation to filter harmful inputs, structured output validation, and conversation memory for context-aware responses.


AI Insights (auto-generated)

💡 What It Does

  • Checks user messages for harmful content (hate speech, violence, etc.) before sending them to AI
  • Makes sure AI responses follow the exact format you need (like always returning valid JSON)
  • Remembers previous messages in a conversation so the AI can give smarter, context-aware answers
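The three guardrails above can be sketched as one pipeline. This is a minimal illustration with hypothetical helper names, not the template's actual node logic; the real workflow wires these steps together as n8n nodes, and a production setup would use a trained moderation model rather than keywords.

```python
def moderate(message: str) -> bool:
    """Return True if the message is safe. Stand-in for a moderation node;
    a real classifier replaces this toy keyword list."""
    flagged = {"kill", "attack"}
    return not any(word in message.lower() for word in flagged)

def validate(response: dict, required_keys: set) -> bool:
    """Check the AI response contains the fields the format requires."""
    return required_keys <= response.keys()

def handle(message: str, memory: list) -> dict:
    """Moderate the input, call the model, validate the output, update memory."""
    if not moderate(message):
        return {"blocked": True, "reason": "content policy"}
    memory.append({"role": "user", "content": message})
    # Placeholder for the AI call; the real workflow passes `memory` as context.
    response = {"answer": f"echo: {message}"}
    if not validate(response, {"answer"}):
        return {"blocked": True, "reason": "invalid output format"}
    memory.append({"role": "assistant", "content": response["answer"]})
    return {"blocked": False, **response}
```

A safe message flows through all three stages and lands in memory; a flagged one is rejected before the model is ever called.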

🚀 Start Here

1. Send a test message through the workflow.
2. Watch it get checked for safety, processed by AI, and validated.
3. Try a follow-up message to see conversation memory in action.

Success looks like: safe messages get AI responses in the correct format, unsafe messages are blocked, and follow-up questions reference earlier context.

🛠️ Easy Customization

  • Adjust what counts as 'harmful' by changing the [[contentModeration]] rules
  • Define your own output format using the [[structuredOutput]] schema
  • Set how many previous messages to remember in [[conversationMemory]]
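As a rough mental model, the three settings can be pictured as a single config object. The node names mirror the customization points above, but the values and structure here are assumptions for illustration, not the template's defaults.

```python
# Illustrative guardrail settings (assumed values, not the template's defaults).
guardrail_config = {
    "contentModeration": {
        "blockedCategories": ["hate", "violence", "self-harm"],
        "threshold": 0.8,          # flag inputs the moderator scores above this
    },
    "structuredOutput": {          # JSON-schema-style output contract
        "type": "object",
        "properties": {
            "name": {"type": "string"},
            "age": {"type": "integer"},
        },
        "required": ["name", "age"],
    },
    "conversationMemory": {
        "windowSize": 10,          # number of previous messages to keep
    },
}
```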

📚 AI Glossary: Key Terms

Content Moderation (Safety)

Filtering harmful or inappropriate user inputs before processing.

Example: Blocking a message containing threats before it's sent to ChatGPT.
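The example above could be implemented, in its simplest form, as a pattern filter. This is a toy sketch; a real moderation node relies on a trained classifier, not hand-written patterns.

```python
import re

# Toy threat filter (an assumption for illustration only).
THREAT_PATTERNS = [re.compile(p, re.IGNORECASE)
                   for p in (r"\bkill\b", r"\bthreat\w*\b")]

def is_blocked(message: str) -> bool:
    """Return True if the message matches any blocked pattern."""
    return any(p.search(message) for p in THREAT_PATTERNS)
```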
Structured Output (Validation)

Forcing AI responses to match a specific data format.

Example: Ensuring the AI always returns {"name": "...", "age": 25} instead of free-form text.
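Enforcing that contract amounts to parsing the model's raw output and checking each required field. A minimal sketch, assuming a simple field-to-type mapping as the contract (the real node uses a JSON schema):

```python
import json

def parse_structured(raw: str, required: dict) -> dict:
    """Parse model output and verify the expected fields and types.
    `required` maps field name -> expected Python type (assumed contract)."""
    data = json.loads(raw)  # raises ValueError on free-form text
    for field, typ in required.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field {field!r} missing or wrong type")
    return data
```

Free-form text fails at the parse step; valid JSON with a missing or mistyped field fails at the type check, so only conforming responses get through.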
Conversation Memory (Context Management)

Storing previous messages so the AI understands context.

Example: A user asks "What's the weather?", then "How about tomorrow?"; memory lets the AI know "tomorrow" refers to the weather.
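A common way to implement this is a sliding window that keeps only the last N messages; this mirrors the "how many previous messages to remember" setting above, though the window design is an assumption about the node's internals.

```python
from collections import deque

class ConversationMemory:
    """Sliding-window memory: keeps only the last `window` messages."""

    def __init__(self, window: int = 10):
        self.messages = deque(maxlen=window)

    def add(self, role: str, content: str) -> None:
        """Append a message; the oldest is dropped once the window is full."""
        self.messages.append({"role": role, "content": content})

    def context(self) -> list:
        """Return the retained messages to send to the model as context."""
        return list(self.messages)
```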