Your first AI node: Basic LLM Chain
In Lesson 2, you learned how data flows through nodes and expressions. Now we're going to create a node that can think. You'll build an AI Email Classifier: a workflow that reads incoming emails and automatically labels them by intent. No programming is needed. Just a prompt, an LLM, and the data-flow patterns you already know.
The two types of AI nodes in n8n
n8n has two types of AI nodes, and understanding the difference will help you avoid using a cannon when a slingshot would also work.
Root nodes - Standalone nodes that you drag onto the canvas. They perform the AI tasks:
- Basic LLM Chain - Send a prompt, receive a response. No tools, no memory. (This lesson)
- AI Agent - An automated agent with tools, memory, and multi-step reasoning capabilities. (Lesson 4)
- Q&A Chain - Answers questions over your documents. (Lesson 6)
- Summarization Chain - Summarizes long texts.
- Text Classifier - Classifies text into categories.
- Sentiment Analysis - Identifies positive/negative/neutral sentiment.
Child nodes - These are attached to root nodes to extend their functionality:
- LLM providers - OpenAI, Claude, Gemini, Groq, Ollama (your choice of model powers the root node)
- Memory Management - Simple, PostgreSQL, Redis (Lesson 5)
- Tools - SerpAPI, Wikipedia, Code, HTTP Request (Lesson 4)
- Vector storage - Supabase, Pinecone, Qdrant (Lesson 6)
- Embeddings - OpenAI, Cohere, local models
The root node defines the behavior. The child nodes define the capabilities. An AI Agent root node with an OpenAI child node and a SerpAPI tool child node creates an agent that runs on GPT-4o and can search the web.
✅ Quick test: You need to summarize a long document. Which root node would you choose: Basic LLM Chain, AI Agent, or Summarization Chain?
Answer: Summarization Chain. It's purpose-built for summarization, with chunking to handle long documents. Basic LLM Chain could also work, but you'd have to split long documents manually. AI Agent is overkill: you don't need tools or memory for a simple summary.
Building it: an AI email classifier
Time to build your first AI workflow. This classifier reads an email and sorts it into one of four categories: inquiry, support, urgent, or spam.
Step 1: Set up the trigger (Test mode)
Start a new workflow. Instead of Gmail Trigger (which requires a real email address), use this test setup:
- Add a Manual Trigger node.
- Add a Set node and create the following fields:
  - subject (string): "Server down — production is broken"
  - from (string): "ops-team@company.com"
  - body (string): "Our production server went down at 3am. All customer-facing services are offline. Need immediate help."
This simulates an incoming email. Once the workflow is ready, you'll replace this with a real Gmail Trigger.
Step 2: Add Basic LLM Chain
- Add a Basic LLM Chain node after the Set node.
- Click on the node to open its configuration.
- Under the Model section, click to add an OpenAI Chat Model child node.
- Choose your OpenAI credentials.
- Model: gpt-4o-mini (fast, inexpensive, and smart enough for classification)
- In the Prompt field, write:

Classify this email into exactly one category.

Categories:
- inquiry: questions about products, services, or pricing
- support: technical problems, bugs, or help requests
- urgent: issues that need immediate attention
- spam: unsolicited marketing or irrelevant messages

Email subject: {{ $json.subject }}
Email sender: {{ $json.from }}
Email body: {{ $json.body }}

Reply with exactly one word: inquiry, support, urgent, spam
No explanation. No punctuation. Just the category word.
Notice the expressions {{ $json.subject }}, {{ $json.from }}, and {{ $json.body }}, which pull data from the Set node's output. This is the link between the data flow of Lesson 2 and the AI of Lesson 3.
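To see what the node actually sends to the model, here's a rough JavaScript sketch of how those expressions interpolate against an item's JSON. This is a simplification; `buildPrompt` is an illustrative helper, not part of n8n's API, and n8n's real expression engine does far more.

```javascript
// Simplified sketch of n8n expression interpolation.
// buildPrompt is illustrative, not part of n8n's API.
function buildPrompt(json) {
  return [
    "Classify this email into exactly one category.",
    `Email subject: ${json.subject}`,
    `Email sender: ${json.from}`,
    `Email body: ${json.body}`,
    "Reply with exactly one word: inquiry, support, urgent, spam",
  ].join("\n");
}

// The fake email produced by the Set node in Step 1:
const item = {
  subject: "Server down — production is broken",
  from: "ops-team@company.com",
  body: "Our production server went down at 3am.",
};

console.log(buildPrompt(item));
```

Each `{{ ... }}` placeholder becomes a property lookup on the incoming item, which is why the field names in the Set node must match the names in the prompt exactly.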
Step 3: Routing based on classification
Add an IF node after the Basic LLM Chain. Configure it:
- Condition: {{ $json.text }} contains the word urgent
- True branch: add a Slack node (or another notification node) to alert your team.
- False branch: connect a Google Sheets node to log the classification.

The field $json.text contains the LLM's response, in this case a single word such as "urgent" or "support".
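In JavaScript terms, the IF node's check boils down to something like the sketch below. The `route` function is a hypothetical stand-in for the node's "contains" condition, not n8n's API.

```javascript
// Hypothetical helper mirroring the IF node's "contains" condition.
function route(item) {
  return item.text.includes("urgent") ? "true" : "false";
}

console.log(route({ text: "urgent" }));  // "true"  -> Slack alert branch
console.log(route({ text: "support" })); // "false" -> Google Sheets branch
```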
Step 4: Test it
Click "Test workflow" and watch the data flow:
- The Manual Trigger fires.
- The Set node creates the fake email data.
- The Basic LLM Chain classifies it (you'll see "urgent" in the output panel).
- The IF node routes the data down the appropriate branch.
Change the Set node's test data to different email scenarios and run it again. Try a sales pitch (spam), a pricing question (inquiry), and a bug report (support).
✅ Quick check: What happens if the LLM returns "URGENT" (uppercase) but your IF node checks for "urgent" (lowercase)?
Answer: The condition will not match. Fix it by normalizing the LLM output with a Set node expression, {{ $json.text.toLowerCase() }}, or by using "contains" instead of "equals" in the IF node with the comparison set to case-insensitive. Always normalize LLM output before routing on it.
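A minimal sketch of that normalization, assuming the LLM's reply arrives as plain text. Models often append whitespace or a trailing newline, so trimming helps too:

```javascript
// Normalize an LLM reply before routing on it:
// strip surrounding whitespace, then lowercase.
function normalize(llmText) {
  return llmText.trim().toLowerCase();
}

console.log(normalize("URGENT\n"));   // "urgent"
console.log(normalize("  Support ")); // "support"
```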
Prompting techniques for n8n
Writing prompts for automation is different from chatting with an LLM. In a conversation, a verbose reply is fine. In a workflow, you need predictable, parseable output that downstream nodes can process.
Three rules for n8n prompts:
1. Constrain the output format explicitly.

Wrong: "What category does this email belong to?"
Right: "Reply with exactly one word: inquiry, support, urgent, spam"

2. Provide complete context in the prompt. LLMs don't retain information between executions. Each prompt must include all the data it needs; don't assume the model "knows" what you're working with.
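Even a constrained prompt can't guarantee compliance, so it's worth validating the reply before routing. Below is a sketch of a guard you might drop into an n8n Code node; the choice of "inquiry" as the fallback is an assumption, pick whatever suits your triage process.

```javascript
// The four allowed categories from the classifier prompt.
const CATEGORIES = ["inquiry", "support", "urgent", "spam"];

// Return a known category, falling back to "inquiry" (assumed default)
// so a human triages anything the model mislabels.
function safeCategory(llmText) {
  const word = llmText.trim().toLowerCase();
  return CATEGORIES.includes(word) ? word : "inquiry";
}

console.log(safeCategory("URGENT"));           // "urgent"
console.log(safeCategory("Category: maybe?")); // "inquiry"
```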
3. Use few-shot examples for complex tasks.
Example:
- "Can you tell me about pricing?" → inquiry
- "The data export button doesn't work" → support
- "The production database is down" → urgent
- "Buy discounted watches now!" → spam

Now classify: {{ $json.body }}

Few-shot examples significantly improve classification accuracy. For tasks where the boundaries between categories are fuzzy (is "our report is late" urgent or support?), adding 3–5 examples per category can be the difference between 70% and 95% accuracy.
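If you keep your examples as data, you can assemble the few-shot prompt programmatically, for instance in a Code node. A sketch, where the example pairs are illustrative rather than taken from a real dataset:

```javascript
// Labeled examples -> few-shot classification prompt.
// These pairs are illustrative placeholders.
const examples = [
  ["Can you tell me about pricing?", "inquiry"],
  ["The data export button doesn't work", "support"],
  ["The production database is down", "urgent"],
  ["Buy discounted watches now!", "spam"],
];

function fewShotPrompt(emailBody) {
  const shots = examples
    .map(([text, label]) => `- "${text}" -> ${label}`)
    .join("\n");
  return `Classify the email. Examples:\n${shots}\nNow classify: ${emailBody}`;
}

console.log(fewShotPrompt("My invoice page shows an error"));
```

Keeping the examples in one array makes it cheap to add or swap shots as you discover misclassifications in production.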
From testing to production
After the classifier has worked with the test data, change the trigger:
- Remove or disconnect the Manual Trigger and Set node.
- Add the Gmail Trigger node at the beginning.
- Configure it with your Gmail credentials.
- Set the trigger to "New Email"
- Update the expressions in the Basic LLM Chain to match Gmail's output structure:
  - Subject: {{ $json.subject }} (same as before)
  - From: {{ $json.from.value[0].address }}
  - Body: {{ $json.text }} (or {{ $json.snippet }} for a shorter excerpt)
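To see why the From expression changes, here's roughly the shape of a Gmail Trigger item. This structure is simplified and inferred from the expressions above; inspect your own trigger's output panel to confirm the exact fields.

```javascript
// Simplified, assumed shape of one Gmail Trigger item.
const gmailItem = {
  subject: "Server down — production is broken",
  from: { value: [{ address: "ops-team@company.com", name: "Ops Team" }] },
  text: "Our production server went down at 3am.",
  snippet: "Our production server went down at 3am...",
};

// The updated expressions resolve like this:
console.log(gmailItem.subject);               // same expression as before
console.log(gmailItem.from.value[0].address); // "ops-team@company.com"
console.log(gmailItem.text);                  // full body text
```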
Now, activate the workflow. Every new email that arrives will be automatically categorized and routed.
Key points to remember
- The root nodes (Basic LLM Chain, AI Agent, Q&A Chain) define the AI's behavior; the child nodes (LLM provider, memory, tools) define its capabilities.
- Basic LLM Chain is the optimal choice for simple query-response tasks: classification, summarization, and extraction.
- Constrain the output format: a predictable response is what makes reliable downstream routing possible.
- Use the Manual Trigger + Set node for development, then switch to using the actual trigger for the production environment.
- Few-shot examples in the prompt significantly improve classification accuracy.
Question 1:
You're building an email classification tool and want to test it without waiting for actual emails. What's the best approach?
EXPLAIN:
This is a core n8n development pattern. Use a Manual Trigger + Set node to generate fake test data during development. Once the workflow works, replace the Manual Trigger with a real Gmail Trigger. You can even keep both triggers connected to the same workflow and disable the one you're not using.
Question 2:
Your email classifier is returning 'Category is: support request' instead of just 'support'. How do you fix it?
EXPLAIN:
LLMs are verbose by default. The fix is in the prompt, not the model. Constrain the output format explicitly: 'Reply with exactly one word from this list: inquiry, support, urgent, spam. No explanation. No punctuation.' This makes the output predictable and parseable by subsequent nodes.
Question 3:
What is the difference between a Basic LLM Chain and an AI Agent node?
EXPLAIN:
Basic LLM Chain is a simple pipeline: prompt in, response out. It's perfect when you just need an LLM to classify, summarize, or transform text. The AI Agent (Lesson 4) adds autonomy: it can call tools, remember conversations, and chain multiple reasoning steps. Use the simplest node that fits your task.
By Jessica Tanner