Business Process Automation · Advanced
November 4, 2025
6 min read
Estimated build time: 1 hour
From Queries to Resolutions: A Multi-Team Support Bot Powered by Slash Commands
AI-powered Telegram support bot workflow using N8N, Pinecone, and slash commands to manage multi-department queries with session tracking and vector search.
By Nayma Sultana

Customer support teams face a common challenge: answering the same questions across multiple departments while keeping track of conversations. When a customer asks about billing, then switches to technical support, then circles back to return policies, things get messy fast. Context gets lost. Responses slow down. Customers get frustrated.
What if your support bot could remember who it's talking to, understand which department they need, and pull answers from the right knowledge base every single time? That's exactly what this N8N workflow does. It creates an intelligent Telegram chatbot that routes conversations to specialized departments, each powered by its own AI-driven knowledge base.
Prerequisites: What You'll Need
Before diving into the workflow, gather these API accounts and tools:
- Telegram Bot Token: Create a bot through BotFather on Telegram to get your API credentials
- OpenAI API Key: Powers the conversational AI agent that responds to customer queries
- Pinecone Account: Vector database for storing and retrieving document embeddings
- Cohere API Key: Generates embeddings for semantic search across your knowledge base
- Google Drive API: Automatically syncs support documents from your Drive folders
- PostgreSQL Database: Manages user sessions and conversation state
- N8N Instance: The workflow automation platform that ties everything together
Key Components of the Workflow
This workflow leverages several powerful N8N nodes working in harmony:
- Telegram Trigger: Listens for incoming messages from users
- PostgreSQL Nodes: Create, read, and update user session data
- Switch Nodes: Route conversations based on user state and commands
- AI Agent: Processes queries using OpenAI GPT-4.1-mini
- Pinecone Vector Stores: Three separate knowledge bases for each department
- Embeddings Cohere: Converts text into searchable vector representations
- Google Drive Triggers: Monitor folders for new support documents
- Document Loaders and Text Splitters: Process documents for the vector database
Step 1: Set Up Session Management with PostgreSQL
Every good conversation needs memory. This workflow starts by creating a PostgreSQL table that tracks each user's journey through your support system.
The table stores three critical pieces of information: the user's Telegram ID, which department they're currently talking to, and whether their session is active. When someone first messages your bot, the workflow checks if they exist in the database. If not, it creates a new entry with their session marked as inactive.
This state management is what makes the bot smart. It knows whether someone is mid-conversation or just starting out. It remembers which department they chose. And it ensures conversations don't bleed across different support topics.
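The session logic can be sketched in a few lines of Python. This is an illustrative stand-in, not the workflow's actual node configuration: sqlite3 substitutes for PostgreSQL, and the table and column names (`user_sessions`, `telegram_id`, `department`, `is_active`) are assumptions.

```python
import sqlite3

# sqlite3 stands in for PostgreSQL here; table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS user_sessions (
        telegram_id TEXT PRIMARY KEY,
        department  TEXT,
        is_active   INTEGER NOT NULL DEFAULT 0
    )
""")

def get_or_create_session(telegram_id: str) -> dict:
    """Look up a user's session; on first contact, create one marked inactive."""
    row = conn.execute(
        "SELECT telegram_id, department, is_active FROM user_sessions WHERE telegram_id = ?",
        (telegram_id,),
    ).fetchone()
    if row is None:
        conn.execute(
            "INSERT INTO user_sessions (telegram_id, department, is_active) VALUES (?, NULL, 0)",
            (telegram_id,),
        )
        return {"telegram_id": telegram_id, "department": None, "is_active": False}
    return {"telegram_id": row[0], "department": row[1], "is_active": bool(row[2])}
```

In the real workflow, the same check-then-insert happens across PostgreSQL nodes wired after the Telegram Trigger.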
Step 2: Build the Conversation Router
Once the session is tracked, the workflow needs to decide what to do with each incoming message. That's where the main Switch node comes in.
This node examines the user's current state and routes them accordingly. If someone sends the /start command, they get a friendly menu showing available departments. If they're not in an active session, the bot gently reminds them to start one. If they type /end, their session gets closed gracefully.
The clever part happens when someone is active and has chosen a department. Those messages go straight to the AI agent, which has access to the right knowledge base. But if they're active without a department selected, they hit a second Switch node that processes department commands like /billing, /TechSupport, or /ReturnPolicy.
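The branching described above can be mirrored as a plain Python function. This is a sketch of the Switch nodes' decision order, not their exact configuration; the returned branch labels are invented for illustration.

```python
def route_message(session: dict, text: str) -> str:
    """Mirror the two Switch nodes: return a label for the branch a message takes."""
    departments = {
        "/billing": "billing",
        "/TechSupport": "tech_support",
        "/ReturnPolicy": "return_policy",
    }
    if text == "/start":
        return "show_department_menu"       # friendly menu of departments
    if text == "/end":
        return "close_session"              # graceful session shutdown
    if not session["is_active"]:
        return "prompt_to_start"            # remind the user to send /start
    if session["department"]:
        return "send_to_ai_agent"           # active + department chosen
    if text in departments:
        return f"select:{departments[text]}"  # second Switch: department commands
    return "prompt_for_department"
```

Active users with a chosen department skip straight to the AI agent; everyone else is nudged toward the right command.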
Step 3: Configure Department-Specific Vector Stores
Here's where things get interesting. Each department gets its own Pinecone vector store, acting as a specialized brain for that topic.
The workflow sets up three distinct namespaces in Pinecone: one for billing documents, one for technical support guides, and one for return policy information. When documents get uploaded to their corresponding Google Drive folders, the workflow automatically processes them.
Each document gets downloaded, split into manageable chunks using the Character Text Splitter, converted into vector embeddings through Cohere's API, and stored in the appropriate Pinecone namespace. This happens monthly through scheduled triggers, keeping your knowledge base fresh without manual intervention.
The beauty of this setup is isolation. Billing answers come from billing documents. Tech support responses pull from technical guides. No cross-contamination, no irrelevant results.
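The ingestion pipeline reduces to three steps: chunk, embed, upsert into the right namespace. A minimal sketch, with `embed` and `upsert` passed in as stand-ins for the Cohere and Pinecone clients (the chunk size, overlap, and ID scheme are assumptions, not the workflow's actual settings):

```python
def split_text(text: str, chunk_size: int = 400, overlap: int = 50) -> list[str]:
    """Character-based splitting, mirroring what the Character Text Splitter node does."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def ingest(doc_id: str, text: str, namespace: str, embed, upsert) -> None:
    """Chunk a document, embed each chunk, and upsert into a department namespace.
    `embed` and `upsert` stand in for the Cohere embeddings and Pinecone clients."""
    for i, chunk in enumerate(split_text(text)):
        upsert(
            id=f"{doc_id}-{i}",
            vector=embed(chunk),
            metadata={"text": chunk},
            namespace=namespace,  # e.g. "billing", "tech_support", "return_policy"
        )
```

Because every upsert carries a namespace, a billing query later only searches billing vectors.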
Step 4: Wire Up the AI Agent
Now comes the magic. The AI Agent node connects all three vector stores as tools, giving it selective access to departmental knowledge.
The agent uses OpenAI's GPT-4.1-mini model with a carefully crafted system prompt. This prompt instructs the AI to act as a helpful assistant for the specific department the user selected. It emphasizes using the vector store tools to find accurate answers and politely declining questions outside its assigned department.
Memory management happens through a Buffer Window component that maintains conversation history per user. Each session ID corresponds to a Telegram user, so the bot remembers context throughout the conversation. When someone asks a follow-up question, the AI understands what "it" or "that" refers to from earlier messages.
The agent retrieves relevant document chunks from the appropriate vector store, synthesizes them with the conversation context, and generates natural responses. All of this happens in seconds, creating a seamless support experience.
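The per-user memory can be sketched as a small class: one bounded message list per session ID, where the session ID is the Telegram user ID. The window size and class shape here are illustrative, not the Buffer Window node's actual internals.

```python
from collections import defaultdict, deque

class BufferWindowMemory:
    """Keep the last `window` exchanges per session ID (the Telegram user ID),
    approximating what the Buffer Window memory component does."""

    def __init__(self, window: int = 5):
        # Each exchange is a user turn plus an assistant turn, hence window * 2.
        self.store = defaultdict(lambda: deque(maxlen=window * 2))

    def add(self, session_id: str, role: str, content: str) -> None:
        self.store[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list[dict]:
        """Messages to prepend to the next model call, oldest first."""
        return list(self.store[session_id])
```

Old turns fall off the front automatically, so follow-up questions resolve "it" and "that" without the prompt growing unbounded.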
Step 5: Handle Session Lifecycle
Good conversations have beginnings and endings. This workflow manages both with precision.
When someone selects a department, the corresponding Telegram response node sends a confirmation message, then a PostgreSQL node updates their session. It marks them as active and records which department they chose. From that point on, their messages flow to the AI agent configured for that department.
When they type /end, the workflow updates the database again, setting their session to inactive and clearing the department field. They get a friendly message explaining how to restart when they need help again.
This clean session management prevents confusion. Users can't accidentally send billing questions to the tech support agent. The system enforces boundaries while remaining flexible for new conversations.
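The two lifecycle transitions are single UPDATE statements. A self-contained sketch, again using sqlite3 in place of PostgreSQL and assumed table and column names:

```python
import sqlite3

# sqlite3 stands in for PostgreSQL; schema names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_sessions (telegram_id TEXT PRIMARY KEY, department TEXT, is_active INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO user_sessions VALUES ('42', NULL, 0)")

def activate_session(telegram_id: str, department: str) -> None:
    """Runs after a department command like /billing: mark active, record the choice."""
    conn.execute(
        "UPDATE user_sessions SET is_active = 1, department = ? WHERE telegram_id = ?",
        (department, telegram_id),
    )

def end_session(telegram_id: str) -> None:
    """Runs on /end: deactivate the session and clear the department field."""
    conn.execute(
        "UPDATE user_sessions SET is_active = 0, department = NULL WHERE telegram_id = ?",
        (telegram_id,),
    )
```

Because the router checks these two columns on every message, flipping them is all it takes to open or close a conversation.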
Benefits and Real-World Use Cases
This workflow solves several pain points that plague customer support operations.
First, it provides 24/7 availability without human intervention for common questions. Customers get instant answers about return policies at 2 AM or billing clarifications during holidays. The AI agent handles routine inquiries, freeing human agents for complex issues.
Second, it maintains perfect consistency. Every customer gets the same accurate information from your official documents. No more worrying about whether different support agents give conflicting answers.
Third, it scales effortlessly. Whether you have ten customers or ten thousand sending messages simultaneously, the workflow handles them all. Each conversation stays independent, tracked by session state in PostgreSQL.
Beyond customer support, this pattern works for internal knowledge bases too. Imagine department-specific bots for HR policies, IT troubleshooting, or sales enablement. Each team gets relevant information without wading through irrelevant documents.
Educational institutions could deploy this for student services, with separate knowledge bases for admissions, financial aid, and academic advising. Healthcare organizations could create patient-facing bots that provide general information while routing complex questions to appropriate specialists.
The workflow's modular design makes it easy to extend. Need a fourth department? Add another Google Drive trigger, Pinecone namespace, and update the Switch node. Want to change the AI model? Swap the OpenAI node for Claude or Llama. Need different embeddings? Replace Cohere with OpenAI or local models.
Wrapping Up
Building intelligent customer support doesn't require a massive engineering team or six-figure budget anymore. This N8N workflow proves you can create sophisticated, AI-powered conversational agents with visual automation tools.
The combination of state management, vector databases, and large language models creates something that feels remarkably human. Customers get helpful, accurate answers. Support teams spend less time on repetitive questions. Knowledge stays organized and accessible.
Start with one department and expand from there. Let your documents do the talking through AI. And watch your support operation transform from reactive to proactive, one automated conversation at a time.