n8n Vector Store Integration: Pinecone, Weaviate & pgvector Workflows

Affiliate/Ads disclaimer: Some links on this blog are affiliate/ad links that help keep this project going, meaning I may earn a commission at no extra cost to you.


Published: May 6, 2026
Updated: May 7, 2026

n8n vector store integrations transform documents into embeddings and power semantic search that retrieves meaning, not just keywords. You choose between three deployment models: pgvector self‑hosts inside your existing PostgreSQL instance, Pinecone offers a managed API‑first vector database with purpose‑built nodes, and Weaviate provides either a self‑hosted or cloud‑managed option. This guide details the common RAG blueprint applicable to all three stores, then provides store‑specific configuration, a comparison table, and AI Agent patterns for production‑grade retrieval‑augmented generation.

How do you build a universal RAG pipeline blueprint in n8n that works across all vector stores?

A production RAG pipeline in n8n follows a six‑stage blueprint: Ingest (Default Data Loader reads PDFs/HTML/JSON) → Chunk (Recursive Character Text Splitter with chunk_size and chunk_overlap) → Embed (Embeddings OpenAI or Cohere) → Store (any vector store node) → Retrieve (Vector Store Retriever or AI Agent Tool mode) → Answer (Question & Answer Chain fed by retriever).

The “keep metadata in chunks, embed on insert, and attach source citations at the end” pattern works universally. As a flexible starting point for text-based RAG, use text-embedding-3-small (1,536 dimensions) as the embedder with chunk_size: 1000 and chunk_overlap: 200. Attach source URLs in metadata.source, and in the final formatter, join citations with a simple expression that iterates over $json.results to list each source. For real-world applications of this pipeline, see the n8n AI Agents & LLM Orchestration guide.

| Stage | Node(s) | Key Configuration | Output |
| --- | --- | --- | --- |
| 1. Ingest | Default Data Loader | Binary input from Google Drive, HTTP, or manual upload | Extracted text or structured data |
| 2. Chunk | Recursive Character Text Splitter | chunk_size (e.g., 1000), chunk_overlap (e.g., 200) | Array of text segments with preserved context |
| 3. Embed | Embeddings OpenAI / Cohere | text-embedding-3-small, text-embedding-3-large, or embed-english-v3.0 | High-dimensional vectors per chunk |
| 4. Store | Pinecone / Weaviate / PGVector Vector Store (Insert mode) | Index name, collection name, namespace, or column mapping | Persisted vectors + metadata |
| 5. Retrieve | Vector Store Retriever or AI Agent (Tool mode) | Top-K, similarity metric, metadata filters | Relevant document chunks |
| 6. Answer | Question & Answer Chain / Basic LLM Chain | System prompt, model selection, source citation formatting | Grounded AI response with citations |
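The Chunk stage is easiest to understand in code. The sketch below is a simplified sliding-window splitter, not n8n's actual implementation: the real Recursive Character Text Splitter additionally tries to break on paragraph and sentence boundaries before falling back to raw character positions. It does, however, show exactly what chunk_size and chunk_overlap control.

```python
def split_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    """Simplified splitter: fixed-size chunks where each chunk repeats the
    last `chunk_overlap` characters of the previous one, so context that
    straddles a boundary appears in both chunks."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the rest of the text is already covered
    return chunks

# A 2,500-character document yields three overlapping chunks
doc = "x" * 2500
chunks = split_text(doc, chunk_size=1000, chunk_overlap=200)
```

With chunk_size 1000 and overlap 200 the window advances 800 characters per step, so a 2,500-character document produces chunks starting at offsets 0, 800, and 1600.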

How do you configure a Pinecone vector store node for hybrid search and metadata filtering in n8n?

Pinecone provides two integration paths in n8n: the Pinecone Vector Store node for full pipeline control and the Pinecone Assistant node for managed RAG with minimal setup. For production agent flows, use the Vector Store node in “Retrieve Documents (As Tool for AI Agent)” mode — this is the default and recommended operation since n8n v1.33.0 added serverless index support.

Never omit the “Tool Description” field: the AI Agent reads it to decide when to query the vector store, and leaving it empty causes agent failures. Delete any legacy “Answer questions with a vector store” node if present — these older nodes have known bugs. Match embedding dimensions exactly to your model: text-embedding-ada-002 expects exactly 1,536 dimensions. Enable the namespace option and supply the namespace name if your index uses one; otherwise the connection will fail. Pinecone integrates at the tool level with AI Agents; the official RAG template demonstrates the complete flow from Google Drive to Pinecone to OpenAI Chat. For detailed credential setup, see the n8n Credential Security guide.
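Conceptually, Top-K retrieval with a metadata filter works as sketched below. This is pure Python illustrating what the vector store does server-side, not Pinecone client code; the record layout and the metadata field name `source` are illustrative.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors of equal dimension."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, records, top_k=3, metadata_filter=None):
    """Apply the metadata filter first, then rank survivors by similarity."""
    candidates = [
        r for r in records
        if metadata_filter is None
        or all(r["metadata"].get(k) == v for k, v in metadata_filter.items())
    ]
    candidates.sort(key=lambda r: cosine(query_vec, r["vector"]), reverse=True)
    return candidates[:top_k]

# Toy 2-dimensional "embeddings" for illustration
records = [
    {"id": "a", "vector": [1.0, 0.0], "metadata": {"source": "handbook.pdf"}},
    {"id": "b", "vector": [0.9, 0.1], "metadata": {"source": "faq.html"}},
    {"id": "c", "vector": [0.0, 1.0], "metadata": {"source": "handbook.pdf"}},
]
hits = retrieve([1.0, 0.0], records, top_k=1,
                metadata_filter={"source": "handbook.pdf"})
```

Note the order of operations: filtering before ranking is why metadata filters can change which chunks appear in the Top-K, not just trim the result list afterwards.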

⚠️ Serverless Index Gotcha: Pinecone serverless indexes require “Retrieve Documents (As Tool for AI Agent)” mode — the older “chain/tool” mode does not work with serverless. Additionally, n8n v1.33.0 introduced native serverless support; upgrade if you’re on an older version.

| Store | Hosting | Ideal Scale | Key Advantage | Watch Out For |
| --- | --- | --- | --- | --- |
| pgvector | Self-host (Postgres) | Small–Medium (<1M vectors) | Zero new infra; reuses Postgres backups + SQL | Must manually create DB, extension, table schema, and HNSW indexes |
| Pinecone | Cloud-managed | Medium–Large (1M–100M+ vectors) | Hands-off ops, serverless indexes, hybrid search (semantic + lexical) | Serverless requires correct node mode; cost scales with dimension/volume |
| Weaviate | Self-hosted or Cloud | Medium–Large | Flexible deployment, built-in vectorization, multi-tenant collections | Self-host requires Docker or K8s cluster; cloud tier pricing varies |

How do you set up Weaviate as a self‑hosted or cloud vector store in n8n?

The Weaviate Vector Store node supports four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The last mode connects directly to an AI Agent’s tool connector.

Weaviate supports self‑hosted and cloud clusters. For self‑hosted, start a Weaviate cluster with Docker Compose, then enter the instance URL and API key in n8n credentials. For cloud, sign up for Weaviate Cloud Services, create a cluster, and use the provided endpoint and key. A standard credential panel accepts the Weaviate URL and API key for either path. The official “Document Q&A with RAG: Query PDF content using Weaviate and OpenAI” template demonstrates a minimal RAG implementation that uploads a PDF, generates embeddings with OpenAI, stores them in a Weaviate collection, and provides a chat interface for natural‑language queries. For the complete set of node parameters and vectorizer options, see the n8n Nodes & Techniques Hub.
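For the self-hosted path, a minimal Docker Compose file along these lines starts a single-node cluster with API-key auth (a sketch — the image tag, port mappings, and environment variable names should be verified against the current Weaviate installation docs, and the key/user values here are placeholders):

```yaml
# docker-compose.yml — minimal single-node Weaviate for n8n (illustrative)
services:
  weaviate:
    image: cr.weaviate.io/semitechnologies/weaviate:latest
    ports:
      - "8080:8080"    # REST endpoint — use http://host:8080 in n8n credentials
      - "50051:50051"  # gRPC
    environment:
      AUTHENTICATION_APIKEY_ENABLED: "true"
      AUTHENTICATION_APIKEY_ALLOWED_KEYS: "change-me-secret-key"  # placeholder
      AUTHENTICATION_APIKEY_USERS: "n8n-user"                     # placeholder
      PERSISTENCE_DATA_PATH: "/var/lib/weaviate"
      DEFAULT_VECTORIZER_MODULE: "none"  # n8n supplies embeddings itself
    volumes:
      - weaviate_data:/var/lib/weaviate
volumes:
  weaviate_data:
```

With `DEFAULT_VECTORIZER_MODULE: "none"`, embedding happens in the n8n pipeline (stage 3 of the blueprint) rather than inside Weaviate, which keeps the workflow portable across stores.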

How do you run pgvector locally with Docker and connect it to n8n as a vector store?

Spin up pgvector with Docker: pull pgvector/pgvector:pg16, run a container with persistent storage, then enable the extension and create a table. Next, connect n8n to this instance using a Postgres node with the same credentials, then open a Canvas and add the PGVector Vector Store node. Use the Insert Documents mode to store embeddings and Retrieve Documents for queries.

After pulling the pgvector image and starting the container, connect with psql as postgres to create the database and enable the vector extension. Then create a table with a vector column matching your embedding model’s dimensions (1,536 for text-embedding-3-small, 3,072 for text-embedding-3-large). Add an HNSW index (or IVFFlat for smaller datasets) to accelerate cosine similarity queries over millions of vectors. The PGVector Vector Store node in n8n’s AI package connects to your PostgreSQL database and uses the table you created. For production, pgvector runs inside your existing PostgreSQL backup strategy — no new infrastructure. For scaling pgvector to high‑volume workloads with queue mode, see the n8n Scaling & Queue Configuration guide.
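The one-time SQL setup looks roughly like this (table and column names are illustrative; the `vector(N)` dimension must match your embedding model, and HNSW requires pgvector 0.5.0+):

```sql
-- Run once via psql after starting the pgvector container.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        BIGSERIAL PRIMARY KEY,
    content   TEXT,
    metadata  JSONB,
    embedding VECTOR(1536)  -- 1,536 for text-embedding-3-small; 3,072 for -3-large
);

-- HNSW index for cosine distance; swap in ivfflat for smaller datasets
CREATE INDEX ON documents USING hnsw (embedding vector_cosine_ops);
```

The `vector_cosine_ops` operator class tells the index to optimize cosine-distance queries, which is what the n8n PGVector node issues by default.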

How do you connect a vector store directly to the AI Agent’s tool connector?

In the AI Agent node, locate the tool connector on the right side of the node. Connect it to a vector store node set to “Retrieve Documents (As Tool for AI Agent)” — this is the correct mode for agentic RAG since n8n v1.33.0. Fill in the “Tool Description” field with natural language so the agent knows when to search the vector store.

The legacy mode “Retrieve Documents (As Vector Store for Chain/Tool)” works for older chain‑based workflows but does not work with Pinecone serverless indexes and should be avoided for new agent projects. The AI Agent decides autonomously when to query the vector store based on the tool description. You can connect multiple vector store tools to a single agent, each with a different description — for example, one for policy documents and another for product manuals. The agent selects the appropriate store at runtime based on the user’s question. For debugging, check the agent’s execution log to see which tool was selected for each query. For more on agent-based workflows, see our n8n AI Agents & LLM Orchestration guide.

🧠 Multi‑Tool Agent Pattern: You can attach multiple vector store nodes — e.g., one for HR policies (Weaviate), one for product specs (Pinecone), and one for legal docs (pgvector) — to a single AI Agent. The agent selects which store to query based on the user’s question. Simply provide distinct tool descriptions such as “Use this tool to search HR policies and employee handbooks” and “Use this tool to search product specifications and technical documentation.”
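A real AI Agent asks the LLM to choose among tools based on their descriptions; the keyword-overlap toy below only illustrates why distinct, descriptive “Tool Description” fields matter. The tool names and description strings are the hypothetical ones from the pattern above.

```python
# Toy tool-selection sketch: NOT how n8n's agent works internally (the LLM
# does the choosing), but it shows that vague or overlapping descriptions
# give the router nothing to discriminate on.
TOOLS = {
    "weaviate_hr": "Use this tool to search HR policies and employee handbooks",
    "pinecone_specs": "Use this tool to search product specifications and technical documentation",
    "pgvector_legal": "Use this tool to search legal docs and contracts",
}

def pick_tool(question: str) -> str:
    """Pick the tool whose description shares the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(name: str) -> int:
        return len(q_words & set(TOOLS[name].lower().split()))
    return max(TOOLS, key=overlap)

print(pick_tool("What does the employee handbook say about vacation?"))
```

If two descriptions scored equally here, the choice would be arbitrary — the same failure mode an LLM-based agent hits when tool descriptions are near-duplicates.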

How do you optimize vector store performance, cost, and retrieval quality for production RAG?

Pinecone is purpose‑built for vector search with hands‑off operations and hybrid (semantic + lexical) search — best for teams without existing database infrastructure. Weaviate provides built‑in vectorization modules, flexible deployment, and multi‑tenant collections — optimal for organizations needing on‑premises deployment. pgvector reuses existing PostgreSQL tooling, simplifies the operational footprint to a single database for backups, and supports both HNSW and IVFFlat indexes — ideal for teams already running Postgres who want to add vector search at no additional cost.

Dimension matching is the #1 pitfall: mismatched embedding dimensions cause silent failures — always verify that text-embedding-3-large generates 3,072-dimensional vectors (or fewer if you requested a reduced dimension via the API) and configure your vector store table or index to match. The second most important consideration is chunking: a chunk_size of 1,000 tokens with 200-token overlap preserves context without inflating token costs or retrieval noise. For massive datasets, Qdrant provides another viable option with native batch ingestion and automatic collection creation — a configuration that minimizes manual provisioning. The ecosystem also includes Supabase Vector Store (managed pgvector with native n8n integration) and Chroma for lightweight local RAG testing. For a complete dataset generation walkthrough that integrates with vector stores, see the n8n OpenAI Prompt Chain Tutorial.
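Because a dimension mismatch fails silently at query time, it is worth failing fast at workflow setup instead. A minimal sketch, assuming the default (non-downsized) output dimensions of the models named in this guide — the `check_dims` helper and `EMBED_DIMS` table are hypothetical, not part of n8n:

```python
# Default output dimensions for the embedding models mentioned in this guide.
# text-embedding-3-* can be downsized via the API's `dimensions` parameter,
# in which case the index must match the downsized value instead.
EMBED_DIMS = {
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
    "text-embedding-ada-002": 1536,
    "embed-english-v3.0": 1024,  # Cohere
}

def check_dims(model: str, index_dims: int) -> None:
    """Raise early instead of letting a mismatch fail silently at query time."""
    expected = EMBED_DIMS.get(model)
    if expected is None:
        raise ValueError(f"Unknown embedding model: {model}")
    if expected != index_dims:
        raise ValueError(
            f"{model} emits {expected}-dim vectors but the index is {index_dims}-dim"
        )

check_dims("text-embedding-3-large", 3072)  # passes silently when they match
```

Run a check like this once when provisioning the index, and again whenever you swap the embedding model in the workflow.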

References

This guide is for informational purposes only. Vector store features, node modes, and embedding dimensions may change across n8n versions. Always refer to the official n8n documentation, Pinecone docs, Weaviate docs, and pgvector GitHub for the most current configuration reference.
