RAG Development with Qdrant Vector Database
Qdrant is an open-source vector database written in Rust. It is distinguished by high performance, a rich payload-based filtering system, built-in hybrid search, and convenient REST/gRPC APIs. It is available both as a managed cloud service (Qdrant Cloud) and self-hosted, and is one of the most popular choices for production RAG systems.
Installation via Docker and Connection
```bash
docker pull qdrant/qdrant
docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant
```
```python
from qdrant_client import QdrantClient
from qdrant_client.models import (
    Distance, VectorParams, SparseVectorParams,
    SparseIndexParams, HnswConfigDiff,
)

client = QdrantClient(url="http://localhost:6333")

# Or Qdrant Cloud
client = QdrantClient(url="https://cluster.qdrant.tech", api_key="...")
```
Creating Collection with Dense and Sparse Vectors
```python
client.create_collection(
    collection_name="documents",
    vectors_config={
        "dense": VectorParams(
            size=768,  # must match the dense model: mpnet-base-v2 outputs 768-dim vectors
            distance=Distance.COSINE,
            hnsw_config=HnswConfigDiff(m=16, ef_construct=200),
        )
    },
    sparse_vectors_config={
        "sparse": SparseVectorParams(
            index=SparseIndexParams(on_disk=False)
        )
    },
)
```
Indexing with Payload Metadata
```python
from qdrant_client.models import PointStruct, SparseVector
from fastembed import SparseTextEmbedding, TextEmbedding
import uuid

dense_model = TextEmbedding("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
sparse_model = SparseTextEmbedding("prithivida/Splade_PP_en_v1")

def index_chunks(chunks: list) -> None:
    points = []
    for chunk in chunks:
        # Dense embedding (768-dim)
        dense_vec = list(dense_model.embed([chunk.text]))[0].tolist()
        # Sparse embedding (SPLADE)
        sparse_output = list(sparse_model.embed([chunk.text]))[0]
        point = PointStruct(
            id=str(uuid.uuid4()),
            vector={
                "dense": dense_vec,
                "sparse": SparseVector(
                    indices=sparse_output.indices.tolist(),
                    values=sparse_output.values.tolist(),
                ),
            },
            payload={
                "text": chunk.text,
                "source": chunk.source,
                "doc_type": chunk.doc_type,
                "page": chunk.page,
                "date": chunk.date,
                "department": chunk.department,
            },
        )
        points.append(point)
    client.upsert(collection_name="documents", points=points)
```
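For a corpus in the tens of thousands of chunks, upserting all points in one call is impractical; it is better to send points in fixed-size batches. A minimal sketch (the `batched` helper and the batch size of 256 are illustrative assumptions, not part of the Qdrant API):

```python
def batched(items: list, batch_size: int = 256):
    """Yield successive fixed-size slices of a list."""
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

# Assumed usage with the `client` and `points` built above:
# for batch in batched(points, batch_size=256):
#     client.upsert(collection_name="documents", points=batch, wait=True)
```

Note that qdrant-client also ships a higher-level `client.upload_points` method that handles batching and parallel uploads for you.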
Hybrid Search with RRF
```python
from qdrant_client.models import (
    Prefetch, FusionQuery, Fusion, Filter, FieldCondition, MatchValue,
)

def hybrid_search(
    query: str,
    doc_type: str | None = None,
    top_k: int = 5,
) -> list:
    # Query vectors
    dense_vec = list(dense_model.embed([query]))[0].tolist()
    sparse_output = list(sparse_model.embed([query]))[0]
    sparse_vec = SparseVector(
        indices=sparse_output.indices.tolist(),
        values=sparse_output.values.tolist(),
    )

    # Optional metadata filter
    query_filter = None
    if doc_type:
        query_filter = Filter(
            must=[FieldCondition(key="doc_type", match=MatchValue(value=doc_type))]
        )

    # Hybrid search: RRF fusion over the dense and sparse prefetch branches
    results = client.query_points(
        collection_name="documents",
        prefetch=[
            Prefetch(query=dense_vec, using="dense", limit=30, filter=query_filter),
            Prefetch(query=sparse_vec, using="sparse", limit=30, filter=query_filter),
        ],
        query=FusionQuery(fusion=Fusion.RRF),
        limit=top_k,
        with_payload=True,
    )
    return [
        {"text": r.payload["text"], "source": r.payload["source"], "score": r.score}
        for r in results.points
    ]
```
Practical Case: RAG for E-Commerce Support
Task: a multilingual e-commerce support assistant (Russian/English) over 85,000 chunks (FAQs, policies, product descriptions).
Stack: Qdrant self-hosted (Docker), SPLADE for sparse, paraphrase-multilingual-mpnet-base-v2 for dense, GPT-4o-mini for generation.
Results, hybrid vs. dense-only retrieval:
| Metric | Dense only | Hybrid (RRF) |
|---|---|---|
| MRR@5 | 0.71 | 0.84 |
| NDCG@5 | 0.68 | 0.81 |
| Faithfulness | 0.82 | 0.91 |
| Latency P95 | 95ms | 140ms |
Hybrid search adds 13 points of MRR@5 (0.71 → 0.84), largely thanks to exact lexical matching via the sparse SPLADE vectors on order numbers, product SKUs, and domain-specific terms that dense embeddings tend to blur.
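RRF (Reciprocal Rank Fusion) merges the dense and sparse result lists using only ranks, not raw scores: each document's fused score is the sum of 1/(k + rank) over every list it appears in. Qdrant performs this fusion server-side; the sketch below (the `rrf_fuse` helper and k=60 are illustrative) shows the idea on two toy rankings:

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of doc ids via Reciprocal Rank Fusion."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            # rank is 0-based, so the top hit contributes 1 / (k + 1)
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense_ranking = ["a", "b", "c"]
sparse_ranking = ["b", "d", "a"]
# "a" and "b" appear in both lists, so they outrank single-list hits
print(rrf_fuse([dense_ranking, sparse_ranking]))  # → ['b', 'a', 'd', 'c']
```

Because RRF ignores score magnitudes, it needs no calibration between the incompatible dense (cosine) and sparse (SPLADE) score scales, which is why it is the default fusion choice.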
Filtering via Payload Indexes
To speed up filtering, create payload indexes:
```python
from qdrant_client.models import PayloadSchemaType

client.create_payload_index(
    collection_name="documents",
    field_name="doc_type",
    field_schema=PayloadSchemaType.KEYWORD,
)
client.create_payload_index(
    collection_name="documents",
    field_name="date",
    field_schema=PayloadSchemaType.DATETIME,
)
```
Timeline
- Qdrant setup + collection schema: 1–2 days
- Ingestion pipeline (dense + sparse): 3–7 days
- Hybrid search + filtering: 3–5 days
- Evaluation and optimization: 1–2 weeks
- Total: 2–4 weeks







