RAG Development with Qdrant Vector Database

We design and deploy artificial intelligence systems, from prototype to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not in the lab, but in real business.


Qdrant is an open-source vector database written in Rust. It is distinguished by high performance, a rich payload-based filtering system, built-in hybrid search, and a convenient REST/gRPC API. It is available both as a managed cloud (Qdrant Cloud) and self-hosted, and is one of the most popular choices for production RAG systems.

Installation via Docker and Connection

docker pull qdrant/qdrant
docker run -p 6333:6333 -v $(pwd)/qdrant_storage:/qdrant/storage qdrant/qdrant
# pip install qdrant-client fastembed
from qdrant_client import QdrantClient
from qdrant_client.models import (
    Distance, VectorParams, SparseVectorParams,
    SparseIndexParams, HnswConfigDiff,
)

client = QdrantClient(url="http://localhost:6333")
# Or cloud
client = QdrantClient(url="https://cluster.qdrant.tech", api_key="...")

Creating Collection with Dense and Sparse Vectors

client.create_collection(
    collection_name="documents",
    vectors_config={
        "dense": VectorParams(
            size=1536,
            distance=Distance.COSINE,
            hnsw_config=HnswConfigDiff(m=16, ef_construct=200)
        )
    },
    sparse_vectors_config={
        "sparse": SparseVectorParams(
            index=SparseIndexParams(on_disk=False)
        )
    }
)
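Distance.COSINE above means Qdrant ranks points by cosine similarity between the query vector and stored vectors. A minimal pure-Python illustration of the metric itself (not Qdrant's actual implementation, which normalizes vectors at insertion time for speed):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: the dot product of two vectors divided by the
    product of their Euclidean norms. Ranges from -1 to 1; higher is closer."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # same direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

Because cosine ignores vector magnitude, it is the usual choice for text embeddings, where only direction carries semantic meaning.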

Indexing with Payload Metadata

from qdrant_client.models import PointStruct, SparseVector
from fastembed import SparseTextEmbedding, TextEmbedding
import uuid

dense_model = TextEmbedding("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")
sparse_model = SparseTextEmbedding("prithivida/Splade_PP_en_v1")

def index_chunks(chunks: list) -> None:
    points = []
    for chunk in chunks:
        # Dense embedding
        dense_vec = list(dense_model.embed([chunk.text]))[0].tolist()

        # Sparse embedding (SPLADE)
        sparse_output = list(sparse_model.embed([chunk.text]))[0]

        point = PointStruct(
            id=str(uuid.uuid4()),
            vector={
                "dense": dense_vec,
                "sparse": SparseVector(
                    indices=sparse_output.indices.tolist(),
                    values=sparse_output.values.tolist()
                )
            },
            payload={
                "text": chunk.text,
                "source": chunk.source,
                "doc_type": chunk.doc_type,
                "page": chunk.page,
                "date": chunk.date,
                "department": chunk.department,
            }
        )
        points.append(point)

    client.upsert(collection_name="documents", points=points)
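The index_chunks function above assumes chunk objects exposing text, source, doc_type, page, date, and department attributes, and upserts everything in a single call. A hypothetical Chunk dataclass matching that shape, plus a simple batching helper for large corpora (the batch size of 256 is an arbitrary illustration; qdrant-client also ships upload_points, which batches internally):

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Chunk:
    """Shape assumed by index_chunks: one text fragment plus its metadata."""
    text: str
    source: str
    doc_type: str
    page: int
    date: str
    department: str

def batched(items: list, batch_size: int = 256) -> Iterator[list]:
    """Yield consecutive slices of `items`, each at most `batch_size` long,
    so a large corpus is upserted in many small requests instead of one."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Usage sketch: upsert each batch separately instead of all points at once.
# for batch in batched(points):
#     client.upsert(collection_name="documents", points=batch)
```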

Hybrid Search with RRF

from qdrant_client.models import Prefetch, FusionQuery, Fusion, Filter, FieldCondition, MatchValue

def hybrid_search(
    query: str,
    doc_type: str | None = None,
    top_k: int = 5
) -> list:
    # Query vectors
    dense_vec = list(dense_model.embed([query]))[0].tolist()
    sparse_output = list(sparse_model.embed([query]))[0]
    sparse_vec = SparseVector(
        indices=sparse_output.indices.tolist(),
        values=sparse_output.values.tolist()
    )

    # Metadata filter
    query_filter = None
    if doc_type:
        query_filter = Filter(
            must=[FieldCondition(key="doc_type", match=MatchValue(value=doc_type))]
        )

    # Hybrid search with RRF fusion
    results = client.query_points(
        collection_name="documents",
        prefetch=[
            Prefetch(query=dense_vec, using="dense", limit=30, filter=query_filter),
            Prefetch(query=sparse_vec, using="sparse", limit=30, filter=query_filter),
        ],
        query=FusionQuery(fusion=Fusion.RRF),
        limit=top_k,
        with_payload=True,
    )

    return [
        {"text": r.payload["text"], "source": r.payload["source"], "score": r.score}
        for r in results.points
    ]
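Fusion.RRF merges the two prefetch result lists with Reciprocal Rank Fusion: each document's fused score is the sum of 1/(k + rank) over every list it appears in, with k conventionally set to 60. A simplified pure-Python sketch of the idea (Qdrant's internal constants may differ):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[tuple[str, float]]:
    """Reciprocal Rank Fusion: score(doc) = sum over lists of 1 / (k + rank),
    with 1-based ranks. Docs ranked high in either list float to the top."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

dense_top = ["a", "b", "c"]   # ids from the dense prefetch
sparse_top = ["b", "d", "a"]  # ids from the sparse prefetch
print(rrf_fuse([dense_top, sparse_top])[0][0])  # "b": strong in both lists
```

Because only ranks (not raw scores) are used, RRF needs no score normalization between the dense and sparse branches, which is exactly why it is a safe default for hybrid search.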

Practical Case: RAG for E-Commerce Support

Task: multilingual e-commerce support assistant (Russian/English), 85,000 chunks (FAQs, policies, product descriptions).

Stack: Qdrant self-hosted (Docker), SPLADE for sparse, paraphrase-multilingual-mpnet-base-v2 for dense, GPT-4o-mini for generation.

Results, hybrid vs. dense-only:

Metric       | Dense only | Hybrid (RRF)
------------ | ---------- | ------------
MRR@5        | 0.71       | 0.84
NDCG@5       | 0.68       | 0.81
Faithfulness | 0.82       | 0.91
Latency P95  | 95 ms      | 140 ms

Hybrid search lifts MRR@5 from 0.71 to 0.84 (+0.13 absolute) thanks to exact lexical matching from the SPLADE sparse vectors on order numbers, product SKUs, and domain-specific terms.
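The sparse side wins on identifiers like SKUs because sparse vectors are scored by a dot product over shared token indices: a token present in only one of the two vectors contributes nothing. A toy illustration (the token ids and weights are made up):

```python
def sparse_dot(a: dict[int, float], b: dict[int, float]) -> float:
    """Sparse dot product: sum of weight products over shared token indices.
    Non-overlapping tokens contribute zero, so exact term matches dominate."""
    return sum(w * b[i] for i, w in a.items() if i in b)

query = {101: 1.2, 555: 0.8}              # hypothetical tokens of "SKU-4432"
doc_match = {101: 0.9, 555: 1.1, 7: 0.3}  # contains the same SKU tokens
doc_other = {42: 1.0, 7: 0.5}             # semantically related, no exact match
print(sparse_dot(query, doc_match))  # shared tokens -> high score
print(sparse_dot(query, doc_other))  # no overlap -> 0.0
```

A dense embedding would give both documents a moderate similarity; the sparse score separates them sharply, which is what the RRF fusion then exploits.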

Filtering via Payload Indexes

To speed up filtering, create payload indexes:

from qdrant_client.models import PayloadSchemaType

client.create_payload_index(
    collection_name="documents",
    field_name="doc_type",
    field_schema=PayloadSchemaType.KEYWORD,
)

client.create_payload_index(
    collection_name="documents",
    field_name="date",
    field_schema=PayloadSchemaType.DATETIME,
)

Timeline

  • Qdrant setup + collection schema: 1–2 days
  • Ingestion pipeline (dense + sparse): 3–7 days
  • Hybrid search + filtering: 3–5 days
  • Evaluation and optimization: 1–2 weeks
  • Total: 2–4 weeks