AI Observability Setup (LangSmith, LangFuse, Helicone, Weights & Biases)


Setting up AI observability with LangSmith and Langfuse

LangSmith (LangChain) and Langfuse are specialized platforms for observability of LLM applications: call chain tracing, request costs, quality assessment, and prompt regression testing.

LangSmith Setup

pip install langchain langchain-openai langsmith
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__xxx
export LANGCHAIN_PROJECT=my-llm-app

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# Tracing is enabled automatically via the env variables above
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "{question}")
])
chain = prompt | llm

# All calls are automatically logged to LangSmith
result = chain.invoke({"question": "What is RAG?"})

What you can see in LangSmith: full call tree trace, each LLM call with prompt/response, latency of each step, cost (tokens × price), errors with full stack trace.
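The aggregation the dashboard performs can be illustrated with a toy trace tree. This is a minimal sketch: the span structure, field names, and the per-token price here are assumptions for illustration, not LangSmith's actual data model.

```python
# Toy model of a call-tree trace: each span carries its own latency and,
# for LLM spans, a model name and token count. Prices are illustrative.
PRICE_PER_1K_TOKENS = {"gpt-4o": 0.0025}

def trace_cost(span):
    """Sum token cost (USD) over a span and all of its children."""
    cost = span.get("tokens", 0) / 1000 * PRICE_PER_1K_TOKENS.get(span.get("model"), 0)
    return cost + sum(trace_cost(child) for child in span.get("children", []))

trace = {
    "name": "chain.invoke", "latency_ms": 820,
    "children": [
        {"name": "prompt.format", "latency_ms": 2},
        {"name": "llm.call", "latency_ms": 810, "model": "gpt-4o", "tokens": 1200},
    ],
}
print(f"end-to-end latency: {trace['latency_ms']} ms, cost: ${trace_cost(trace):.4f}")
```

The root span's latency is the end-to-end figure, while cost is summed over every LLM call in the tree, which is why a single slow retrieval step or an unexpectedly verbose prompt shows up immediately in the trace view.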

Langfuse Setup (self-hosted)

# Docker Compose for self-hosted (from the langfuse/langfuse repository)
docker compose up -d
pip install langfuse

from langfuse import Langfuse
from langfuse.decorators import observe, langfuse_context

langfuse = Langfuse(
    public_key="pk-xxx",
    secret_key="sk-xxx",
    host="http://localhost:3000"  # self-hosted
)

@observe()  # automatically creates a trace
def process_user_query(query: str) -> str:
    # Each nested @observe function becomes a span inside the trace
    context = retrieve_context(query)
    response = generate_response(query, context)

    # Quality scoring directly in code
    langfuse_context.score_current_trace(
        name="relevance",
        value=0.9,
        comment="Context was relevant"
    )
    return response

@observe(name="retrieve_context")
def retrieve_context(query: str) -> str:
    # ... vector search
    pass

@observe(name="generate_response")
def generate_response(query: str, context: str) -> str:
    # ... LLM call
    pass

Cost monitoring

Both platforms calculate cost automatically from the model and the number of tokens. Typical alerts: daily budget exceeded, cost per request increased by more than 2x, abnormal growth in token consumption.
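The alert rules above can be sketched as a check over daily aggregates. This is an illustrative sketch only: the `check_alerts` function, its thresholds, and the shape of the daily-stats dicts are assumptions; in practice you would drive alerts from the platform's own API, exports, or webhooks.

```python
# Sketch of the three alert rules: daily budget exceeded, cost per request
# up by more than 2x day over day, and a spike in token consumption.
def check_alerts(today, yesterday, daily_budget_usd=50.0, token_spike_factor=3.0):
    """today/yesterday: dicts with 'cost_usd', 'requests', 'tokens' for one day."""
    alerts = []
    if today["cost_usd"] > daily_budget_usd:
        alerts.append("daily budget exceeded")
    cost_per_req_today = today["cost_usd"] / max(today["requests"], 1)
    cost_per_req_prev = yesterday["cost_usd"] / max(yesterday["requests"], 1)
    if cost_per_req_prev > 0 and cost_per_req_today > 2 * cost_per_req_prev:
        alerts.append("cost per request increased by >2x")
    if yesterday["tokens"] > 0 and today["tokens"] > token_spike_factor * yesterday["tokens"]:
        alerts.append("abnormal increase in token consumption")
    return alerts

print(check_alerts(
    {"cost_usd": 62.0, "requests": 1000, "tokens": 900_000},
    {"cost_usd": 20.0, "requests": 1000, "tokens": 250_000},
))
```

Running such a check on a schedule against exported daily stats catches runaway prompts and model-pricing surprises within a day rather than at the end of the billing cycle.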

Comparison of platforms

Parameter                   | LangSmith      | Langfuse
Self-hosted                 | No (SaaS)      | Yes (Open Source)
Integration with LangChain  | Native         | Via callback
Price                       | $0–50+/month   | Free (self-hosted)
Prompt testing              | Yes            | Yes
Datasets & evals            | Yes            | Yes

Langfuse is the better choice when data residency requirements call for self-hosting.
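The "Via callback" integration from the comparison can be sketched as follows, assuming the Langfuse v2 Python SDK and a running Langfuse instance; the keys and host are placeholders.

```python
# Sketch: tracing a LangChain chain in Langfuse via its callback handler.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langfuse.callback import CallbackHandler

handler = CallbackHandler(
    public_key="pk-xxx",
    secret_key="sk-xxx",
    host="http://localhost:3000",  # self-hosted instance
)

llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant"),
    ("user", "{question}"),
])
chain = prompt | llm

# Passing the handler per invocation records every chain step as a span.
result = chain.invoke(
    {"question": "What is RAG?"},
    config={"callbacks": [handler]},
)
```

Unlike LangSmith, where tracing is switched on globally by env variables, the callback approach lets you trace only selected invocations or route different chains to different Langfuse projects.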