PromptLayer integration for versioning and monitoring prompts
PromptLayer is a middleware platform that intercepts calls to LLM APIs (OpenAI, Anthropic), logs requests and responses, associates them with prompt versions, and provides analytics. Integration takes about 30 minutes: you swap the standard client import for the PromptLayer wrapper and leave the rest of your code unchanged.
Installation and basic integration
pip install promptlayer
import promptlayer  # wraps the standard OpenAI client

promptlayer.api_key = "pl_..."

# Instead of: from openai import OpenAI; client = OpenAI()
# use the drop-in replacement:
client = promptlayer.openai.OpenAI()

response, pl_request_id = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize: {{document}}"}],
    pl_tags=["summarization", "v2"],  # tags for filtering in the dashboard
    return_pl_id=True  # also return the PromptLayer request ID for later labeling
)
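With `return_pl_id=True` the wrapped client returns a `(response, pl_request_id)` tuple instead of a bare response, which is easy to mishandle in code paths that toggle the flag. A small normalizing helper (my own sketch, not part of the SDK) keeps callers uniform:

```python
def unpack_pl_response(result, return_pl_id):
    """Normalize the wrapped client's return value.

    With return_pl_id=True the call returns (response, pl_request_id);
    without it, just the response. Always return a two-tuple.
    """
    if return_pl_id:
        response, pl_request_id = result
        return response, pl_request_id
    return result, None
```

This way downstream logging code can always unpack two values regardless of how the call site was configured.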
# Later labeling (score, ground truth)
promptlayer.track.score(
    request_id=pl_request_id,
    score=85  # 0-100: answer quality
)
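The `score` field expects an integer on a 0-100 scale, while many evaluators emit floats in 0.0-1.0. A tiny conversion helper (hypothetical, not part of the PromptLayer SDK) keeps call sites consistent and catches out-of-range values early:

```python
def to_pl_score(value: float) -> int:
    """Convert a 0.0-1.0 evaluator score to PromptLayer's 0-100 integer scale."""
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"score out of range: {value}")
    return round(value * 100)

# Usage: promptlayer.track.score(request_id=pl_request_id, score=to_pl_score(0.85))
```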
Prompt Templates in PromptLayer
# Fetch a versioned template from the registry via the API
template = promptlayer.templates.get(
    "summarization-v2",
    provider="openai",
    model="gpt-4o"
)
# Run the template with variables filled in
response, pl_id = promptlayer.run(
    template_name="summarization-v2",
    input_variables={"document": long_document_text},
    tags=["production"],
    return_pl_id=True
)
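Conceptually, `input_variables` fills the `{{name}}` placeholders in the stored template before the request is sent. A minimal local sketch of that substitution (my own illustration, not PromptLayer's actual rendering code) makes the mechanism concrete:

```python
import re

def render_template(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders, mirroring what input_variables does."""
    def repl(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError(f"missing template variable: {name}")
        return str(variables[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", repl, template)
```

Raising on a missing variable, rather than leaving the placeholder in place, surfaces template/code drift immediately instead of sending a malformed prompt to the model.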
PromptLayer automatically logs tokens, cost, latency, prompt version, input variables, and response. The dashboard is available at promptlayer.com without any additional configuration.
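The logged token counts are what make cost tracking possible. A sketch of the underlying arithmetic, using illustrative per-million-token prices (actual prices change; check the provider's pricing page):

```python
# Illustrative prices in USD per 1M tokens -- NOT authoritative, verify with the provider.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its logged token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
```

Summing this per-request estimate over requests filtered by `pl_tags` reproduces the per-tag cost breakdown the dashboard shows.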