Building an AI Assistant with Mistral in a Mobile Application
Mistral is a European provider with servers in France, covering GDPR requirements without additional data processing agreements. For European B2B applications, this is often the decisive argument. Technically, the Mistral API is largely compatible with OpenAI's: a client written for ChatGPT can switch to Mistral by changing the base URL and model name.
Mistral API: OpenAI Compatibility and Differences
The https://api.mistral.ai/v1/chat/completions endpoint accepts the same parameters as OpenAI Chat Completions, and the authorization header is identical: Authorization: Bearer {api_key}. As a result, most OpenAI client libraries work with Mistral without changes; you only need to override the base URL.
In Swift:

```swift
// Reuse an OpenAI-compatible client
let mistralClient = OpenAICompatibleClient(
    baseURL: URL(string: "https://api.mistral.ai/v1")!,
    apiKey: serverProvidedToken // always delivered via a server proxy
)
```
On Android, the same approach works via Retrofit or any HTTP client with a configurable base URL.
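If you would rather not pull in a client SDK at all, the same request can be assembled with plain URLSession. A minimal sketch, assuming the API key arrives from your own backend (the function name and payload layout below are ours, not part of any SDK):

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking // URLRequest lives here on Linux
#endif

// Build a Chat Completions request against the Mistral endpoint.
// The parameters mirror OpenAI's; only the host and model differ.
func makeChatRequest(apiKey: String, model: String, userMessage: String) -> URLRequest {
    var request = URLRequest(url: URL(string: "https://api.mistral.ai/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let body: [String: Any] = [
        "model": model,
        "messages": [["role": "user", "content": userMessage]]
    ]
    request.httpBody = try! JSONSerialization.data(withJSONObject: body)
    return request
}
```

Sending it via URLSession.shared then works the same against either provider.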
Models: Choosing for the Task
| Model | Context | Application |
|---|---|---|
| mistral-small-latest | 32K | Fast tasks, classification, brief answers |
| mistral-medium-latest | 32K | General assistant, medium complexity |
| mistral-large-latest | 128K | Complex instructions, document analysis |
| codestral-latest | 32K | Code-related tasks |
| mistral-embed | — | Embeddings for semantic search |
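It helps to centralize this choice in one place so a later model swap is a one-line change. A small illustrative mapping (the AssistantTask enum is ours; the model ids come from the table above):

```swift
// Map a task type to a model id in a single switch.
enum AssistantTask {
    case classification, generalChat, documentAnalysis, code, embedding
}

func modelID(for task: AssistantTask) -> String {
    switch task {
    case .classification:   return "mistral-small-latest"
    case .generalChat:      return "mistral-medium-latest"
    case .documentAnalysis: return "mistral-large-latest"
    case .code:             return "codestral-latest"
    case .embedding:        return "mistral-embed"
    }
}
```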
For a general-purpose mobile assistant, mistral-small-latest offers a good speed-to-quality ratio. mistral-large-latest with its 128K context is the choice for document processing.
Function Calling and JSON Mode
Mistral supports function calling via tools (the syntax is identical to OpenAI's):

```swift
let tools: [Tool] = [
    Tool(
        type: "function",
        function: FunctionDefinition(
            name: "search_product_catalog",
            description: "Search products in the catalog",
            parameters: JSONSchema(
                type: "object",
                properties: ["query": .string, "category": .string],
                required: ["query"]
            )
        )
    )
]
```
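On the response side, the model returns a tool_calls array whose arguments field is itself a JSON string that still has to be parsed. A sketch of extracting it with Foundation (the struct and function names are ours; the JSON shape follows the OpenAI-compatible format):

```swift
import Foundation

// Arguments of the hypothetical search_product_catalog call from above.
struct SearchArguments: Decodable {
    let query: String
    let category: String?
}

// Walk choices[0].message.tool_calls[0].function.arguments and decode it.
func extractSearchArguments(fromResponse data: Data) -> SearchArguments? {
    guard
        let root = (try? JSONSerialization.jsonObject(with: data)) as? [String: Any],
        let choices = root["choices"] as? [[String: Any]],
        let message = choices.first?["message"] as? [String: Any],
        let toolCalls = message["tool_calls"] as? [[String: Any]],
        let function = toolCalls.first?["function"] as? [String: Any],
        let argsString = function["arguments"] as? String, // arguments arrive as a JSON string
        let argsData = argsString.data(using: .utf8)
    else { return nil }
    return try? JSONDecoder().decode(SearchArguments.self, from: argsData)
}
```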
JSON Mode (response_format: {"type": "json_object"}) is a reliable way to get structured output. It is useful for data-extraction tasks where the result must be deserialized into a model right away.
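For example, extracting fields from free text into a Codable model might look like this (the Invoice struct and the prompt wording are illustrative):

```swift
import Foundation

// Target model for the structured output.
struct Invoice: Codable, Equatable {
    let vendor: String
    let total: Double
}

// Request body asking for JSON-only output.
func jsonModeBody(model: String, text: String) -> [String: Any] {
    [
        "model": model,
        "response_format": ["type": "json_object"],
        "messages": [
            ["role": "system", "content": "Extract vendor and total as JSON with keys vendor and total."],
            ["role": "user", "content": text]
        ]
    ]
}

// The returned message content then deserializes directly into the model.
func decodeInvoice(fromContent content: String) -> Invoice? {
    guard let data = content.data(using: .utf8) else { return nil }
    return try? JSONDecoder().decode(Invoice.self, from: data)
}
```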
Mistral OCR
Mistral has launched a specialized API for document processing, mistral-ocr-latest. It is not plain OCR but document-structure understanding: tables, formulas, multi-column text. On mobile, this is useful for apps that analyze invoices, contracts, and medical documents.
```swift
let ocrRequest = MistralOCRRequest(
    model: "mistral-ocr-latest",
    document: DocumentContent(
        type: "document_url",
        documentURL: uploadedFileURL
    ),
    includeImageBase64: false
)
```
The document (a PDF or an image) is first uploaded via the Files API and then referenced by URL.
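If you assemble the request by hand rather than through typed wrappers, the wire payload plausibly looks like the sketch below; the field names are inferred from the snippet above, so treat them as assumptions:

```swift
import Foundation

// Assemble the OCR request body once the file URL is known from the Files API.
// Field names are assumed, mirroring the typed request above.
func ocrRequestBody(model: String, documentURL: String) -> [String: Any] {
    [
        "model": model,
        "document": [
            "type": "document_url",
            "document_url": documentURL
        ],
        "include_image_base64": false
    ]
}
```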
Pixtral: Multimodality
pixtral-large-latest, Mistral's multimodal model, accepts images in the content block:

```swift
let message = MistralMessage(
    role: "user",
    content: [
        .imageURL("data:image/jpeg;base64,\(imageBase64)"),
        .text("Describe the content of this document")
    ]
)
```
It supports up to 128K image tokens in a single request.
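Assembling that content block by hand mostly means producing the base64 data URL. A small helper sketch (the function is ours; the dictionary layout mirrors the OpenAI-compatible multimodal format):

```swift
import Foundation

// Build a user message that pairs an image (as a base64 data URL) with a text prompt.
func pixtralMessage(imageData: Data, prompt: String) -> [String: Any] {
    let dataURL = "data:image/jpeg;base64," + imageData.base64EncodedString()
    return [
        "role": "user",
        "content": [
            ["type": "image_url", "image_url": dataURL],
            ["type": "text", "text": prompt]
        ]
    ]
}
```

Keep an eye on payload size: base64 inflates the image by roughly a third, so downscale photos before encoding.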
GDPR and Data Storage
Mistral La Plateforme processes requests on EU servers. By default, data is not used for retraining. For enterprise customers, a DPA (Data Processing Agreement) under GDPR Article 28 is available, which speeds up legal review during implementation.
Timeline Estimates
A text assistant takes 1–1.5 weeks (and often less, thanks to OpenAI SDK compatibility). With OCR functionality, Pixtral, and a server proxy: 2.5–4 weeks.