FLUX Integration for Image Generation

We design and deploy artificial intelligence systems, from prototype to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work not just in the lab but in real businesses.
Complexity: Simple. Typical timeline: ~2-3 business days.


FLUX from Black Forest Labs (a company founded by the original creators of Stable Diffusion) is the current state of the art in realistic image generation. FLUX.1 Dev and FLUX.1 Pro outperform SDXL and are comparable in quality to Midjourney v6.

Model options

Model           Usage              License         Speed
FLUX.1 Pro      API only           Commercial      15–30 sec
FLUX.1 Dev      Self-hosted / API  Non-commercial  20–40 sec
FLUX.1 Schnell  Self-hosted        Apache 2.0      3–8 sec (4 steps)
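The trade-offs in the table can be captured in a small config helper that returns sensible sampling defaults per variant. This is an illustrative sketch (`flux_defaults` and the numbers chosen are our own, mirroring the table and the examples below, not part of any SDK):

```python
# Recommended sampling defaults per FLUX variant (illustrative values,
# mirroring the table above; Schnell is distilled and needs no CFG).
FLUX_DEFAULTS = {
    "flux-pro":     {"steps": 28, "guidance": 3.5},
    "flux-dev":     {"steps": 28, "guidance": 3.5},
    "flux-schnell": {"steps": 4,  "guidance": 0.0},  # 4 steps, no CFG
}

def flux_defaults(model: str) -> dict:
    """Return a copy of the recommended defaults for a FLUX variant."""
    try:
        return dict(FLUX_DEFAULTS[model])
    except KeyError:
        raise ValueError(f"Unknown FLUX model: {model!r}") from None
```

Returning a copy keeps callers from mutating the shared defaults table.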

Replicate API integration

import replicate
import httpx

async def generate_flux(
    prompt: str,
    model: str = "flux-dev",  # flux-pro, flux-dev, flux-schnell
    aspect_ratio: str = "1:1",
    output_format: str = "webp",
    guidance: float = 3.5,
    steps: int = 28
) -> bytes:
    model_map = {
        "flux-pro": "black-forest-labs/flux-pro",
        "flux-dev": "black-forest-labs/flux-dev",
        "flux-schnell": "black-forest-labs/flux-schnell"
    }

    output = await replicate.async_run(
        model_map[model],
        input={
            "prompt": prompt,
            "aspect_ratio": aspect_ratio,
            "output_format": output_format,
            "output_quality": 90,
            "guidance": guidance,
            "num_inference_steps": steps,
        }
    )

    # Some models return a single file, others a list of files
    result = output[0] if isinstance(output, list) else output

    async with httpx.AsyncClient() as client:
        response = await client.get(str(result))
        response.raise_for_status()
        return response.content
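A function like generate_flux can be fanned out over a list of prompts with asyncio.gather. The sketch below stubs out the network call (`fake_generate` is a stand-in we introduce here, not the real API) so the concurrency pattern is visible on its own:

```python
import asyncio

# Stub standing in for generate_flux; the real version awaits the Replicate API.
async def fake_generate(prompt: str) -> bytes:
    await asyncio.sleep(0)  # yield control, as a real HTTP call would
    return f"image:{prompt}".encode()

async def generate_batch(prompts: list[str]) -> list[bytes]:
    # Fan out all generations concurrently; gather preserves input order
    return await asyncio.gather(*(fake_generate(p) for p in prompts))

images = asyncio.run(generate_batch(["a red fox", "a blue bird"]))
```

Swapping fake_generate for the real generate_flux keeps the same structure; in production you would also bound concurrency with an asyncio.Semaphore to respect API rate limits.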

Self-hosted with diffusers

from diffusers import FluxPipeline
import torch

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # For GPUs with less than 24 GB VRAM

import io

def generate(prompt: str, width: int = 1024, height: int = 1024) -> bytes:
    image = pipe(
        prompt,
        height=height,
        width=width,
        guidance_scale=3.5,
        num_inference_steps=50,
        max_sequence_length=512,
        generator=torch.Generator("cpu").manual_seed(0)
    ).images[0]

    buf = io.BytesIO()
    image.save(buf, format="PNG")
    return buf.getvalue()
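FLUX operates on packed 2×2 latent patches over an 8× VAE downscale, so requested dimensions are generally expected to be multiples of 16; a small helper can snap arbitrary user input to valid sizes. The multiple-of-16 constraint is an assumption worth verifying against your diffusers version, and both function names are our own:

```python
# Assumption: FLUX expects width/height divisible by 16 (2x2-packed latents
# over an 8x VAE downscale); verify against your diffusers version.
def snap_to_multiple(value: int, multiple: int = 16) -> int:
    # Round to the nearest multiple, but never below one full multiple
    return max(multiple, round(value / multiple) * multiple)

def snap_dims(width: int, height: int) -> tuple[int, int]:
    return snap_to_multiple(width), snap_to_multiple(height)
```

Calling snap_dims before passing width/height to the pipeline avoids shape errors on odd user-supplied sizes.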

FLUX.1 Schnell for rapid prototyping

# Schnell: Apache 2.0 license, 4 steps instead of 50
pipe_schnell = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16
)

image = pipe_schnell(
    prompt="professional photo of a product on white background",
    num_inference_steps=4,  # 4 steps are enough
    guidance_scale=0.0,     # Schnell is distilled and does not use CFG
).images[0]

FLUX ControlNet variants add pose, depth, and edge conditioning, analogous to ControlNet for Stable Diffusion.

Timeline: Replicate API integration – 1 day; self-hosted deployment with a job queue – 1 week.
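A self-hosted deployment usually puts generation behind a job queue so a single GPU worker drains requests serially and VRAM usage stays bounded. A minimal asyncio sketch of that pattern, with a stub (`fake_render`, our own name) in place of the real diffusers call:

```python
import asyncio

# Stub for the GPU-bound generate() above; a real worker would call the
# diffusers pipeline here (one worker per GPU keeps VRAM usage bounded).
def fake_render(prompt: str) -> bytes:
    return f"png:{prompt}".encode()

async def worker(queue: asyncio.Queue, results: dict) -> None:
    while True:
        job_id, prompt = await queue.get()
        # Run the blocking generation off the event loop
        results[job_id] = await asyncio.to_thread(fake_render, prompt)
        queue.task_done()

async def main() -> dict:
    queue: asyncio.Queue = asyncio.Queue()
    results: dict = {}
    task = asyncio.create_task(worker(queue, results))
    for job_id, prompt in enumerate(["a fox", "a bird"]):
        queue.put_nowait((job_id, prompt))
    await queue.join()  # block until every queued job is processed
    task.cancel()
    return results

results = asyncio.run(main())
```

In production the in-memory queue would typically be replaced by Redis or RabbitMQ so jobs survive restarts, but the worker loop keeps the same shape.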