AI Video Upscaling and Super Resolution

We design and deploy artificial intelligence systems, from prototype to production-ready solutions. Our team combines expertise in machine learning, data engineering, and MLOps to make AI work in real business settings, not just in the lab.

AI Super-Resolution for Video — Video Content Upscaling

Video upscaling is harder than image upscaling because temporal consistency is required: adjacent frames must look coherent, otherwise the result flickers. Simply applying Real-ESRGAN to each frame independently fails, because the noise it hallucinates on uniform surfaces changes from frame to frame.
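To see why per-frame processing flickers, here is a minimal sketch in plain NumPy with synthetic frame data. The `flicker_score` helper is our own illustration, not part of any library: a static scene with independent per-frame noise (what naive per-frame upscaling produces) shows large frame-to-frame differences, while the same noise pattern repeated on every frame does not.

```python
import numpy as np

def flicker_score(frames: list[np.ndarray]) -> float:
    """Mean absolute difference between consecutive frames.
    A static scene should score ~0; independent per-frame noise
    drives the score up, which the eye perceives as flicker."""
    diffs = [np.abs(a.astype(np.float32) - b.astype(np.float32)).mean()
             for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, (64, 64, 3)).astype(np.float32)

# Independent noise per frame: flickers
noisy = [scene + rng.normal(0, 8, scene.shape) for _ in range(10)]
# Identical noise on every frame: visually stable
fixed_noise = rng.normal(0, 8, scene.shape)
stable = [scene + fixed_noise for _ in range(10)]

print(flicker_score(noisy))   # large
print(flicker_score(stable))  # ~0
```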

RealBasicVSR and BasicVSR++ — Core Models

import torch
import numpy as np
import cv2
from basicsr.archs.basicvsrpp_arch import BasicVSRPlusPlus

def upscale_video_basicvsr(
    frames: list[np.ndarray],   # list of frames (H, W, 3) BGR
    scale: int = 4,
    num_feat: int = 64,
    num_propagation_blocks: int = 7,
    cpu_cache_length: int = 100  # above this many frames, cache features on CPU
) -> list[np.ndarray]:
    """
    BasicVSR++ uses bidirectional propagation:
    information from past AND future frames.
    cpu_cache_length: for long videos, unload some frames to CPU.
    """
    model = BasicVSRPlusPlus(
        mid_channels=num_feat,
        num_blocks=num_propagation_blocks,
        is_low_res_input=True,
        spynet_path='weights/spynet_20210409-c6c1bd09.pth',
        cpu_cache_length=cpu_cache_length
    )
    state_dict = torch.load(
        'weights/BasicVSR++_reds4_vimeo90k.pth',
        map_location='cpu'
    )['params']
    model.load_state_dict(state_dict, strict=True)
    model.eval().cuda()

    # Normalize and convert BGR→RGB
    tensor_frames = []
    for frame in frames:
        f_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        t = torch.from_numpy(f_rgb).float() / 255.0
        t = t.permute(2, 0, 1).unsqueeze(0)  # (1, C, H, W)
        tensor_frames.append(t)

    # Batch all frames → (1, T, C, H, W)
    video_tensor = torch.stack(
        [f.squeeze(0) for f in tensor_frames], dim=0
    ).unsqueeze(0).cuda()

    with torch.no_grad(), torch.cuda.amp.autocast():
        output = model(video_tensor)  # (1, T, C, 4H, 4W)

    result = []
    for i in range(output.shape[1]):
        frame_t = output[0, i].float().cpu()
        frame_np = (frame_t.permute(1,2,0).numpy() * 255).clip(0,255)
        result.append(
            cv2.cvtColor(frame_np.astype(np.uint8), cv2.COLOR_RGB2BGR)
        )
    return result

Chunked Processing of Long Videos

Entire films won't fit in VRAM with BasicVSR++. Process in chunks with overlap:

def upscale_long_video(
    input_path: str,
    output_path: str,
    chunk_frames: int = 50,     # frames per chunk
    overlap_frames: int = 5,    # overlap for seamless stitching
    scale: int = 4
) -> None:
    cap = cv2.VideoCapture(input_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w   = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h   = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

    writer = cv2.VideoWriter(
        output_path,
        cv2.VideoWriter_fourcc(*'mp4v'),
        fps, (w * scale, h * scale)
    )

    frames_buffer = []
    processed_count = 0

    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            # Process remaining frames. The leading overlap frames were
            # held back by the previous chunk, so write the whole tail.
            if frames_buffer:
                upscaled = upscale_video_basicvsr(frames_buffer)
                for upf in upscaled:
                    writer.write(upf)
            break

        frames_buffer.append(frame)

        if len(frames_buffer) == chunk_frames:
            # Process the chunk but hold back the trailing overlap:
            # the next chunk re-processes and writes those frames
            # with temporal context from both sides.
            upscaled = upscale_video_basicvsr(frames_buffer)
            for upf in upscaled[:-overlap_frames]:
                writer.write(upf)
            frames_buffer = frames_buffer[-overlap_frames:]

    cap.release()
    writer.release()
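A rough way to choose `chunk_frames` for a given GPU is to estimate the activation footprint per frame and divide the VRAM budget by it. The `estimate_chunk_frames` helper and its constants (bytes of intermediate features per pixel, safety factor) are ballpark assumptions of ours, not measured values; profile on your own hardware before relying on them.

```python
def estimate_chunk_frames(
    h: int, w: int, scale: int = 4,
    vram_gb: float = 24.0,
    feat_bytes_per_px: int = 64 * 4 * 6,  # assumed: 64 channels, fp32, ~6 live feature maps
    safety: float = 0.5                   # headroom for weights and the CUDA context
) -> int:
    """Very rough upper bound on frames per chunk for a VRAM budget."""
    # Per-frame cost: input-resolution features + the upscaled RGB output
    per_frame = h * w * feat_bytes_per_px + (h * scale) * (w * scale) * 3 * 4
    budget = vram_gb * 1024**3 * safety
    return max(1, int(budget // per_frame))

# e.g. 960x540 input on a 24 GB card
print(estimate_chunk_frames(540, 960))
```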

Specialized Models for Different Scenarios

Model                   | Input            | Temporal      | Speed   | Use Case
BasicVSR                | LR video         | Bidirectional | 2–3 FPS | General video
BasicVSR++              | LR video         | Bidirectional | 1–2 FPS | High quality
RealBasicVSR            | Real-world video | Bidirectional | 2–4 FPS | Degraded video
Real-ESRGAN (per frame) | LR images        | None          | 30+ FPS | No consistency needed
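Dispatching over the scenarios above can be a couple of conditionals. The `pick_model` helper and its flag names are our own sketch, not an API of any of these projects:

```python
def pick_model(degraded_source: bool, need_temporal_consistency: bool) -> str:
    """Map a scenario to a model name from the table above."""
    if not need_temporal_consistency:
        return "Real-ESRGAN"   # per-frame, 30+ FPS
    if degraded_source:
        return "RealBasicVSR"  # trained on real-world degradations
    return "BasicVSR++"        # best quality on clean LR video

print(pick_model(degraded_source=True, need_temporal_consistency=True))  # RealBasicVSR
```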

Common Artifacts

  • Temporal flickering — usually caused by inconsistent noise estimation. Solution: increase temporal window (more propagation blocks)
  • Ghosting on fast motion — flow estimation errors. Solution: use larger feature dimensions
  • Memory overflow on long sequences — chunking with proper overlap is essential

Task Timelines

Task                                          | Timeline
BasicVSR integration for batch processing     | 2–3 weeks
Optimization for real-time streaming          | 4–6 weeks
Full restoration pipeline with quality checks | 8–12 weeks