Implementing Multi-Camera Recording in Mobile Applications
Recording from the front and back cameras simultaneously is a feature that not every device supports — and exactly the place where developers discover the API is more complex than the documentation suggests.
Hardware Limitations
On iOS, AVCaptureMultiCamSession is available only on iPhone XS and newer (A12 Bionic and later, iOS 13+). AVCaptureMultiCamSession.isMultiCamSupported is the first thing to check. On older devices, degrade gracefully: single camera only, or sequential recording.
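The check above can be sketched as a small factory function, assuming the rest of the pipeline accepts a plain AVCaptureSession (the fallback path here is an assumption, not part of the original):

```swift
import AVFoundation

// Gate the feature at startup: multi-cam support is hardware-dependent,
// so fall back to a single-camera flow on unsupported devices.
func makeCaptureSession() -> AVCaptureSession {
    if AVCaptureMultiCamSession.isMultiCamSupported {
        // iPhone XS / A12 and newer
        return AVCaptureMultiCamSession()
    } else {
        // Graceful degradation: regular session, one camera at a time
        // (or sequential recording at the app level)
        return AVCaptureSession()
    }
}
```

Because AVCaptureMultiCamSession subclasses AVCaptureSession, downstream code that only adds a single input keeps working with either return value.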
On Android, multi-camera goes through the Camera2 API: check whether CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES includes REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA, and enumerate the physical cameras via getPhysicalCameraIds(). Simultaneous capture from two physical cameras has been possible since Android 9 (API 28), but real-world support depends on the manufacturer: Samsung and Pixel devices work reliably, while some MediaTek-based devices have limitations.
iOS: AVCaptureMultiCamSession
let session = AVCaptureMultiCamSession()
// Back camera
let backInput = try AVCaptureDeviceInput(device: backCamera)
let backOutput = AVCaptureMovieFileOutput()
let backConnection = AVCaptureMultiCamSession.Connection(
inputPort: backInput.ports[0], output: backOutput)
// Front camera
let frontInput = try AVCaptureDeviceInput(device: frontCamera)
let frontOutput = AVCaptureMovieFileOutput()
session.addInputWithNoConnections(backInput)
session.addOutput(backOutput)
session.addInputWithNoConnections(frontInput)
session.addOutput(frontOutput)
session.addConnection(backConnection)
Important: the session's hardwareCost and systemPressureCost can exceed the budget. If hardwareCost > 1.0, reduce one camera's resolution to 720p. Otherwise startRunning() fails with a runtime error and no explicit message.
Picture-in-Picture During Recording
The two video streams record to separate files. For the final PiP video, use AVMutableComposition: the main stream stays full screen, while the second is scaled into a corner via AVMutableVideoCompositionLayerInstruction.setTransform(). An AVMutableVideoCompositionInstruction holding both layer instructions assembles the final file.
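A minimal sketch of that composition step, assuming the two recordings landed at hypothetical `backURL` and `frontURL` paths (scale factor, corner offset, and render size are illustrative choices, not values from the original):

```swift
import AVFoundation

// Composite two recorded clips into one PiP video composition.
func makePiPComposition(backURL: URL, frontURL: URL) async throws
        -> (AVMutableComposition, AVMutableVideoComposition) {
    let backAsset = AVURLAsset(url: backURL)
    let frontAsset = AVURLAsset(url: frontURL)
    let composition = AVMutableComposition()

    let backTrack = composition.addMutableTrack(
        withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
    let frontTrack = composition.addMutableTrack(
        withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!

    let backSource = try await backAsset.loadTracks(withMediaType: .video).first!
    let frontSource = try await frontAsset.loadTracks(withMediaType: .video).first!
    let duration = try await backAsset.load(.duration)
    let range = CMTimeRange(start: .zero, duration: duration)
    try backTrack.insertTimeRange(range, of: backSource, at: .zero)
    try frontTrack.insertTimeRange(range, of: frontSource, at: .zero)

    // Full-screen layer for the back camera; front camera scaled into a corner.
    let backLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: backTrack)
    let frontLayer = AVMutableVideoCompositionLayerInstruction(assetTrack: frontTrack)
    let pip = CGAffineTransform(scaleX: 0.3, y: 0.3)
        .concatenating(CGAffineTransform(translationX: 40, y: 40))
    frontLayer.setTransform(pip, at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = range
    instruction.layerInstructions = [frontLayer, backLayer]  // front on top

    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = [instruction]
    videoComposition.renderSize = CGSize(width: 1920, height: 1080)
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    return (composition, videoComposition)
}
```

The pair can then be handed to an AVAssetExportSession (composition as the asset, videoComposition on the session) to write the merged file.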
Android: Camera2 Multi-Camera
val cameraManager = getSystemService(CameraManager::class.java)
val multiCameraId = cameraManager.cameraIdList.firstOrNull { id ->
    val chars = cameraManager.getCameraCharacteristics(id)
    chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
        ?.contains(CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) == true
}
Open the CameraDevice and create a CaptureSession with surfaces for both cameras. With CameraX, target a specific physical camera by filtering through CameraSelector.Builder().addCameraFilter().
Flutter
The camera plugin on pub.dev does not fully support direct multi-camera capture. The workaround is a MethodChannel plus a native implementation in Swift/Kotlin, with frames delivered to Flutter through a Texture widget.
Resource Management
An AVCaptureMultiCamSession consumes significantly more power and memory than a single session. On an iPhone 12 mini, session.hardwareCost with two 1080p/30fps streams can reach 0.9–1.0. Exceeding 1.0 causes startRunning() to fail silently, without a delegate callback. The fix: reduce one camera's resolution to 720p, or lower the frame rate to 24 fps by selecting an AVCaptureDevice.Format with a lower videoSupportedFrameRateRanges ceiling.
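A sketch of that mitigation, assuming a helper that swaps the active format when the budget is exceeded (the 720p cutoff mirrors the text; the function name is hypothetical):

```swift
import AVFoundation

// If the session is over budget, switch the device to the first
// multi-cam-capable format at 720p or below.
func reduceHardwareCost(of device: AVCaptureDevice,
                        in session: AVCaptureMultiCamSession) throws {
    guard session.hardwareCost > 1.0 else { return }
    let cheaperFormat = device.formats.first { format in
        let dims = CMVideoFormatDescriptionGetDimensions(format.formatDescription)
        return dims.height <= 720 && format.isMultiCamSupported
    }
    if let cheaperFormat {
        try device.lockForConfiguration()
        device.activeFormat = cheaperFormat
        device.unlockForConfiguration()
    }
}
```

Note that formats without isMultiCamSupported are skipped: activating one of those inside a multi-cam session is not allowed.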
systemPressureCost is a thermal indicator. Observe AVCaptureDevice.systemPressureState via KVO; at level == .critical, lower the frame rate to 15 fps, and at level == .shutdown, stop recording — otherwise the system forcibly kills the session.
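The observation described above can be sketched like this (the throttling values follow the text; treating .serious the same as .critical is an extra assumption):

```swift
import AVFoundation

// Watch thermal pressure on the capture device and throttle
// before the OS terminates the session.
var pressureObservation: NSKeyValueObservation?

func observePressure(on device: AVCaptureDevice, stopRecording: @escaping () -> Void) {
    pressureObservation = device.observe(\.systemPressureState, options: [.new]) { device, _ in
        switch device.systemPressureState.level {
        case .serious, .critical:
            // Drop to 15 fps to shed thermal load
            try? device.lockForConfiguration()
            device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 15)
            device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
            device.unlockForConfiguration()
        case .shutdown:
            // The system is about to kill capture; stop cleanly first
            stopRecording()
        default:
            break
        }
    }
}
```

Keeping the NSKeyValueObservation in a long-lived property matters: if it is deallocated, the KVO subscription silently stops.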
Timeline
3–5 days for the iOS implementation (the bulk of the work) plus a basic Android version. PiP composition of the final video adds another 1–2 days.