Open infrastructure
for real-time AI video

Generate, transform, and interpret live video streams with low-latency AI inference on an open and permissionless elastic GPU network.

We'll notify you when access opens up.

Use Cases

What you can build with Livepeer

From generated worlds and real-time video analysis to autonomous avatars and live transcoding — the network powers every real-time AI video use case.

[Demo: generating at 60 FPS · frame 1,847 · 12ms]

AI-Generated Worlds

Interactive environments produced frame-by-frame with real-time inference on live video.

[Demo: live object detection, YOLOv8 · 8ms]

Real-Time Video Analysis

Computer vision and object detection running as always-on AI pipelines with low latency.

[Demo: Ingest → Model → Output · pipeline: img2img · 3 stages · active]

Composable AI Pipelines

Chain inference models into multi-stage pipelines that process video end to end.
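A multi-stage pipeline can be sketched as a frame flowing through an ordered list of stages. This is an illustrative model only, assuming a simple per-frame transform interface; the stage names and types are not Livepeer's API.

```typescript
// Illustrative sketch: a pipeline as an ordered list of per-frame stages.
// Types and stage names are hypothetical, not Livepeer's actual interface.
type Frame = { index: number; data: Uint8Array };
type Stage = { name: string; run: (f: Frame) => Frame };

// Each frame flows through the stages in order (ingest -> model -> output).
function runPipeline(stages: Stage[], frame: Frame): Frame {
  return stages.reduce((f, stage) => stage.run(f), frame);
}

// Example: a 3-stage img2img-style pipeline; the "model" stage here just
// inverts pixel values as a stand-in for real inference.
const pipeline: Stage[] = [
  { name: "ingest", run: (f) => ({ ...f }) },
  { name: "model", run: (f) => ({ ...f, data: f.data.map((b) => 255 - b) }) },
  { name: "output", run: (f) => ({ ...f }) },
];
```

The point of the shape is composability: any stage that maps a frame to a frame can be dropped into the chain without the others changing.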

[Demo: live renditions 1080p60, 720p30, 480p30, 360p30 · bitrate: 4.2 Mbps · latency: 85ms]

Live Transcoding & Streaming

Adaptive bitrate transcoding across a global GPU network with sub-second latency.

[Demo: input → output style transfer, Anime · 22ms]

AI Avatars & Agents

Motion capture and style transfer powering persistent digital identities in real time.

[Demo: batches 0042–0045 · 2,847 / 10,000 frames · GPU: 94% · queue: 12]

Synthetic Data Generation

Generate labeled training data at scale — video frames, annotations, and augmentations.

About Livepeer

Real‑time AI video APIs for developers and agents alike

Get an API key, pick a model, start sending frames. The network handles GPU routing, billing, and SLA enforcement.

[Demo: Developer API Manager · key lp_sk_1a2b•••c8d9 (production, active) · billing providers: Daydream, Livepeer Studio · 3 keys, 2 active · CDN: enabled]

1. Authenticate & connect

Create an API key and choose a billing provider. Your key authenticates requests; your provider handles routing and payments.
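A minimal sketch of attaching a key to requests, assuming a bearer-token header and the `lp_sk_` key format shown in the mockup. The header scheme is an assumption for illustration, not documented API.

```typescript
// Hypothetical helper: build auth headers from a secret key.
// The "lp_sk_" prefix follows the key shown in the demo card above;
// the Bearer scheme is an assumption, not Livepeer's documented API.
function authHeaders(apiKey: string): Record<string, string> {
  if (!apiKey.startsWith("lp_sk_")) {
    throw new Error("expected a secret key of the form lp_sk_...");
  }
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}
```

Validating the key prefix up front turns a misconfigured environment variable into an immediate, readable error instead of a 401 from the gateway.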

[Demo: model catalog]
- GameNGen: real-time world generation · $0.006/min · 12ms · world
- Depth Anything v2: monocular depth estimation · $0.003/min · 8ms · vision
- StreamDiffusion: real-time image generation · $0.008/min · 24ms · gen
- Whisper v3: speech-to-text transcription · $0.002/min · 45ms · audio
12 models available · 4 regions

2. Pick a model

Browse available models optimized for real-time inference. See latency, type, and health across the network.
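Picking a model programmatically might look like filtering the catalog by type and latency budget. The entries mirror the listing above; the object shape and field names are illustrative, not a real API response.

```typescript
// Illustrative catalog entry; fields mirror the model listing above.
interface Model {
  name: string;
  type: "world" | "vision" | "gen" | "audio";
  latencyMs: number;
  pricePerMin: number; // USD
}

// Pick the lowest-latency model of a given type within a latency budget.
function pickModel(
  models: Model[],
  type: Model["type"],
  maxLatencyMs: number
): Model | undefined {
  return models
    .filter((m) => m.type === type && m.latencyMs <= maxLatencyMs)
    .sort((a, b) => a.latencyMs - b.latencyMs)[0];
}

const catalog: Model[] = [
  { name: "GameNGen", type: "world", latencyMs: 12, pricePerMin: 0.006 },
  { name: "Depth Anything v2", type: "vision", latencyMs: 8, pricePerMin: 0.003 },
  { name: "StreamDiffusion", type: "gen", latencyMs: 24, pricePerMin: 0.008 },
  { name: "Whisper v3", type: "audio", latencyMs: 45, pricePerMin: 0.002 },
];
```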

[Demo: live session stream_8f3k2m · POST /v1/stream/start { workflow: "world-model-v1" } · input 30 fps · output 30 fps · latency 14ms · uptime 4m 32s · 8,142 frames · US East · GPU A100 · connected]

3. Start streaming

Open a session, send frames, receive outputs. The gateway routes your stream to the best-performing GPU provider.
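Opening a session could be sketched as building the `POST /v1/stream/start` request shown in the demo card. The endpoint path and `workflow` field come from that mockup; the base URL and response shape are assumptions.

```typescript
// Sketch: assemble the session-start request shown above.
// Only the path and the { workflow } body appear in the demo;
// the base URL and Bearer scheme are assumptions for illustration.
function buildStartRequest(baseUrl: string, apiKey: string, workflow: string) {
  return {
    url: `${baseUrl}/v1/stream/start`,
    method: "POST" as const,
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ workflow }),
  };
}
```

With `fetch`, usage would be along the lines of `fetch(req.url, { method: req.method, headers: req.headers, body: req.body })`; the gateway then routes the stream to a GPU provider as described above.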

[Demo: Usage & SLAs, last 24h · avg latency 14ms (down 12% vs yesterday) · uptime 99.97% (target 99.9%) · failure rate 0.02% (target <0.1%) · swap rate 1.2% (target <5%) · all SLAs passing, healthy]

4. Ship with confidence

Published SLAs on latency, uptime, and failure rate. Track performance per workflow, region, and GPU provider.
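Checking measured metrics against the published targets above (uptime at least 99.9%, failure rate under 0.1%, swap rate under 5%) is simple arithmetic. The targets come from the SLA panel; the field names here are illustrative.

```typescript
// Illustrative SLA check against the targets shown in the panel above.
// Field names are assumptions; the thresholds are the published targets.
interface SlaReport {
  uptimePct: number;      // target: >= 99.9
  failureRatePct: number; // target: < 0.1
  swapRatePct: number;    // target: < 5
}

function slasPassing(r: SlaReport): boolean {
  return r.uptimePct >= 99.9 && r.failureRatePct < 0.1 && r.swapRatePct < 5;
}
```

The demo's last-24h numbers (99.97% uptime, 0.02% failures, 1.2% swaps) clear all three thresholds, which is what the "All SLAs passing" badge reflects.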

Why Livepeer

Purpose-built for live AI inference

Where generic GPU clouds optimize for batch workloads, Livepeer is built from the ground up for continuous inference on live video streams.

10x

Cost Reduction

Usage-based GPU pricing with no reserved instances or idle capacity.

Cloud: $1.00 · Livepeer: $0.10
<1s

Real-Time Latency

Purpose-built for continuous, frame-by-frame AI inference on live video.

Cloud: 2-5s · Livepeer: <1s
0s

Cold Start

Warm GPUs 24/7 — inference starts immediately on every stream.

Cloud GPU: 30-60s · Livepeer: instant

Elastic Scale

Go from 1 to 10,000 streams without provisioning a single GPU.

Cloud: fixed · Livepeer: elastic

Curious how it all works? Read the 10-minute primer →

Developers

Be first to build

Get early access to Livepeer — a single API for real-time AI video inference, built for developers and AI agents alike.
