Low-Latency Live Streaming your Desktop using ffmpeg

I recently bought myself a projector, which I installed in one corner of the room. Unfortunately I didn’t buy a long enough HDMI cable with it, so I could not connect it to my desktop computer and instead used my loyal ThinkPad T60 for playback. But I also wanted to be able to play some games using the projector, for which my laptop wasn’t beefy enough. So I thought, why not just stream the games from my desktop computer to the laptop?

In this post I will explore how to stream 720p (or any quality, for that matter) from one computer to another using ffmpeg and netcat, with a latency below 100ms, which is good enough for many games. TL;DR: if you don’t care about the technical details, just jump to the end of the post to try it out yourself.

The problem:

Streaming low-latency live content is quite hard, because most video codecs are designed to achieve the best compression and not best latency. This makes sense, because most movies are encoded once and decoded often, so it is a good trade-off to use more time for the encoding than the decoding.

One way to save space or bandwidth is to compress different parts of a movie with varying quality. For example, the h264 encoder will compress fast-moving scenes more than slow scenes, because the human eye will not be able to see all the detail in a fast-moving scene anyway, while the viewer might inspect the image of a slow scene more thoroughly. But to do so, the encoder first has to find out whether it is currently encoding a fast- or slow-moving scene.

There are essentially two ways to find out if the content is fast- or slow-moving. The most commonly used method for non-live content is so-called two-pass encoding, in which the encoder first analyses the whole movie and then, in a second pass, encodes it using the information acquired before. Unfortunately this is not possible for live content. The second way is to look ahead a few frames in the movie material and then decide adaptively whether this is a slow or a fast scene. Since it is not possible to look ahead in live material, the encoder has to buffer some frames before encoding them, so it knows what’s coming up next. This buffer, however, introduces latency in the ballpark of seconds rather than milliseconds, which makes it impossible to create an interactive live stream using this technique. This is also a simplified picture, since modern encoders can use multiple reference frames in the past and future of the current frame to reduce the redundancy of the image data. Such a group of interdependent frames is called a GOP (group of pictures), and GOPs will play a significant role in getting to a sub-second latency stream.

Because a low-latency live stream cannot afford the time to analyse the video material up front, live streamed content will always look worse than any other encoded content, and this can only be countered by using more bandwidth.

I found some blog posts that explain live-streaming in depth, and most of them used the mpeg2video encoder; but most of them “only” wanted real-time encoding, not necessarily low latency. I achieved quite good results using mpeg2video, with the lowest latency of all the codecs I tried, but I also wanted to explore whether I could achieve better image quality. So I tried the x264 encoder with the “zerolatency” tuning. While the “zerolatency” tuning did help, I still had a huge latency, far from playing any game over a live stream. So I did a little digging and found a thread on the x264dev forums, which I resurrected from the Internet Archive, in which a lot of additional x264 flags are discussed. The most important one for bringing down the latency was to use the smallest possible vbv-bufsize (the encoder buffer). Another good trick is so-called intra-refresh: instead of sending full key frames, the contents of a key frame are transmitted in blocks over time, so there is never a burst of data for a complete key frame.

To figure out whether an option improved the latency or not, I needed a way to measure it.

I wrote a quick script in python that would output the current time in milliseconds to measure the latency:
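The script itself didn’t survive in this copy of the post; a minimal reconstruction (the five-digit wrap-around and the update interval are my assumptions, chosen to match the timestamps visible in the screenshot below) might look like this:

```python
#!/usr/bin/env python3
"""Continuously print the current time in milliseconds, so that a screenshot
of the real and the streamed desktop side by side reveals the total latency."""
import sys
import time

def millis(now=None):
    # Wall-clock time in milliseconds, truncated to five digits so the
    # number stays easy to compare at a glance.
    if now is None:
        now = time.time()
    return int(now * 1000) % 100000

def run(seconds=None):
    # Overwrite the same line with a fresh timestamp; pass a duration in
    # seconds to stop automatically, or None to run until Ctrl-C.
    deadline = None if seconds is None else time.monotonic() + seconds
    try:
        while deadline is None or time.monotonic() < deadline:
            sys.stdout.write("\r%05d" % millis())
            sys.stdout.flush()
            time.sleep(0.001)
    except KeyboardInterrupt:
        pass
    sys.stdout.write("\n")

if __name__ == "__main__":
    run(3)  # short demo run; use run(None) to keep going until Ctrl-C
```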

Using this script, I would then encode and decode the stream on my desktop at the same time. Taking a screenshot showing both the original desktop and the streamed desktop next to each other gives me the total system latency.


On the left, the real desktop shows the time at 6884; on the right, the streamed desktop shows 6794. In this example the stream has a latency of about 90ms.

In this way I could measure whether any change really improved the latency of my video stream, without network overhead. This is not always totally accurate, as ffmpeg might take a snapshot of the desktop just before a new image is rendered by the X window system, but it should not be too far off.

Using this measurement tool, I made a lot of changes to the encoder options, but at some point I hit a brick wall. I couldn’t get the latency down any further: the client side was buffering the stream as well! I used mplayer to play back my video stream and tried all kinds of flags like -nocache and -framedrop. While all of them helped a bit, mplayer performed best with the -benchmark flag, which seems to bypass everything that is not part of the actual video decoding process, which is exactly what was needed.

After figuring out good settings for the encoder and decoder, I also tried other transport streams like RTP, but mpegts was the fastest in my tests. I guess it would be possible to achieve even better latency using ffmpeg’s built-in UDP streaming instead of sending the stream over TCP with netcat, but the UDP stream would break down after a few seconds, even on the local loopback, throwing cryptic error messages about circular buffers…

The solution:

Using h264 (okay latency, better image quality, low bandwidth, high cpu usage)

The examples given here require at least 3000 kbit/s (3 Mbit/s) of bandwidth.

On the host:
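The exact command is not preserved in this copy of the post; the following reconstruction is a sketch based on the options discussed above and on the avconv translation attempted in the comments below. Screen size, display (:0.0), and port number are assumptions, and the netcat flags vary between the traditional and the BSD variant:

```shell
# Capture the X11 desktop at 720p/30, encode with x264 tuned for latency,
# and serve the mpegts stream over TCP port 9000 via netcat.
ffmpeg -f x11grab -video_size 1280x720 -framerate 30 -i :0.0 \
  -codec:v libx264 -preset veryfast -tune zerolatency -crf 20 \
  -x264opts vbv-maxrate=3000:vbv-bufsize=100:intra-refresh=1:slice-max-size=1500:keyint=30:ref=1 \
  -f mpegts - | nc -l -p 9000      # BSD netcat: nc -l 9000
```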

For further x264 options check out this guide. If you want to change them for your scenario, you always have to make sure that your vbv-bufsize stays small enough to hold roughly a single frame, i.e. about vbv-maxrate divided by the frame rate. So in this scenario, for a vbv-maxrate of 3000, I chose a vbv-bufsize of 100 at 30 FPS.

On the client:
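A sketch of the client side, matching the command quoted in the comments below; the host address is a placeholder:

```shell
# Pull the mpegts stream from the host and feed it straight into mplayer,
# bypassing any playback buffering via -benchmark.
nc 192.168.0.10 9000 | mplayer -benchmark -
```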

Note: always use the -benchmark flag on the client-side. -framedrop might help as well, especially on slower clients.

Using mpeg2video (lowest latency, low image quality, high bandwidth, lower cpu usage)

Using mpeg2video, I could achieve almost no noticeable latency, but the bandwidth requirements go through the roof, so this is only an option if you have either very fast WiFi or wired LAN.

On the host, using about 16Mbit/s:
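Again the original command is missing here; a sketch under the same assumptions as above (capture geometry and display are placeholders, and the -q:v value is illustrative, to be tuned as described below):

```shell
# Same desktop capture, but encoded with mpeg2video; cheap to encode,
# hungry on bandwidth. Lower -q:v means better quality and more bits.
ffmpeg -f x11grab -video_size 1280x720 -framerate 30 -i :0.0 \
  -codec:v mpeg2video -q:v 4 -f mpegts - | nc -l -p 9000
```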

Increasing the -q:v value lowers the quality and saves some bandwidth; lowering it increases both quality and bandwidth. Setting the quality to 2 gives a perfect image, but uses something around 150Mbit/s!

The client is the same as above in the x264 example.

Extra tips:

If you have the spare network bandwidth and CPU time, you can double the frame rate to 60 and you’ll get another latency drop. Using mpeg2video at 60 FPS I achieved essentially zero latency on the local loopback device, so all that is left is network latency. I also tried the ultrafast preset of x264, but it only made the image look worse while not helping much with performance.

If you want to tweak this setup even further, you can pipe the host command directly into the client instead of going over the network, using mplayer’s -quiet option so that the encoder’s output stays readable.
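Such a network-free test pipeline could look like the following sketch (capture geometry, display, and quality value are assumptions):

```shell
# Encoder and decoder on one machine, no network in between: useful for
# measuring pure encode+decode latency. -quiet keeps mplayer's status line
# from drowning out ffmpeg's encoding statistics.
ffmpeg -f x11grab -video_size 1280x720 -framerate 30 -i :0.0 \
  -codec:v mpeg2video -q:v 4 -f mpegts - | mplayer -benchmark -quiet -
```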

All for nothing

After all those tweaks and tests, and after setting everything up, streaming my desktop in 720p over WiFi to my laptop works very well. Unfortunately, the input lag of the projector is extremely high in “normal” mode, and the so-called “fast” or “gaming” mode cannot be enabled when using VGA as the signal input. So I’ll have to wait until the HDMI cable arrives… bummer.

12 thoughts on “Low-Latency Live Streaming your Desktop using ffmpeg”

    • yes, this is a linux command line solution. ffmpeg will run on other operating systems, but the exact command would be different.

      You could probably configure VLC to stream using the same options though.

  1. Great article!
    Is it possible to combine your streaming solution with something like Xdummy? I’m interested in building some kind of “virtual multiseat” server, where I can stream several user desktops to different devices (with 3D acceleration, of course)

    • Well in theory that would be possible, but in the end you’re limited by bandwidth. So just give it a try streaming your desktop over the wire multiple times to see how it pans out, before doing a full blown xpra setup.

  2. Thanks for the post! Could you help convert the commands to avconv since ffmpeg is deprecated in some operating systems? For starters, there’s no -x264opts. The docs talk about -x264-params but avconv version 9.18 doesn’t have that. It seems like most of the opts should be native, but I’m not having much luck.

    Here’s where I’m at so far:
    avconv -f video4linux2 -s 1280x720 -framerate 30 -i /dev/video0 -codec libx264 -preset veryfast -tune zerolatency -crf 20 -bufsize 100 -maxrate 3000 -intra-refresh 1 -g 30 -slice-max-size 1500 -refs 1 -f mpegts - | nc -l 9000 -k

    console output:
    avconv version 9.18-6:9.18-0ubuntu0.14.04.1, Copyright (c) 2000-2014 the Libav developers
    built on Mar 16 2015 13:19:10 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
    [video4linux2 @ 0xf72a60] The driver changed the time per frame from 1/30 to 1/10
    [video4linux2 @ 0xf72a60] Estimating duration from bitrate, this may be inaccurate
    Input #0, video4linux2, from '/dev/video0':
    Duration: N/A, start: 1137764.826635, bitrate: 147456 kb/s
    Stream #0.0: Video: rawvideo, yuyv422, 1280x720, 147456 kb/s, 1000k tbn, 10 tbc
    [libx264 @ 0xf73d80] VBV maxrate specified, but no bufsize, ignored
    [libx264 @ 0xf73d80] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX AVX2 FMA3 LZCNT BMI2
    [libx264 @ 0xf73d80] profile High, level 3.1
    Output #0, mpegts, to ‘pipe:’:
    encoder : Lavf54.20.4
    Stream #0.0: Video: libx264, yuv420p, 1280x720, q=-1--1, 90k tbn, 25 tbc
    Stream mapping:
    Stream #0:0 -> #0:0 (rawvideo -> libx264)
    Press ctrl-c to stop encoding

    On the receiving end:

    $ nc 9000 | mplayer -benchmark -
    MPlayer 1.1-4.8 (C) 2000-2012 MPlayer Team
    mplayer: could not connect to socket
    mplayer: No such file or directory
    Failed to open LIRC support. You will not be able to use your remote control.

    Playing -.
    Reading from stdin…
    libavformat version 54.20.4 (external)
    Mismatching header version 54.20.3
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    Cannot seek backward in linear streams!
    Seek failed
    TS file format detected.
    Cannot seek backward in linear streams!
    Seek failed
    VIDEO H264(pid=256) NO AUDIO! (try increasing -tsprobe) NO SUBS (yet)! PROGRAM N. 1
    Cannot seek backward in linear streams!
    Seek failed
    FPS seems to be: 25.000000
    Load subtitles in ./
    Failed to open VDPAU backend libvdpau_i965.so: cannot open shared object file: No such file or directory
    [vdpau] Error when calling vdp_device_create_x11: 1
    Opening video decoder: [ffmpeg] FFmpeg’s libavcodec codec family
    libavcodec version 54.35.0 (external)
    Selected video codec: [ffh264] vfm: ffmpeg (FFmpeg H.264)
    Audio: no sound
    Starting playback…
    Unsupported AVPixelFormat 53
    libav* called av_log with context containing a broken AVClass!
    Non-reference picture received and no reference available
    [h264 @ 0x7fedbbb5bb00]decode_slice_header error
    [h264 @ 0x7fedbbb5bb00]concealing 243 DC, 243 AC, 243 MV errors
    Movie-Aspect is undefined – no prescaling applied.
    VO: [xv] 1280×720 => 1280×720 Planar YV12
    ^C 134.2 949/949 8% 1% 0.0% 0 0

    MPlayer interrupted by signal 2 in module: video_read_frame

  3. Great post! I’ve got some issues in my environment but couldn’t find the exact root cause. I have Adobe Media Server running.

    The setup follows a core-edge concept: all users connect to the edge media server, and the edge only routes the traffic to the core.

    The problem is, I don’t see any video delay at the core, whereas a lot of delay is observed at the edge server. If I restart the edge server, there is no delay for a few hours, then it comes back again.

    Do you have any suggestions on how to find the root cause for this?

  4. We appreciate your insights and are using them in our #ProjectSandy headless streaming VirtualBox instance on GitHub. The virtual server contains shared folders to your host machine, so users can drag and drop movie files in, and a .profile customization passes everything into ffmpeg. We’ll be using your insights for MPEG-DASH settings. The environment is currently at 0.4.0. Thanks! BiStorm, LLC.
