Why is Firefox so slow over SSH?
I try to launch Firefox over SSH, using
ssh -X user@hostname
but it’s very very slow.
How can I fix this? Is it a connection problem?
The default ssh settings make for a pretty slow connection. Try the following instead:
ssh -YC4c arcfour,blowfish-cbc user@hostname firefox -no-remote
The options used are:
-Y      Enables trusted X11 forwarding. Trusted X11 forwardings are not
        subjected to the X11 SECURITY extension controls.
-C      Requests compression of all data (including stdin, stdout,
        stderr, and data for forwarded X11 and TCP connections). The
        compression algorithm is the same used by gzip(1), and the
        "level" can be controlled by the CompressionLevel option for
        protocol version 1. Compression is desirable on modem lines and
        other slow connections, but will only slow down things on fast
        networks. The default value can be set on a host-by-host basis
        in the configuration files; see the Compression option.
-4      Forces ssh to use IPv4 addresses only.
-c cipher_spec
        Selects the cipher specification for encrypting the session.
        For protocol version 2, cipher_spec is a comma-separated list of
        ciphers listed in order of preference. See the Ciphers keyword
        in ssh_config(5) for more information.
The main point here is to use a different encryption cipher (in this case arcfour, which is faster than the default) and to compress the data being transferred.
NOTE: I am very, very far from an expert on this. The command above is what I use after finding it on a blog post somewhere, and I have noticed a huge improvement in speed. I am sure the various commenters below know what they're talking about and that these encryption ciphers might not be the best ones. It is very likely that the only bit of this answer that is truly relevant is the -C switch to compress the data being transferred.
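For what it's worth, arcfour and blowfish-cbc were removed entirely in OpenSSH 7.6, so the command above fails on current systems. A roughly equivalent command on a modern OpenSSH (treat the cipher choice as a suggestion, not gospel) would pick one of the fast AEAD ciphers instead:

```shell
# aes128-gcm@openssh.com is usually fastest on CPUs with AES-NI;
# chacha20-poly1305@openssh.com tends to win on CPUs without it.
ssh -YC4 -c aes128-gcm@openssh.com user@hostname firefox -no-remote
```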
One of the biggest issues when launching some X-client remotely is the X-protocol, not so much the ssh overhead!
The X-protocol requires a lot of ping-pong’ing between the client and the server which absolutely kills performance in the case of remote applications.
Try something like "x2go" (which also runs over ssh with default setups) and you will notice that firefox "flies" in comparison!
Several distributions provide the x2go packages out of the box, for instance Debian testing or stable-backports. If yours doesn't,
see http://wiki.x2go.org/doku.php/download:start , they provide prebuilt binary packages/repositories for many distributions. You should install x2goclient (on the computer where you want to interact with firefox) and x2goserver (on the computer where firefox should be running); you can then configure your sessions for single X applications or for full desktop views, etc. The connection itself happens over ssh. It's a really wonderful tool 🙂
To use it, you run "x2goclient", it starts a GUI where you can create a new session: you provide the dns name of the server, port, ssh data, etc and then you select the "session type", ie, if you want a full remote KDE or GNOME desktop for instance, or just a "single application" and there you enter "firefox".
Another thing that will improve your browsing over ssh is to enable pipelining in Firefox. Open about:config and change network.http.pipelining to true.
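If you prefer a file over clicking through about:config, the same preference can go into a user.js in your Firefox profile directory (a sketch; note that the pipelining preference was removed in later Firefox versions, where this line simply has no effect):

```javascript
// user.js in the Firefox profile directory.
// Removed in later Firefox versions; harmless but ineffective there.
user_pref("network.http.pipelining", true);
```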
Firefox is slow over SSH partly because newer builds of firefox allow multiple instances. If you have bandwidth problems, use a light browser like dillo and you will not even notice the connection speed.
I have much better experience in using an
ssh tunnel to route traffic through another machine. It’s very easy to set up since you have ssh access anyway. In a terminal on your computer, type
ssh -vv -ND 8080 user@yourserver
Keep this window open and watch it delivering some verbose messages about the data flowing through the tunnel.
In Firefox, go to Preferences -> Advanced -> Network -> Connection: Settings.
Select Manual proxy configuration and add a
SOCKS v5 proxy:
SOCKS Host: localhost Port 8080
Check your new IP by navigating to e.g. http://whatismyipaddress.com/.
You can use a firefox add-on like foxy proxy to dynamically switch proxies.
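Before configuring Firefox, you can check that the tunnel works from the command line (assuming curl is installed; the port matches the -D 8080 above):

```shell
# Route a request through the SOCKS tunnel; the address printed
# should be your server's public IP, not your own.
curl --socks5-hostname localhost:8080 https://ifconfig.me
```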
You have to experiment to see what helps with your specific bottlenecks.
For me, enabling compression (-C) improved responsiveness from unusable to just noticeable lag.
Choice of cipher can have an impact too, contrary to what some people said. You can find people sharing benchmarks online, but don't presume that your results will be the same. Which cipher is best for you is hardware dependent. For me, my default cipher (chacha20-poly1305@openssh.com) was already tied for the fastest one.
I wrote a quick script to benchmark relevant ciphers under somewhat realistic conditions. Explanations are in the comments; the identity file, host, and file size are example values you should adjust:
#!/bin/sh
# Ciphers available to you depend on the intersection of the ciphers compiled
# into your client and the ciphers compiled into your host.
# Should be manually copied from the "Ciphers:" section in your `man ssh_config`.
# The script will try all ciphers specified here and will gracefully skip
# ciphers unavailable on the host.
ciphers="3des-cbc aes128-cbc aes192-cbc aes256-cbc aes128-ctr aes192-ctr aes256-ctr aes128-gcm@openssh.com aes256-gcm@openssh.com chacha20-poly1305@openssh.com"

# Recommended to use an identity file without a passphrase.
# That way you won't have to retype the password at each iteration.
ssh_identity_file=$HOME/.ssh/id_rsa
ssh_host=user@hostname

# Size of test file, before encryption.
test_file_size_megabytes=8
tmp_file=tmp.bin

# Not the same format as the ssh ciphers.
# Can be left as is, unless this cipher is not supported by your openssl.
tmp_file_cipher=aes-128-cbc

# Only create the test file if it doesn't yet exist.
# Doesn't check if relevant variables changed, so you'll have to delete
# the $tmp_file to regenerate it.
if test ! -f $tmp_file; then
    echo "Creating random data file" \
        "(size $test_file_size_megabytes MB): $tmp_file"
    # The purpose of encrypting the $tmp_file is to make it uncompressible.
    # I do not know if that is a concern in this scenario,
    # but better safe than sorry.
    dd if=/dev/zero bs=1M count=$test_file_size_megabytes \
        | openssl enc -$tmp_file_cipher -pass pass:123 > $tmp_file
fi

for cipher in $ciphers ; do
    # Benchmark each $cipher multiple times
    for i in 1 2 3 ; do
        echo "Cipher: $cipher (try $i)"
        # Time piping the $tmp_file via SSH to $ssh_host using $cipher.
        # At the destination, received data is discarded.
        cat $tmp_file \
            | /usr/bin/time -p \
            ssh -i $ssh_identity_file -c "$cipher" $ssh_host 'cat > /dev/null'
    done
done
# Sample output:
# Creating random data file (size 8 MB): tmp.bin
# *** WARNING : deprecated key derivation used. Using -iter or -pbkdf2 would be better.
# 8+0 records in
# 8+0 records out
# 8388608 bytes (8.4 MB, 8.0 MiB) copied, 0.0567188 s, 148 MB/s
# Cipher: aes256-cbc (try 3)
# Unable to negotiate with 192.168.99.99 port 22: no matching cipher found. Their offer: chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm@openssh.com,aes256-gcm@openssh.com
# real 0.12
# user 0.03
# sys 0.03
# Cipher: aes128-ctr (try 1)
# real 9.68
# user 0.28
# sys 0.51
# Cipher: aes128-ctr (try 2)
# real 10.85
# user 0.26
# sys 0.29
You can test with an SSH connection where the client and host are the same machine, or in a more realistic scenario where the host is the machine you're doing the X11 forwarding from. The latter should be more useful, because performance depends not only on how fast the client deciphers, but also on how fast the host does.
Testing with a remote machine has the disadvantage of introducing noise if the throughput of your internet connection changes during the benchmark. In that case, you might want to bump up the number of times each cipher is tested.
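To compare the runs more easily, you can average the "real" times per cipher afterwards. A small awk sketch, assuming you saved the benchmark's output (including stderr, since /usr/bin/time writes there) to a file called bench.log:

```shell
# Average the "real" wall-clock times reported per cipher.
# bench.log is assumed to have been captured with something like:
#   ./bench.sh > bench.log 2>&1
awk '
  /^Cipher:/ { cipher = $2 }
  /^real/    { sum[cipher] += $2; n[cipher]++ }
  END        { for (c in sum) printf "%s %.2f\n", c, sum[c] / n[c] }
' bench.log
```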
I know this post is super old, but this helped me overcome Firefox-over-SSH slowness: set the following in about:config:
gfx.xrender.enabled = true
Note: Starting in Firefox 47, the default became false.
X11 is an outdated protocol. For example, if a piece of software writes the letter "A" over and over into the same spot, it will be retransmitted over and over again. A lot of modern GUIs tend to redraw things that didn't change, and X11 will happily retransmit every atomic screen operation. In other words, it does not transmit pixels but commands. This is the opposite of how VNC and Teamviewer work, which basically transfer pixels. It also leads to partially synchronous operation, where one command has to wait for another to finish.
SSH uses a ton of CPU power and doesn't multithread. For example, my server runs SSH on a single core at 100%, but that only amounts to around 20 MByte/s of uncompressed data and around 5 MByte/s of compressed data for X11.
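You can get a rough feel for this number on your own hardware (host name is a placeholder; results will vary widely):

```shell
# Push 100 MB of zeros through ssh and discard it at the far end,
# once without and once with compression. Zeros compress extremely
# well, so the -C run mostly measures the compression CPU cost.
dd if=/dev/zero bs=1M count=100 | ssh user@hostname 'cat > /dev/null'
dd if=/dev/zero bs=1M count=100 | ssh -C user@hostname 'cat > /dev/null'
```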
Firefox is highly X11-unfriendly. It rarely uses drawing commands (which X11 could handle efficiently) but mostly small, poorly compressible bitmaps, which it packs into X11 bitplane operations: a combination of two things that should never have come near each other anyway.
Worst case: Any small animation. Even a 64×64 pixel animated GIF can literally freeze your firefox connection.
That said, let's dive deeper.
On a high-speed line, e.g. 100 Mbit or higher, compression is usually more of a burden than a boon. Your mileage may vary; try ssh with "-C". With SSH2 there is no more manual selection of the compression level: you are stuck with something comparable to "gzip -3", or you have to do some tunneling trickery. It might be possible to get faster compression using the pretty fast "lzop -1", or better compression using "xz -9e". I played around with "lzop -1" some ten years ago and the results were unimpressive.
The speed of the crypto depends a lot on your CPU. See https://possiblelossofprecision.net/?p=2255 for an easy way to check which cipher runs fastest on your system. Expect up to a factor of two between the fastest and the slowest. That said, over the last 20 years I have never seen a slow cipher as the default; the defaults are usually close to the fastest.
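A quick local comparison is also possible with openssl's built-in benchmark. The numbers are only a rough proxy for ssh cipher speed, but large differences (e.g. AES with vs. without AES-NI) show up clearly:

```shell
# Raw cipher throughput on this CPU, one algorithm per run.
openssl speed -evp aes-128-ctr
openssl speed -evp aes-256-gcm
openssl speed -evp chacha20-poly1305
```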
Now here is your best bet: disable hardware acceleration in Firefox. This is rarely required, as Firefox disables it anyway over the network, but in some circumstances it fails to do so, and that's where Firefox gets really, really, really slow.
This is all one can say about Firefox, SSH and X11. If this doesn’t do the trick try something else:
Run X11 without SSH by doing something like this on the X11-Server
startx -- -listen tcp &
xhost +yourx11client +yourx11client.local
and on the X11-client, something like:
DISPLAY=yourx11server:0 firefox -no-remote
Hint: the X11 "server" and "client" designations are reversed from what you might expect. The server is the machine with the display; the client is the machine running the application.
This will easily speed up Firefox ten times. Though some pages will still make it very sluggish and unresponsive, e.g. videos.
An even more drastic step: Drop X11 as a network protocol, use VNC. Either by using a remote screen or a fully virtualized VNC session:
vncserver :1 -name VNC1 -geometry 1024x768 -depth 15
and in $HOME/.vnc/xstartup something like:
# Uncomment the following two lines for normal desktop:
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
xterm -geometry 80x30
I highly suggest using the xsetroot command, as the default X11 background looks horrible and is hard to compress.
Connect with any VNC-Client you like.
I have been able to watch TV and play games over this kind of VNC connection. You can even run multiple vncservers for multiple users at once.
This is by far the most responsive remote system.
Xvfb (X virtual framebuffer) also gives us a good solution.
On the remote server, as admin (CentOS 7 example):
yum install Xvfb xauth x11vnc firefox
On the remote server, as a normal user:
Xvfb :1 &
x11vnc -display :1 --localhost &
On the local computer:
vncviewer -via www.example.com 127.0.0.1
Another solution is xpra
xpra start :100
xpra attach --ssh="ssh" ssh:user@serverhost:100
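A nice property of xpra is that the session keeps running on the server when you disconnect, so you can pick it up again later (a sketch, using the same placeholder host as above):

```shell
# Detach locally; the applications keep running on the server.
xpra detach ssh:user@serverhost:100
# Reattach later from the same or another machine.
xpra attach ssh:user@serverhost:100
# Shut the session down for good.
xpra stop ssh:user@serverhost:100
```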