Download a file over an active SSH session
I want to download a file from an active SSH session. In many cases I could probably just use SFTP, rsync et al., but there are times when I have elevated permissions on the remote server in a way that prevents me from using these methods.
If you’re struggling to understand what I mean, imagine that you wanted to download something from
/var/log/auth.log. Root login is disabled (because we’re not idiots). How do you get that file? Copy it out somewhere less protected and then move it? This is clunky. There are also scenarios where the remote path is complex or temporary, or isn’t even a path because I want the output of a remote command stored locally. Store remotely, then copy? Clunk!
There are several more clunky ways to achieve versions of these but in an ideal world, I would have something akin to local write access from the remote server, using the existing SSH session as a conduit. Something like (this is just an artist’s impression):
$oli@remote: cp /root/cheesecake /local/
And it just appears in my local cwd. And bidirectional access wouldn’t be a bad thing.
It’s been eight long years since I asked this question and we’ve seen a real range of clunk, but it remains a problem that I still struggle with occasionally.
I’ve refactored the question into something a lot more idealistic. I fully understand that there may not currently be a perfect answer. All past and future efforts towards my ideal are appreciated.
It isn’t over the active SSH connection, but scp copies files using the same mechanisms and permissions as does ssh.
This is not possible with a default ssh session, but you could use a wrapper script instead of plain ssh that starts something like a simple ftp or rsh server on your local system and runs ssh with the options needed to set up a tunnel back to your desktop for connecting to that server.
Assuming you’re running an ssh server on your desktop (there are ways around this, but I think they all add complexity and possibly have security problems), you can set up a reverse ssh tunnel. See SSH easily copy file to local system over at unix.SE.
- Type Enter ~C to open ssh’s command line.
- Type -R 22042:localhost:22 Enter to create a reverse port forwarding from your server to your desktop (22042 can be any port number between 1024 and 65534 that’s not in use).
- Run scp -P 22042 foo localhost: to copy the file foo in your current directory on the server to your home directory on the desktop.
- Now move the file into your current directory on the desktop by typing mv ~/foo . Enter.
Ssh escape sequences begin with ~; the tilde is only recognized after a newline.
~ Ctrl+Z puts ssh into the background.
~C enters a command line where you can create or remove a forwarding.
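If you use this trick regularly, the same reverse forwarding can be configured permanently in ~/.ssh/config instead of being typed at the ~C prompt each time. A minimal sketch, where myserver is a placeholder host nickname and 22042 is assumed to be free on the server:

```
# ~/.ssh/config on the desktop (sketch; "myserver" and 22042 are placeholders)
Host myserver
    RemoteForward 22042 localhost:22
```

With that in place, every ssh myserver session opens the tunnel automatically, and scp -P 22042 foo localhost: works from the server straight away.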
If you access the server via ssh, you can usually connect via sftp as well. Keep a FileZilla client (GUI) handy and paste in the path you are currently on.
You may want to check out zssh, which is available in universe and can therefore be installed with
sudo apt-get install zssh
You need it on your ubuntu server and on your client, but basically when logged in with zssh, you just hit ‘ctrl-@’ and it brings up the “File transfer mode” which allows you to send files back down the pipe to your client machine, or upload them from client to server.
However, you don’t have to re-auth or open a new window to scp.
If you’re using ssh keys and an ssh agent, you can quite easily press Enter ~ Ctrl+Z, which puts ssh into the background, and then just run
scp remote:/whatever/whatever .
Once the file is transferred, fg brings ssh back to the foreground.
If you aren’t using ssh keys, you can still use the “ControlMaster” and “ControlPath” options added in recent OpenSSH versions, but that gets tricky; check man ssh_config.
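A minimal sketch of what the ControlMaster approach looks like in ~/.ssh/config (the option names are real; the Host pattern, socket path and timeout are just example choices):

```
# ~/.ssh/config — share one authenticated connection between sessions
Host *
    ControlMaster auto
    ControlPath ~/.ssh/socket-%r@%h:%p
    ControlPersist 10m   # keep the master alive briefly after logout
```

Subsequent ssh or scp commands to the same host then ride over the already-authenticated master connection without asking for a password again.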
I came up with a way to do this with standard ssh. It’s a script that duplicates the current ssh connection, finds your working directory on the remote machine and copies back the file you specify to the local machine. It needs 2 very small scripts (1 remote, 1 local) and 2 lines in your ssh config. The steps are as follows:
Add these 2 lines to your ~/.ssh/config:
ControlMaster auto
ControlPath ~/.ssh/socket-%r@%h:%p
Now if you have an ssh connection to machineX open, you won’t need a password to open another one.
Make a 1-line script on the remote machine called ~/.grabCat.sh:
#!/bin/bash
cat "$(pwdx $(pgrep -u $(whoami) bash) | grep -o '/.*' | tail -n 1)"/$1
Make a script on the local machine called ~/.grab.sh:
#!/bin/bash
[ -n "$3" ] && dir="$3" || dir="."
ssh "$1" ".grabCat.sh $2" > "$dir/$2"
and make an alias named grab for ~/.grab.sh in your shell startup file (e.g. ~/.bashrc).
That’s it, all done. Now if you’re logged in to
machineX:/some/directory, just fire up a new terminal and type
grab machineX filename
That puts the file in your current working directory on the local machine. You can specify a different location as a third argument to “grab”.
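The only real logic in .grab.sh is the optional-third-argument handling; here is that fragment run locally, with no ssh involved (the argument values are made up for illustration):

```shell
# Simulate grab's arguments: $1 = host, $2 = file, $3 = optional target dir
set -- machineX notes.txt
[ -n "$3" ] && dir="$3" || dir="."
echo "target: $dir"        # no third argument: current directory (.)

set -- machineX notes.txt /tmp
[ -n "$3" ] && dir="$3" || dir="."
echo "target: $dir"        # third argument given: /tmp
```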
Note: Obviously both scripts must be executable, i.e. chmod u+x filename.
If your client machine (the machine you are sitting at) is called machineA and the machine you are currently SSH’ed into is called machineB, then machineA, your local machine, must have sshd running and port 22 open. Then:
scp myfile machineA:
Copies myfile on machineB to your home directory on machineA. This assumes the userid/password are the same on both machines.
scp myfile machineA:/newdir/newname
Copies myfile on machineB to /newdir/newname on machineA. This assumes the userid/password are the same.
scp MachineA:/path/to/my/otherfile .
Gets a copy of otherfile from your home directory on machineA and puts it in your current working directory on machineB (designated in standard UNIX fashion by the “dot” (.) character). This assumes the userid/password are the same.
If the userid/password are not the same then use:
scp myfile user@machineA: to put a file.
scp user@machineA:/path/to/my/otherfile . to get a file.
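If the usernames differ, you can also record the remote username once in ~/.ssh/config on machineB, so that the plain forms without user@ work again (a sketch; machineA and user are placeholders):

```
# ~/.ssh/config on machineB
Host machineA
    User user
```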
NOTES about SCP:
Just like cp, scp has a -p option to propagate the permission settings of the original file to the copy (otherwise the copy is made with the normal settings for new files), and a -r option to copy an entire directory tree with one command.
scp creates a completely transparent encrypted data channel between the two machines, so binary data (such as images or executable programs) is preserved correctly. This also means that
scp is unable to perform automatic end-of-line termination conversion between different types of operating systems, as can be done with ftp in “ascii” mode. That will not be a problem when copying between Unix systems, which all use the same end-of-line convention.
Konsole has that ability via the “Edit → ZModem Upload” menu while you are in a remote session (or Ctrl+Alt+U).
Please note: the package lrzsz must be installed first.
For me it seems to work only for uploading ASCII files.
If your file is small enough you can encode it with base64 and then decode it locally:
remote.example.net$ base64 <myfile (copy the output)
local.example.net$ base64 -d >myfile (paste the output, then press Ctrl+D)
Original answer where I got this (and tested out) from: https://unix.stackexchange.com/a/2869/194134
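You can rehearse the whole round trip locally to convince yourself nothing gets mangled (a sketch; myfile here is a throwaway test file, not a real log):

```shell
# Create a stand-in "remote" file
printf 'Jan  1 00:00:00 host sshd[42]: Accepted publickey\n' > myfile
# Step 1 (remote): encode — this is the text you would copy
encoded=$(base64 < myfile)
# Step 2 (local): paste and decode
printf '%s\n' "$encoded" | base64 -d > myfile.copy
# Verify the copy is byte-identical
cmp myfile myfile.copy && echo "files match"   # → files match
```

With GNU coreutils, base64 -w0 on the remote avoids line wrapping for larger files; for anything big, though, the tunnel-based answers above are less painful.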
Since you are connecting from a desktop, I guess you can open a second terminal.
This is how I often do it:
- from the first terminal, the one where the ssh session is running, I get the full path of the file I need, using either realpath myfile or readlink -f myfile (older Ubuntu releases don’t preinstall realpath), and copy it.
- from the second terminal I use scp (or sftp) to get the file, pasting the full path I got before. For example:
scp user@host:/etc/some/file ./
It’s quite basic, but it’s also easy to remember and doesn’t need any extra package to work.
Surprisingly, I don’t see any mention of good old Midnight Commander here. To my mind it’s probably the most universal and useful file manager for the shell, with real power-user capabilities, and one of the “must have” tools for this case. It also lets you connect through SSH, FTP and SFTP, while on the second panel you can open any other (your local) filesystem and so work with files freely.
All you need is: apt-get install mc (from universe)
After that, run mc, open the menu for the left or right panel, choose the shell connection, enter username@remote-ip and the password – that’s it: copy (here: download) files/folders from one machine to the other with F5, move with F6, and so on according to the buttons below. For old MS users: just like NC for DOS.
WAY overthinking this, folks. I was looking for all the deep, dark, complex answers too. It turns out, you can do this right from Dolphin straight out of the can. fish://username@server:port
this answer is work in progress
Instead of directly copying the file, you can generate a link that downloads the file over sftp with just one click.
On the remote, create a script called get_download_link.sh:
#!/usr/bin/env bash
file="$1"
uri="sftp://$USER@$HOSTNAME/$(realpath --relative-to="$HOME" "$file")"
echo sftp '"'"$uri"'"'
printf '\e]8;;scp://odcf:'"$(realpath "$file")"'\e\\'"$file"'\e]8;;\e\\\n'
On the local machine, make sure your terminal app supports OSC 8 hyperlinks, and that your version of sftp supports URIs and is registered as the default handler for sftp:// links.
Then whenever you want a file, run get_download_link.sh "<path to file>" and click on the link.
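To check whether your terminal supports OSC 8 at all, you can print a hyperlink by hand; this sketch links the text cheesecake to a made-up sftp URI (harmless on terminals without support: you just see plain text):

```shell
# Build an OSC 8 hyperlink: ESC ] 8 ; ; URI ESC \  text  ESC ] 8 ; ; ESC \
uri="sftp://user@host/home/user/cheesecake"   # hypothetical example URI
link=$(printf '\033]8;;%s\033\\%s\033]8;;\033\\' "$uri" "cheesecake")
printf '%s\n' "$link"
```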
TODO: figure out where the file goes and what would be desired here (do we want to specify the local target dir on the remote?)