This is awesome.
I worked on this for something like 2 hours this afternoon, and finally tracked down all the nuances to get it working. I’m really pleased with the results, and hope that they can be of some use to you as well, because I could not find a decent tutorial on this subject despite extensive Googling.
The Problem: Connect to a remote filesystem over SSH
Odds are if you’ve stumbled on this tutorial, you already know the problem: you want to access a remote file system over SSH. You want to use FUSE SSHFS, and you don’t want to ever have to think about it, so you’re looking for Autofs integration. To keep this to the point, I’m going to skip over the installation of these packages and just explain the configuration, especially since installation is very distribution-specific. I’ll simply say that on my system (Ubuntu Feisty) it consisted of:
sudo apt-get install sshfs autofs
The Solution
Getting SSHFS to work with Autofs really isn’t hard; you just need the magic configuration. Here’s how I got things working for me:
- Set up key-based (public key) authentication from your local root account to the remote account on the remote machine. Generate a key pair locally with

sudo ssh-keygen

and append the public key to the remote account’s ~/.ssh/authorized_keys file.

- Test the key authentication by verifying that the following command does not prompt for your remote password:

sudo ssh remoteuser@remotehost uptime
- Test that sshfs can establish the requisite connection:

sudo mkdir /mnt/sshfs_temp
sudo sshfs remoteuser@remotehost: /mnt/sshfs_temp
sudo fusermount -u /mnt/sshfs_temp
sudo rmdir /mnt/sshfs_temp

Note that the : is required after the host to specify the remote directory. (A bare : means the remote user’s home directory; :/remote/path indicates a specific remote path.)
- Add the following line to your /etc/auto.master file:

/mnt/ssh /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost

where /mnt/ssh is the path you want all ssh automounts to appear in, 1000 is the UID of the user you want the sshfs mounts to belong to (i.e., be writable by), 1000 is the GID of that same user, and 30 is the timeout in seconds to keep the FUSE connection alive.

- Copy the following into a new file, /etc/auto.sshfs:

#
# This is an automounter map and it has the following format
# key [ -mount-options-separated-by-comma ] location
# Details may be found in the autofs(5) manpage
remote1 -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#remoteuser@remotehost1\:
remote2 -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#remoteuser2@remotehost2\:/remote/path
This creates two sshfs mappings (obviously, adding or removing lines creates more or fewer mappings). The first will be at /mnt/ssh/remote1, and map to the home directory of remoteuser on the host remotehost1. The second will be at /mnt/ssh/remote2, and map to the directory /remote/path on the host remotehost2, with the permissions of the user remoteuser2.
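In general, each line of the map follows the key [ -options ] location format from the comment above, so a third mapping is just one more line. As a sketch (this host, user, and path are made up for illustration):

```
remote3 -fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536 :sshfs\#someuser@anotherhost\:/srv/data
```

This would appear as /mnt/ssh/remote3 and map to /srv/data on anotherhost.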
Note the backslash characters used to escape # and :. These escape characters are what took me two hours to track down: FUSE requires a parameter of the form sshfs#user@host:directory, but autofs treats everything following a # as a comment, and the : character has a special meaning in maps. Both characters must therefore be escaped with a backslash (\).

- Restart autofs to reload the configuration files:

sudo /etc/init.d/autofs restart
- Test it out! As root or as the user indicated by uid above, run:

ls /mnt/ssh/remote1

You should be greeted by the contents of the remote file system. Congratulations!
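If you’re unsure which numbers to put for uid and gid in the auto.master line, the id command prints them for the current user:

```shell
# Print the uid/gid values to plug into the auto.master entry.
# Run this as the user who should own the mounts, not as root.
uid=$(id -u)
gid=$(id -g)
echo "uid=${uid},gid=${gid}"
```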
The Problems
- This exact setup only works for one user due to specifying a uid. This is fine for a home desktop system, but will likely need further work to allow multiple users access to the remote filesystem. Perhaps careful usage of gid could alleviate this problem, though logging into the remote machine as a specific user still represents a security risk.
- I have not examined the architecture enough since I am only seeking to enable my home desktop system, so I cannot vouch for the security of this setup whatsoever. For example, the use of the allow_other option for FUSE may have security consequences since the mountpoint is created as root (to my understanding, at least).
This guide really helped me out. I was searching all over the net for the proper configuration to get this setup :). I had to resort to setting the gid and uid in the auto.sshfs file, but that was trivial once it was mounting!
Thanks a bunch!
Cool!!!! I already had autofs running with a Samba share mounted, but I actually wanted the disk (connected to a wireless router at home running Unslung) to be available over the internet as well. Using Dropbear on the wireless router (WL-500g) that my USB hard disk is connected to, I can now mount it from my Ubuntu laptop anywhere I have internet access. I changed the timeout from 30 sec to 300 sec, so if I use a graphical file browser the folder doesn’t suddenly unmount when I don’t do anything within 30 secs…

Hmm, strange: the Samba mounts seem to be much faster at file transfers (also with autofs and smbfs), but the ssh mount seems to cache… on the third try of copying an mp3 file it suddenly went from 20 secs to practically zero!
Great story – I wrote a story on the same subject and used your document as a reference. There is, however, a small error in the line:

/mnt/ssh /etc/auto.sshfs uid=1000,gid=1000,--timeout=30,--ghost

Somehow the formatting is wrong: instead of two dashes there is only one long dash before “timeout=30”, and that doesn’t work.
Thanks for catching that, Thomas! WordPress pulled a fast one on me, automatically converting two dashes into an “endash.” I’ve disabled all that reformatting (per instructions in this WordPress support post, for the interested).

The upshot is, the code samples should work again if you copy & paste. Just to make sure I don’t lose this in a future update or something, double-check for yourselves that there are two dashes in front of the timeout option above, as Thomas notes. If that is correct, the rest will be, too.
Any thoughts on using a non-priv user’s ssh-agent to provide the key rather than using an unencrypted private key in root’s homedir?
According to the ssh-agent Wikipedia article, ssh-agent creates a socket in /tmp that could be used by root to decrypt an ssh challenge response.
So shouldn’t it be possible to have autofs do this? With FUSE and autofs you can pass any ssh option you want, so a good starting point would be to see if you can get the root user to open up ssh connections using the non-privileged user’s key.
After that autofs wildcards would be cool. The goal being a directory in my home dir where any directory you change into automatically attempts to make an sshfs mount point to that machine.
I think using afuse would solve a lot of problems here.
Take a look here.
Nice tutorial, really helps me a lot.
Thanks so much for this, I’ve been messing with SSHFS for a while now, this worked perfectly for me 🙂 Thanks a lot.
I just wish the SSHFS protocol was a bit faster 🙂 (I know server speed is involved, but it seems slower than it should be).
This is a great help. Thank you! It seems much better than manually doing sshfs then having it flake out and hang my apps… I added two options which (I discovered with manual sshfs mounting) make the transfer from remote systems MUCH quicker (at the expense of some encryption security), and solve some permissions issues by adding the phrase:
Cipher=”blowfish”,idmap=user,
in between all the other options in each line in /etc/auto.sshfs
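For concreteness, one of the tutorial’s map lines with those two options spliced in might look like this (remoteuser and remotehost1 are the tutorial’s placeholders; whether the blowfish cipher is accepted depends on your OpenSSH version):

```
remote1 -fstype=fuse,rw,nodev,nonempty,noatime,Cipher="blowfish",idmap=user,allow_other,max_read=65536 :sshfs\#remoteuser@remotehost1\:
```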
Grazie! (Thanks!)
Great! Thanks for the tutorial! It was of great help.
Of course I did some more digging… And even though I’m the only one using this on my computers, I did find a way to accomplish this for a multi-user environment, using executable maps. It assumes that the different users on your local machine are also different users on each remote machine, which to me seems natural. Also, same as with the original method, I don’t know how secure this is.
Anyway, in /etc/auto.master, I have:
/mnt/sshfs /etc/auto.sshfs --timeout=60
and in /etc/auto.sshfs, I have:
#!/bin/bash
# This file must be executable to work! chmod 755!
key="${1/%:/}"
user="${key/@*/}"
server="${key/*@/}"
mountopts="-fstype=fuse,rw,nodev,nonempty,noatime,allow_other,max_read=65536,follow_symlinks,uid=\$UID,gid=\$GID,UserKnownHostsFile=\$HOME/.ssh/known_hosts,IdentityFile=\$HOME/.ssh/id_rsa"
echo "$mountopts :sshfs\#${user}@${server}\:"
Note the $UID, $GID and $HOME (escaped so bash won’t perform substitution), which will be replaced by autofs with the relevant parameters of the user that requested the automount.
Also note that /etc/auto.sshfs must be an executable map:
chmod 755 /etc/auto.sshfs
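The parameter substitutions at the top of that script can be sanity-checked in isolation. This sketch just replays them on an example key (the hostname is made up):

```shell
# Simulate the key autofs would pass to the executable map, e.g. when
# a user runs: cd /mnt/sshfs/remotename@some.remote.system.com
key="remotename@some.remote.system.com"
key="${key/%:/}"      # strip a trailing colon, if any
user="${key/@*/}"     # delete from '@' to the end -> the user part
server="${key/*@/}"   # delete up through '@' -> the host part
echo "user=$user server=$server"
```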
Now, I can say as an ordinary user with local username soemraws:
cd /mnt/sshfs/remotename@some.remote.system.com
and through the magic combination of executable maps and variable expansion, I have my homedir as user remotename on the system some.remote.system.com, with the local UID and GID. Note that I use id_rsa as the identity file of the calling user, so all users should do the same. Of course, you could tell your users to symlink their identity to ~/.ssh/identity and use that in IdentityFile.
In my local homedir, I can make symbolic links to /mnt/sshfs/… and other users can as well. As you see, as long as two different local users are also two different users on the remote system, there is no clash of directory names in /mnt/sshfs, since the key is user@remote.
If you require tunnels to be setup, you can expand /etc/auto.sshfs to look for specific files in the user’s home dir. Since /etc/auto.sshfs is a bash script, the sky is the limit!
Hum, something went wrong there. In /etc/auto.sshfs, there should not be a newline after mountopts=”-
Also, the line with the echo should be in the same file, it’s not separate.
(Bad formatting on my part, sorry. Would be easier with a preview.)
Great timesaver! Thanks so much for putting this together!
Thanks, This worked like a charm, saved me lot of time.
Great stuff, works fine on Ubuntu 10.10
Would you be interested in turning this into a simple script for a few hundred bucks? The only actual human inputs required are the folder to mount and the SSH credentials. I just have a lot of these to set up and lack the skill to create a simpler solution… let me know.
Hi.
How does autofs + sshfs handle network disconnect and connect? I suppose when I turn on the computer, I can access these mount points immediately. What then happens if the network disconnects (cat5 cable is pulled)? Is the mount point now empty? If I reconnect to the network, is the mount point automatically fixed?
This was a great write-up. It took me a little bit to get it right, but I finally got it, and thank you!
First off, great article. This is my go-to guide when setting up sshfs on new servers.
Secondly, there is a not-very-well-documented behaviour of autofs that involves permissions.
If your mount is failing, and you get a “lookup for dirname failed” error when using the command sudo automount -f -v, try removing the executable attribute from the auto.sshfs file. Autofs will NOT work if this file is executable… not sure why, but that’s how it works.
I also wanted to share the line I used in my sshfs file. Using this method allows you to skip the process of adding the host key to known_hosts and messing about with the /root directory in general, by specifying the private key explicitly and disabling host key checking, and is for an ssh server listening on a non-standard port. Quick and dirty, but it works without too much fuss.
mount_dirname -fstype=fuse,rw,uid=33,gid=33,allow_other,IdentityFile=/home/graham/.ssh/id_rsa,StrictHostKeyChecking=no,port=19321 :sshfs\#graham@someremoteserv.com\:/
I noticed that in your post, every time you try and type the backslash character it does not show up. Ironic, since that was the one thing that took you two hours to fix 😛
Might want to update your post to include them.