Amateur Fortress Building in Linux
by Sander Plomp
A while ago I installed Linux on my home system, and since it's connected to the Internet I had to secure it. The average distro comes with a nice set of security holes you've got to plug first. You know the routine: edit inetd.conf and comment out all services you don't need ....
I got bored with articles telling you to edit inetd.conf too. So I'm not going to do it the official way - I'm going to do it my way. I'm an amateur at this, so I don't know how good my way is. It might be wonderful or it might be awful, but at least it's uncommon.
The first step is flushing inetd down the drain and replacing it with tcpserver.
Before going on to the obvious 'why', I think it's only fair to warn you that
this is beyond editing some
config files after admonishing you to back them up first.
Proceed at your own risk.
It's going to get a lot worse than just replacing inetd.
Why replace inetd with tcpserver? Tcpserver gives you roughly the same
functionality as inetd, although it's configured quite differently. To be
honest I prefer inetd.conf's simple table format, but tcpserver has one feature
that inetd is missing, and that I really want. It allows you to specify on
which local address and port to run a service.
Before you say, "um, I'm pretty sure inetd runs fingerd on port 79, FTP on 21
and so on", the question is: which port 79 would that be? For example, for a
system connected to both a private network (say, as 10.11.12.13) and a public
one (as, say, 192.0.2.14), there is both 10.11.12.13:79 and 192.0.2.14:79,
and they are different ports. There is also 127.0.0.1:79, which is the local
machine and cannot be accessed from the outside.
Inetd, like most daemons, uses INADDR_ANY as the local IP
address, which means
it listens on all of them at the same time. Tcpserver allows you to specify
the local address, which means it can run e.g. fingerd on the local net only,
simply by binding it to 10.11.12.13:79 rather than to *:79.
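As a sketch of what that looks like (assuming your distro ships a finger daemon called in.fingerd; tcpserver takes the local address and port before the program to run):

```shell
# Serve finger on the private interface only.
tcpserver -RHl0 10.11.12.13 79 in.fingerd

# For comparison, binding to all interfaces (inetd's behavior) would be:
#   tcpserver -RHl0 0 79 in.fingerd
```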
I'm weird. If I want to keep the bad guys away from services I've intended for
local use only, I don't want to do it by having a firewall shooting down the
incoming packets or by a source host validation mechanism kicking them out
after the connection has been made. I want to do it by simply not listening on
public ports in the first place. There will be a firewall in front and access
control after the connection is made, but they are extras, not the main
defense.
Note that it is the destination address of the incoming packet, rather than the
source address, that is used for access control here. A server listening on
10.11.12.13:79 will not respond to a connection attempt made to 192.0.2.14:79.
The idea behind this is that if you're only listening on a private network
address it becomes rather hard for an attacker to send anything to that
service. The public Internet cannot route packets directly to a private
address like 10.11.12.13; such packets are dropped at the routers. An attacker
could possibly use source routing to get the packet on your doorstep;
/proc/sys/net/ipv4/conf/*/accept_source_route should be set to 0 to reject
such attempts. This is the default in most Linux setups.
This method is complementary to the usual checks based on the source address of
a connection, such as the checks done by TCP wrappers. Both methods can (and
probably should) be used at the same time. Tcpserver conveniently has such a
mechanism built in, using the -x option. For source checking to work,
/proc/sys/net/ipv4/conf/*/rp_filter should be set to 1 (it is off by
default!) so that the kernel checks that a packet arrives on the interface
where packets from that source address are expected. It won't prevent all
spoofing, but at least, for most setups, something coming from the public
Internet can't pretend to originate in your private network.
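To make both settings concrete, here is a sketch; the /proc paths are as found on 2.2/2.4-era kernels, so check them against your own system:

```shell
# Reject source-routed packets on all interfaces.
for f in /proc/sys/net/ipv4/conf/*/accept_source_route; do
    echo 0 > "$f"
done

# Enable strict reverse-path filtering on all interfaces.
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    echo 1 > "$f"
done
```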
As you may have guessed by now, I'm limiting myself to very simple setups:
small, private networks connected to the big bad Internet through a gateway
machine. It's a typical home network setup and it's the case I had to deal with.
I'm not trying to secure amazon.com; they should hire professional fortress builders.
How useful is it to listen only on a specific network? When I started working
on this, script kiddies were taking over DSL and cable modem connected Linux
boxes by the score using a root exploit in the named daemon. The obvious
question (after "Why didn't the victims just use their ISP's name servers",
"why does named have to run as root the whole time", "why doesn't the default
configuration run in a chroot jail", and a few other questions) is: "why is
named accepting connections from the Internet at large?". For most home users,
if they even knew they were running a name server, it was used as a
simple caching name server, with no need to provide services to the outside world.
For a single host, it could have been listening on 127.0.0.1 and would
have worked just fine for the user; for our small
example network it would at most need to
service net 10.0.0.0. If set up like that, a
port scan from the outside wouldn't
find anything on port 53, and it could not be attacked from the outside. Many
other services are similarly intended for local use only and shouldn't be
listening on outside ports.
So listening on the private network only would be quite useful, although named
doesn't actually run from inetd. In fact DNS is mostly a UDP protocol, so here
this example falls completely apart. But as I'm writing this, most people have
upgraded bind to the next version and wu_ftp is the new exploit du jour. It
does run from inetd.
Let's install tcpserver first. We will deal with named later.
The place to get tcpserver is cr.yp.to,
the author's web site. The author
is Dan Bernstein, best known for Qmail. The tcpserver program is part of a
package called ucspi-tcp, which also contains a 'tcpclient' and a bunch of
little helper programs. I'm not going into details on all the options and how
to use them, just download the documentation and read it. The only hint I'm
giving you here is that when testing, use -RHl0 among the options of tcpserver;
otherwise you get mysterious pauses while the program tries to use DNS and
identd to get details on the remote connection.
While tcpserver and inetd implement roughly the same functionality, they each
have a completely different floor plan. I'll try to give a high level view of
the differences, assuming the reader is familiar with inetd.
Inetd uses a configuration file (inetd.conf) which tells it on which
service ports to
listen, and which program to start for each port. Normally a single inetd
process is running that splits off child processes to handle each incoming
connection.
Tcpserver listens only to a single service port. Normally there is one tcpserver
process for each individual service port. Tcpserver does not use a
configuration file, instead command line options and environment variables
are used to control it. For example, to change to a different user after
the connection is made you set environment variables $UID
and $GID to the numerical values for that user and group, and
give tcpserver the -U option
telling it to use those variables. To make it easier to set those variables a
helper program called envuidgid is included in the package. It will
set $UID and $GID to those of a given account name, and then
exec another program. So you get invocations like:
envuidgid httpaccount tcpserver -URHl0 10.11.12.13 80 myhttpdaemon
where envuidgid sets those variables with values for user httpaccount,
calls tcpserver, which waits for a connection on 10.11.12.13:80,
switches to user httpaccount
and invokes myhttpdaemon to handle the connection. This may seem rather
contrived, but in many ways it's keeping in style with the classic UNIX way
of small programs strung together by the user. There are several little
helper programs that, in much the same way, set up something and then run
another program in that environment. It takes getting used to.
Normally inetd is paired with tcpwrappers; inetd itself doesn't care who
connects to it but the 'wrapper' checks hosts.allow and
hosts.deny to see if the
connection should be allowed. There is no reason why you couldn't use TCP
wrappers with tcpserver, but it has a similar mechanism built into it: the -x
option. Rather than a global pair of hosts.allow and hosts.deny files that
contain the rules for all services, each tcpserver instance has its own
database of rules. These databases are in a binary format created from a text
file by the tcprules program.
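As an illustration, a minimal rules database for the -x option might be built like this (the file names and the 10. prefix for the private net are my own choices):

```shell
# Allow the private network, deny everyone else.
# tcprules reads rules on stdin and writes the cdb atomically
# via the temporary file given as its second argument.
echo '10.:allow
:deny' | tcprules finger.cdb finger.tmp

# Then point tcpserver at the database:
#   tcpserver -x finger.cdb 10.11.12.13 79 in.fingerd
```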
In the end, inetd and tcpserver give you roughly the same functionality,
they're just organized completely differently. This makes switching from one to
the other quite a bit of work. For one thing you need to disable inetd and add
a series of tcpserver startups to whatever mechanism you use to start services.
Then, for each individual service, you have to figure out how it's set up
in inetd and construct the equivalent for tcpserver. Note that tcpserver only
handles TCP services; if you use any UDP or RPC based services in inetd.conf
you will have to keep inetd around or find some other alternative.
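A hedged sketch of such a translation, using a typical distro FTP entry (paths and daemon flags will differ on your system):

```shell
# inetd.conf line as shipped by many distros:
#   ftp  stream  tcp  nowait  root  /usr/sbin/tcpd  in.ftpd -l -a
#
# A rough tcpserver equivalent, bound to the private address only:
tcpserver -RHl0 10.11.12.13 21 /usr/sbin/in.ftpd -l -a &
```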
In the end, what does this all achieve?
The output of netstat -an --inet for a system running some inetd services
is shown below. They all use 0.0.0.0 as the local address.
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:113 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:79 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:110 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:23 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:21 0.0.0.0:* LISTEN
raw 0 0 0.0.0.0:1 0.0.0.0:* 7
raw 0 0 0.0.0.0:6 0.0.0.0:* 7
With tcpserver they can get a different local address. In the example below
three services are configured to listen on 10.11.12.13 only. Note that there
is a second http server running on 127.0.0.42. All 127.x.x.x addresses are
on the local machine. Both servers can use port 80 since they listen
on different addresses. There is no reason why they can't be different
programs; this allows you, for example, to run a very secure one on ports
exposed to the public Internet and a full featured (but more difficult to
secure) one on the internal network.
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.42:80 0.0.0.0:* LISTEN
tcp 0 0 10.11.12.13:80 0.0.0.0:* LISTEN
tcp 0 0 10.11.12.13:21 0.0.0.0:* LISTEN
tcp 0 0 10.11.12.13:79 0.0.0.0:* LISTEN
raw 0 0 0.0.0.0:1 0.0.0.0:* 7
raw 0 0 0.0.0.0:6 0.0.0.0:* 7
Enter Dan Bernstein
When I started getting serious about security I skimmed a three month backlog of
comp.sys.linux.security. The three most common subjects were:
1. There's some weird files on my system and named isn't working properly.
2. Help! My Apache runs as 'nobody'! Does this mean I've been cracked?
3. @#$!^&^%$ ipchains *&^@$!*&^
The answer to (1) is: "Latest BIND root exploit, disconnect, wipe, reinstall,
upgrade BIND, stand in line with all the others..."
The answer to (2) is that Apache is the only one that's got a clue.
(3) is one of the reasons I don't believe in firewalls.
But I'm running ahead of myself - we're still at
cr.yp.to and we might
as well look around. You younger kids pay attention: this is a text-only web site,
super fast even over slow links. No banner ads, no eye candy - just text
and links. You don't see that much these days.
It's fun looking around. As it turns out, Mr. Bernstein has decided to rewrite
most Internet related standard software. There's replacements for, among other
things, dns, http and
FTP software, and of course Qmail. The nice thing about it (apart from the fact
that he has mastered the electronic equivalent of writing nasty comments in the
margin) is that all of it is designed with security in mind.
Most Internet software on your computer goes back a long time. It was
conceived in a period when the Internet was a small, friendly, village community
rather than the porn and violence soaked world wide ghetto it is today. Over
time, that software was improved, extended, ported, converted and
just about anything else
except rewritten from scratch. From a technical point of view the software
often has basic design flaws that we've learned to work around. From a security
point of view, they contain serious design flaws that can't be worked around.
This is the reason for item (2) above. Way too many daemons run as root, and can
only work as root. Others require use of suid root programs to work. This is
nasty. It turns every minor bug into a potential root exploit. If Apache can
run as 'nobody', why can't the others? A dns system is just another simple
database application, a mail program is just something shoveling messages
around. Does it really require godlike powers to send something to the printer?
With a bit of care, most of these tasks can be organized in a way that little or
no super user powers are required. Usually this involves giving the service
its own user account, and giving that account access rights to all data and
services needed for the task. For various reasons (typically, to access ports
below 1024) some root access is required; with good design this can be done in
a way that super user rights are dropped early in the program. There are great
advantages to this; when there is a bug in a program only that particular
service is compromised. Although you won't want someone to compromise any of
your services it's not as bad as them taking over the entire system.
Dan Bernstein's software takes this approach very far. Often a single service
uses multiple accounts, e.g. one for the actual service and one for
logging - even if an intruder gets into the service she can't wipe the logs.
If possible the data for the service is in a subtree, and the service chroots
into that subtree before it starts serving. This puts yet another barrier
between a compromised service and the rest of the system. Perhaps most
importantly, the programs are generally small, performing a single well defined
task - the purpose is not to have any bugs in the program to begin with.
Remember item (1) on my list - at one time BIND (the named daemon) was the
way into just about every DSL or cable modem connected
Linux box in the world, not so much because it had a bug but because it runs
as root and every bug is deadly. So I decided to install Bernstein's
dnscache program. I mean, it's not like it can get any worse.
In its default setup, dnscache runs as user dnscache. It chroots
into its own subdirectory before it starts servicing requests. It listens to
port 127.0.0.1:53, and even if you configure it to listen on another address
you still have to explicitly tell it which hosts and networks it
should provide service to. All in all it makes it fairly easy to hide it from
the outside world, and to limit the damage it can do when compromised. And this
is the default setup we're talking about. I'm sure you can use at least some of
these tricks to protect bind, but few people ever get around to doing that.
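A minimal sketch of that default setup, using the package's own dnscache-conf helper (the dnscache and dnslog account names follow its installation docs; the address is our example private one):

```shell
# Create the service directory: arguments are the service account,
# the logging account, the directory, and the address to listen on.
dnscache-conf dnscache dnslog /etc/dnscache 10.11.12.13

# Allow clients from net 10: dnscache serves only clients whose
# address matches a prefix file under root/ip/.
touch /etc/dnscache/root/ip/10
```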
One of the other features of dnscache is that it only does one thing: cache.
It's part of a package that contains several other programs, which implement
different kinds of dns servers. You can have several of them running at the
same time, each listening on port 53 of a different address, each in its own
chroot jail under its own user account. If you just need a simple caching
server, that is all you run, and you need not worry about bugs in the other
programs.
There's much more interesting stuff at cr.yp.to. The publicfile package contains
FTP and http daemons. They're limited: the FTP daemon is only for anonymous
read-only access, and the http daemon just serves static pages. But if that is
all you need, they're nice secure daemons that run under tcpserver and use
unprivileged accounts and chroot jails to protect the system they're running
on. Given that serious exploits have been found very recently in both wu_ftpd
and proFTPd, they suddenly look very attractive.
Another package is daemontools, a system for automatically starting daemons,
keeping them running, as well as controlling and logging them. It's familiar
functionality implemented in a completely different way. It works using a
/service directory that contains symlinks to directories for
each service you want to have running. These directories have certain
standard files and scripts that are used to start and control the daemon. I
don't think there's any particular security benefit to it; it's just something to
look into if the Snn and Knn files start coming out of your nose.
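For the curious, wiring a daemon into /service looks roughly like this (the service and daemon names here are hypothetical):

```shell
# A service is a directory with an executable 'run' script that
# execs the daemon in the foreground (it must not fork into the
# background, since supervise watches the process).
mkdir -p /etc/myservice
cat > /etc/myservice/run <<'EOF'
#!/bin/sh
exec /usr/local/bin/mydaemon
EOF
chmod +x /etc/myservice/run

ln -s /etc/myservice /service/myservice  # svscan notices it within seconds
svstat /service/myservice                # report the service's status
svc -t /service/myservice                # send TERM so supervise restarts it
```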
Then there's Qmail, a replacement for sendmail. Normally sendmail runs as a root
daemon. If invoked by a user, it's suid root. Qmail does neither, except for a
bit of setup to listen on port 25. There are
several other mailers that similarly avoid running as root as much as possible.
Pretty much any of them will be safer than sendmail, because not every exploit
automatically becomes a root exploit.
What ports are actually listening on a typical system? Quite a lot actually.
Since most distros automatically activate every service the user has installed
it is common to find sendmail, bind, linuxconf, FTP, apache and telnet running
all at the same time, despite the fact that the user isn't actually using them
remotely. All of them are significant security risks. Every Linux install
should be followed by a nice round of disabling servers.
The name of the game is "netstat -nl --inet" and the purpose is
to get it as empty as possible without losing functionality you actually
use. I play my own version of this game: nothing should have 0.0.0.0
as the local IP address unless it really is intended to listen to connections
from the whole world. Internal stuff should be on 127.0.0.1, the
private net on 10.11.12.13, you get the picture.
Disabling unused services is easy. It's the ones you are using that give most
trouble. Like port 6000. It's a large, bug ridden server running as root: X.
Problem is, many people are actually using X.
The designers of X made it a real network windowing system. A lot of effort went
into making things so you could run your program on BigDumbBox, your display
manager on LittleSmartBox while using the display on ShinyNewBox.
Unfortunately, very little effort went into preventing this from actually
happening.
I've never actually used X over the network. In fact I'm pretty sure that if
the opportunity ever comes up it will be easier to find out how to do without
it than it will be to find out how to make it work. Still, X is listening on
port 6000 for anyone in the world who'd like to have a chat with it. The display
manager (xdm or some relative) opens up its own port 1024 (udp) in case
someone somewhere in the world would like to have her
xterminal managed by your home computer.
X has its own mysterious access control mechanism. It'd better work: if not,
every spammer can flash his ads in full color right in your face. It'd better
have no buffer overflow bugs either.
Let's cut this short. One gruesome night of crawling through man pages,
newsgroups, deeply hidden scripts and obscure configuration files reveals that
in fact the only one not using port 6000 is the local user. On the local
machine, domain sockets are used. Even better, X's TCP port can be banished,
if you just know how. Ditto for xdm's UDP port.
The magic words are to add -nolisten tcp to the incantation that
actually starts the xserver (you can add gamma correction here too, btw.).
This will close port 6000. The xdm port is closed by adding
-udpPort 0 to its startup command. Now you only have to find those commands.
You'll get an interesting tour of the file system while doing so, since X is
a collection of aliases that call scripts that call managers that call other
scripts that activate daemons that start servers, all under the control of
various configuration files.
In my case I found the command to start the X server in
/etc/X11/xdm/Xservers; which of xdm/kdm/gdm actually gets run is decided
deeper down in the startup scripts.
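For illustration, the relevant lines might end up looking like this (the Xservers path and the X binary's location vary by distribution):

```shell
# /etc/X11/xdm/Xservers -- close X's TCP port 6000:
#   :0 local /usr/X11R6/bin/X -nolisten tcp

# Wherever xdm itself is started, close the XDMCP UDP port too:
xdm -udpPort 0
```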
The next thing you'll find that has an open port, but that you can't just shut
off, is lpd, the line printer daemon. Behold lpd. It runs as root. It listens
to port
0.0.0.0:515, that is, to anyone in the world. It will happily
invoke programs like ghostscript and sendmail, with input
depending on the print job. It allows
the other end to determine the name of the files used to queue the job.
Basically, it's deja-vu all over again.
To protect the print queue, lpd will only accept connections with source
ports in the range 721-731. This guarantees the other end of the connection is
under the control of 'root' on that machine. This made sense in the early days
of the Internet, when there were few Unix machines and they were all under the
control of trustworthy system administrators. These days that offers very
little protection, and in fact it horribly backfires. Because of it any
program that communicates with the printer daemon has to be suid root, and
becomes itself another security risk.
The only serious protection comes from /etc/hosts.lpd, the list of hosts
lpd is willing to communicate with. A big task for a little file. Yes, there
have been remote root exploits in lpd. Quite recently actually.
Lpd has to go. There are several potential replacements, most of them striving
to be bigger, better and more powerful. It's much harder to find one that is
reasonably simple.
I've decided to switch to a little known print system called
PDQ. It's not
particularly sophisticated, but it's got the funniest web site of the lot.
As before, I won't go into detail on installing it; that's what documentation
is for. I will however, try to explain how it works, since it is very different
from the way lpd works.
Lpd takes its work seriously. Through system crashes and reboots, if you give
it a print job it will print it, or perish in the attempt. The way pdq sees it,
the only thing worse than your jobs mysteriously dying en route to the printer
is to have them unexpectedly come back from the grave three days later. PDQ
makes an attempt to print. If it can't get it printed soon it just gives up.
Lpd's approach made a lot of sense when many people shared a single printer
that rattled for hours on end to work its way through a large backlog of
print jobs. But that situation is rare these days. On most home
networks printers are mostly idle. It's a decent guess that if a job couldn't
be printed in half an hour, then by that time the user has either printed it
again, figured out where the problem is, or decided to
live without the printout.
In such an environment pdq's approach makes a lot of sense, and avoids
accidentally printing the same thing six times over.
Pdq doesn't use a daemon. Instead each print job becomes its own process, which
tries for a while to get printed and then gives up. To give someone the right
to print to the local printer you will have to give them rights to /dev/lp0.
Now that I come to think of it, this makes an incredible amount of sense. Simply
create a printer group, let it group-own /dev/lp0, and put everyone who
may print in that group.
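A sketch of that setup, assuming a group called 'printer' (run as root; option spellings may vary between versions of the shadow utilities):

```shell
# Create the group and hand it the printer device.
groupadd printer
chgrp printer /dev/lp0
chmod 660 /dev/lp0       # owner and group may read/write, others may not

# Add a user to the group (-a appends rather than replaces; on older
# systems you may need to edit /etc/group by hand instead).
usermod -a -G printer alice
```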
The only security risk in pdq is a set of suid-root programs it uses to be
able to print to remote lpd systems. Whatever you do, remove the suid bit
from those programs. Once that is done, pdq is reasonably secure. It has
no daemons, doesn't listen on the network, and has no other suid programs.
Mail and News
Traditional UNIX systems are multi user oriented. Their email and news services
are designed with the assumption that the machine has a large number of users.
Typically this involves a news or email service running on the local machine,
providing service to a large number of users. Communication between machines is
traditionally peer to peer. For example, any machine could decide to send
mail to any other machine.
The typical MS windows 95/98 setup is quite different. A single user
per computer is
assumed. If you really want you can do sort of a multi user setup, but so much
software only allows a per system rather than a per user setup that this is
usually more trouble than it's worth. The communication between machines is
traditionally much more client-server like. Machines collect their mail from
the mail server or send out mail to a mail server. They typically do not send
mail directly to each other.
The current Internet infrastructure is based on the typical home user having a
windows setup and the ISP providing the necessary servers for them. I'm talking
about mail and news here; various instant messaging systems, by contrast, do
use direct communication between individual machines.
Security-wise the Windows setup has one major advantage: all communication for
mail and news is initiated from the home PC, and it does not need to make any
service available to the outside world. Any service that isn't there can't be
used to attack you, to relay spam or for other malicious tricks. The downside
is, of course, that your email hanging around at your ISP is no safer than
their servers are.
The funny thing is that graphical desktops like Gnome and KDE have email and
news programs that typically follow the Windows model of using a remote server,
while at the same time many distros also install the traditional local email
and news servers. These servers aren't actually used by the owner of the box,
but enjoy wide popularity with spammers and script kiddies.
For a home network, there are three ways to put some method to this madness:
1. Make the traditional UNIX model work, and point the graphical email and news
clients to servers on the local network.
2. Shut down the servers on the local network and simply work with clients of
the ISP's servers.
3. A hybrid method that uses a traditional UNIX setup on the local network, but
communicates with the outside world in the Windows style.
Option 1 requires a permanent Internet connection and a static IP address to
be practical, as well as your own domain name. Without these, the outside world
can't access your mail server. Many home oriented ISPs aren't willing to provide
this, or only at inflated business rates. If you can get it, it is workable, but
you'll have to secure things yourself, e.g. install a safe mail server
like Postfix or Qmail, protect yourself from spammers, and keep an eye on
things. The advantage is that you can be very flexible in how email and news
are handled on your network. With the flexibility comes complexity and more
work to get things going.
Option 2 has the advantage of being very simple and secure. It's simple because
for your ISP it's just like another windows box and many graphical email
clients are configured just like their windows counterparts. It's secure
because you don't need any servers running on your computer to make it work.
The disadvantage is that you lose a lot of flexibility. To keep things simple
you have to always read email on the same machine, for example.
Option 3 uses programs like fetchmail or getmail to collect mail from the ISP
and then hand it over to the local mail system on the network. To the ISP, you
look like just another windows box picking up mail and news from time to time.
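As a hedged example, a minimal ~/.fetchmailrc for this arrangement might look like the following (the ISP hostname and account are placeholders; the file must be mode 600 or fetchmail refuses to run):

```shell
# ~/.fetchmailrc -- poll the ISP's POP3 server and hand the mail
# to the SMTP server on the local machine.
poll pop.example-isp.net protocol pop3
  username "sander" password "secret"
  smtphost localhost
```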
For news, I found it easy enough to setup leafnode to act
as the local news server, serving
only to the local network. For mail, things can get really complicated. In a
perfect world, you'd have an IMAP or POP3 server running on one of your
machines and you'd read mail online from that. You can then access your email
from any machine on the network - even Windows machines. You'd also need your
own SMTP server to handle outgoing mail from the local network. You really need
to know what you're doing here which can be a major hurdle for people like me
who basically don't.
The advantage is that you can make things as flexible as you want. It's also
quite secure, since all servers serve only the local network, and with tcpserver
you can keep them invisible to the outside world. The disadvantage is that
there really is a lot of stuff to setup and configure. After spending a few
hours trying to decide how the ideal mail setup for my home network
would look like, I took two aspirins and decided that, until it becomes a
problem, I'll let my mail reader pick up my mail from my ISP's POP3 server and
send out mail through my ISP's mail server, just like the rest of the world.
The standard way of sharing files is NFS. It's in all major Linux
distributions, it's the defacto standard, and it goes a long way back.
I've heard some things about NFS security, none of it very pretty.
NFS does not give me fuzzy feelings. It's too much a case of make it first and
try to secure it later.
Worse, NFS seems to depend on a whole series of other acronyms, such as RPC and
NIS. All these come with their own history of serious security problems. I'd
like to avoid these, and security isn't even the main thing. It's yet another
thing to figure out, track down the documentation, secure it, administrate it.
I have no other services that need it, and if I could flush it down the toilet
I'd be a happy man.
No NFS for me. So what are the alternatives? There are several other
network file systems, such as CODA and the Andrew File system. But the question
is, do I want a real network file system for my home system? Is it worth
spending another three evenings doing web searches, hunting down documentation,
figuring out what works and what doesn't, and what's secure and how to install
it? After all, all of my machines dual boot, all of them are regularly switched
off, rebooted or reconfigured, and hardware surgery is not unheard of. In
such a setup you want systems to be as autonomous as possible. In this type of
environment, for the purpose of data exchange, the common 3.5 inch floppy
will typically outperform network file systems designed to meet the needs of
large organizations.
Which brings us to Samba. After all, Windows networking was designed as a
simplistic peer to peer network between a few PCs on a local network. Every few
years, the enterprise class operating system from Redmond discovers yet again
that it took the wrong exit on the road ahead, and adds yet another
new authentication mechanism (Shares. Servers. Domains. Active Directory.)
Samba has many advantages, not the least of which is that you can use it from
Windows machines too. Also, unlike NFS, the Samba on my distribution came with a
significant amount of documentation. While Samba works well, the ability to
inter operate between the Unix and Windows environments, and the continuous
changes MS makes to the protocols, makes setting it up neither easy nor quick.
Another alternative is to use NetWare style networking, but it faces the same
kinds of problems and isn't trivial to set up either.
I've taken it one step further. Many modern file browsers can access http
and FTP sites as easily as local disks. Command line utilities like wget
also provide easy access to remote data. If all you need to do is transfer the
occasional data, rather than mounting home directories, bin directories etc.,
then an FTP or web server providing read-only access to the local network is
enough for me. I simply put my files in a public directory on one machine and
read them from the other.
Note that this approach has many security problems. FTP is a fundamentally
broken protocol using clear text passwords. It's really only suited for
unauthenticated read only access. I found out that I very rarely need to move
secret data around on my home network, and if so, I will use encryption. I use
publicfile as ftp/http server; it provides only read access and
goes through a lot of trouble to keep access limited to a particular directory
subtree. No passwords are ever sent, since the FTP server only provides
anonymous access. Both the server and the user's client should run under
non-root user ids, and the servers should listen on the local network only.
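A sketch of how the two servers can be run, based on publicfile's conventions (the ftpd/httpd account names and the /public/file root follow its installation docs; verify against the real documentation):

```shell
# envuidgid sets $UID/$GID for the named account; tcpserver's -U
# option makes it drop to those ids after accepting a connection.
# Both servers bind to the private address only.
envuidgid ftpd  tcpserver -URHl0 10.11.12.13 21 ftpd  /public/file &
envuidgid httpd tcpserver -URHl0 10.11.12.13 80 httpd /public/file &
```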
I realize this approach is highly specific to my particular situation. I do not
care that much about the security of the data transfered. I mostly care about
the risks of someone abusing the communication mechanism for an attack. Since
file servers typically hook in at the operating system level, they provide
more possibilities for attack than a service running in userland with
limited privileges. This is why I prefer this setup. For other people this
might be quite different, especially if you have a good reason to mount a
filesystem on a remote machine, or if you have sensitive data you want to
protect.