• Network file systems and Linux: Network File System (NFS)

    NFS allows you to share directories on a Unix machine. The insecurity of NFS (and of NIS in particular) stems from RPC: in terms of the number of exploits, RPC seems to be the unofficial leader (with the exception of Sendmail). Since these protocols are intended for internal networks, they have to be protected from "their own" users. Before using them, however, you need to decide whether they are really needed.

    In a home network they can be quite useful, but in a corporate network it is better, for security reasons, to find a safer alternative.

    NFS file system.

    The Network File System (NFS) was developed by Sun as a means of accessing files located on other Unix machines within a local network. NFS was not designed with security in mind at all, which has led to many vulnerabilities over the years.

    NFS can run over TCP or UDP and uses an RPC system, which means the following commonly vulnerable applications must be running: portmapper, nfs, nlockmgr (lockd), rquotad, statd and mountd.

    If possible, do not run NFS at all and find an alternative solution. If NFS is still needed, below we discuss how to minimize the risk of using it.

    /etc/exports

    The first step is to select the machines that will export their file systems. Then you can determine which machines are allowed to connect to the NFS server (or servers) on the network. NFS should not be used on machines directly connected to the Internet. Once the machines are selected, you need to choose the directories on them that will be exported.

    Export directories are defined in the /etc/exports file. The format of each entry is simple: the directory name, the list of hosts allowed to access it, and the access mode. For example:
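
    A minimal entry of this kind, matching the description below (the path and address are the ones used in that description), might look like:

    /home/user 10.0.0.6(rw)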

    Here, full (read/write) access to the /home/user directory is allowed to the machine with IP address 10.0.0.6. It is best not to give full access but to limit it to read-only access (ro). In addition, you can also specify the following options:

    • secure - the mount request must come from a privileged port (below 1024). This means the mount command was issued by a user with root privileges.
    • root_squash - requests from the root user are treated as requests from an anonymous user. This minimizes the harm that can be done to the system. This option should be enabled.
    • all_squash - all requests (not only those from the root user) are treated as coming from an anonymous user. Useful for publicly exported directories.

    If a cracker with root access gains control of a machine that has rw access to a directory listed in /etc/exports, he gets full control of that file system, so you must never export the root directory (/) or system-critical directories such as /usr, /bin, /sbin, /lib, /opt and /etc. User home directories are good candidates for exporting.

    On the client side, mounting a shared file system must be done by specifying the -o nosuid option:

    # mount -t nfs -o nosuid 10.0.0.3:/home/user/files /mnt/home/frank

    This will prevent a cracker that has access to the NFS server from gaining root access to clients.

    Access restriction.

    Regardless of the service, IPTables must be used to restrict access to the machine. In the case of an NFS server, this is especially important. When using IPTables, you must remember that the NFS daemon uses port 2049/TCP/UDP.

    Some RPC services, such as portmapper and NFS, use fixed ports (111 and 2049 tcp/udp, respectively), but other RPC services use non-persistent ports, which makes it difficult to filter packets using IPTables. All that is known is that RPC uses ports in the range 32768-65535.

    If you are using kernel 2.4.13 or newer, you can use the -p <port> option to specify the exact port for each RPC service.

    Let's consider the start() function from the file /etc/rc.d/init.d/nfslock, which is used to start nfslock:

    start() {
        # Start daemons.
        if [ "$USERLAND_LOCKD" ]; then
            echo -n $"Starting NFS locking: "
            daemon rpc.lockd
            echo
        fi
        echo -n $"Starting NFS statd: "
        daemon rpc.statd
        echo
        [ $RETVAL -eq 0 ] && touch /var/lock/subsys/nfslock
        return $RETVAL
    }

    In order to force the statd daemon to use a specific port, you can use the -p option, for example daemon rpc.statd -p 32800 (or any other - whichever you like best). In the same way, you need to set the port for mountd, nfsd, rquotad - they are all launched from the /etc/rc.d/init.d/nfs script:

    start() {
        # Start daemons.
        action -n $"Starting NFS services: " /usr/sbin/exportfs -r
        if [ -x /usr/sbin/rpc.rquotad ]; then
            echo -n $"Starting NFS quotas: "
            daemon rpc.rquotad
            echo
        fi
        echo -n $"Starting NFS mountd: "
        daemon rpc.mountd $RPCMOUNTDOPTS
        echo
        echo -n $"Starting NFS daemon: "
        daemon rpc.nfsd $RPCNFSDOPTS
        echo
        touch /var/lock/subsys/nfs

    The method for changing the port for lockd (nlockmgr) differs from the above (do not try to change the /etc/rc.d/init.d/nfslock file: it will not work).

    Lockd is either implemented as a kernel module or statically compiled into the kernel. To change the port number, open the /etc/modules.conf file (or /etc/modprobe.conf on newer systems) and set the options passed to the module:

    options lockd nlm_udpport=33000 nlm_tcpport=33000

    Now that RPC services use static ports that are known, IPTables must be used. The following set of rules assumes that NFS is the only server running on the machine, and only the machines listed in the /usr/local/etc/nfsexports.hosts file are allowed to access it:

    IPT="/usr/sbin/iptables"

    # Clear all chains
    $IPT --flush
    $IPT -t nat --flush
    $IPT -t mangle --flush
    $IPT -X

    # Allow loopback traffic
    $IPT -A INPUT -i lo -j ACCEPT
    $IPT -A OUTPUT -o lo -j ACCEPT

    # Default rules
    $IPT -P INPUT DROP
    $IPT -P OUTPUT DROP
    $IPT -P FORWARD DROP

    $IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    $IPT -A OUTPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

    # Allow access to each computer
    # specified in /usr/local/etc/nfsexports.hosts
    for host in $(cat /usr/local/etc/nfsexports.hosts); do
        $IPT -I INPUT -s $host -p tcp --dport 111 -j ACCEPT
        $IPT -I INPUT -s $host -p udp --dport 111 -j ACCEPT
        $IPT -I INPUT -s $host -p udp --dport 2049 -j ACCEPT
        $IPT -I INPUT -s $host -p tcp --dport 32800 -j ACCEPT
        $IPT -I INPUT -s $host -p tcp --dport 32900 -j ACCEPT
        $IPT -I INPUT -s $host -p tcp --dport 33000 -j ACCEPT
        $IPT -I INPUT -s $host -p tcp --dport 33100 -j ACCEPT
    done

    Of course, this is just a skeleton; many more rules will need to be added: at least allow SSH and set some kernel parameters using /proc.

    NFS tunneling over SSH.

    The abbreviation NFS sometimes stands for “No File Security” - these three words speak for themselves. Therefore, it is very important to protect NFS traffic. This is easy to do using ssh.

    The first step is to edit the /etc/exports file on the NFS server so that file systems are exported to the local node:
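
    A minimal sketch of such an entry, exporting the user's home directory to the local node (the path is illustrative):

    /home/user localhost(rw)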

    Then you need to use SSH to forward the NFS and mountd ports. NFS uses port 2049/udp, and mountd, as stated, uses port number 33000:

    # ssh root@nfsserver.test.net -L 200:localhost:2049 -f sleep 120m

    # ssh root@nfsserver.test.net -L 210:localhost:33000 -f sleep 120m

    These two commands would give the user an interactive shell, but since it is not needed, SSH runs the sleep 120m command instead: control returns to the command line, while the port forwarding remains active for another 2 hours.

    Mounting a file system from the client side looks very unusual:

    mount -t nfs -o nosuid,port=200,mountport=210 nfsserver.test.net:/home/user /mnt/another

    If the ssh tunneling tricks are too complicated, you can use the SHFS (Shell Filesystem) project (http://shfs.sourceforge.net/), which makes it possible to automate this entire procedure.

    After installation, you must access SHFS using either the mount -t shfs command or the new shfsmount command. The syntax of this command is similar to the previous one:

    shfsmount root@nfsserver.test.net:/home/user /mnt/user

    CFS and TCFS

    The Cryptographic File System (CFS) transparently encrypts and decrypts NFS traffic using the DES algorithm. In addition, it supports automatic key management, which makes the process as transparent as possible for the user.

    Although CFS was developed for SunOS and BSD, it is quite interesting because it appears to be the first attempt at transparently encrypting shared files. The Transparent Cryptographic File System (TCFS) provides an even more transparent way to encrypt NFS traffic.

    In addition to data encryption, this file system supports data integrity checking.

    Network File System (NFS) is a file sharing solution for organizations that have mixed Windows and Unix/Linux machine environments. NFS allows files to be shared between these different platforms when the operating system is Windows Server 2012. NFS Services in Windows Server 2012 includes the following features and enhancements.

    1. Search in Active Directory. You have the ability to use Windows Active Directory to access files. The Identity Management for Unix schema extension for Active Directory contains Unix user identifier (UID) and group identifier (GID) fields. This allows the Server for NFS and Client for NFS services to look up Windows-to-Unix user account mappings directly from Active Directory Domain Services. Identity Management for Unix makes it easier to manage these mappings of Windows user accounts to Unix accounts in Active Directory Domain Services.

    2. Improved server performance. Services for NFS includes a file filter driver that significantly reduces overall latency when accessing files on the server.

    3. Support for special Unix devices. Services for NFS supports special Unix devices (mknod).

    4. Expanded Unix support. Services for NFS supports the following Unix versions: Sun Microsystems Solaris version 9, Red Hat Linux version 9, IBM AIX version 5L 5.2, and Hewlett Packard HP-UX version 11i, as well as many modern Linux distributions.

    One of the most common scenarios that creates the need for NFS involves giving users in a Windows environment access to a Unix-based enterprise resource planning (ERP) system. While in the ERP system, users can create reports and/or export financial data to Microsoft Excel for further analysis. The NFS file system allows these files to be accessed from within the Windows environment, reducing the need for specialized technical skills and the time spent exporting files with a Unix script and then importing them into a specific Windows application.

    There may also be a situation where you have a Unix system that is used to store files on some kind of storage network (Storage Area Network- SAN). Running NFS services on a Windows Server 2012 machine allows users in an organization to access files stored there without the overhead of Unix-side scripting.

    Before installing NFS Services, you must remove any previously installed NFS components, such as NFS components that were included with Services for Unix.

    NFS Services Components

    The following two NFS services components are available.

    1. Server for NFS. Typically, a Unix-based computer cannot access files located on a Windows-based computer. However, a computer running Windows Server 2012 R2 with Server for NFS can act as a file server for both Windows and Unix computers.

    2. Client for NFS. Typically, a Windows-based computer cannot access files located on a Unix-based computer. However, a computer running Windows Server 2012 R2 with the Client for NFS feature can access files stored on a Unix-based NFS server.

    Installing Server For NFS using PowerShell

    Let's see how to use PowerShell to install the NFS role on a server and create an NFS file share.

    1. Open a Windows PowerShell window from the taskbar as administrator.

    2. Enter the following commands to install the NFS role on the server:

    PS C:\> Import-Module ServerManager
    PS C:\> Add-WindowsFeature FS-NFS-Services
    PS C:\> Import-Module NFS

    3. Enter the command below to create a new NFS file share:

    PS C:\> New-NfsShare -Name "Test" -Path "C:\Shares\Test"

    4. To view all the new NFS-specific PowerShell cmdlets that are available in Windows Server 2012 R2, run the following command:

    PS C:\> Get-Command -Module NFS

    5. Right-click on the C:\Shares\Test folder, select Properties, then go to the NFS Sharing tab. Click the Manage NFS Sharing button; in the dialog box that appears you can manage folder access permissions, allow anonymous access, and configure file encoding settings. You can also share a folder over NFS using the NFS Advanced Sharing dialog without using PowerShell.

    Configuring standard permissions

    Now we need to open some firewall ports for NFS to function. At a minimum, the ports required for the normal functioning of NFS services are the RPC portmapper port (111 TCP/UDP) and the NFS port (2049 TCP/UDP).
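
    As a sketch, these ports could be opened from PowerShell with the built-in NetSecurity cmdlets of Windows Server 2012 R2 (the rule names are illustrative):

    PS C:\> New-NetFirewallRule -DisplayName "NFS-TCP" -Direction Inbound -Protocol TCP -LocalPort 111,2049 -Action Allow
    PS C:\> New-NetFirewallRule -DisplayName "NFS-UDP" -Direction Inbound -Protocol UDP -LocalPort 111,2049 -Action Allow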

    Everyone knows that on UNIX systems, a file system is logically a collection of physical file systems connected to a single point. One of the main advantages of such an organization, in my opinion, is the ability to dynamically modify the structure of an existing file system. Also, thanks to the efforts of the developers, today we have the opportunity to connect a file system of almost any type and in any convenient way. By “method”, I first of all want to emphasize the ability of the OS kernel to work with file systems via network connections.

    Many network protocols provide us with the ability to work with remote files, be it FTP, SMB, Telnet or SSH. Thanks to the ability of the kernel, ultimately, to not depend on the type of file system being connected, we have the ability to connect anything and however we want using the mount program.

    Today I would like to talk about NFS - the Network File System. This technology allows you to mount parts of a file system on a remote computer into the file system of the local computer. The NFS protocol itself lets you perform file operations quite quickly, safely and reliably. What else do we need? :-)

    What is needed for this to work

    In order not to rant for a long time on the topic of NFS versions and their support in various kernels, we will immediately make the assumption that your kernel version is not lower than 2.2.18. In the official documentation, the developers promise full support for NFS version 3 functionality in this kernel and later versions.

    Installation

    To run the NFS server on my Ubuntu 7.10 (the Gutsy Gibbon), I needed to install the nfs-common and nfs-kernel-server packages. If you only need an NFS client, then nfs-kernel-server does not need to be installed.

    Server setup

    After all packages have been successfully installed, you need to check if the NFS daemon is running:

    /etc/init.d/nfs-kernel-server status

    If the daemon is not running, you need to start it with the command

    /etc/init.d/nfs-kernel-server start

    After everything has started successfully, you can begin exporting the file system. The process itself is very simple and takes minimal time.

    The main NFS server configuration file is located in /etc/exports and has the following format:

    Directory machine1(option11,option12) machine2(option21,option22)

    directory - absolute path to the directory on the server's file system to which access is to be given

    machineX— DNS name or IP address of the client computer from which access is allowed

    optionXX— FS export parameters, the most commonly used of them:

    • ro- file access is read-only
    • rw— read/write access is granted
    • no_root_squash— by default, if you connect to an NFS resource as root, the server, for the sake of security, on its side will access files as the nobody user. However, if you enable this option, files on the server side will be accessed as root. Be careful with this option.
    • no_subtree_check— by default, if you export not the entire partition on the server, but only part of the file system, the daemon will check whether the requested file is physically located on the same partition or not. If you are exporting the entire partition or the mount point of the exported file system does not affect files from other physical volumes, then you can enable this option. This will give you an increase in server speed.
    • sync— enable this option if there is a possibility of a sudden connection loss or server power outage. If this option is not enabled, there is a very high risk of data loss if the NFS server suddenly stops.

    So, let's say we need to give access to the ashep-desktop computer to the /var/backups directory of the ashep-laptop computer. Directory access is required to copy backup files from ashep-desktop. My file turned out like this:

    /var/backups ashep-desktop(rw,no_subtree_check,sync)

    After adding the line to /etc/exports, you must restart the NFS server for the changes to take effect.

    /etc/init.d/nfs-kernel-server restart

    That's it. You can start connecting the exported FS on the client computer.

    Client setup

    On the client side, the remote file system is mounted in the same way as all others - with the mount command. Also, no one forbids you to use /etc/fstab if you need to connect the FS automatically when the OS boots. So, the mount option will look like this:

    mount -t nfs ashep-laptop:/var/backups/ /mnt/ashep-laptop/backups/

    If everything went well and you need to connect to the remote FS automatically at boot, just add the line to /etc/fstab:

    ashep-laptop:/var/backups /mnt/ashep-laptop/backups nfs auto 0 0

    What else

    So we have a practical, tiny overview of the capabilities of NFS. Of course, this is just a small part of what NFS can do, but it is enough for use at home or in a small office. If this is not enough for you, I recommend further reading.

    Have a nice time, readers and guests of my blog. There was a very long break between posts, but I'm back in the fight). In today's article I will look at how the NFS protocol works, as well as how to set up an NFS server and an NFS client on Linux.

    Introduction to NFS

    NFS (Network File System) is, in my opinion, an ideal solution on a local network where fast data exchange is needed (faster than SAMBA and less resource-intensive than encrypted remote file systems such as sshfs or SFTP) and the security of the transmitted information is not the top priority. The NFS protocol allows you to mount remote file systems over the network into the local directory tree, as if they were locally mounted disk file systems.

    Thus, local applications can work with a remote file system as if it were a local one. But you need to be careful (!) with setting up NFS, because with certain configurations it is possible to freeze the client's operating system waiting for endless I/O.

    The NFS protocol is based on the RPC protocol, which is still beyond my understanding)), so the material in this article will be a little vague... Before you can use NFS, be it as a server or a client, you must make sure that your kernel has support for the NFS file system. You can check whether the kernel supports NFS by looking for the corresponding lines in the /proc/filesystems file:

    ARCHIV ~ # grep nfs /proc/filesystems
    nodev   nfs
    nodev   nfs4
    nodev   nfsd

    If the indicated lines are not in the /proc/filesystems file, then you need to install the packages described below. This will likely allow you to install dependent kernel modules to support the appropriate file systems.

    If, after installing the packages, NFS support is not displayed in the designated file, then you will need to recompile the kernel to enable this function.
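
    A quick way to load the module manually and re-check, assuming the nfs module was built for your kernel (a sketch):

    ARCHIV ~ # modprobe nfs
    ARCHIV ~ # grep nfs /proc/filesystems
    nodev   nfs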

    History of the Network File System

    The NFS protocol was developed by Sun Microsystems and has four versions in its history. NFSv1 was developed in 1989, was experimental, and ran over the UDP protocol. Version 1 is described in RFC 1094.

    NFSv2 was released in the same year, 1989; it was described by the same RFC 1094 and was also based on the UDP protocol, while allowing no more than 2 GB to be read from a file. NFSv3 was finalized in 1995 and is described in RFC 1813.

    The main innovations of the third version were support for large files, support for the TCP protocol and for larger TCP packets, which significantly accelerated the performance of the technology. NFSv4 was finalized in 2000 and described in RFC 3010, then revised in 2003 and described in RFC 3530.

    The 4th version included performance improvements, support for various authentication means (specifically, Kerberos and LIPKEY with the introduction of the RPCSEC GSS protocol) and access control lists (both POSIX and Windows types). NFS v4.1 was approved by the IESG in 2010 and received RFC 5661.

    The fundamental innovation of version 4.1 is the specification of pNFS (Parallel NFS), a mechanism for parallel access by an NFS client to data on multiple distributed NFS servers. The presence of such a mechanism in the network file system standard helps build distributed "cloud" storage and information systems.

    NFS server

    Since NFS is a network file system, you first need a working network configuration in Linux (you can also read the article on the main networking concepts). Next, you need to install the appropriate packages. On Debian these are the nfs-kernel-server and nfs-common packages; on RedHat it is the nfs-utils package.

    You also need to enable the daemon at the appropriate runlevels (on RedHat: /sbin/chkconfig nfs on; on Debian: /usr/sbin/update-rc.d nfs-kernel-server defaults).

    Installed packages in Debian are launched in the following order:

    ARCHIV ~ # ls -la /etc/rc2.d/ | grep nfs
    lrwxrwxrwx 1 root root 20 Oct 18 15:02 S15nfs-common -> ../init.d/nfs-common
    lrwxrwxrwx 1 root root 27 Oct 22 01:23 S16nfs-kernel-server -> ../init.d/nfs-kernel-server

    In other words, nfs-common starts first, and then the server itself, nfs-kernel-server.

    In RedHat the situation is similar, with the only difference that the first script is called nfslock and the server is simply called nfs. About nfs-common the Debian website tells us, verbatim: shared files for the NFS client and server; this package must be installed on any machine that will act as an NFS client or server.

    The package includes the programs lockd, statd, showmount, nfsstat, gssd and idmapd. Looking at the startup script /etc/init.d/nfs-common, you can trace the following sequence of operations: the script checks for the executable binary /sbin/rpc.statd, checks the files /etc/default/nfs-common, /etc/fstab and /etc/exports for settings that require the idmapd and gssd daemons, and starts the /sbin/rpc.statd daemon. Then, before starting /usr/sbin/rpc.idmapd and /usr/sbin/rpc.gssd, it checks that these executable binaries exist. For the /usr/sbin/rpc.idmapd daemon it also checks for the sunrpc, nfs and nfsd kernel modules and for rpc_pipefs file system support in the kernel (in other words, its presence in the /proc/filesystems file); if everything succeeds, it starts /usr/sbin/rpc.idmapd. Additionally, for the /usr/sbin/rpc.gssd daemon it checks for the rpcsec_gss_krb5 kernel module and starts that daemon.

    If you look at the contents of the NFS server startup script on Debian (/etc/init.d/nfs-kernel-server), you can follow this sequence: at startup, the script checks for the existence of the /etc/exports file, for the nfsd kernel module, and for NFS file system support in the Linux kernel (in other words, in the /proc/filesystems file); if everything is in place, it starts the /usr/sbin/rpc.nfsd daemon, then checks whether the NEED_SVCGSSD parameter is set (in the server options file /etc/default/nfs-kernel-server) and, if it is set, starts the /usr/sbin/rpc.svcgssd daemon, and finally launches the /usr/sbin/rpc.mountd daemon. From this script it is clear that NFS server operation consists of the daemons rpc.nfsd and rpc.mountd and, if Kerberos authentication is used, the rpc.svcgssd daemon. On Red Hat the rpc.rquotad and nfslogd daemons are also started. (For some reason I did not find information on Debian about these daemons or the reasons for their absence; apparently they were removed...)

    From this it becomes clear that the Network File System server consists of the following processes (read: daemons), located in the /sbin and /usr/sbin directories:

    • rpc.statd - the network status monitor daemon (aka Network Status Monitor, aka NSM). It allows locks to be correctly released after a crash/reboot. The /usr/sbin/sm-notify program is used to notify about failures. The statd daemon runs on both servers and clients. Previously, this server was needed for rpc.lockd to work, but the kernel is now responsible for locking (note: if I'm not mistaken). (RPC programs 100021 and 100024 in new versions)
    • rpc.lockd - the lockd lock daemon (also known as the NFS lock manager, NLM) handles file lock requests. The lock daemon runs on both servers and clients: clients request file locks, and servers grant them. (It is obsolete and not used as a separate daemon in new distributions; its functions in modern distributions (with kernels newer than 2.2.18) are performed by the kernel, more precisely by the lockd kernel module.) (RPC program 100021)
    • rpc.nfsd - the main daemon of the NFS server, nfsd (in new versions it is sometimes called nfsd4). This daemon serves NFS client requests. The RPCNFSDCOUNT parameter in the /etc/default/nfs-kernel-server file on Debian and NFSDCOUNT in the /etc/sysconfig/nfs file on RedHat determines the number of daemons started (the default is 8). (RPC program 100003)
    • rpc.mountd - the NFS mount daemon mountd handles client requests to mount directories. The mountd daemon runs on NFS servers. (RPC program 100005)
    • rpc.idmapd - the idmapd daemon for NFSv4 on the server converts local user uid/gid values into the name@domain format, while the service on the client converts user/group names of the form name@domain into local user and group identifiers (according to the configuration file /etc/idmapd.conf; see man idmapd.conf for details).
    • Additionally, older versions of NFS used the daemons: nfslogd - the NFS log daemon records activity for exported file systems and runs on NFS servers; and rquotad - the remote quota server provides information about user quotas on remote file systems and can run on both servers and clients. (RPC program 100011)

    In NFSv4, when Kerberos is used, the following daemons are additionally launched:

    • rpc.gssd - the NFSv4 daemon that provides authentication methods via the GSS-API (Kerberos authentication). It runs on both the client and the server.
    • rpc.svcgssd - the NFSv4 server daemon, which provides server-side client authentication.

    portmap and RPC protocol (Sun RPC)

    In addition to the packages listed above, NFSv2 and v3 require the additional portmap package for correct operation (in newer distributions it has been replaced by, i.e. renamed to, rpcbind). This package is usually installed automatically with NFS as a dependency; it implements the RPC server, in other words, it is responsible for the dynamic assignment of ports for the services registered with the RPC server.

    Literally, according to the documentation, this is a server that converts RPC (Remote Procedure Call) program numbers into TCP/UDP port numbers. portmap operates on several entities: RPC calls or requests, TCP/UDP ports, protocol versions (tcp or udp), program numbers and program versions. The portmap daemon is launched by the script /etc/init.d/portmap before the NFS services start.

    In short, the job of an RPC (Remote Procedure Call) server is to process RPC calls (so-called RPC procedures) from local and remote processes.

    Using RPC calls, services register with or remove themselves from the port mapper (aka portmap, aka portmapper, aka, in new versions, rpcbind), and clients use RPC calls to query the portmapper for the relevant information. User-friendly names of program services and their corresponding numbers are defined in the /etc/rpc file.

    Once a service has sent the corresponding request and registered itself with the RPC server (the port mapper), the RPC server records the TCP and UDP ports on which the service started and stores in the kernel the corresponding information about the running service: its name, its unique service number (in accordance with /etc/rpc), the protocol and port on which it runs, and its version; it then provides this information to clients on request. The port converter itself has program number 100000, version number 2, TCP port 111 and UDP port 111.

    Above, when listing the NFS server daemons, I gave the main RPC program numbers. I have probably confused you a little with this paragraph, so let me state the key point plainly: the main function of the port mapper is to return, at the request of a client that supplies an RPC program number (or name) and version, the port on which the requested program is running. Accordingly, if a client needs to reach an RPC service with a specific program number, it must first contact the portmap process on the server machine and obtain the number of the port on which to communicate with the RPC service it needs.

    The operation of an RPC server can be represented by the following steps:

    To obtain information from the RPC server, use the rpcinfo utility. With the -p host option, the program displays a list of all registered RPC programs on the given host. Without specifying a host, the program displays the services on localhost. Example:

    ARCHIV ~ # rpcinfo -p
       program vers proto   port
        100000    2   tcp    111  portmapper
        100000    2   udp    111  portmapper
        100024    1   udp  59451  status
        100024    1   tcp  60872  status
        100021    1   udp  44310  nlockmgr
        100021    3   udp  44310  nlockmgr
        100021    4   udp  44310  nlockmgr
        100021    1   tcp  44851  nlockmgr
        100021    3   tcp  44851  nlockmgr
        100021    4   tcp  44851  nlockmgr
        100003    2   tcp   2049  nfs
        100003    3   tcp   2049  nfs
        100003    4   tcp   2049  nfs
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100003    4   udp   2049  nfs
        100005    1   udp  51306  mountd
        100005    1   tcp  41405  mountd
        100005    2   udp  51306  mountd
        100005    2   tcp  41405  mountd
        100005    3   udp  51306  mountd
        100005    3   tcp  41405  mountd

    As you can see, rpcinfo indicates (in columns from left to right) the number of the registered program, version, protocol, port and name.

    Using rpcinfo you can remove a program's registration or get information about a specific RPC service (more options in man rpcinfo). As you can see, the following daemons are registered: portmapper version 2 on udp and tcp ports, rpc.statd version 1 on udp and tcp ports, the NFS lock manager versions 1, 3 and 4, the nfs server versions 2, 3 and 4, and the mount daemon versions 1, 2 and 3.

    The NFS server (more precisely, rpc.nfsd) receives requests from the client in the form of UDP datagrams on port 2049. Even though NFS works with the port mapper, which would allow the server to use dynamically assigned ports, UDP port 2049 is hard-wired to NFS in most implementations.

    Network File System Protocol Operation

    Mounting remote NFS

    The process of mounting a remote NFS file system can be represented by the following diagram:

    Description of the NFS protocol when mounting a remote directory:

    1. An RPC server (serviced by the portmapper process) is started on the server and on the client (usually at boot) and registers on ports tcp/111 and udp/111.
    2. Services are started (rpc.nfsd, rpc.statd, etc.) that register with the RPC server and register on arbitrary network ports (unless a static port is specified in the service settings).
    3. The mount command on the client's computer sends the kernel a request to mount a network directory, specifying the file system type, the host and the actual directory; the kernel generates and sends an RPC request to the portmap process on the NFS server on port udp/111 (unless the option to work over tcp is set on the client).
    4. The NFS server kernel queries RPC for the presence of the rpc.mountd daemon and returns to the client kernel the network port on which that daemon is running.
    5. mount sends an RPC request to the port on which rpc.mountd is running. At this point, the NFS server can validate the client based on its IP address and port number to decide whether the client may mount the specified file system.
    6. The mount daemon returns a handle (description) of the requested file system.
    7. The client's mount command issues the mount system call to associate the file handle obtained in the previous step with the local mount point on the client's host. The file handle is stored in the client's NFS code, and from now on any access by user processes to files on the server's file system will use the file handle as a starting point.

    Data exchange between client and NFS server

    Ordinary access to a remote file system can be described by the following scheme:

    Description of the process of accessing a file located on an NFS server:

    Setting up an NFS server

    Server setup generally consists of specifying, in the /etc/exports file, the local directories that remote systems are allowed to mount. This action is called exporting a directory hierarchy. The main sources of information about exported directories are the following files:

    • /etc/exports- the main configuration file that stores the configuration of the exported directories. Used when starting NFS and by the exportfs utility.
    • /var/lib/nfs/xtab - contains a list of directories mounted by remote clients. Used by the rpc.mountd daemon when a client tries to mount a hierarchy (a mount record is created).
    • /var/lib/nfs/etab - a list of directories that can be mounted by remote systems, with all the properties of the exported directories.
    • /var/lib/nfs/rmtab - a list of directories that are not exported at the moment.
    • /proc/fs/nfsd - a special file system (kernel 2.6) for managing the NFS server.
      • exports - a list of active exported hierarchies and the clients to whom they were exported, along with their properties. The kernel obtains this information from /var/lib/nfs/xtab.
      • threads- contains the number of threads (can also be changed)
      • using filehandle you can get a pointer to a file
      • etc...
    • /proc/net/rpc- contains “raw” statistics, which can be obtained using nfsstat, as well as various caches.
    • /var/run/portmap_mapping- information about services registered in RPC
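
    For instance, on a running server you can inspect some of these interfaces directly (a sketch; the output is illustrative and assumes the nfsd file system is mounted on /proc/fs/nfsd):

    ARCHIV ~ # cat /proc/fs/nfsd/threads
    8
    ARCHIV ~ # cat /proc/fs/nfsd/exports
    # Version 1.1
    # Path Client(Flags) # IPs
    /archiv1	files(rw,root_squash,sync,wdelay)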

    Note: In general, there are many interpretations and formulations of the purpose of the xtab, etab and rmtab files on the Internet, and I don't know whom to believe. Even on http://nfs.sourceforge.net/ the interpretation is not unambiguous.

    Setting up the /etc/exports file

    In the normal case, the /etc/exports file is the only file that requires editing to configure the NFS server. This file controls the following:

    • Which clients can access files on the server
    • Which directory hierarchies on the server each client can access
    • How client user names will be mapped to local user names

    Each line of the exports file has the following format:

    export_point client1(options) [client2(options) ...]

    where export_point is the absolute path of the exported directory hierarchy, and client1 ... clientN are the names or IP addresses of one or more clients, separated by spaces, that are allowed to mount export_point. The options describe the mounting rules for the client listed before them.

    Here is a typical exports configuration example:

    ARCHIV ~ # cat /etc/exports
    /archiv1  files(rw,sync) 10.0.0.1(ro,sync) 10.0.230.1/24(ro,sync)

    In this example, the computers files and 10.0.0.1 are allowed access to the export point /archiv1: the host files has read/write access, while the host 10.0.0.1 and the subnet 10.0.230.1/24 have read-only access.

    Host descriptions in /etc/exports are allowed in the following format:

    • The names of individual nodes are described as files or files.DOMAIN.local.
    • Domain wildcards are described in the following format: *.DOMAIN.local includes all nodes of the DOMAIN.local domain.
    • Subnets are specified as IP address/mask pairs. For example: 10.0.0.0/255.255.255.0 includes all nodes whose addresses begin with 10.0.0.
    • Specifying the name of the @myclients network group that has access to the resource (when using an NIS server)

    General options for exporting directory hierarchies

    The following general options are used in the exports file (the option used by default on most systems is listed first; the non-default one is in parentheses):

    • auth_nlm (no_auth_nlm) or secure_locks (insecure_locks) - specifies that the server should require authentication of lock requests (using the NFS Lock Manager protocol).
    • nohide (hide) - applies if the server exports two directory hierarchies, one of which is nested (mounted) inside the other. The client would normally have to mount the second (child) hierarchy explicitly, otherwise its mount point would look like an empty directory. The nohide option makes the second directory hierarchy visible without an explicit mount. (Note: I couldn't get this option to work...)
    • ro (rw) - allows only read (or also write) requests. (Ultimately, whether a file can be read or written is determined by file system permissions; note that the server is not able to distinguish a request to read a file from a request to execute it, so it allows reading if the user has read or execute rights.)
    • secure (insecure) - requires that NFS requests come from secure ports (< 1024), so that a program without root privileges cannot mount the directory hierarchy.
    • subtree_check (no_subtree_check)- If a subdirectory of the file system is exported, but not the entire file system, the server checks whether the requested file is in the exported subdirectory. Disabling verification reduces security but increases data transfer speed.
    • sync (async) - specifies that the server should respond to requests only after the changes made by those requests have been written to disk. The async option tells the server not to wait for information to be written to disk, which increases performance but reduces reliability, because in the event of a connection break or equipment failure information may be lost.
    • wdelay (no_wdelay)- instructs the server to delay executing write requests if a subsequent write request is pending, writing data in larger blocks. This improves performance when sending large queues of write commands. no_wdelay specifies not to delay execution of a write command, which can be useful if the server receives an unlimited number of unrelated commands.

    Exporting symbolic links and device files. When exporting a directory hierarchy containing symbolic links, the link object must be accessible to the client (remote) system; in other words, one of the following must be true:

    • the link object must exist on the client file system
    • the link object must itself be exported and mounted

    A device file refers to an interface of the Linux kernel. When you export a device file, this interface is exported. If the client system does not have a device of the same type, the exported device will not work.

    On the client system, when mounting NFS objects, you can use the nodev option so that device files in the mounted directories are not used.

    Default options may vary between systems and can be found in /var/lib/nfs/etab. After describing an exported directory in /etc/exports and restarting the NFS server, all missing options (read: default options) will be reflected in the /var/lib/nfs/etab file.
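
    As an illustration (a sketch; the exact set of default options depends on the nfs-utils version), the etab entry for the /archiv1 export above might look like:

    ARCHIV ~ # cat /var/lib/nfs/etab
    /archiv1	files(rw,sync,wdelay,hide,secure,root_squash,no_all_squash,subtree_check)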

    Options for mapping (matching) user IDs

    For a better understanding of the following, I would advise you to read the article Managing Linux Users. Each Linux user has its own UID and primary GID, which are described in the /etc/passwd and /etc/group files.

    The NFS server assumes that the remote host's operating system has authenticated the users and assigned them the correct UID and GID. Exporting files gives users of the client system the same access to those files as if they were logged in directly on the server. Accordingly, when an NFS client sends a request to the server, the server uses the UID and GID to identify the user as a local system user, which can lead to some problems:


    The following options set the rules for mapping remote users to local ones:

    An example of using a user mapping file:

    ARCHIV ~ # cat /etc/file_maps_users
    # User mapping
    # remote    local   comment
    uid 0-50    1002    # mapping users with remote UID 0-50 to local UID 1002
    gid 0-50    1002    # mapping groups with remote GID 0-50 to local GID 1002

    NFS Server Management

    The NFS server is managed using the following utilities:

    • nfsstat
    • showmount
    • exportfs

    nfsstat: NFS and RPC statistics

    The nfsstat utility allows you to view statistics of RPC and NFS servers. The command's options are described in man nfsstat.
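
    For example (a sketch; -s limits output to server-side statistics and -c to client-side statistics):

    ARCHIV ~ # nfsstat -s
    ARCHIV ~ # nfsstat -c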

    showmount: display information about NFS status

    The showmount utility queries rpc.mountd on the remote host about mounted file systems. By default, a sorted list of clients is returned. Options:

    • --all- a list of clients and mount points is displayed indicating where the client mounted the directory. This information may not be reliable.
    • --directories- a list of mount points is displayed
    • --exports - a list of exported file systems is displayed, from nfsd's point of view

    When you run showmount without arguments, information about the systems that are allowed to mount the local directories is printed to the console. For example, the ARCHIV host gives us a list of exported directories together with the IP addresses of the hosts that are allowed to mount them:

    FILES ~ # showmount --exports archiv
    Export list for archiv:
    /archiv-big   10.0.0.2
    /archiv-small 10.0.0.2

    If you specify the hostname/IP in the argument, information about this host will be displayed:

    ARCHIV ~ # showmount files
    clnt_create: RPC: Program not registered
    # this message tells us that NFSd is not running on the FILES host

    exportfs: manage exported directories

    This command manages the exported directories listed in the /etc/exports file; more precisely, it does not manage them but synchronizes them with the /var/lib/nfs/xtab file and removes non-existent entries from xtab. exportfs is run with the -r argument when the nfsd daemon starts. On 2.6 kernels the exportfs utility talks to the rpc.mountd daemon through files in the /var/lib/nfs/ directory and does not talk to the kernel directly. Without options, it displays a list of the currently exported file systems.

    exportfs options:

    • [client:directory-name] - add or remove the specified file system for the specified client
    • -v - display more information
    • -r - re-export all directories (synchronize /etc/exports and /var/lib/nfs/xtab)
    • -u - remove from the list of exported file systems
    • -a - add or remove all file systems
    • -o - options separated by commas (similar to the options used in /etc/exports; i.e. you can change the options of already exported file systems)
    • -i - do not use /etc/exports when adding; use only the options from the current command line
    • -f - reset the list of exported systems in kernel 2.6
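
    A typical usage sketch (the directory and host are taken from the example above):

    ARCHIV ~ # exportfs -rv                    # re-export everything listed in /etc/exports, verbosely
    ARCHIV ~ # exportfs -u 10.0.0.1:/archiv1   # stop exporting /archiv1 to 10.0.0.1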

    NFS client

    Before accessing a file on a remote file system, the client must mount it and receive a pointer (handle) to it from the server. An NFS mount can be performed using the mount command or one of the many automatic mounters (amd, autofs, automount, supermount, superpupermount). The mounting process is well illustrated above.

    On NFS clients there is no need to run any daemons; the client functions are performed by the kernel module kernel/fs/nfs/nfs.ko, which is used when mounting a remote file system. Directories exported from the server can be mounted on the client in the following ways:

    • manually using the mount command
    • automatically at boot, when mounting file systems outlined in /etc/fstab
    • automatically using the autofs daemon

    I will not consider the third method, autofs, in this article, because it would require too much material. Perhaps there will be a separate description in future articles.

    Mounting the Network File System with the mount command

    An example of using the mount command is presented in the post Block Device Control Commands. Here I will look at an example of the mount command for mounting an NFS file system:

    FILES ~ # mount -t nfs archiv:/archiv-small /archivs/archiv-small
    FILES ~ # mount -t nfs -o ro archiv:/archiv-big /archivs/archiv-big
    FILES ~ # mount
    .......
    archiv:/archiv-small on /archivs/archiv-small type nfs (rw,addr=10.0.0.6)
    archiv:/archiv-big on /archivs/archiv-big type nfs (ro,addr=10.0.0.6)

    The 1st command mounts the exported /archiv-small directory on the archiv server to the local mount point /archivs/archiv-small with default options (in other words, read-write).

    Although the mount command in recent distributions can figure out which file system type is being used even without it being specified explicitly, it is better to specify the -t nfs parameter. The 2nd command mounts the exported directory /archiv-big on the archiv server to the local directory /archivs/archiv-big with the read-only (ro) option. The mount command without options clearly shows us the result of mounting. In addition to the read-only option (ro), other main options can be specified when mounting NFS:

    • nosuid - this option prohibits execution of setuid programs from the mounted directory.
    • nodev (no device) - this option prohibits the use of character and block special files as devices.
    • lock (nolock) - allows NFS locking (the default). nolock disables NFS locking (does not start lockd) and is convenient when working with older servers that do not support NFS locking.
    • mounthost=name - the name of the host on which the NFS mount daemon, mountd, is running.
    • mountport=n - the port used by the mountd daemon.
    • port=n - the port used to connect to the NFS server (the default is 2049 if rpc.nfsd is not registered with the RPC server). If n=0 (the default), NFS sends a request to the portmap on the server to find the port.
    • rsize=n (read block size) - the number of bytes read at a time from the NFS server. Default: 4096.
    • wsize=n (write block size) - the number of bytes written at a time to the NFS server. Default: 4096.
    • tcp or udp - mount NFS using the TCP or UDP protocol, respectively.
    • bg - if access to the server is lost, retry the attempts in the background so as not to block the system boot process.
    • fg - if access to the server is lost, retry the attempts in the foreground. This parameter can block the system boot process with repeated mount attempts, so the fg parameter is used mainly for debugging.

    Options affecting attribute caching on NFS mounts

    File attributes stored in inodes (index descriptors), such as modification time, size, hard link count and owner, usually change infrequently for regular files and even less often for directories. Many programs, such as ls, access files read-only and do not change file attributes or content, yet waste system resources on expensive network operations.

    To avoid wasting unnecessary resources, you can cache these attributes. The kernel uses a file's modification time to determine whether the cache is out of date by comparing the modification time in the cache and the modification time of the file itself. The attribute cache is periodically updated in accordance with these parameters:

    • ac (noac) (attribute cache) - allows attribute caching (the default). Although noac slows things down, it avoids attribute staleness when multiple clients are actively writing information to a common hierarchy.
    • acdirmax=n (attribute cache directory file maximum) - the maximum number of seconds that NFS waits before updating directory attributes (default: 60 seconds).
    • acdirmin=n (attribute cache directory file minimum) - the minimum number of seconds that NFS waits before updating directory attributes (default: 30 seconds).
    • acregmax=n (attribute cache regular file maximum) - the maximum number of seconds that NFS waits before updating the attributes of a regular file (default: 60 seconds).
    • acregmin=n (attribute cache regular file minimum) - the minimum number of seconds that NFS waits before updating the attributes of a regular file (default: 3 seconds).
    • actimeo=n (attribute cache timeout) - replaces the values of all the above options. If actimeo is not specified, the above options take their default values.
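
    For example, a mount in the style of the earlier examples (a sketch; host and paths as above) that refreshes all cached attributes at most every 120 seconds:

    FILES ~ # mount -t nfs -o actimeo=120 archiv:/archiv-small /archivs/archiv-small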

    NFS Error Handling Options

    The following options control what NFS does when there is no response from the server or when I/O errors occur:

    • fg (bg) (foreground, background) - retry failed NFS mount attempts in the foreground/background.
    • hard (soft) - hard prints the message "server not responding" to the console when a timeout is reached and keeps retrying. With the soft option, a timeout reports an I/O error to the program that invoked the operation. (It is recommended not to use the soft option.)
    • nointr (intr) (no interrupt) - does not allow signals to interrupt file operations in a hard-mounted directory hierarchy when a large timeout is reached. intr enables interruption.
    • retrans=n (retransmission value) - after n small timeouts, NFS generates a large timeout (default: 3). A large timeout stops the operations or prints the "server not responding" message to the console, depending on whether the hard/soft option is specified.
    • retry=n (retry value) - the number of minutes the NFS service retries mount operations before giving up (default: 10000).
    • timeo=n (timeout value) - the number of tenths of a second the NFS service waits before retransmitting in case of an RPC error or a small timeout (default: 7). This value increases with each timeout up to a maximum of 60 seconds or until a large timeout occurs. On a busy network, with a slow server, or when the request passes through several routers or gateways, increasing this value may improve performance.

    Automatic NFS mount at boot (description of file systems in /etc/fstab)

    I touched on the description of the /etc/fstab file in the corresponding article. In the current example, I will look at several examples of mounting NFS file systems with a description of the options:

    FILES ~ # cat /etc/fstab | grep nfs
    archiv:/archiv-small  /archivs/archiv-small  nfs  rw,timeo=4,rsize=16384,wsize=16384  0 0
    nfs-server:/archiv-big  /archivs/archiv-big  nfs  rw,timeo=50,hard,bg  0 0

    The 1st example mounts the file system /archiv-small from the host archiv to the mount point /archivs/archiv-small; the file system type is specified as nfs (it must always be specified for this type), and the file system is mounted with the read-write (rw) option.

    The archive host is connected via a fast local channel, so to increase performance, the timeo parameter has been reduced and the rsize and wsize values ​​have been significantly increased. The fields for the dump and fsck programs are set to zero so that these programs do not use an NFS-mounted file system.

    The 2nd example mounts the /archiv-big file system from the nfs-server host. Because we are connected to the nfs-server host over a slow connection, the timeo parameter is increased to 5 seconds (50 tenths of a second), and the hard parameter is set so that NFS keeps retrying the mount after a large timeout; the bg parameter is also set so that the system does not hang at boot if the nfs-server host is unavailable.

    Before saving configurations in /etc/fstab, be sure to try to mount manually and make sure that everything works!!!

    Improving NFS performance

    NFS performance can be affected by several things, especially when running over slow connections. When working with slow and heavily loaded connections, it is better to use the hard parameter so that timeouts do not cause programs to stop working. But you need to consider that if you mount a file system via NFS with the hard parameter via fstab, and the remote host is unreachable, then the system will freeze when booting.

    Also, one of the easiest ways to increase NFS performance is to increase the number of bytes transferred at a time. The default size of 4096 bytes is very small for modern fast connections; by increasing this value to 8192 or more, you can experimentally find the best speed.

    Also, one should not lose sight of the timeout options. NFS waits for a response to a data transfer within the period of time specified by the timeo option; if a response is not received within this time, a retransmission is made.

    But on busy and slow connections this time may be shorter than the server's response time and the capacity of the communication channel, resulting in unnecessary retransmissions that slow down the work. By default, timeo is 0.7 seconds (700 milliseconds): after 700 ms with no response, the client retransmits and doubles the waiting time to 1.4 seconds, and the increase of timeo continues up to a maximum of 60 seconds. Then, depending on the hard/soft parameter, some action occurs (see above).

    You can select the best timeo for a specific value of the transmitted packet (rsize/wsize values) using the ping command:

    FILES ~ # ping -s 32768 archiv
    PING archiv.DOMAIN.local (10.0.0.6) 32768(32796) bytes of data.
    32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=1 ttl=64 time=0.931 ms
    32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=2 ttl=64 time=0.958 ms
    32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=3 ttl=64 time=1.03 ms
    32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=4 ttl=64 time=1.00 ms
    32776 bytes from archiv.domain.local (10.0.0.6): icmp_req=5 ttl=64 time=1.08 ms
    ^C
    --- archiv.DOMAIN.local ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4006ms
    rtt min/avg/max/mdev = 0.931/1.002/1.083/0.061 ms

    As you can see, when sending a packet of 32768 bytes (32 KB), its round-trip time from the client to the server and back hovers around 1 millisecond. If this time exceeds 200 ms, you should think about increasing the timeo value so that it exceeds the measured value by three to four times. Accordingly, it is best to run this test during heavy network load.

    Launching NFS and setting up Firewall

    The note was copied from the blog http://bog.pp.ru/work/NFS.html, for which many thanks!!!

    Running the NFS server and the mount, lock, quota and status services on "correct" ports (for a firewall)

    • it is better to first unmount all resources on the clients
    • stop rpcidmapd and disable it from starting if NFSv4 is not planned: chkconfig --level 345 rpcidmapd off; service rpcidmapd stop
    • if necessary, allow the portmap, nfs and nfslock services to start: chkconfig --levels 345 portmap/rpcbind on; chkconfig --levels 345 nfs on; chkconfig --levels 345 nfslock on
    • if necessary, stop the nfslock and nfs services, start portmap/rpcbind and unload the modules: service nfslock stop; service nfs stop; service portmap start (on newer systems: service rpcbind start); umount /proc/fs/nfsd; service rpcidmapd stop; rmmod nfsd; service autofs stop (it will have to be started again later); rmmod nfs; rmmod nfs_acl; rmmod lockd
    • open the ports in iptables (a sample rule set is sketched after this list)
      • for RPC: UDP/111, TCP/111
      • for NFS: UDP/2049, TCP/2049
      • for rpc.statd: UDP/4000, TCP/4000
      • for lockd: UDP/4001, TCP/4001
      • for mountd: UDP/4002, TCP/4002
      • for rpc.rquota: UDP/4003, TCP/4003
    • for the rpc.nfsd server, add the line RPCNFSDARGS="--port 2049" to /etc/sysconfig/nfs
    • for the mount server, add the line MOUNTD_PORT=4002 to /etc/sysconfig/nfs
    • for the rpc.rquotad service on newer versions, add the line RQUOTAD_PORT=4003 to /etc/sysconfig/nfs
    • for the rpc.rquotad service on older versions (you must have the quota package 3.08 or newer), add to /etc/services: rquotad 4003/tcp and rquotad 4003/udp
    • check that /etc/exports is correct
    • start the rpc.nfsd, mountd and rpc.rquota services (rpcsvcgssd and rpc.idmapd will be started at the same time if you forgot to remove them): service nfsd start, or in newer versions service nfs start
    • for the lock server on newer systems, add the lines LOCKD_TCPPORT=4001 LOCKD_UDPPORT=4001 to /etc/sysconfig/nfs
    • for the lock server for older systems, add directly to /etc/modprobe[.conf]: options lockd nlm_udpport=4001 nlm_tcpport=4001
    • bind the rpc.statd status server to port 4000 by adding STATD_PORT=4000 to /etc/sysconfig/nfs (on older systems, run rpc.statd with the -p 4000 key in /etc/init.d/nfslock)
    • start lockd and rpc.statd: service nfslock start
    • make sure that all ports are bound normally using "lsof -i -n -P" and "netstat -a -n" (some of the ports are used by kernel modules that lsof does not see)
    • if before the “rebuilding” the server was used by clients and they could not be unmounted, then you will have to restart the automatic mounting services on the clients (am-utils, autofs)
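
    Putting the fixed-port settings above together, the /etc/sysconfig/nfs fragment might look like this (a RHEL/CentOS-style layout is assumed):

    RPCNFSDARGS="--port 2049"
    MOUNTD_PORT=4002
    RQUOTAD_PORT=4003
    LOCKD_TCPPORT=4001
    LOCKD_UDPPORT=4001
    STATD_PORT=4000

    The iptables rules for the ports listed above (111, 2049 and 4000-4003, TCP and UDP) could then be, assuming the clients live in 10.0.0.0/24:

    iptables -A INPUT -s 10.0.0.0/24 -p tcp -m multiport --dports 111,2049,4000,4001,4002,4003 -j ACCEPT
    iptables -A INPUT -s 10.0.0.0/24 -p udp -m multiport --dports 111,2049,4000,4001,4002,4003 -j ACCEPT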

    Example NFS server and client configuration

    Server configuration

    If you want to make the exported NFS directory public and writable, you can use the all_squash option in combination with the anonuid and anongid options. For example, to map all requests to the user "nobody" in the group "nobody", you could do the following:

    ARCHIV ~ # cat /etc/exports
    # Read and write access for the client at 192.168.0.100, with requests mapped to uid 99 and gid 99
    /files 192.168.0.100(rw,sync,all_squash,anonuid=99,anongid=99)

    This also means that, for such access to the shared directory to work, nobody.nobody must be its owner:

    # chown -R nobody.nobody /files

    Client Configuration

    On the client, you need to mount the remote directory in a convenient way, for example with the mount command:

    FILES ~ # mount -t nfs archiv:/files /archivs/files

    Summary

    Phew... The article is finished. We have now looked at what the Network File System is and what it is used for; in the next article I will try to put together a HOWTO with Kerberos authentication. I hope the material was clear and useful.

    I will be glad to see your additions and comments!

    NFS HOWTO, nfs.sourceforge, man nfs, man mount, man exports

    RFC 1094 - NFSv1, v2
    RFC 1813 - NFSv3
    RFC 3530 - NFSv4
    RFC 5661 - NFSv4.1
    NFS HOWTO
    nfs.sourceforge.net
    man mount
    man exports

    Not everyone is familiar with data transfer protocols. But many people would like to connect their computers into one network or use a server to store files. One way to do this is NFS. How to set up an NFS server in Ubuntu - read on.

    By correctly configuring NFS, you can combine computers on different OSes into one network.

    Network File System is a protocol for network access to files. As usual, it consists of two parts. One is the client part, which sits on the computer from which remote data is accessed. The other, the server part, sits on the computer where that data is stored. Using additional disk space this way is quite convenient, especially on a local network, and when it comes to corporate PCs it is simply a necessity.

    How is it different?

    Today there are a large number of protocols and a wide variety of software that perform the same functions. What makes NFS stand out?

    • The ability to connect computers running different operating systems into one network. It is often convenient to connect a Windows machine via NFS to a Unix system such as Ubuntu. Samba exists and is used for the same purposes, but NFS is lighter, simpler and faster, since it is implemented at the kernel level. Therefore, setting up access through it is usually easier.
    • NFS provides transparent access to files. This means that remote files are accessed exactly like local ones; programs do not need any modification to open a file located on the server.
    • NFS only sends the requested portion of the file, not the entire file.

    For the Network File System to operate fully, it must be installed on at least two computers: a server and a client. Naturally, a beginner will have to work hardest on the server part, since that is where the folders have to be “shared” (opened for access). However, all of this is done quite easily.

    Like most data transfer protocols, NFS is not at all young. It was developed in 1984 and was intended for UNIX systems. This is still the main role of NFS, but many have found that it is very convenient to connect Windows computers to Linux ones. In addition, NFS is great for playing multimedia content over a local home network. Samba in this role often freezes and slows down.

    Installing the NFS server part

    We will install the server part of the protocol on Ubuntu 16.04. Naturally, if you have the Server edition, the process is in no way different. It’s just that in the traditional version of Ubuntu, some actions can be performed using the graphical interface.

    Install the package. To do this, you can use the software center, or simply enter the command:

    sudo apt install nfs-kernel-server

    After this, it would be useful to check the correctness of the installation. It's not necessary to do this, but we'll check anyway. Enter the command:
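
    For example, rpcinfo is one possible way to make such a check; it asks the portmapper which ports the registered NFS services use:

    rpcinfo -p | grep nfs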

    The port should be 2049 everywhere.

    Now we check whether the kernel supports NFS. To do this, enter:

    cat /proc/filesystems | grep nfs

    The resulting value should look like this: nodev nfsd

    This means that everything is functioning correctly. If not, then enter the command:
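
    One possible variant is to load the NFS module by hand with modprobe:

    sudo modprobe nfs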

    Using it, we install the kernel module ourselves.

    Add the protocol to autorun. It is not necessary to do this, but it is very inconvenient to turn it on manually every time. You can add it via the corresponding item in the settings menu, or do it yourself with the command:

    sudo systemctl enable nfs-kernel-server

    So, we have installed the server part, all that remains is to configure it correctly and move on to the client part.

    Settings

    Setting up NFS in Ubuntu involves sharing certain folders.

    In addition to simply allowing access, you must also specify parameters that determine the user's capabilities in relation to this folder.

    • rw - read and write. This option allows reading and writing files in the folder.
    • ro - read only - allows only reading the folder.
    • sync (default) - ensures transfer reliability: the server answers a request only after the changes have actually been written to disk, and does not respond to other requests in the meantime. This prevents data loss, but the transfer may be slower.
    • async is the inverse of the previous parameter. The transfer is faster, but there is a risk of information loss.
    • secure - this option allows you to use only ports below 1024. Enabled by default.
    • insecure - allows the use of any ports.
    • nohide - if you export a directory that has another file system mounted inside it, the nested directory, unlike the parent, is normally shown to the client as empty. This parameter fixes that.
    • anonuid - specifies the uid for anonymous users. This is a special user ID.
    • anongid - specifies the gid for anonymous users. GID (Group ID) - another user identifier.
    • no_subtree_check - the function disables subtree control. The fact is that without it, NFS additionally checks that users access only the necessary sections of the directory. This slows things down. This parameter speeds it up, but reduces security.

    We will use them depending on what is needed in a particular situation.

    Let's create a new folder. You can also use an existing one. Our folder will be /var/network.
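
    It can be created, for example, like this:

    sudo mkdir -p /var/network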

    Now you need to add this folder to the /etc/exports file. All folders shared over the network are listed there. The entry should look like this:

    /var/network 192.168.1.1(rw,async,no_subtree_check)

    192.168.1.1 is the IP address of the machine we are giving access to. It must be specified.

    Update the export table:
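
    This is usually done with exportfs:

    sudo exportfs -a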

    Now let's try to access the folder from the client side.

    Installing and configuring the NFS client part

    Ubuntu

    On Ubuntu, connecting a configured server is not difficult. This is done in just a couple of commands.

    Install a special client package:

    sudo apt install nfs-common

    Then mount the shared folder from the server:

    sudo mount 192.168.1.1:/var/network/ /mnt/

    The network folder is connected. Using df you can check all connected network folders:
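
    df -h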

    You can also check your access level with a special command:

    Unmount the file system as follows:
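
    sudo umount /mnt/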

    The mount command is used almost everywhere. It is responsible for mounting, that is, attaching a file system so that the operating system can use it. It sounds complicated, but in simple terms we just make the network files appear on our computer inside a newly created folder. Here it is called /mnt/.

    Windows

    With Windows, as a rule, everything is much more complicated. The NFS client runs without problems on all Windows Server editions. Among the regular editions it is present out of the box only in:

    • Windows 7 Ultimate/Enterprise
    • Windows 8/8.1 Enterprise
    • Windows 10 Enterprise

    You will not find it in the other editions. If you have one of these versions, do the following:

    1. Open the “Programs and Features” menu.
    2. Click “Turn Windows features on or off”.
    3. Find “Services for NFS” there and enable only “Client for NFS”; we don’t need the other component.

    After the feature is enabled, the share is mounted with essentially the same command (here we attach it to drive Z:, any free letter will do):

    mount 192.168.1.1:/var/network Z:

    You can unmount it as follows:
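
    The Windows NFS client also provides an umount command; with the Z: drive letter used above:

    umount Z: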

    Commands are entered into the command line launched as an administrator. After this, you can easily find the desired network drive using Explorer.

    What to do if there is no NFS client on the computer? You can try downloading the software through the Microsoft website or from third-party resources. It is possible that other commands or actions will be needed here.

    Now you have a basic understanding of how to use NFS and perform a basic setup. This knowledge is enough to set up access from one computer to another. Moreover, a Windows PC can also act as a client.