• NFS: the Network File System

    Everyone knows that on UNIX systems the file system is logically a collection of physical file systems attached to a single point. One of the main advantages of such an organization, in my opinion, is the ability to modify the structure of an existing file system dynamically. Also, thanks to the efforts of the developers, today we can attach a file system of almost any type in any convenient way. By "way" I primarily mean the OS kernel's ability to work with file systems over network connections.

    Many network protocols let us work with remote files, be it FTP, SMB, Telnet or SSH. And since the kernel ultimately does not depend on the type of file system being attached, we can mount anything we want, however we want, using the mount program.

    Today I want to talk about NFS, the Network File System. This technology lets you mount parts of a remote computer's file system into the file system of the local computer. The NFS protocol itself performs file operations quite quickly, safely and reliably. What more do we need? :-)

    What is needed for this to work

    To avoid a long digression on NFS versions and their support in various kernels, let's assume right away that your kernel version is at least 2.2.18. In the official documentation, the developers promise full support for NFS version 3 functionality in this kernel and later versions.

    Installation

    To run the NFS server on my Ubuntu 7.10 (the Gutsy Gibbon), I needed to install the nfs-common and nfs-kernel-server packages. If you only need an NFS client, nfs-kernel-server does not have to be installed.

    Server setup

    After all packages have been successfully installed, you need to check if the NFS daemon is running:

    /etc/init.d/nfs-kernel-server status

    If the daemon is not running, you need to start it with the command

    /etc/init.d/nfs-kernel-server start

    After everything has started successfully, you can begin exporting the file system. The process itself is very simple and takes minimal time.

    The main NFS server configuration file is located in /etc/exports and has the following format:

    directory machine1(option11,option12) machine2(option21,option22)

    directory — absolute path to the directory on the server's file system to which access is to be granted

    machineX — DNS name or IP address of the client computer that is allowed access

    optionXX — export options; the most commonly used are:

    • ro — file access is read-only
    • rw — read/write access is granted
    • no_root_squash — by default, if you connect to an NFS resource as root, the server, for safety's sake, accesses files on its side as the user nobody. With this option enabled, however, files on the server side are accessed as root. Be careful with this option.
    • no_subtree_check — by default, if you export only part of a partition rather than the whole one, the daemon checks whether each requested file is physically located on that partition. If you export an entire partition, or if the mount point of the exported file system does not touch files from other physical volumes, you can enable this option. It will speed up the server.
    • sync — enable this option if there is a chance of a sudden connection loss or server power failure. If it is not enabled, the risk of losing data is very high should the NFS server stop unexpectedly.

    So, suppose we need to give the ashep-desktop computer access to the /var/backups directory of the ashep-laptop computer. The directory must be accessible so that backup files from ashep-desktop can be copied to it. My file ended up like this:

    /var/backups ashep-desktop(rw,no_subtree_check,sync)

    After adding the line to /etc/exports, you must restart the NFS server for the changes to take effect.

    /etc/init.d/nfs-kernel-server restart

    That's it. You can start connecting the exported FS on the client computer.

    Client setup

    On the client side, the remote file system is mounted in the same way as any other, with the mount command. Also, nothing stops you from using /etc/fstab if the FS needs to be attached automatically when the OS boots. The mount command will look like this:

    mount -t nfs ashep-laptop:/var/backups/ /mnt/ashep-laptop/backups/

    If everything went well and you need to connect to the remote FS automatically at boot, just add the line to /etc/fstab:

    ashep-laptop:/var/backups /mnt/ashep-laptop/backups nfs auto 0 0

    What else

    That was a small practical overview of NFS's capabilities. Of course, this is just a fraction of what NFS can do, but it is enough for use at home or in a small office. If it is not enough for you, I recommend reading further.

    The Network File System (NFS) protocol is an open standard for giving users remote access to file systems. Centralized file systems built on it make everyday tasks such as backup or virus scanning easier to perform, and consolidated disk partitions are simpler to maintain than many small, scattered ones.

    In addition to providing centralized storage, NFS has proven to be very useful for other applications, including diskless and thin clients, network clustering, and middleware collaboration.

    A better understanding of both the protocol itself and the details of its implementation makes it easier to cope with practical problems. This article is dedicated to NFS and consists of two logical parts: first it describes the protocol itself and the goals set during its development, and then the implementations of NFS in Linux and Solaris.

    WHERE IT ALL BEGAN...

    The NFS protocol was developed by Sun Microsystems and appeared on the Internet in 1989 as RFC 1094 under the name Network File System Protocol Specification (NFS). It is interesting to note that Novell's strategy at that time was aimed at further improving file services. Until recently, before the open source movement gained momentum, Sun was not eager to reveal the secrets of its network solutions, but even then the company understood the importance of interoperability with other systems.

    RFC 1094 contained two original specifications. By the time of its publication, Sun was already developing the next, third version of the specification, which is set out in RFC 1813, "NFS Version 3 Protocol Specification". Version 4 of the protocol is defined in RFC 3010, "NFS Version 4 Protocol".

    NFS is widely used on all types of UNIX hosts, in Microsoft and Novell networks, and in IBM solutions such as the AS400 and OS/390. Though little known outside the networking world, NFS is perhaps the most widespread platform-independent network file system.

    UNIX WAS THE GENESIS

    Although NFS is a platform-independent system, its ancestor is UNIX. In other words, its hierarchical architecture and file access methods, including the file system structure, the ways users and groups are identified, and the file handling techniques, all closely resemble the UNIX file system. For example, the NFS file system, being structurally identical to the UNIX file system, is mounted directly onto it. When working with NFS on other operating systems, user identification parameters and file access rights are subject to mapping.

    NFS

    NFS is designed for use in a client-server architecture. The client accesses the file system exported by the NFS server through a mount point on the client. This access is usually transparent to the client application.

    Unlike many client-server systems, NFS uses Remote Procedure Calls (RPC) to exchange information. Typically, a client establishes a connection to a well-known port and then, in accordance with the protocol, sends a request to perform a specific action. In the case of remote procedure calls, the client creates a procedure call and then sends it to the server for execution. A detailed description of NFS will be presented below.

    As an example, suppose a client has mounted the usr2 directory on the local root file system:

    /root/usr2/ -> remote:/root/usr/

    If a client application needs resources from this directory, it simply asks the operating system for them by file name, and the operating system grants access through the NFS client. For example, consider the simple UNIX cd command, which "knows nothing" about network protocols. The command

    cd /root/usr2/

    will set the working directory to a remote file system "without even realizing" (and the user has no need to know) that the file system is remote.

    Having received the request, the NFS server will check whether the user has the right to perform the requested action and, if the answer is positive, will carry it out.

    LET'S GET TO KNOW BETTER

    From the client's point of view, the process of locally mounting a remote file system using NFS consists of several steps. As mentioned, the NFS client will issue a remote procedure call to execute on the server. Note that in UNIX, the client is a single program (the mount command), while the server is actually implemented as several programs with the following minimum set: port mapper service, mount daemon and NFS server.

    The client mount command first communicates with the server's port translation service, which listens for requests on port 111. Most implementations of the client mount command support multiple versions of NFS, which increases the likelihood of finding a common protocol version for the client and server. The search is carried out starting with the highest version, so when a common one is found, it will automatically become the newest version supported by the client and server.

    (The material presented is focused on the third version of NFS, since it is the most widespread at the moment. The fourth version is not yet supported by most implementations.)

    The server's port translation service responds to requests based on the supported protocol and the port on which the mount daemon is running. The mount client program first establishes a connection with the server's mount daemon and then issues the mount command to it via RPC. If this procedure is successful, the client application connects to the NFS server (port 2049) and, using one of the 20 remote procedures defined in RFC 1813 and listed in Table 1, gains access to the remote file system.
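
    For illustration, this negotiation can be poked at by hand with rpcinfo (here nfs-server is just a placeholder host name): asking the port mapper for the list of registered services shows their ports and versions, and a null-procedure "ping" confirms that mountd and nfsd answer for a particular protocol version. A rough sketch:

    # List every RPC service registered with the server's port mapper (port 111)
    rpcinfo -p nfs-server

    # Call procedure 0 of the mount daemon and of the NFS service over UDP, version 3
    rpcinfo -u nfs-server mountd 3
    rpcinfo -u nfs-server nfs 3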

    The meaning of most commands is intuitive and does not cause any difficulties for system administrators. The following listing, obtained with tcpdump, illustrates the lookup and read calls issued by the UNIX cat command when reading a file named test-file:

    10:30:16.012010 eth0 > 192.168.1.254.3476097947 > 192.168.1.252.2049: 144 lookup fh 32.0/224145 "test-file"
    10:30:16.012729 eth0 < 192.168.1.252.2049 > 192.168.1.254.3476097947: reply ok 128 lookup fh 32.0/224307 (DF)
    10:30:16.013124 eth0 > 192.168.1.254.3492875163 > 192.168.1.252.2049: 140 read fh 32.0/224307 4096 bytes @ 0
    10:30:16.013650 eth0 < 192.168.1.252.2049 > 192.168.1.254.3492875163: reply ok 108 read (DF)

    NFS has traditionally been implemented over UDP. However, some versions of NFS support TCP (TCP support is defined in the protocol specification). The main advantage of TCP is a more efficient retransmission mechanism on unreliable networks. (With UDP, if an error occurs, the entire RPC message, consisting of multiple UDP packets, is resent. With TCP, only the damaged fragment is retransmitted.)
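
    On clients that support both transports, the transport can be chosen explicitly at mount time. A minimal sketch, assuming a Linux client and a server named nfs-server exporting /export/data:

    # Mount over UDP, the traditional NFS transport
    mount -t nfs -o udp nfs-server:/export/data /mnt/data

    # Mount over TCP, preferable on unreliable or congested networks
    mount -t nfs -o tcp nfs-server:/export/data /mnt/data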

    NFS ACCESS

    NFS implementations typically support four ways to grant access rights: through user/file attributes, at the shared resource level, at the master node level, and as a combination of other access methods.

    The first method is based on UNIX's built-in system of file permissions for an individual user or group. To simplify maintenance, user and group identification should be consistent across all NFS clients and servers. Security must be carefully considered: NFS can inadvertently provide access to files that were not intended when they were created.

    Share-level access allows you to restrict rights to allow only certain actions, regardless of file ownership or UNIX privileges. For example, working with the NFS file system can be limited to read-only. Most NFS implementations allow you to further restrict access at the shared resource level to specific users and/or groups. For example, the Human Resources group is allowed to view information and nothing more.

    Master node level access allows a file system to be mounted only on specific nodes, which is generally a good idea, since otherwise file systems could easily be mounted from any NFS-enabled node.

    Combined access simply combines the above types (for example, shared-level access with access granted to a specific user) or allows users to access NFS only from a specific node.
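
    As a rough illustration of combined access in Linux /etc/exports syntax (the path, subnet and host name below are made up): the share is read-only for an entire subnet, writable for a single trusted host, and unavailable to everyone else:

    # /etc/exports: read-only for the 192.168.1.0/24 subnet, read-write for admin-host only
    /export/docs 192.168.1.0/255.255.255.0(ro,sync) admin-host(rw,sync)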

    NFS IN THE PENGUIN STYLE

    The Linux-related material is based on Red Hat 6.2 with kernel version 2.4.9, which ships with nfs-utils version 0.1.6. There are newer versions: at the time of writing, the most recent update of the nfs-utils package is 0.3.1.

    The nfs-utils package contains the following executable files: exportfs, lockd, mountd, nfsd, nfsstat, nhfsstone, rquotad, showmount and statd.

    Unfortunately, NFS support is sometimes a source of confusion for Linux administrators because the availability of a particular feature is directly dependent on the version numbers of both the kernel and the nfs-utils package. Fortunately, things are improving in this area, with the latest distributions including the latest versions of both. For previous releases, Section 2.4 of the NFS-HOWTO provides a complete list of system functionality available for each kernel and nfs-utils package combination. The developers maintain backward compatibility of the package with earlier versions, paying a lot of attention to ensuring security and eliminating software errors.

    NFS support must be enabled when the kernel is compiled. If necessary, support for NFS version 3 should also be added to the kernel configuration.
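
    A quick way to check, assuming a typical Linux system, is to look at the file system types the running kernel knows about and, if NFS was built as modules, to load them explicitly:

    # "nfs" should appear here once client support is available in the running kernel
    grep nfs /proc/filesystems

    # If NFS was built as modules, they can be loaded by hand
    modprobe nfs
    modprobe nfsd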

    For distributions that support linuxconf, it is easy to configure NFS services for both clients and servers. However, the quick way of setting up NFS with linuxconf does not tell you which files were created or edited, and it is very important for the administrator to know the situation in the event of a system failure. The NFS architecture on Linux is loosely based on the BSD version, so the necessary support files and programs are easy to find for administrators who have worked with BSD, Sun OS 2.5, or earlier versions of NFS.

    The /etc/exports file, as in earlier BSD versions, defines the file systems that NFS clients may access. It also contains a number of additional options related to management and security that give the administrator a means of fine-tuning. It is a text file consisting of entries and of blank or commented lines (comments begin with the # symbol).

    Let's say we want to give clients read-only access to the /home directory on the Lefty node. This would correspond to the following entry in /etc/exports:

    /home (ro)

    Here we need to tell the rpc.mountd mount daemon which directories we are going to make available:

    # exportfs -r
    exportfs: No hostname is specified in /home (ro), enter *(ro) to avoid the warning
    #

    When run, the exportfs command warns that /etc/exports does not restrict access to a particular node, and creates a corresponding entry in /var/lib/nfs/etab based on /etc/exports; the exported resources can be inspected with cat:

    # cat /var/lib/nfs/etab
    /home (ro,async,wdelay,hide,secure,root_squash,no_all_squash,subtree_check,secure_locks,mapping=identity,anonuid=-2,anongid=-2)

    The other options listed in etab are the default values used by NFS; they will be described in detail below. To provide access to the /home directory, the appropriate NFS services must be started:

    # portmap
    # rpc.mountd
    # rpc.nfsd
    # rpc.statd
    # rpc.rquotad

    At any time after starting the mount daemon (rpc.mountd), you can see which individual file systems are available for export by viewing the contents of the /proc/fs/nfs/exports file:

    # cat /proc/fs/nfs/exports
    # Version 1.0
    # Path Client(Flags) # IPs
    /home 192.168.1.252(ro,root_squash,async,wdelay) # 192.168.1.252
    #

    The same can be viewed using the showmount command with the -e parameter:

    # showmount -e
    Export list for lefty:
    /home (everyone)
    #

    Jumping ahead a bit, the showmount command can also be used to determine all mounted file systems, or in other words, to find out which nodes are NFS clients for the system running the showmount command. The showmount -a command will list all client mount points:

    # showmount -a
    All mount points on lefty:
    192.168.1.252:/home
    #

    As stated above, most NFS implementations support several versions of the protocol. The Linux implementation lets you limit the list of NFS versions that will be served by passing the -N switch to the mount daemon. For example, to run NFS version 3, and only version 3, enter the following command:

    # rpc.mountd -N 1 -N 2

    Picky users may find it inconvenient that on Linux the NFS daemon (rpc.nfsd) still listens for version 1 and version 2 packets, even though the desired effect of not supporting the corresponding protocols is achieved. Let's hope that the developers of future versions will make the necessary corrections and achieve greater consistency between the components of the package with respect to the different protocol versions.

    "SWIM WITH PENGUINS"

    Access to the Linux NFS file system exported above on Lefty depends on the client's operating system. The setup style for most operating systems of the UNIX family matches either the original Sun OS and BSD systems or the newer Solaris. Since this article covers both Linux and Solaris, let's look at the Solaris 2.6 client configuration from the point of view of establishing a connection with the Linux version of NFS described above.

    Thanks to the features it inherits, Solaris 2.6 can easily be configured to act as an NFS client. This requires only one command:

    # mount -F nfs 192.168.1.254:/home /tmp/tmp2

    Assuming the previous mount command was successful, the mount command without parameters will output the following:

    # mount
    / on /dev/dsk/c0t0d0s0 read/write/setuid/largefiles on Mon Sep 3 10:17:56 2001
    ...
    /tmp/tmp2 on 192.168.1.254:/home read/write/remote on Mon Sep 3 23:19:25 2001

    Let's analyze the tcpdump output received on the Lefty node after the user issued the ls /tmp/tmp2 command on the Sunny node:

    # tcpdump host lefty and host sunny -s512
    06:07:43.490583 sunny.2191983953 > lefty.mcwrite.n.nfs: 128 getattr fh Unknown/1 (DF)
    06:07:43.490678 lefty.mcwrite.n.nfs > sunny.2191983953: reply ok 112 getattr DIR 40755 ids 0/0 sz 0x000001000 (DF)
    06:07:43.491397 sunny.2191983954 > lefty.mcwrite.n.nfs: 132 access fh Unknown/10001 (DF)
    06:07:43.491463 lefty.mcwrite.n.nfs > sunny.2191983954: reply ok 120 access c0001 (DF)
    06:07:43.492296 sunny.2191983955 > lefty.mcwrite.n.nfs: 152 readdirplus fh 0.1/16777984 1048 bytes @ 0x000000000 (DF)
    06:07:43.492417 lefty.mcwrite.n.nfs > sunny.2191983955: reply ok 1000 readdirplus (DF)

    We see that node Sunny requests a file handle (fh) for ls, to which node Lefty responds with OK and returns the directory structure. Sunny then checks the permission to access the contents of the directory (132 access fh) and receives a permission response from Lefty. The Sunny node then reads the entire contents of the directory using the readdirplus routine. Remote procedure calls are described in RFC 1813 and are listed at the beginning of this article.

    Although the command sequence for accessing remote file systems is very simple, a number of circumstances can cause the mount to fail. Before a directory is mounted, the mount point must already exist; otherwise it must be created with the mkdir command. Usually the only cause of errors on the client side is a missing local mount directory. Most problems with NFS arise from a mismatch between the client and the server or from an incorrect server configuration.
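
    A minimal client-side sequence that avoids this most common error looks roughly like the following, assuming the Solaris client and the lefty:/home export from the examples above (the local path is arbitrary):

    # Create the local mount point first, then mount the exported directory
    mkdir -p /mnt/remote-home
    mount -F nfs lefty:/home /mnt/remote-home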

    The easiest way to troubleshoot problems on a server is from the node on which the server is running. However, when someone else administers the server for you, this is not always possible. A quick way to ensure that the appropriate server services are configured correctly is to use the rpcinfo command with the -p option. From the Solaris Sunny host, you can determine which RPC processes are registered on the Linux host:

    # rpcinfo -p 192.168.1.254
       program vers proto   port  service
        100000    2   tcp    111  rpcbind
        100000    2   udp    111  rpcbind
        100024    1   udp    692  status
        100024    1   tcp    694  status
        100005    3   udp   1024  mountd
        100005    3   tcp   1024  mountd
        100003    2   udp   2049  nfs
        100003    3   udp   2049  nfs
        100021    1   udp   1026  nlockmgr
        100021    3   udp   1026  nlockmgr
        100021    4   udp   1026  nlockmgr
    #

    Note that version information is also provided here, which is quite useful when the system requires support for different NFS protocol versions. If some service is not running on the server, the situation must be corrected. If a mount fails, the following rpcinfo -p output would tell you that the mountd service is not running on the server:

    # rpcinfo -p 192.168.1.254
       program vers proto   port  service
        100000    2   tcp    111  rpcbind
        ...
        100021    4   udp   1026  nlockmgr
    #

    The rpcinfo command is very useful for finding out whether a particular remote process is active. The -p parameter is the most important of the switches. To see all the features of rpcinfo, see the man page.

    Another useful tool is the nfsstat command. With its help, you can find out whether clients are actually accessing the exported file system, and also display statistical information in accordance with the protocol version.
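
    For example (the exact output format varies between implementations), server-side and client-side counters can be requested separately and are broken down by protocol version and procedure:

    # Statistics for the NFS server: RPC totals and per-procedure call counts
    nfsstat -s

    # Statistics for the local NFS client
    nfsstat -c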

    Finally, another quite useful tool for determining the causes of system failures is tcpdump:

    # tcpdump host lefty and host sunny -s512
    tcpdump: listening on eth0
    06:29:51.773646 sunny.2191984020 > lefty.mcwrite.n.nfs: 140 lookup fh Unknown/1 "test.c" (DF)
    06:29:51.773819 lefty.mcwrite.n.nfs > sunny.2191984020: reply ok 116 lookup ERROR: No such file or directory (DF)
    06:29:51.774593 sunny.2191984021 > lefty.mcwrite.n.nfs: 128 getattr fh Unknown/1 (DF)
    06:29:51.774670 lefty.mcwrite.n.nfs > sunny.2191984021: reply ok 112 getattr DIR 40755 ids 0/0 sz 0x000001000 (DF)
    06:29:51.775289 sunny.2191984022 > lefty.mcwrite.n.nfs: 140 lookup fh Unknown/1 "test.c" (DF)
    06:29:51.775357 lefty.mcwrite.n.nfs > sunny.2191984022: reply ok 116 lookup ERROR: No such file or directory (DF)
    06:29:51.776029 sunny.2191984023 > lefty.mcwrite.n.nfs: 184 create fh Unknown/1 "test.c" (DF)
    06:29:51.776169 lefty.mcwrite.n.nfs > sunny.2191984023: reply ok 120 create ERROR: Permission denied (DF)

    The listing above, obtained after executing the command touch test.c, reflects the following sequence of actions: first the touch command tries to look up a file named test.c, then it checks the attributes of the directory and repeats the lookup, and after these unsuccessful attempts it tries to create the file test.c, which also fails (Permission denied).

    Once the file system is mounted, most typical problems are related to ordinary UNIX permissions. Using consistent uids or NIS+ on Sun systems helps avoid setting permissions globally on all file systems. Some administrators practice "open" directories, where read access is given to "the whole world". This should be avoided for security reasons. Security concerns aside, it is still bad practice, since users rarely create data with the intention of making it readable by everyone.

    Access by a privileged user (root) to mounted NFS file systems is treated differently. To avoid giving a privileged user unlimited access, requests from the privileged user are treated as if they were coming from the user nobody. This powerful mechanism limits privileged user access to globally readable and writable files.

    NFS SERVER SOLARIS VERSION

    Configuring Solaris to act as an NFS server is as easy as it is with Linux. However, the commands and file locations are slightly different. When Solaris boots and reaches run level 3, NFS services are started automatically and all file systems are exported. To start the mount daemon manually, enter the command:

    #/usr/lib/nfs/mountd

    and to start the NFS server itself, enter:

    #/usr/lib/nfs/nfsd

    Beginning with version 2.6, Solaris no longer uses an exports file to specify which file systems to export. File systems are now exported with the share command. Suppose we want to allow remote hosts to mount /export/home. To do this, enter the following command:

    share -F nfs /export/home
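
    A share created this way does not survive a reboot. On Solaris the usual approach, sketched below, is to put the same share command into /etc/dfs/dfstab, which is executed when the NFS services start, and to run shareall to apply it immediately:

    # Entry in /etc/dfs/dfstab - executed automatically when NFS services start
    share -F nfs -o rw /export/home

    # Share everything listed in /etc/dfs/dfstab right away, without a reboot
    shareall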

    Security measures

    SECURITY IN LINUX

    Some NFS system services on Linux have an additional mechanism for restricting access through control lists or tables. At the internal level, this mechanism is implemented by the tcp_wrapper library, which uses two files to build access control lists: /etc/hosts.allow and /etc/hosts.deny. A complete overview of the rules for working with tcp_wrapper is beyond the scope of this article, but the basic principle is as follows: the comparison is made first against /etc/hosts.allow and then against /etc/hosts.deny. If no matching rule is found, the requested system service is granted. To override this default and ensure a very high level of security, you can add the following entry to the end of /etc/hosts.deny:

    ALL: ALL

    After this, you can use /etc/hosts.allow to set up whichever operating mode you need. For example, the /etc/hosts.allow file that I used while writing this article contained the following lines:

    lockd:   192.168.1.0/255.255.255.0
    mountd:  192.168.1.0/255.255.255.0
    portmap: 192.168.1.0/255.255.255.0
    rquotad: 192.168.1.0/255.255.255.0
    statd:   192.168.1.0/255.255.255.0

    This grants these hosts access to the services before application-level access control is applied. On Linux, application-level access is controlled by the /etc/exports file. It consists of entries in the following format:

    Export directory (space) host|network(options)

    The "export directory" is a directory that the nfsd daemon is allowed to serve requests for. The "host|network" is the host or network that has access to the exported file system, and the "options" define the restrictions the nfsd daemon imposes on the use of this shared resource, such as read-only access or user id mapping.

    The following example gives the entire mcwrite.net domain read-only access to /home/mcwrite.net:

    /home/mcwrite.net *.mcwrite.net(ro)

    More examples can be found in the exports man page.

    NFS SECURITY IN SOLARIS

    In Solaris, providing access via NFS is similar to Linux, but in this case restrictions are set with parameters of the share command using the -o switch. The following example shows how to enable read-only mounting of /export/mcwrite.net on any host in the mcwrite.net domain:

    # share -F nfs -o ro=.mcwrite.net /export/mcwrite.net

    The man page for share_nfs details granting access using control lists on Solaris.

    Internet Resources

    NFS and RPC are not without security holes. Generally speaking, NFS should not be used over the open Internet, and you should not poke holes in firewalls to allow any kind of NFS access. Keep a close eye on newly released RPC and NFS patches; numerous security information sources can help here. The two most popular sources are Bugtraq and CERT:

    The first can be browsed regularly for relevant information, or you can subscribe to its periodic newsletters. The second provides information that is perhaps not as timely, but in reasonably complete form and without the sensationalism characteristic of some sites dedicated to information security.

    Network Services

    Lecture 10

    The set of server and client parts of an OS that provide access to a specific type of computer resource over a network is called a network service. The client part makes network requests to the server part on another computer. The server part satisfies those requests using the server's local resources. The client part is active, the server part is passive.

    Network access to the file system plays a significant role in network communication. Here the client and server parts, together with the network file system, form a file service.

    A key component of a distributed OS is the network file system. A network file system is supported by one or more computers that store files (file servers).

    Client computers attach, or mount, these file systems to their local file systems.

    File service includes server programs and client programs that interact over the network using a protocol.

    The file service includes the file service proper (file operations) and the directory service (directory management).

    The network file service model includes the following elements:

    Local file system (FAT, NTFS)

    Local file system interface (system calls)

    Network File System Server

    Network File System Client (Windows Explorer, the UNIX shell, etc.)

    Network file system interface (replicates local file system system calls)

    Network File System Client-Server Protocol (SMB, Server Message Block, for Windows; NFS (Network File System) and FTP (File Transfer Protocol) for UNIX)

    Network File System Interface

    There are several types of interfaces, which are characterized by:

    File structure. Most network file systems support flat files

    File modifiability. Most network file systems allow files to be modified. Some distributed file systems prohibit modification operations: only create and read are possible. For such files it is easier to organize caching and replication.

    File sharing semantics:

    UNIX semantics (centralized). If a read follows several writes, the latest update is read. This principle is also possible in a distributed file system, provided there is a single file server and no file caching on the client.

    Session semantics. Changes begin when the file is opened and are completed when the file is closed. In other words, changes become visible to other processes only after the file is closed. In this case there is a problem with sharing a file.

    Immutable-file semantics. A file can only be created and read. It can also be recreated under a different name. A file therefore cannot be modified, but it can be replaced with a new one. There is no sharing problem.



    Transaction mechanism. A way of working with shared files using a transaction mechanism (indivisible operations).

    Access Control. For example, Windows NT/2000 has two mechanisms: at the directory level (for FAT) and at the file level (NTFS).

    Access unit. One model is whole-file upload/download (as in FTP); the other model is the use of individual file operations.

    1.4 Network file system

    The CIFS file system dominates the network file system market for the Windows platform. On the UNIX platform the main one is the Network File System (NFS). NFS is also considered the first widely adopted network file system, dating back to the mid-1980s. However, despite some common functionality (both are network file systems that allow clients to access server resources), CIFS and NFS have completely different architectures. With the release of NFS version 4, some of the differences have been revised.
    The CIFS protocol keeps state (service data) for each client. Up to and including version 3, NFS did not retain client state; this changed in version 4.
    The NFS client does not "negotiate" with the NFS server to establish a session. Security measures can be applied either to the whole session or to each individual exchange between client and server. The latter is prohibitively expensive to implement, so NFS leaves the responsibility for security to the client. The server "assumes" that the user IDs on the client and server systems are the same (and that the client has verified the user's identity before letting him log in under that ID). In addition, NFS provides a certain level of security by controlling the list of file systems that a client may mount. Whereas a CIFS client opens a file each time, receives a file handle (that is, state that the server must store) and uses it to perform read or write operations, an NFS client asks the server for a file handle, which the server returns. This file handle is used by clients supporting NFS 3 and NFS 2: the client caches it and expects it always to point to the same file.
    For those familiar with UNIX: a file handle typically consists of an inode number, an inode generation counter, and an identifier of the disk partition the file belongs to. Suffice it to say that the inode is an extremely important data structure in UNIX file systems. Enough information is stored to invalidate handles cached by clients when the file behind a handle has changed and the handle must point to a different file. For example, if a file is deleted and a file with the same name is copied in its place, the inode generation counter changes and the file handle cached by the client becomes invalid. The NFS 4 file system implements this differently.
    Some NFS clients perform client-side caching by storing data on disks, which is similar to CIFS caching. Also, some NFS clients change the value of timeouts depending on the server response time. The slower the server responds, the greater the timeout value, and vice versa.
    The NFS file system was designed to be transport independent and initially used the UDP transport protocol. Different NFS implementations can use TCP and other protocols.

    1.4.1 Network File System version 3

    The NFS 3 file system improves performance, especially for large files, allowing the client and server to dynamically select the maximum amount of data that is transferred in one logical packet element when writing or reading. In the NFS 2 file system, the packet size was limited to 8 KB. In other words, the client could send a maximum of 8 KB in a write request, and the server could send a maximum of 8 KB in a read request response. In addition, NFS 3 has redefined file offsets and data sizes. These are now 64-bit values, instead of 32-bit in NFS 2.
    Below are some of the features of NFS 3.
    ■ File descriptors in NFS 3 are variable size; their maximum size is 64 bits.
    ■ The NFS 3 file system allows clients and servers to choose the maximum size of file and directory names.
    ■ NFS 3 defines a list of errors that the server can return to clients. The server must return one of the specified errors or no error at all.
    ■ In NFS 3, the server is allowed to cache data that the client sent with a write request. The server can cache data and send a response to the request to the client before the data is written to disk. A COMMIT command has also been added, which allows the client to ensure that all submitted data has been written to disk. This makes it possible to strike a balance between increasing performance and maintaining data integrity.
    ■ NFS 3 reduces the number of request/response operations between client and server. To do this, file attribute data is sent along with the initial request. In NFS 2, the client was required to obtain filenames and a descriptor for each file, only then were the file attributes transmitted.

    1.4.2 Network File System version 4

    NFS 4 completely overhauled the fundamental principles and implemented many CIFS-specific features, which greatly upset some NFS apologists. If you look at the history of network file systems, you can see that NFS became widespread first. SMB was designed with the strengths and weaknesses of NFS in mind, and now, at least in the number of clients, CIFS/SMB is more common, while NFS is evolving to take into account the advantages and disadvantages of CIFS/SMB. The following highlights features added to NFS 4 to improve performance, security, and interoperability with CIFS.
    ■ NFS 4 introduced the COMPOUND request, which allows you to package multiple requests into a single request and multiple responses into a single response. This innovation is designed to improve performance by reducing network load and reducing latency as requests and responses travel across the network. If this sounds somewhat like the CIFS AndX SMB feature (see Section 3.3.5.1), then it may not be a simple coincidence.
    ■ Network File System version 4 borrows some features from Sun's WebNFS. In particular, NFS 4 folds some auxiliary protocols into the base specification, making NFS more suitable for use through firewalls. NFS 3 and earlier used a separate protocol to mount a server share into the local file system tree. Because the mount protocol service did not have an assigned TCP or UDP port, the client first sent a request to the portmapper daemon, which provided the port number on which the mount service listened for requests. Thus, in addition to NFS, the mount and port mapping protocols took part in the process. Moreover, since the mount service could use an arbitrary port, configuring a firewall became very difficult. In NFS 4, the mount and port mapping protocols were removed (see the mount sketch after this list). In addition, locking was included in the base NFS protocol specification, so the NLM (Network Lock Manager) protocol used in earlier versions of NFS is no longer needed.
    ■ The NFS 4 file system requires the use of a transport protocol that provides congestion detection. This means that NFS clients and servers will gradually move to TCP instead of the UDP commonly used with NFS 3.
    ■ NFS 2 and NFS 3 allowed the use of the U.S. ASCII or ISO Latin 1 character sets. This caused problems when a client using one character set created a file and that file was then accessed by a client using a different character set. NFS 4 uses the UTF-8 character set, which supports compact encoding of 16- and 32-bit characters for transmission over the network. In addition, UTF-8 carries enough information to avoid problems when a file is created with one character set and accessed with another.
    ■ The NFS 4 file system requires clients to treat file handles in a special way. In NFS 3, the client could cache a handle as an opaque object while the server ensured that it always pointed to the same file. NFS 4 defines two types of file handles. One type, persistent file handles, has the same properties as the handles in NFS 3. The other, volatile (temporary) file handles, assumes that a handle expires after a certain period of time or event. This accommodates servers whose file systems (such as NTFS) cannot provide a consistent mapping between files and handles.
    ■ NFS 4 adds support for OPEN and CLOSE operations, the semantics of which allow interaction with CIFS clients. The OPEN command creates state data on the server.
    ■ NFS 4's OPEN request allows a client to issue a file-open request structured similarly to the open requests of Windows applications. Choosing between sharing the file with other clients and exclusive access to the file is also supported.
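
    On clients that already implement version 4, mounting such an export looks roughly like the sketch below (placeholder names again); note that only the NFS port itself, 2049, has to be reachable, since the separate mount and port mapping dialogues are gone:

    # Mount an NFS 4 export; no separate mountd or portmapper exchange is needed
    mount -t nfs4 nfs-server:/export/data /mnt/data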

    1.4.2.1 NFS 4 Security

    The NFS 4 file system allows you to enhance the security of stored data. In particular, NFS 4 adds support for more file attributes. One of these attributes is a Windows NT style access control list (ACL). This allows for improved interoperability between file systems and a stronger security structure.
    While in NFS 2 and NFS 3 the use of security features was only recommended, in NFS 4 it has become mandatory. The NFS 4 file system requires the implementation of a security mechanism through the RPCSEC_GSS (Generic Security Services) interface in general and the Kerberos 5/LIPKEY protocols in particular. Note that RPCSEC_GSS simply serves as an API and a transport mechanism for security-related tokens and data. The NFS 4 file system allows multiple authentication and security schemes, with the ability to choose the appropriate scheme for clients and servers.
    Let's pay some attention to the LIPKEY technology, which uses a combination of symmetric and asymmetric encryption. The client encrypts the user data and password with a randomly generated 128-bit key. The encryption is performed with a symmetric algorithm, i.e. the same key must be used for decryption. Since the server needs this key to decrypt the messages, the randomly generated key must be sent to the server. The client therefore encrypts this key with the server's public key. The server decrypts the data with its private key, extracts the symmetric key, and decrypts the user data and password.
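
    The same hybrid idea can be sketched with ordinary OpenSSL commands. This is only a conceptual illustration of combining symmetric and asymmetric encryption, not the actual LIPKEY exchange: the file names, the choice of AES-128, and server_pub.pem are all assumptions.

    # Generate a random 128-bit symmetric key (hex-encoded)
    KEY=$(openssl rand -hex 16)

    # Encrypt the credentials with the symmetric key (AES-128-CBC stands in for the real cipher)
    openssl enc -aes-128-cbc -K "$KEY" -iv 0123456789abcdef0123456789abcdef -in creds.txt -out creds.enc

    # Encrypt the symmetric key itself with the server's public key; only the holder of the
    # matching private key can recover it and then decrypt the credentials
    printf '%s' "$KEY" | openssl pkeyutl -encrypt -pubin -inkey server_pub.pem -out key.enc
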
    Clients can authenticate servers using a server certificate, and certificate authority services are used to verify the certificate. One popular attack is to intercept someone else's data packets and replay them after a certain period of time. When Kerberos is used, the NFS file system adds a timestamp to each packet. The server records the recently received timestamps and compares them with the timestamps of new RPC packets. If a packet's timestamp is older than those previously received by the server, the server ignores the packet.

    1.5 Access problems when using multiple protocols

    Several companies have begun to offer systems that simultaneously support CIFS, NFS, and other network file system clients. The vendors have done a lot of work to overcome the technical challenges that arise because clients may be using different operating systems and file systems. Note that the problems arise not with the file data itself but with the files' metadata. A simple test for such problems is to copy a file from the server to the client and back to the server (or vice versa). Once the file is back in its original location, its metadata should retain the original values, i.e. the file permissions and timestamps should not change. If this is not the case, a problem has been detected.
    The following are examples of some possible technical problems.
    ■ Different operating systems use different methods to track user and group access permissions.
    ■ Different operating systems and file systems have different semantics for opening and locking files.
    ■ File naming conventions are handled in different ways. Different file systems have different limits on the maximum length of a file name, different treatment of case in file names, and different character sets allowed in names.
    ■ Data and their structures differ between file systems; for example, some file systems track two timestamps while others track three (the time the file was last accessed, last modified, and created). Even when both file systems track two timestamps, the units of measurement may differ. Another example is the units for measuring offsets within files: some file systems support 32-bit offsets, others 16- or 64-bit offsets.
    ■ Problems with mandatory locks. The CIFS server enforces mandatory locking: if one client has locked a region of a file, any write to that region by another client results in an error. NFS servers, however, do not support mandatory locking. One must therefore decide whether a lock will be enforced, in which case an error message is sent to the NFS client.

    Konstantin Pyanzin

    Main features of the NFS file system on the UNIX platform.

    Happiness is when our desires coincide with other people's capabilities.

    "Vremechko"

    Network file systems have played, are playing and will play an important role in information infrastructure. Despite the growing popularity of application servers, the file service remains a universal means of organizing collective access to information. Moreover, many application servers also act as file servers.

    The UNIX operating system is currently experiencing something of a renaissance, and it owes much of this rise in interest to the freely distributed Linux operating system. At the same time, various versions of Windows are used on desktop computers, primarily Windows 9x and Windows NT/2000, although freely distributed varieties of UNIX are gradually gaining ground there as well.

    For many organizations, hosting a network file service on UNIX computers is a very attractive solution, provided the service has sufficient performance and reliability. Given the numerous differences between the UNIX and Windows file systems, primarily in file naming schemes, access rights, locking, and the system calls used to access files, ensuring transparent access in a heterogeneous UNIX/Windows environment is especially important. In addition, UNIX file servers are often installed alongside existing Windows NT and NetWare servers.

    For the UNIX operating system, there are implementations of all more or less popular network file systems, including those used in Microsoft (SMB), NetWare (NCP), Macintosh (AFP) networks. Of course, UNIX networks have their own protocols, most notably NFS and DFS. Keep in mind that any UNIX server can simultaneously provide NFS and SMB services (as well as NCP and AFP) and thus provide additional flexibility when creating a network infrastructure.

    Despite the variety of UNIX network file systems, the undisputed leaders are NFS (Network File System) and SMB (Server Message Block). This article will discuss the capabilities of NFS. In one of the upcoming issues we plan to consider the characteristics of SMB on the UNIX platform and, first of all, the Samba product, which has proven itself well in UNIX/Windows networks.

    NFS VERSIONS

    The first implementation of the NFS network file system was developed by Sun Microsystems back in 1985. Since then, NFS has become widespread in the UNIX world, with installations numbering in the tens of millions. In addition to UNIX, the NFS system as a server platform has found application in the VMS, MVS and even Windows operating systems.

    NFS is the native file system for UNIX and follows the logic of UNIX file operations like no other. This applies to file namespace and permissions. Moreover, NFS support is natively built into the kernel of all popular versions of UNIX-like operating systems.

    Currently, NFS is represented by the second and third versions (the first version of NFS has never appeared on the market). Despite a number of limitations, NFS v2 is very popular; it is part of the freely distributed UNIX (in particular, Linux), as well as some commercial UNIX.

    The third version of NFS was developed in the mid-90s by joint efforts of Sun, IBM, Digital and other companies to improve the performance, security and ease of administration of the network file system. NFS v3 is backward compatible with the previous NFS specification, meaning that an NFS v3 server can serve not only NFS v3 clients, but also NFS v2 clients.

    Despite its fairly long presence on the market, NFS v3 is still inferior to NFS v2 in terms of the number of installations. Based on these considerations, we will first focus on the main characteristics of NFS v2, and then get acquainted with the innovations in the third version of NFS.

    Please be aware that specific implementations of the same version of NFS may differ slightly from each other. The differences concern primarily the set of daemons, their names, and the location and names of the NFS configuration files. In addition, NFS implementations depend on the capabilities and features of the particular UNIX itself. For example, NFS v2 supports ACLs, but only on flavors of UNIX that have such support built into the kernel. Therefore, when describing NFS, we will consider the most general case.

    NFS V2 PROTOCOLS

    Figure 1 shows the NFS v2 network model according to the reference OSI model. Unlike most TCP/IP network services, NFS explicitly uses presentation and session protocols. NFS works based on the concept of remote procedure calls (RPC). According to this concept, when a remote resource (such as a file) is accessed, a program on the local computer makes a normal system call (say, a call to the file open function), but the procedure is actually executed remotely on the resource server. In this case, the user process is not able to determine whether the call is being made locally or remotely. Having established that the process is accessing a resource on a remote computer acting as a server, the kernel or a special daemon of the system packs the arguments of the procedure along with its identifier into a network packet, opens a communication session with the server and forwards this packet to it. The server unpacks the received packet, determines the requested procedure and arguments, and then executes the procedure. Next, the server sends the procedure return code to the client, which passes it on to the user process. Thus, RPC fully complies with the session layer of the OSI model.

    A fair question arises: why does the NFS network model need a special presentation-level protocol? The point is that Sun wisely counted on the use of NFS in heterogeneous networks, where computers have different system architectures, including different byte ordering in a machine word, different floating-point representations, incompatible structure alignment boundaries, and so on. Since the RPC protocol involves sending procedure arguments, i.e. structured data, a presentation-level protocol is an urgent need in a heterogeneous environment. This is the eXternal Data Representation (XDR) protocol. It describes the so-called canonical form of data representation, which is independent of the processor architecture. When transmitting RPC packets, the client translates local data into the canonical form, and the server performs the reverse operation. It should be kept in mind that the canonical form of XDR corresponds to the data representation adopted for the SPARC and Motorola processor families. On servers built around these processors, this allows a certain (though most likely microscopic) performance advantage over competitors in cases of intensive access to the file server.

    In NFS v2, UDP was chosen as the transport protocol. The developers explain this by the fact that the RPC session lasts a short period of time. Moreover, from the point of view of executing remote procedures, each RPC packet is self-contained, that is, each packet carries complete information about what needs to be performed on the server, or about the results of the procedure. RPC services are typically connectionless, meaning the server does not store information about what client requests have been processed in the past, such as where in a file the client last read data. For a network file system, this is a definite advantage in terms of reliability, since the client can continue file operations immediately after the server is rebooted. But this scheme is fraught with problems when writing and locking files, and in order to get around them, NFS developers were forced to use various workarounds (using UDP gives rise to another set of specific problems, but we will touch on them later).

    An important difference between the RPC services included in NFS and other network services is that they do not use the inetd super daemon. Ordinary network services, like telnet or rlogin, are usually not launched as daemons at system startup, although this is not prohibited. Most often they rely on the so-called super daemon inetd, which "listens" on the software ports of the TCP and UDP protocols. The services are specified in the super daemon's configuration file (usually /etc/inetd.conf). When a request arrives on a service's port, inetd launches the corresponding network service (for example, in.rlogind), which processes the request.

    RPC services do not use the inetd super daemon because, as noted, an RPC session lasts only a very short time, in fact only for the duration of a single request. That is, for each request inetd would be forced to launch a new child process of the RPC service, which is very expensive in UNIX. For similar reasons, an RPC process cannot spawn new processes and cannot serve multiple requests in parallel. Therefore, to improve performance, RPC services are run as several daemon instances working simultaneously. The number of instances of a particular daemon, however, is not directly related to the number of clients. Even a single daemon can serve many clients, but it can process only one request at a time; the rest are queued.

    Another important difference between RPC services and regular network services is that they do not use predefined UDP software ports. Instead, a so-called port mapping system is used. To support it, a special portmap daemon is initialized when the system boots. As part of the port translation system, each RPC service is assigned a program number, version number, procedure number, and protocol (UDP or TCP). The program number uniquely identifies a specific RPC service. The relationship between RPC service names and program numbers can be traced based on the contents of the /etc/rpc file. Each RPC program supports many procedures, which are identified by their procedure numbers. The procedure numbers can be found in the corresponding header files: for example, for the NFS service they are specified in the file /usr/include/nfs/nfs.h.

    In particular, the NFS service has program number 100003 and includes procedures such as "open file", "read block", "create file", and so on. When remote procedures are called, the RPC packet carries the service's program number, the procedure number, and the version number along with the procedure arguments. The version number identifies the capabilities of the service: the developers constantly improve the NFS service, and each new version is fully backward compatible with the previous ones.
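
    For example, the correspondence between the service name and its program number can be checked directly; on a typical system the command below prints a single line similar to "nfs 100003 nfsprog", though the alias column may differ:

    # Look up the RPC program number assigned to the NFS service
    grep -w nfs /etc/rpc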

    The operating principle of the portmap translator is quite simple. When any RPC service is initialized (in particular, at the time the OS boots), it is registered using the portmap daemon. When launched on a server, the RPC service looks for an unoccupied software port, reserves it for itself, and reports the port number to the portmap daemon. In order to communicate with the server, the RPC client must first contact the server's portmap and ask it which software port is occupied by a particular RPC service on the server. Only then can the client directly contact the service. In some cases, the client communicates with the desired service indirectly, that is, it first contacts the portmap daemon, which requests the RPC service on behalf of the client. Unlike RPC services, the portmap port translator is always bound to a predefined port 111, so that the client communicates with the portmap in the standard way.

    COMPOSITION OF NFS V2

    In general, in addition to portmap, the NFS server includes the rpc.mountd, nfsd, rpc.lockd, rpc.statd daemons. An NFS client machine running on a UNIX platform must have the biod (optional), rpc.lockd, and rpc.statd daemons running.

    As mentioned earlier, NFS support is implemented at the kernel level in UNIX, so not all daemons are necessary, but they can significantly improve the performance of file operations and allow file write locking.

    The rpc.mountd daemon handles client requests to mount file systems. The mount service is implemented as a separate daemon, since the mount protocol is not part of NFS. This is because the mount operation is tightly tied to file naming syntax, and file naming principles differ between UNIX and, say, VMS.

    The nfsd daemon accepts and services NFS RPC requests. Typically, to improve performance, multiple instances of nfsd are run on the server.

    The rpc.lockd daemon, running on both the client and the server, is designed to lock files, while the rpc.statd daemon (also running on the server and client) keeps statistics on locks in case they need to be automatically restored if the NFS service crashes.

    The biod daemon running on the client is capable of read-ahead and lazy-write operations, which greatly improves performance. However, the presence of biod is not required for the client to work. To further improve performance, multiple biod daemons can be loaded on the client machine.

    Another daemon running on the server is responsible for authentication and printing services for DOS/Windows clients; on some systems it is named pcnfsd, on others in.pcnfsd.

    In addition, the NFS package includes various system utilities and diagnostic programs (showmount, rpcinfo, exportfs, nfsstat).
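    A few typical invocations of these utilities might look as follows (server is again a placeholder for the NFS server's name):

    showmount -e server     # ask the server for its list of exported resources
    rpcinfo -u server nfs   # check that the NFS service answers RPC calls over UDP
    nfsstat -c              # display RPC/NFS statistics collected on the client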

    EXPORT RULES

    The file systems and directories that clients can remotely mount on the NFS server must be explicitly specified. This procedure is called "exporting" resources in NFS. At the same time, an NFS server, unlike, say, an SMB server, does not broadcast a list of its exported resources. However, the client can request such a list from the server. On the server side, the rpc.mountd daemon is responsible for servicing mount requests.

    Exporting NFS file resources follows four basic rules.

    1. A file system can be exported either as a whole or in parts, such as individual directories and files. Remember that the largest exportable unit is a single file system: if on the server one file system (/usr/bin) is mounted in the hierarchy below another (/usr), then exporting /usr does not make /usr/bin available.
    2. Only local file resources can be exported; in other words, if a file system mounted on the server actually resides on another server, it cannot be re-exported.
    3. You cannot export subdirectories of an already exported file system unless they are separate file systems.
    4. You cannot export the parent directories of a directory that has already been exported, unless the parent directory is an independent file system.

    Any violation of these rules will result in an error in NFS operation.

    The table of exported resources is kept in the /etc/exports file. Unfortunately, the syntax of this file differs between UNIX flavors, so we will use the SunOS/Solaris variant as an example. There, /etc/exports consists of text lines of the form:

    directory -option[,option]...

    Some of the most popular options are listed in Table 1. In fact, the options describe the access rights of clients to the exported resources. It is important to remember that the permissions listed during export in no way override the permissions that apply directly to the file system. For example, if the file system is exported writable and a particular file has a read-only attribute, then it will not be possible to change it. Thus, when exporting, access rights act as an additional filter. Moreover, if, say, a file system is exported with the ro (read only) option, then the client has the right to mount it with the rw (read/write) option, but attempting to write will result in an error message.

    The access option allows you to specify hosts with the right to mount a resource. Accordingly, no other host, except those mentioned in it, has the ability to mount, and therefore carry out operations on the resource.

    The list of hosts that can write information is specified using the rw option. If the rw option does not specify a list of hosts, then any host has the right to write.

    The root option lets you list the hosts whose local root superuser is granted the server's root rights to the exported resource. Otherwise, even if a host is given rw rights, its root user is treated as the user nobody (uid=-2), i.e., a user with minimal access rights. This applies only to access rights to the remote resource and does not affect access rights to the client's own local resources.
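    To illustrate, a hypothetical /etc/exports fragment in the SunOS-style syntax described here might look as follows (alpha, beta and gamma are client host names):

    /export/home   -rw=alpha:beta,root=alpha,access=alpha:beta:gamma
    /export/pub    -ro,access=alpha:beta:gamma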

    The anon and secure options will be discussed when describing the NFS authentication scheme.

    MOUNTING RULES

    While on the server an exported resource may be either a whole file system or an individual directory, on the client it always appears as a file system. Since NFS support is built into the UNIX kernel, NFS file systems are mounted with the standard mount utility (no separate daemon is needed for mounting); you only have to specify that the file system being mounted is of type NFS. Another way to mount is via the /etc/fstab file (/etc/filesystems on some versions of UNIX); in this case remote NFS file systems, like local ones, are mounted at OS boot time. Mount points can be arbitrary, including points inside other NFS file systems, i.e. NFS systems can be “stacked” on top of one another.
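    For example, the directory exported above could be mounted on a client either manually or via /etc/fstab, roughly as follows (server is a placeholder; the exact option spelling varies slightly between UNIX flavors):

    mount -t nfs server:/export/home /home      # Linux
    mount -F nfs server:/export/home /home      # Solaris

    # a corresponding /etc/fstab entry on Linux
    server:/export/home  /home  nfs  rw,hard,intr  0  0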

    Basic NFS mount options are listed in Table 2.

    The bg option performs the mount in the background, so that other commands, including further mounts, can proceed while the attempt is being retried.

    The hard/soft pair of options deserves special attention. With a “hard” mount, the client will try to reach the server at all costs. If the server is down, every process accessing the file system hangs, waiting for its RPC requests to complete; from the point of view of user processes, the file system looks like a very slow local disk. When the server comes back up, the NFS service continues to work as if nothing had happened. The intr option allows a “hard” mount to be interrupted with an interrupt signal (INT).

    With a “soft” mount, the NFS client makes a limited number of attempts to contact the server, as determined by the retrans and timeo options (some systems also support a special retry option). If the server does not respond, the system reports an error and stops retrying. In terms of file-operation logic, a “soft” mount emulates a local disk failure when the server fails. If the retrans (retry) option is not specified, the number of retries is limited to the default value for the given UNIX system. The retrans and timeo options apply not only to mounting, but to any RPC operation performed on the NFS file system: if the client is in the middle of a write operation and a failure occurs on the network or on the server, it will retry its requests.

    The question of which mode, “soft” or “hard,” is better cannot be answered unequivocally. If the data on the server must be consistent when it temporarily fails, then a “hard” mount is preferable. This mode is also indispensable in cases where the mounted file systems contain programs and files that are vital for the client’s operation, in particular for diskless machines. In other cases, especially when it comes to read-only systems, soft mount mode seems to be preferable.
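    As an illustration, the two modes might be requested like this (the paths are hypothetical; on most systems timeo is specified in tenths of a second):

    # "soft" mount: give up after 3 retries with a 3-second timeout each
    mount -t nfs -o soft,retrans=3,timeo=30,bg server:/pub /mnt/pub

    # "hard" mount that can still be interrupted from the keyboard
    mount -t nfs -o hard,intr server:/usr/local /usr/local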

    AUTHENTICATION AND SECURITY

    As noted, each RPC packet is self-contained. Moreover, NFS is essentially stateless: it does not keep track of which requests clients have made previously, nor does it track whether its clients are even up. Therefore, in systems that use remote procedure calls, security becomes an extremely pressing problem.

    In NFS, authentication is performed exclusively at the stage of mounting the file system and only based on the domain name (or IP address) of the client machine. That is, if an NFS client (here we mean a computer, not a computer user) contacts the server with a mount request, the server determines access rights using the /etc/exports table, and the client is identified by the name (IP address) of the computer. If the client is allowed to perform certain operations on the exported resource, then it is told a certain “magic number” (magic cookie). In the future, the client must include this number in every RPC request to prove its credentials.

    This, in fact, is the entire simple set of client authentication tools; users are not authenticated in any way. However, each RPC request contains the uid of the user who initiated the request and a list of group ids, gid, to which the user belongs. But these identifiers are not used for authentication, but to determine the access rights of a specific user to files and directories.

    Please note that uid and gid are determined on the client side, not the server side. Administrators are therefore faced with the problem of keeping the contents of /etc/passwd (and /etc/group) consistent between clients and NFS servers, so that user Vasya on the server is not given the rights of user Petya. For large networks this presents serious difficulties. To keep the user database consistent, as well as system files such as /etc/hosts, /etc/rpc, /etc/services, /etc/protocols, /etc/aliases, etc., you can use the Network Information Service (NIS), developed by Sun back in 1985 and included in most versions of UNIX (its more advanced successor, NIS+, is not widely used). NIS is an information service, loosely reminiscent of the Windows NT directory service, that allows system files to be stored and processed centrally. Incidentally, NIS is built on the same principles as NFS; in particular, it uses the RPC and XDR protocols.

    Another important feature of NFS is that each RPC request carries the list of gid groups the user belongs to. To limit the size of the RPC packet, most NFS implementations restrict the number of groups to 8 or 16. If a user is a member of more groups, this can lead to errors in determining permissions on the server. This problem is very relevant for corporate file servers. A radical solution is to use ACLs, but, unfortunately, not all UNIX flavors support them.

    The authentication system adopted by NFS is very poor and does not provide reliable protection. Anyone who has dealt with NFS knows how easy it is to bypass its security. To do this, it is not even necessary to use methods of forging IP addresses (IP-spoofing) or names (DNS-spoofing). An attacker just needs to intercept the “magic number”, and in the future he can carry out actions on behalf of the client. In addition, the "magic number" does not change until the next server reboot.

    On numerous Internet servers you can find other, including very exotic, methods of hacking NFS. The number of discovered “holes” is in the thousands. Therefore, NFS v.2 is recommended to be used only within secure networks.

    Based on these considerations, Sun developed the SecureRPC protocol, which uses both asymmetric and symmetric encryption keys. Here, cryptographic methods are used to authenticate not only hosts but also users; the data itself, however, is not encrypted. Unfortunately, because of US government export restrictions, not all UNIX versions ship with SecureRPC support, so we will not dwell on the capabilities of this protocol. If your version of UNIX does support SecureRPC, then Hal Stern's book "Managing NFS and NIS" (O'Reilly & Associates) will provide invaluable help in setting it up.

    Another problem concerns NFS clients on the MS-DOS and Windows 3.x/9x platforms. These systems are single-user, and it is impossible to identify the user by normal NFS means. To identify DOS/Windows users, the pcnfsd daemon is run on the server. When NFS disks are connected (mounted) on the client machine, the user is prompted for a name and password, which allows not only identification but also authentication of users.

    Although Windows NT is a multi-user operating system, its user database and user identification scheme are incompatible with those of UNIX. Therefore, NFS clients running Windows NT are also forced to rely on pcnfsd.

    In addition to user authentication, pcnfsd enables printing on UNIX from DOS/Windows clients. Admittedly, Windows NT has included the LPR.EXE program from the start, which also allows printing to UNIX servers.

    To access the file service and NFS service on DOS/Windows machines, you need to install special client software, and the prices for these products are quite steep.

    Let us return, however, to the NFS file export options (see Table 1). The anon option determines the user identifier uid used when a DOS/Windows user fails to authenticate (enters the wrong password) or when the user of a host connecting via SecureRPC fails authentication. By default, anon is uid=-2.

    The secure option is used when the SecureRPC protocol is used.

    ARCHITECTURAL FEATURES OF NFS V2

    NFS file systems must obey two conditions (by the way, the same requirements apply not only to NFS, but also to other network file systems).

    1. From the point of view of client user programs, an NFS file system looks as if it resides on a local disk. Programs have no way to distinguish NFS files from local ones.
    2. The NFS client cannot tell which platform the server actually runs on. It could be UNIX, MVS, or even Windows NT. Differences in server architecture affect only the performance of specific operations, not the capabilities of NFS: to the client, an NFS file structure looks just like a local file system.

    The first level of transparency is achieved through UNIX's use of the Virtual File System (VFS). VFS is responsible for interacting not only with NFS, but also with local systems like UFS, ext2, VxFS, etc.

    The second level of transparency is provided through the use of so-called virtual nodes (vnodes), the structure of which can be correlated with inodes in UNIX file systems.

    Operations on NFS file systems are VFS operations, while interactions with individual files and directories are determined by vnode operations. The RPC protocol from NFS v2 describes 16 procedures associated with operations not only on files and directories, but also on their attributes. It is important to understand that RPC calls and the vnode interface are different concepts. vnode interfaces define OS services for accessing file systems, whether they are local or remote. RPC from NFS is a specific implementation of one of the vnode interfaces.
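    For reference, the NFS v2 procedures defined in RFC 1094 are listed below; not counting the two procedures that were never put to use (ROOT and WRITECACHE), these are exactly the 16 operations mentioned above:

    NULL, GETATTR, SETATTR, LOOKUP, READLINK, READ, WRITE, CREATE,
    REMOVE, RENAME, LINK, SYMLINK, MKDIR, RMDIR, READDIR, STATFS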

    Read/write operations are cached on the client side, i.e. the client caches the contents of files and directories. Typically, the NFS cache buffer size is 8 KB. If biod daemons are running on the client, then reading is done ahead, and writing is done in lazy mode. For example, if a user process writes information, the data is accumulated in a cache buffer and only then is it sent, usually in a single RPC packet. When a write operation is performed, the kernel immediately returns control to the process, and RPC request forwarding functions are transferred to biod. If the biod daemons are not running and the kernel does not support multithreaded RPC processing, then the kernel must handle the forwarding of RPC packets in single-threaded mode, and the user process goes into a state of waiting for the forwarding to complete. But in this case, the NFS cache is still used.

    In addition to the contents of NFS files and directories, file and directory attributes are cached on the client side, and the attribute cache is refreshed periodically (usually every few seconds). This is because attribute values can be used to judge the state of a file or directory. Let us illustrate this approach with an example. When a user reads from a file, the file's contents are placed in the NFS cache, while its attributes (creation/modification time, size, etc.) go into the attribute cache. If at this moment another client writes to the same file, the contents of the caches on different clients may diverge. However, since the first client's attribute cache is refreshed every few seconds, it can detect that the attributes have changed (in this case, the modification time of the file), and so the client refreshes the file's content cache as well (this happens automatically).

    To service client requests, nfsd daemons must be running on the server. In this case, the daemons cache information when reading from server disks. All daemons serve the same queue of client requests, which allows for optimal use of processor resources.

    Unfortunately, determining the optimal number of biod and nfsd daemons is very difficult. On the one hand, the more daemons are running, the more requests can be processed simultaneously; on the other hand, increasing the number of daemons may hurt system performance because of the growing overhead of process switching. Fine-tuning NFS is a tedious procedure that requires taking into account not only the number of clients and user processes, but also characteristics such as the context-switching time (i.e., features of the processor architecture), the amount of RAM, the system load, and so on. It is better to determine such settings experimentally, although in most cases the standard values will do (usually eight nfsd daemons are run on the server and four biod daemons on each client).
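    On Linux servers using the nfs-kernel-server package, for instance, the number of nfsd instances is normally set via the RPCNFSDCOUNT variable and can also be changed on the fly with the rpc.nfsd utility; the values below are a sketch, not a tuning recommendation:

    # /etc/default/nfs-kernel-server
    RPCNFSDCOUNT=8

    # increase the number of server threads without a restart
    rpc.nfsd 16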

    Figure 2. Write operation in NFS v2.

    A very important feature of NFS v2 is that writes are not cached on the server side (see Figure 2). This was done to ensure high reliability of the NFS service and makes it possible to guarantee data integrity after a reboot in the event of a server failure. The lack of write caching is the biggest problem of NFS v2: in write operations NFS is significantly inferior to competing technologies, although in read operations it loses little to them. The only way to combat poor write performance is to use disk subsystems with a non-volatile built-in cache, as in rather expensive RAID arrays.

    When working in distributed and global networks, NFS v2 has another drawback due to the choice of UDP as the transport protocol for the service. As you know, UDP does not guarantee the delivery of packets; in addition, the order in which packets are received may not correspond to the order in which they are sent.

    This can lead to two unpleasant consequences: loss of a packet and a long delay in processing it. Imagine that a client is reading a large file. In this case the server has to send several packets to fill the client's cache buffer. If one of them is lost, the client has to repeat the whole request, and the server has to generate the whole series of response packets again, and so on.

    The situation of delay in processing RPC requests due to, say, heavy server load or network problems is also quite unpleasant. If the specified time limit is exceeded, the client will assume that the packet is lost and will try to repeat the request. For many NFS operations this is not a problem, since even the write operation can be repeated by the server. But what about operations like "remove directory" or "rename file"? Fortunately, most NFS implementations support server-side caching of duplicate requests. If the server receives a repeated request for any operation within a short period of time, then such a request is ignored.

    The RPC system does not track connection state, which creates problems when multiple clients are accessing the same file at the same time. There are two difficulties here:

    • how to lock a file, in particular when writing to it;
    • how to guarantee the integrity of locks in the event of a crash and reboot of the NFS server or client?

    To do this, NFS uses two special daemons: rpc.lockd is responsible for locking files, and rpc.statd is responsible for monitoring the state of locks (see Figure 3). These daemons run on both the client and server sides. The rpc.lockd and rpc.statd daemons are assigned two special directories (sm and sm.bak), where locking information is stored.

    A unique and quite convenient additional service, automounter, allows you to automatically mount file systems when user processes access them. Subsequently, automounter periodically (once every five minutes by default) tries to unmount the system. If it is busy (for example, a file is open), then the service continues to work as usual. If the file system is no longer accessed, it is automatically unmounted. The automounter function is implemented by several programs, amd and autofs being particularly popular among them.
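    As a sketch, an indirect autofs map for home directories might look like this (the host name server and the map keys are hypothetical; the 300-second timeout corresponds to the default five-minute unmount interval):

    # /etc/auto.master
    /home    /etc/auto.home    --timeout=300

    # /etc/auto.home
    vasya    -rw,hard,intr    server:/export/home/vasya
    petya    -rw,hard,intr    server:/export/home/petya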

    NFS V3 FEATURES

    The third version of NFS is completely backwards compatible with the second version, i.e. the NFS v3 server “understands” NFS v2 and NFS v3 clients. Likewise, an NFS v3 client can access an NFS v2 server.

    An important innovation in NFS v3 is support for the TCP transport protocol. UDP is great for local networks, but is not suitable for slow and not always reliable global communication lines. In NFS v3, all client traffic is multiplexed into a single TCP connection.

    In NFS v3, the cache buffer size is increased to 64 KB, which has a beneficial effect on performance, especially given the wide use of high-speed network technologies such as Fast Ethernet, Gigabit Ethernet, and ATM. In addition, NFS v3 allows information cached on the client to be stored not only in RAM but also on the client's local disk (in fairness, some NFS v2 implementations provide this feature as well). This technology is known as CacheFS.
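    On clients that support the third version, these features are typically requested via mount options, for example (server and the transfer sizes are illustrative; option names differ slightly between UNIX flavors):

    mount -t nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768 server:/data /mnt/data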

    Figure 4. Write operation in NFS v3.

    But perhaps the even more important innovation in NFS v3 is the radical improvement in write performance. Written data is now cached on the server side as well, while the actual write to disk is requested and confirmed using a special commit request (see Figure 4). This technique is called safe asynchronous writes. After the data has been sent to the server's cache, the client sends a commit request, which initiates the write to the server's disk. In turn, after writing the data to disk, the server sends the client confirmation of successful completion.
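    Schematically, following the protocol specification (RFC 1813), writing a large file in NFS v3 looks roughly like this (the offsets and sizes are, of course, illustrative):

    WRITE(file, offset=0,   count=32K, UNSTABLE)  ->  reply: 32K written, verifier V
    WRITE(file, offset=32K, count=32K, UNSTABLE)  ->  reply: 32K written, verifier V
    COMMIT(file, offset=0, count=64K)             ->  reply: verifier V
    (if the verifier returned by COMMIT differs from V, the server has rebooted
    in the meantime and the client must resend the uncommitted data)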

    New in NFS v3 is support for 64-bit file systems and improved support for ACLs.

    As for the future, Sun is now promoting WebNFS technology, which allows file systems to be accessed from any Web browser or from applications written in Java, with no need to install any client software. According to Sun, WebNFS provides a performance gain of three to five times over FTP or HTTP.

    CONCLUSION

    Knowing the operating principles of the NFS protocols, an administrator can configure the service optimally. The NFS network file system is ideal for UNIX networks, since it ships with almost every version of this OS and its support is implemented at the UNIX kernel level. As Linux gradually gains ground on the desktop, NFS has a chance to win recognition there as well. Unfortunately, using NFS on Windows client computers creates certain problems associated with the need to install specialized and rather expensive client software; in such networks, the use of SMB services, in particular the Samba software, looks preferable. However, we will return to SMB products for UNIX in one of the upcoming LAN issues.