Detailed explanation of NFS principle

1, NFS introduction

1) What is NFS

NFS's main function is to share files and directories between different systems over the network. An NFS server allows NFS clients to mount its shared directories locally; from the client machine's point of view, the directory shared by the NFS server looks just like one of its own disk partitions or directories. The client may generally mount the share at any local path, but for ease of management it is best to use the same directory name as the server.

NFS is generally used to store static data such as shared videos and pictures.

What is NFS

NFS shares a directory over the network so that other servers can mount it and access the data inside (generally static data such as videos and pictures).

To put it simply, it is like sharing a folder in Windows and having other hosts map the share to a local drive. Next we will cover: the NFS principle (how sharing works), how the server exports a share (how to share), and how the client mounts it (how to map the network drive).

Mounting structure diagram

2) Introduction to NFS mounting principle

As shown in the figure above, after the NFS server exports the shared directory /data, any NFS client with access rights can mount it locally and see all the data under the server's /data, because the local /data is in fact the server's /data. If the server exports the share read-only, the client can only read; if it exports it read-write, the client can read and write. After mounting, the NFS client can view the mount with the disk information command: #df -h

NFS transfers data between server and client over the network, which requires network ports. Which ports does the NFS server use? In fact, the NFS server picks its data-transfer ports at random. So how does the NFS client find out which ports the server is using? Through the Remote Procedure Call (RPC) protocol/service: RPC manages the NFS ports centrally. Client and server first negotiate via RPC which ports NFS is using, and then transfer data over those ports (below 1024).

PS: So RPC manages the server's NFS port allocation. When the client wants to transfer data, the client's RPC first asks the server's RPC for the port, a connection is then established on that port, and data is transferred.

rpc and nfs

RPC (portmap) is the service that manages NFS ports centrally; its fixed external port is 111. The NFS server must start RPC first and NFS second, so that NFS can register its port information with RPC. The client's RPC queries the server's RPC to obtain the server's NFS port information, and the actual data transfer then uses those ports (because NFS ports are random).

How RPC and NFS communicate

Because NFS provides many functions and each function uses a different port, NFS cannot use fixed ports. RPC records the NFS port information, so server and client exchange port information through RPC.

How do RPC and NFS communicate with each other?

After NFS starts, it picks some ports at random and registers them with RPC, which records them. RPC listens on port 111 and waits for client RPC requests; when a request arrives, the server's RPC tells the client the recorded NFS port information.

Tip: start the RPC service (i.e. the portmap service, same below) before starting the NFS server, otherwise the NFS server cannot register with the RPC service. In addition, if the RPC service is restarted, all previously registered NFS port data is lost, so the NFS programs managed by RPC must also be restarted to re-register with RPC. Special note: after modifying the NFS configuration file, it is usually not necessary to restart NFS; just run /etc/init.d/nfs reload or exportfs -rv to make the modified /etc/exports take effect.

PS: note the start order here. Because NFS must register its port information with RPC, RPC must start before NFS. Compare it to stacking one hand on the other: the NFS palm (left) must rest on the RPC palm (right), so the normal order is to lay down RPC first and NFS on top. If RPC is restarted, it is as if the bottom palm is pulled out and placed back on top: RPC now sits above NFS, which cannot work, so NFS must be restarted so that it ends up on top of RPC again. If only the NFS configuration changes, a reload is enough.
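The start-order rule above can be sketched as a quick check. This is an illustrative snippet, not the init scripts' actual logic: it inspects a captured `netstat -lnt` snapshot (rather than a live system) to confirm that portmap's port 111 is listening before nfs is started.

```shell
# Sketch (bash): verify portmap (port 111) is listening before starting nfs.
# The snapshot below stands in for real `netstat -lnt` output.
netstat_snapshot='tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22  0.0.0.0:* LISTEN'

if printf '%s\n' "$netstat_snapshot" | grep -q ':111 '; then
  msg="portmap is listening; safe to start nfs"
else
  msg="start portmap first"
fi
echo "$msg"
```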

Communication process between client NFS and server NFS

1) The server starts the RPC service (portmap) and opens port 111
2) The server starts the NFS service and registers its port information with RPC
3) The client starts its RPC (portmap) service and asks the server's RPC (portmap) service for the server's NFS port
4) The server's RPC (portmap) service returns the NFS port information to the client
5) The client establishes an NFS connection to the server on the obtained port and transfers data
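Steps 3 and 4 above can be seen with `rpcinfo -p <server>`. A minimal sketch of how a client could extract the server's NFS port from that output; the output is captured in a variable here so the parsing works offline:

```shell
# Sample of `rpcinfo -p` output (program, version, protocol, port, service)
rpcinfo_output='100000 2 tcp 111 portmapper
100003 2 udp 2049 nfs
100005 1 tcp 803 mountd'

# Field 4 is the port, field 5 the registered service name.
nfs_port=$(printf '%s\n' "$rpcinfo_output" | awk '$5 == "nfs" {print $4; exit}')
echo "NFS registered on port $nfs_port"
```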


The principle and structure of NFS are actually quite simple: NFS is a network-shared directory, exported by the server and used by the client. The mounting process is the five steps above, and the reason for this process is that NFS ports are assigned randomly, so NFS must register its port information with RPC.

2, NFS deployment


Install the portmap and NFS packages directly; see the server-side steps below for details.

Server side

1) View system information

#uname -r view system kernel version

[root@CT5_6-32-220-NFS01 ~]# cat /etc/redhat-release

CentOS release 5.6 (Final)

[root@CT5_6-32-220-NFS01 ~]# uname  -r  


It is a good habit to record the system version and kernel parameters first. The same software may deploy differently on different versions and kernels, so do not cause avoidable errors. Before migrating an application, register the environment completely, and match the relevant parameters of the new environment to the old one to avoid mistakes.

#uname -a view operating system information

[root@CT56-32-220-NFS01 ~]# uname -a

Linux CT56-32-220-NFS01 2.6.18-238.el5 #1 SMP Thu Jan 13 16:24:47 EST
2011 i686 i686 i386 GNU/Linux

2) NFS software installation

To deploy the NFS service, the following two packages must be installed: nfs-utils (the NFS main program) and portmap (the RPC main program).

Both NFS server side and client side need to install these two software.

NFS package

1. nfs-utils: the main NFS package, containing the rpc.nfsd and rpc.mountd daemons, among others

2. portmap: the RPC main program. NFS can be regarded as a subprogram of RPC

2.1) view NFS package

[root@CT5_6-32-220-NFS01 ~]# rpm  -qa | egrep "nfs|portmap"   ####These are installed by default on this system.




If they are not installed, install them with: yum install nfs-utils portmap

3) NFS boot

Because NFS and its auxiliary programs are based on the RPC protocol (listening for requests on RPC port 111), first make sure the portmap service is running. Both the client and the server must start the portmap service; the client does not need to start the NFS service, but the server does.

portmap start command:

#/etc/init.d/portmap start

[root@CT5_6-32-220-NFS01 ~]# /etc/init.d/portmap start

Starting portmap: [ OK  ]  ##The service has started normally

#netstat -lnt to view the listening ports on the system

[root@CT56-32-220-NFS01 ~]# netstat  -lnt

Active Internet connections (only servers)

Proto Recv-Q Send-Q Local Address               Foreign Address             State      

tcp       0      0       *                   LISTEN      ###You can see that there is an additional port 111, which is the listening port of RPC.

tcp       0      0        *                   LISTEN      

tcp       0      0       *                   LISTEN      

tcp       0      0      *                   LISTEN    

Tip: if the portmap service is not started, rpcinfo -p localhost will report an error.

[root@CT56-32-220-NFS01 ~]# rpcinfo  -p  ##Normal display information

program vers proto   port

100000    2   tcp   111  portmapper

100000    2   udp   111  portmapper

100024    1   udp   820  status

[root@CT56-32-220-NFS01 ~]# rpcinfo -p  ##Error message

rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused

rpcinfo is used to view the port information registered with RPC. After the NFS service starts, it registers its information with RPC, and you can then see the registrations.

NFS start command:

#/etc/init.d/nfs start

#/etc/init.d/nfs status

[root@CT56-32-220-NFS01 ~]# /etc/init.d/nfs status  ####Checking the status of NFS shows three programs, because NFS also includes mountd (mount management) and rquotad (quota management).

rpc.mountd is stopped   --->manages whether a client may log in

nfsd is stopped   --->the main program; manages the permissions a client can obtain

rpc.rquotad is stopped

Note: from the NFS service startup information we can see that NFS starts the rpc.mountd, nfsd, rpc.rquotad and rpc.idmapd processes by default. An NFS server needs at least two daemons: one manages whether a client may log in, the other manages the permissions a client obtains. If quota management is also needed, NFS loads the rpc.rquotad program as well.

[root@CT5_6-32-220-NFS01 ~]# /etc/init.d/nfs status

rpc.mountd (pid 12920) is running...

nfsd (pid 12917 12916 12915 12914 12913 12912 12911 12910) is running...

rpc.rquotad (pid 12892) is running...


rpc.nfsd: the main function of this daemon is to manage whether a client may log in to the host, including checking the ID of the logging-in user.


rpc.mountd: the main function of this daemon is to manage the NFS file system. After a client successfully logs in to the host through rpc.nfsd, it must still pass the file-permission check before it can use the files provided by the NFS server. rpc.mountd reads the NFS configuration file /etc/exports and compares the client's permissions; only after passing this gate does the client obtain the right to use NFS files. This is why setting NFS permissions in /etc/exports alone is not enough.

Configure NFS boot

#chkconfig nfs on

#chkconfig portmap on

(the client only needs portmap to start itself)


[root@CT5_6-32-220-NFS01 ~]# chkconfig portmap on

[root@CT5_6-32-220-NFS01 ~]# chkconfig nfs on  

[root@CT5_6-32-220-NFS01 ~]# chkconfig --list  | egrep "nfs|port"  ####Note: no spaces around | in the egrep pattern, or the query will fail.

nfs            0:off   1:off   2:on   3:on    4:on    5:on   6:off

nfslock        0:off   1:off  2:off   3:on    4:on   5:on    6:off

portmap        0:off   1:off   2:on   3:on    4:on    5:on   6:off

3, Configure NFS services

Path to NFS configuration file

/etc/exports is the NFS configuration file; by default it is empty.

Format: shared_directory client_address_1(param1,param2,ro_or_rw) client_address_2(param1,param2)

Description of parameter options:

Shared directory: a local directory that we want to share with other hosts on the network. For example, to share /tmp/data, write /tmp/data directly in this field.

Client address 1(param1,param2): the client address can be a network segment or a single host. Parameters include, for example, read-write permission rw, synchronous update sync, squashing all accessing accounts all_squash, and the anonymous account mapping anonuid=uid, anongid=gid, etc.
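A minimal /etc/exports sketch matching the format above; the directories, network segment and options are placeholders, not taken from the author's setup:

```
# shared directory    client(options)  -- no space before the bracket
/tmp/data   192.168.1.0/24(rw,sync)
/video      192.168.1.0/24(ro)
/backup     192.168.1.11(rw,sync,all_squash,anonuid=65534,anongid=65534)
```

After editing, run /etc/init.d/nfs reload (or exportfs -rv) as noted earlier so the changes take effect.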

Description of client address options:

Common configuration examples of production environment:

NFS permission settings

NFS configuration permission settings, that is, the parameter set in brackets () in the configuration format of the / etc/exports file.


1. You can also refer to man exports for a description of the export parameters.

2. After NFS is configured, view the effective export parameters with cat /var/lib/nfs/etab. The file /var/lib/nfs/rmtab shows which clients have mounted the NFS shared directory. Both files are important.

Server share configuration format:

1) Basic format: shared_directory ip/24(share options) -> note there is no space before the bracket

2) Share permission settings:

rw: read/write

sync: a write returns only after the data is actually written to disk

all_squash: squash all accessing users to the anonymous account

anonuid: uid of the anonymous (squashed) user

anongid: gid of the anonymous (squashed) group

What identity does the client access?

By default the client accesses the server as the user nfsnobody, with uid and gid 65534. The server's default export adds the all_squash parameter, and anonuid is 65534 (i.e. the nfsnobody user). Of course, if nfsnobody has a different uid on some system, access problems may occur, so it is best to create a dedicated access user and unify its uid and gid across hosts.
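A hedged sketch of the unified-uid approach suggested above; the account name nfsuser and the uid/gid 1207 are arbitrary examples, and the commands require root:

```
# On the server (and with the same uid/gid on every client):
groupadd -g 1207 nfsuser
useradd  -u 1207 -g 1207 -s /sbin/nologin nfsuser

# /etc/exports entry squashing every access to that account:
/atong 192.168.1.0/24(rw,sync,all_squash,anonuid=1207,anongid=1207)
```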

What has been mounted?

Two important files answer this question: /var/lib/nfs/etab and /var/lib/nfs/rmtab. They show which directories the server shares, how many clients use them, and the details of client mounts.

1. etab this file can see which directories are shared on the server, who can use them, and what parameters are set.

2. The rmtab file can view the mounting of the shared directory.
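As an illustration of what etab records, here is a sketch (bash) that splits an etab-style line into directory, client and options. The sample line only imitates the /var/lib/nfs/etab format; on a real server, read the file directly:

```shell
etab_line='/atong 192.168.1.0/24(rw,sync,wdelay,no_root_squash)'

shared_dir=${etab_line%% *}        # first field: exported directory
client_spec=${etab_line#* }        # rest: client(options)
client=${client_spec%%\(*}         # strip from the first "("
options=${client_spec#*\(}         # keep what follows "("
options=${options%\)}              # drop the trailing ")"

echo "dir=$shared_dir client=$client options=$options"
```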

4, NFS configuration instance

Instance 1. Share the / atong directory to the network segment

Server side operation:

1) Check and start portmap

[root@CT5_6-32-220-NFS01 /]# /etc/init.d/portmap status

portmap (pid 2506) is running...

[root@CT5_6-32-220-NFS01 /]# rpcinfo-p

-bash: rpcinfo-p: command not found

[root@CT5_6-32-220-NFS01 /]# rpcinfo -p  ###View the RPC registrations. There are many entries, but nfs, rquotad and mountd are visible, which shows NFS has registered its information.

program vers proto   port

100000    2   tcp   111  portmapper

100000    2   udp   111  portmapper

100024    1   udp   601  status

100024    1  tcp    604  status

100011    1   udp   773  rquotad

100011    2   udp   773  rquotad

100011    1   tcp   776  rquotad

100011    2   tcp   776  rquotad

100003    2   udp  2049  nfs

100003    3   udp  2049  nfs

100003    4  udp   2049  nfs

100005   1   tcp    803 mountd

100005    2   udp   800  mountd

100005    2   tcp   803  mountd

100005    3   udp   800  mountd

100005    3   tcp   803  mountd

2) View the running status of NFS

[root@CT5_6-32-220-NFS01 /]# /etc/init.d/nfs status

rpc.mountd (pid 12920) is running...

nfsd (pid 12917 12916 12915 12914 12913 12912 12911 12910) is running...

rpc.rquotad (pid 12892) is running...

3) Create directory

[root@CT5_6-32-220-NFS01 /]# mkdir atong

[root@CT5_6-32-220-NFS01 /]# ls -d atong


[root@CT5_6-32-220-NFS01 /]# ll -d atong

drwxr-xr-x 2 root root 4096 May 27 17:22 oldbo ####Note that the permission of the shared directory is that only root has write permission.

4) Configure / etc/exports(NFS configuration file)

Reload after modifying the configuration: /etc/init.d/nfs reload

[root@CT5_6-32-220-NFS01 /]# cat  /etc/exports



[root@CT5_6-32-220-NFS01 /]# /etc/init.d/nfs reload

exportfs: /etc/exports:1: unknown keyword "rw.sync"   ###reload reports an error; after the configuration file was corrected, it succeeded. Develop the habit of backing up configuration files before editing.

[root@CT5_6-32-220-NFS01 /]# vi /etc/exports


[root@CT5_6-32-220-NFS01 /]# /etc/init.d/nfs reload   ###reload succeeds.

At this point the NFS directory has been shared and the corresponding permissions have been set.

Client operation:

Our server has now configured the share and set the rw permission, but the rwxr-xr-x permissions on the server-side directory have not actually been opened up. This is very similar to Windows sharing: you need both the share permission and the local security permission on the directory. Now let the client mount it.

1) Check whether portmap starts normally

[root@CT56-32-220-NFS01 ~]# /etc/init.d/portmap status

portmap (pid 2725) is running...

2) View the shared information on the server side.

showmount -e 192.168.1.1 shows the shares provided by the server.

[root@CT56-32-220-NFS01 ~]# showmount  -e

Export list for

/atong  ---->See that there is already this share.
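A small sketch of how the `showmount -e` output above could be parsed to list just the exported directories; the server name 192.168.1.1 is a placeholder, and the output is captured in a variable so the parsing works offline:

```shell
showmount_out='Export list for 192.168.1.1:
/atong 192.168.1.0/24'

# Skip the header line; field 1 of each remaining line is the export.
exports=$(printf '%s\n' "$showmount_out" | awk 'NR > 1 {print $1}')
echo "$exports"
```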

3) Mount the directory shared by the server on the client.

#mount -t nfs server:/shared/dir /mnt (local directory) — we can create a new directory to mount onto.

[root@CT56-32-220-NFS01 ~]# mount -t nfs  /atong  

#The format of the mount command is as follows:

####mount -t type device localdir (local directory)

#In the command above, the device is the server's exported directory (server:/dir)

[root@CT56-32-220-NFS01 ~]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1655612  5708704  23% /    

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm
                       7765152   1655296  5709024  23% /atong

[root@CT56-32-220-NFS01 ~]# touch  /atong/test.txt

touch: cannot touch `/atong/test.txt': Permission denied   --->no permission yet, because the local rwx permission on the server side has not been opened.

[root@CT56-32-220-NFS01 ~]# ll /atong/

total 4

-rw-r--r-- 1 root root    0 May 28 08:14 test1

drwxr-xr-x 2 root root 4096 May 28 08:15 test-dir1

###A directory was created on the server side, and it takes a moment to appear on the client. Is there a solution for this?

4) Check whether the mounted and shared files are consistent.

#df to check the file system in our system.

[root@CT56-32-220-NFS01~]# df

Filesystem           1K-blocks      Used Available Use% Mounted on

/dev/sda3              7765136   1655612  5708704  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm
                       7765152   1655296  5709024  23% /atong

Check whether the content mounted in the client is the same as the directory on the server.

5) Perform write operations on the client.

Permission note: when we grant rw in /etc/exports, why can the client still not write? Because rw in the NFS configuration file only means that hosts on that network segment are allowed to write to the share; the write must also pass the server's local directory permissions. In other words, the client must clear two permission layers: the NFS configuration file (/etc/exports) and the permissions of the shared directory's files. The client writes files to the server as nfsnobody, whose uid is 65534.
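The two-layer check described above can be sketched in a few lines of bash. This is an illustration only, assuming the squashed client falls under the directory's "others" permission class:

```shell
# Layer 1: the export option; layer 2: the directory mode.
export_opt="rw"
dir_mode="755"               # rwxr-xr-x: no write bit for others

others=${dir_mode: -1}       # last octal digit = permissions for others
if [ "$export_opt" = "rw" ] && [ $((others & 2)) -ne 0 ]; then
  verdict="client can write"
else
  verdict="client cannot write"
fi
echo "$verdict"              # 755 fails the second layer
```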

Error prompt

[atong@LiWenTong ~]$ /etc/init.d/portmap status   --->portmap is not started

Networking not configured - exiting

Client NFS mount parameters

Like mapping a drive in Windows, the client can set many parameters when mounting NFS: no-execute, read/write permission, RPC retry behavior after disconnection, read/write block sizes, and so on. Generally, when the NFS server only serves ordinary data (images, html, css, js, video, etc.), there is no need for suid, exec and similar permissions; and since the shared directory contains no device files, there is no device (dev) to honor. So the client can add these options at mount time.

#mount -t nfs -o nosuid,noexec,nodev,rw  /local/mnt

You can use the mount parameter table:

In addition, some extra NFS mount parameters are available. If NFS is used in such an environment, it is recommended to add them so that when the NFS server goes offline for some reason, the NFS client keeps retrying in the background until the NFS server comes back online.

For some high concurrency situations, there are also some parameters that can be optimized:

The command format is as follows: mount -t nfs -o nosuid,noexec,nodev,rw,hard,intr,rsize=32768,wsize=32768  /local/dir

How can NFS clients mount best

1) noexec,nosuid,nodev: the share stores plain data, so there is no need for suid bits, execution, or device files.

2) hard,intr,bg: when the NFS link breaks, the client keeps monitoring the NFS server in the background until it reconnects after recovery.

3) rsize=32768 wsize=32768: tune the NFS transfer block size.

4) Basic parameter: rw, read and write permission.

Common operations of the client after mounting

1) Reaction after restarting the device after mounting.

[root@CT56-32-221-NFS02 ~]# mount -t nfs 192.168.41.220:/atong /atong/

[root@CT56-32-221-NFS02 ~]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1634308  5730008  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm
                       7765152   1655296  5709024  23% /atong

[root@CT56-32-221-NFS02 ~]# ll /atong/   ####When the server's shared directory is mounted onto a local directory, the original local contents are hidden by the remote server's contents.

total 4

-rw-r--r-- 1 root root    0 May 28 08:14 test1

drwxr-xr-x 2 root root 4096 May 28 08:15 test-dir1

[root@CT56-32-221-NFS02 ~]# umount /atong

[root@CT56-32-221-NFS02 ~]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1634308  5730008  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

[root@CT56-32-221-NFS02 ~]# ll  /atong/   ####After unmounting the mounted directory, the original local contents are visible again.

total 0

-rw-r--r-- 1 root root 0 May 28 08:27 test1

-rw-r--r-- 1 root root 0 May 28 08:28 test2

2) How to set auto mount on startup

Special note: after the client restarts, NFS must be mounted again. We can do this in two ways.

<1> Write the command mount -t nfs  /atong /mnt into /etc/rc.local, so it executes after startup.

<2> Add the NFS entry to /etc/fstab (where boot-time mounts are listed): /atong/video nfs defaults 1 1

However, in production environments, shared NFS directories are generally not configured in /etc/fstab: if the client host reboots while the NFS server is unreachable (due to network problems, etc.), the client will hang during boot. Instead, the mount -t nfs  /local/dir command is usually put into rc.local so that NFS is mounted automatically after startup.

Automatic mount of nfs after startup

There are two ways to mount automatically at boot: 1. write the mount command in the rc.local file; 2. write the entry in /etc/fstab.

However, the first is recommended here: if the NFS server is unreachable due to network problems, the second may keep the system from booting.
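A sketch of both options; the server address 192.168.1.1 and the paths are placeholders:

```
# Option 1 (preferred in production): append to /etc/rc.local
/bin/mount -t nfs 192.168.1.1:/atong /atong

# Option 2: /etc/fstab entry -- may hang the boot if the server is down
192.168.1.1:/atong  /atong  nfs  defaults  0  0
```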

3) Unmount mount point

<1> Normal unloading

#umount /local/dir — unmount with the normal unmount command.

<2> umount reports a busy error

How to unmount an NFS mount point? Unmount with umount /local/dir. If a local user's working directory is still inside the mount point, umount reports that the mount point is busy; have the user leave the mount point first, then unmount again. The same applies if other users are still inside the mounted directory.

Or force the unmount: umount -lf /local/dir.

Briefly describe a complete NFS mount process

1) Confirm that portmap and nfs are started, and that portmap was started before nfs.

When startup is configured through chkconfig, portmap's default start order is earlier than nfs's.

2) vi /etc/exports — configure the shared directories and permissions of the NFS service.

#/etc/init.d/nfs reload — reload the configuration

Confirm that the directory to be shared on the server side already exists and the permissions are correct.

3) Start the client's portmap and add it to the boot sequence. Use showmount to check whether the server is exporting a shared NFS directory, and the rpcinfo command to view the server's RPC registrations. Before mounting, also confirm that the local mount directory is not in use.

3.1) When the client cannot write, check both the permissions in /etc/exports on the server and the local directory permissions of the shared directory. If they are wrong, we can change the directory's owner to nfsnobody so the client can write. After the client writes, check the owner and permissions of the new file: any file created from the client is owned by user and group nfsnobody, once all_squash is added.

(But make sure every client uses the same nfsnobody uid. On a 32-bit system the anonymous uid is 65534; on a 64-bit operating system it is a different number.)

3.2) When all systems are 64-bit, configure on the server side: all_squash,anonuid=2000,anongid=2000

3.3) Without checking whether systems are 32-bit or 64-bit:

Create the same user and user group on every machine on the network, then configure /etc/exports with all_squash,anonuid=1207,anongid=1207

4) Configure the default boot mount for the client.

Write the mount -t nfs 192.168.1.1:/share/dir /local/dir command into rc.local.

------------------------------Follow up self summary-----------------------------------------------

NFS is a network-shared file system, and the idea is very simple: the server shares files, and the client mounts what the server shares. To share files, configure /etc/exports with the shared directory, the target network and the permission options, and open up the local permissions on the shared files. The client then mounts with parameters such as rw,nodev,noexec,nosuid,hard,intr,rsize,wsize, and starts using the share.

For the startup sequence, note that portmap must be started before NFS. For the client, it is best to put the NFS startup and mount commands into /etc/rc.local.

FAQ supplement:

1) When the server-side network fails or the network is disconnected.

When the server's network is disconnected, the client's df hangs waiting when viewing partition information, and sometimes gets stuck.

[root@CT56-32-221-NFS02 atong]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1635964  5728352  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

...I've been waiting

[root@CT56-32-222-NFS03 ~]# cd /atong

...still waiting. Even the original /atong directory cannot be entered, because it is currently a mount point.

[root@CT56-32-221-NFS02 ~]# umount  /atong   ###Unmounting with umount /atong does not work either.

umount.nfs: /atong: not mounted or server not reachable

[root@CT56-32-221-NFS02 ~]# umount -lf /atong  ####A forced unmount with -lf works.

[root@CT56-32-221-NFS02 ~]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1635976  5728340  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

When the server network returns to normal, df shows the information again.

[root@CT56-32-222-NFS03 ~]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1636016  5728300  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm
                       7765152   1655360  5708992  23% /atong

2) Modify the server NFS configuration to disable sharing, without reloading the NFS service.

[root@CT56-32-221-NFS02 ~]# showmount  -e

Export list for


[root@CT56-32-222-NFS03 atong]# touch nfs4

[root@CT56-32-222-NFS03 atong]# ll  

total 4

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 13:24 nfs2

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs3

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs4

###As long as the server does not reload, there is no impact, because the server's RPC still remembers the old NFS registration, so the client is unaffected.

[root@CT56-32-221-NFS02 ~]# df  

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3             7765136   1635976  5728340    23% /

/dev/sda1               101086     11601    84266   13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

df: `/atong':Permission denied

###After the server modifies the configuration and then reloads, the originally mounted directory shows a permission error: the reload re-reads the configuration file and the new configuration takes effect. After reconfiguring and reloading, the client can mount again.

3) Client network outage

If the client's network is interrupted, all connections are simply broken, so there is little to say; once the network is restored, the mount can be used again.

4) Client portmap service stopped

[root@CT56-32-222-NFS03 atong]# /etc/init.d/portmap status

portmap (pid 2726) is running...

[root@CT56-32-222-NFS03 atong]# /etc/init.d/portmap stop

Stopping portmap: [ OK  ]

[root@CT56-32-222-NFS03 atong]# ll

total 4

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 13:24 nfs2

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs3

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs4

-rwxrwxrwx 1 root      root         0 May 28 08:14 test1

drwxrwxrwx 2 root      root     4096 May 28 08:15 test-dir1

[root@CT56-32-222-NFS03 atong]# touch nfs5

[root@CT56-32-222-NFS03 atong]# ll -- it can be used normally

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:39 nfs5

[root@CT56-32-222-NFS03 /]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1636016  5728300  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm
                       7765152   1655360  5708992  23% /atong

[root@CT56-32-222-NFS03 /]# showmount -e

Export list for   --->the original NFS server export information is still visible


#####After the client's portmap is stopped, the existing NFS mount still works and stays in sync with the server.

[root@CT56-32-222-NFS03 /]# umount /atong

[root@CT56-32-222-NFS03 /]# df

Filesystem          1K-blocks      Used Available Use%Mounted on

/dev/sda3              7765136   1636020  5728296  23% /

/dev/sda1              101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

[root@CT56-32-222-NFS03 /]# showmount -e

Export list for


[root@CT56-32-222-NFS03 /]# mount -t nfs  /atong

mount.nfs: Input/output error

####After unmounting the original mount, mounting again fails with this error. Once portmap is started again, mounting works as before.

5) When the NFS process on the server side is stopped.

Stop the NFS processes on the server side:

[root@CT5_6-32-220-NFS01 ~]# /etc/init.d/nfs stop

Shutting down NFS mountd: [  OK  ]

Shutting down NFS daemon: [  OK  ]

Shutting down NFS quotas: [  OK  ]

Shutting down NFS services:  [ OK  ]

[root@CT5_6-32-220-NFS01 ~]# cat /var/lib/nfs/etab

[root@CT5_6-32-220-NFS01 ~]# cat /var/lib/nfs/rmtab

[root@CT5_6-32-220-NFS01 ~]# /etc/init.d/nfs status

rpc.mountd is stopped

nfsd is stopped

rpc.rquotad is stopped

###The client loses access immediately, because the NFS daemons must be running at all times.###

[root@CT56-32-222-NFS03 /]# df

Filesystem          1K-blocks      Used Available Use% Mounted on

/dev/sda3              7765136   1636028  5728288  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

###After the server restarts NFS, the client can mount again.

[root@CT56-32-221-NFS02 ~]# mount -t nfs  /atong

[root@CT56-32-221-NFS02 ~]# df

Filesystem          1K-blocks      Used Available Use% Mounted on

/dev/sda3              7765136   1634396  5729920  23% /

/dev/sda1               101086     11601    84266  13% /boot

tmpfs                    62532         0    62532   0% /dev/shm

                       7765152   1655360  5708992  23% /oldbo

The three daemons of NFS

nfsd: the main NFS daemon. If it stops, NFS is completely paralyzed and no client can connect.

rpc.mountd: handles client mount requests and the access control of shares.

rpc.rquotad: manages disk quotas on the shared file systems.

6) The portmap on the server side is stopped.

------Theoretical derivation----------------

By now we roughly understand the principles behind these failure demonstrations, so we can predict how this one will behave, then use an experiment to verify whether the inference is correct.

The server's portmap handles NFS port registration. Once a client has already connected to the server's NFS, it no longer asks portmap for NFS port information, so stopping portmap does not affect the client's existing mount. But establishing a new mount will fail, and the server cannot successfully export a new shared directory either, because a new NFS share has to register its ports with portmap. With the derivation done, let's verify it as follows:

Aside: what we should really learn here is the ability to deduce the expected behavior from theory. It is a very important troubleshooting skill, and years of working in operations have taught me as much.

After the server stops the portmap service, the original NFS shared directory keeps working normally:

[root@CT5_6-32-220-NFS01 ~]# /etc/init.d/portmap stop

Stopping portmap: [ OK  ]

[root@CT5_6-32-220-NFS01 ~]# cd /atong/

[root@CT5_6-32-220-NFS01 atong]# ll  

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 13:24 nfs2

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs3

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs4

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:39 nfs5

###A new client can neither view the export list nor mount

[root@CT56-32-222-NFS03 ~]# showmount -e 192.168.41.220

mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive

##The old mount still works, but querying the export information fails again. Apparently the client only needs to learn the NFS port information once and then remembers it; to mount a new share or query the exports, it must ask the server's portmap again, and since port 111 on the server has been stopped, that request for the port data cannot be answered.

[root@CT56-32-221-NFS02 atong]# showmount  -e

mount clntudp_create: RPC: Port mapper failure - RPC: Unable to receive

7) Restart the server portmap or start NFS before portmap

------Theoretical derivation----------------

Let's deduce again. When portmap restarts, its original registration information is gone, and since NFS was not restarted, portmap's table is empty. The client's existing NFS mount was established earlier and is not affected, but new mounts will fail, and the server cannot export a new NFS share either. If the server modifies the configuration and reloads, new shares become possible again, because reloading re-registers the information with portmap.

In fact I worked these out while writing this post; the experiment below shows whether the deduction is correct.

[root@CT56-32-221-NFS02 atong]# touch  sdfasd

[root@CT56-32-221-NFS02 atong]# ll

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 13:24 nfs2

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs3

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:05 nfs4

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 14:39 nfs5

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 15:01 nfs6

-rw-r--r-- 1 root      root         0 May 28 15:02 nfs7

-rw-r--r-- 1 nfsnobody nfsnobody    0 May 28 16:00 sdfasd

-rwxrwxrwx 1 root      root         0 May 28 08:14 test1

drwxrwxrwx 2 root      root     4096 May 28 08:15 test-dir1

[root@CT56-32-221-NFS02 atong]# showmount  -e

mount clntudp_create: RPC: Program not registered

#####The client's existing mount is unaffected. But remounting, or any operation that needs to communicate with portmap, reports that the RPC program is not registered.

[root@CT5_6-32-220-NFS01 atong]# rpcinfo -p  -- the portmap information is brand new, consistent with the derivation

  program vers proto   port

  100000    2   tcp    111  portmapper

  100000    2   udp    111  portmapper

###At this point portmap only has its own port-111 entries registered.

[root@CT5_6-32-220-NFS01 atong]# /etc/init.d/nfs restart  -- re-register the information

Shutting down NFS mountd: [  OK  ]

Shutting down NFS daemon: [  OK  ]

Shutting down NFS quotas: [  OK  ]

Shutting down NFS services:  [ OK  ]

Starting NFS services:  [ OK  ]

Starting NFS quotas: [  OK  ]

Starting NFS daemon: [  OK  ]

Starting NFS mountd: [  OK  ]

[root@CT5_6-32-220-NFS01 atong]# rpcinfo -p

program vers proto   port

100000    2   tcp    111  portmapper

100000    2   udp    111  portmapper

100011    1   udp    660  rquotad

100011    2   udp    660  rquotad

100011    1   tcp    663  rquotad

100011    2   tcp    663  rquotad

###After the server-side NFS restarts, you can register information.


Fault summary

1) portmap failure

Portmap failure on the server side: existing mounts are unaffected. But any client that wants to mount this server's share, or to redo the operation (unmount, then mount again), will get an error, because it still needs to request the port information from portmap. New mounts fail, and new shares on the server fail as well.

Client portmap failure: existing mounts are unaffected. But if the client unmounts a share from any server, remounting it will fail, and so will any new mount.
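A small diagnostic sketch of the rule above (not from the original post; the sample output is hard-coded for illustration, and on a real host you would feed it the output of `rpcinfo -p <server>`): existing mounts aside, a server can accept new mounts only while nfs and mountd are registered with portmap.

```shell
#!/bin/sh
# check_nfs_registered: given `rpcinfo -p`-style output, report whether the
# nfs and mountd programs are registered with portmap.
check_nfs_registered() {
    if echo "$1" | grep -qw nfs && echo "$1" | grep -qw mountd; then
        echo "ready for new mounts"
    else
        echo "NOT registered with portmap - new mounts will fail"
    fi
}

# Hard-coded sample resembling `rpcinfo -p` after NFS has registered.
sample='100000 2 tcp 111 portmapper
100003 2 udp 2049 nfs
100005 1 udp 892 mountd'

check_nfs_registered "$sample"    # prints: ready for new mounts
```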

2) Server NFS failure

Server NFS failure: nfsd is the main program that provides the mount service. If it fails, every client that has mounted this server's share fails with it. When the main program is broken the result is easy to imagine: it is like trying to drive a car whose engine does not work.

3) Network failure

Network failure: the network is the basic precondition for any network service. If it fails, every service built on it fails too.

NFS benefits

1, Easy to master

2, Convenient and rapid to deploy, simple and easy to maintain

3, Reliable - at the software level, data is reliable and durable

NFS limitations

1, Single point of failure: if the NFS server goes down, all clients lose access to the shared directory. #####Data can be synchronized to a backup server with rsync, or high availability achieved through load balancing.#####

2, Limited performance under high concurrency (though for a website below tens of millions of PV per day, NFS is not the bottleneck unless the site architecture is very poor).

3, Authentication of clients against the server's shared files is based only on IP address and host name, so security is mediocre (not a problem on an intranet).

4, NFS transmits data in clear text and does not verify data integrity (generally the data lives on the intranet and is used by intranet servers, so security is not an issue).

5, When many machines mount the same server, connection management and maintenance are troublesome; in particular, when the NFS server fails, all the clients hang. (autofs automatic mounting can help here.)
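To illustrate the autofs suggestion above (a sketch only; the map names, mount points, timeout, and server address are assumptions, not from this setup), an on-demand mount might be configured like this:

```shell
# /etc/auto.master -- hypothetical master map: mounts under /misc are
# managed by autofs and unmounted after 60 idle seconds.
#   /misc   /etc/auto.nfs   --timeout=60
#
# /etc/auto.nfs -- hypothetical map entry: /misc/atong is mounted from the
# server on first access, so a dead server only hurts when actually touched.
#   atong   -fstype=nfs,soft,intr   192.168.41.220:/atong
#
# Reload the maps after editing:
#   /etc/init.d/autofs reload
```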

Production application scenario

NFS has a place in the online applications of small and medium-sized websites (below roughly 20 million PV per day). Large portals use it too, alongside other solutions; because their concurrency is so high, some of them use professional storage instead.


Posted by Aikon on Tue, 21 Sep 2021 06:08:29 +0530