Nginx Configuration Parameters Explained in Detail

Preface

Nginx is a lightweight web server, reverse proxy, and email (IMAP/POP3) proxy server distributed under a BSD-like license. Its hallmarks are a small memory footprint and strong concurrency; among comparable web servers, Nginx handles concurrent connections especially well. Well-known Nginx users in mainland China include Baidu, JD.com, Sina, NetEase, Tencent, and Taobao.

It can be compiled and run on most Unix/Linux operating systems, and a Windows port exists. It is a powerful, high-performance web and reverse proxy server with many superior features. Under high connection concurrency, Nginx is a good alternative to Apache: it is a common choice among virtual hosting providers, and a single instance can support up to roughly 50,000 concurrent connections.

Nginx also works as a load balancer: it can serve Rails and PHP applications directly, or act as an HTTP proxy in front of them. Nginx is written in C, and its system resource overhead and CPU efficiency are much better than Perlbal's. Its configuration parameters play a very important role when problems occur in a cluster, yet much of the documentation, including the official site, explains them only briefly. What follows is a detailed, annotated walkthrough.

A detailed, annotated Nginx configuration:

#Define the user and group that the Nginx worker processes run as
user www www;
#
#Number of worker processes; recommended to equal the total number of CPU cores.
worker_processes 8;
#
#Global error log: path and level [ debug | info | notice | warn | error | crit ]
error_log /var/log/nginx/error.log info;
#
#PID file
pid /var/run/nginx.pid;
#
#Maximum number of file descriptors one nginx process may open. In theory this is the system's maximum open-file limit (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly, so it is best kept consistent with ulimit -n.
worker_rlimit_nofile 65535;
#
#Event model and maximum number of connections
events
{
    #Event model: [ kqueue | rtsig | epoll | /dev/poll | select | poll ]. epoll is the high-performance network I/O model for Linux kernels 2.6 and later; on FreeBSD, use the kqueue model instead.
    use epoll;
    #Maximum connections per worker process (total maximum connections = worker_connections × worker_processes)
    worker_connections 65535;
}
#
#HTTP server settings
http
{
    include mime.types; #File extension and file type mapping table
    default_type application/octet-stream; #Default file type
    #charset utf-8; #Default encoding
    server_names_hash_bucket_size 128; #Bucket size for the server-name hash tables
    client_header_buffer_size 32k; #Buffer size for the client request header
    large_client_header_buffers 4 64k; #Number and size of buffers for large client request headers
    client_max_body_size 8m; #Maximum allowed size of the client request body (i.e. the upload size limit)
    
    # Enable directory listings; suitable for download servers. Off by default.
    autoindex on; # Show directory contents
    autoindex_exact_size on; # Default on: show exact file sizes in bytes. When off, sizes are approximate, in KB, MB, or GB
    autoindex_localtime on; # Default off: file times are shown in GMT. When on, file times are shown in the server's local time
    
    sendfile on; # Enable efficient file transfer: the sendfile directive tells nginx whether to use the sendfile() system call to output files. Set it to on for normal applications; for disk-I/O-heavy workloads such as downloads it can be set to off, to balance disk and network I/O and reduce system load. Note: if images display abnormally, set this to off.
    tcp_nopush on; # Send response headers and the start of a file in one packet (effective together with sendfile)
    tcp_nodelay on; # Disable Nagle's algorithm on keep-alive connections to reduce latency
    
    keepalive_timeout 120; # Timeout (in seconds) for keeping client connections alive; after this time the server closes the connection
    
    # FastCGI parameters to improve site performance: reduce resource consumption and increase access speed. The parameters below are largely self-explanatory.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;
    
    # gzip Module settings
    gzip on; # Enable gzip compression of output
    gzip_min_length 1k; # Minimum response size eligible for compression, taken from the Content-Length header. The default 0 compresses everything; set it above 1k, since compressing very small responses can make them larger
    gzip_buffers 4 16k; # Use 4 buffers of 16k each for the compressed result stream; by default nginx allocates a buffer the same size as the original data
    gzip_http_version 1.1; # HTTP version for compression (default 1.1; most browsers support gzip. If fronted by Squid 2.5, use 1.0)
    gzip_comp_level 2; # Compression level: 1 is the lowest ratio and fastest; 9 is the highest ratio, slowest, and most CPU-intensive, but yields the smallest output and fastest transfer
    gzip_types text/plain application/x-javascript text/css application/xml;
    # Compression types. text/html is included by default and need not be listed; listing it is harmless but produces a warning.
    gzip_vary on; # Add "Vary: Accept-Encoding" so front-end caches (e.g. Squid in front of nginx) can cache gzip-compressed pages
    
    # To limit the number of connections per client IP, enable the zone below (in later nginx versions, limit_zone was replaced by limit_conn_zone)
    #limit_zone crawler $binary_remote_addr 10m;
    
    ## upstream is for load balancing; the scheduling algorithms are covered in the load-balancing section below ##
    
    #Configuration of virtual host
    server
    {
        # Listening port
        listen 80;
        # Multiple domain names may be listed, separated by spaces
        server_name ably.com;
        # Redirect all HTTP traffic to HTTPS
        rewrite ^(.*) https://$server_name$1 permanent;
    }
    
    server
    {
        # Listening port HTTPS
        listen 443 ssl;
        server_name ably.com;
        
        # Configure domain name certificate
        ssl_certificate      C:\WebServer\Certs\certificate.crt;
        ssl_certificate_key  C:\WebServer\Certs\private.key;
        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;
        # Note: SSLv2 and SSLv3 are obsolete and insecure; modern deployments should allow only TLS (preferably TLSv1.2 and later)
        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
        ssl_prefer_server_ciphers  on;
    
        index index.html index.htm index.php;
        root /data/www/;
        location ~ .*\.(php|php5)$
        {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }
        
        # Intercept this path and forward it, to solve cross-domain authentication problems
        location /oauth/{
            proxy_pass https://localhost:13580/oauth/;
            proxy_set_header HOST $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
        
        # Browser cache time for images
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }
        
        # Browser cache time for JS and CSS
        location ~ .*\.(js|css)$ {
            expires 1h;
        }

        # Log format setting
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
        '$status $body_bytes_sent "$http_referer" '
        '"$http_user_agent" $http_x_forwarded_for';
        # Define the access log of this virtual host
        access_log /var/log/nginx/access.log access;
        
        # Location for viewing Nginx status. The stub_status module reports Nginx's working state since the last startup; it is not a core module and must be enabled explicitly when compiling (--with-http_stub_status_module)
        location /NginxStatus {
            stub_status on;
            access_log off; # status requests need not be logged ("on" is not a valid access_log argument)
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
            # The htpasswd file can be generated with the htpasswd tool shipped with Apache
        }
    }
}
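The gzip trade-offs annotated in the configuration above (gzip_comp_level, gzip_min_length) can be observed directly with Python's gzip module. This is an illustrative sketch only; nginx links against zlib itself, so exact byte counts will differ:

```python
import gzip

payload = b"<html><body>hello nginx</body></html>" * 200

fast = gzip.compress(payload, compresslevel=1)   # analogous to gzip_comp_level 1
small = gzip.compress(payload, compresslevel=9)  # analogous to gzip_comp_level 9

# Higher levels trade CPU time for a better ratio on repetitive data.
assert len(small) <= len(fast) < len(payload)

# Tiny responses can grow when compressed (gzip adds ~18 bytes of
# header and trailer), which is why gzip_min_length > 0 is recommended.
tiny = gzip.compress(b"ok")
assert len(tiny) > len(b"ok")
```

This is why the configuration compresses only responses larger than 1k and uses the moderate level 2 as a speed/ratio compromise.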

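To make the field order of the log_format "access" defined in the configuration above concrete, the sketch below parses one hypothetical log line in that format. The sample line and the regular expression are illustrative, not actual nginx output:

```python
import re

# Hypothetical sample line in the "access" format defined above:
# $remote_addr - $remote_user [$time_local] "$request"
# $status $body_bytes_sent "$http_referer" "$http_user_agent" $http_x_forwarded_for
line = ('192.168.0.99 - - [01/Jun/2022:23:15:28 +0530] "GET /index.html HTTP/1.1" '
        '200 612 "-" "Mozilla/5.0" -')

pattern = re.compile(
    r'(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<body_bytes_sent>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)" (?P<xff>\S+)'
)

fields = pattern.match(line).groupdict()
print(fields["remote_addr"], fields["status"], fields["request"])
# 192.168.0.99 200 GET /index.html HTTP/1.1
```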
Nginx multi-server load balancing:

1.Nginx load balancing server:

IP: 192.168.0.4(Nginx-Server)

2.Web server list:

Web1: 192.168.0.5 (Nginx-Node1/Nginx-Web1); Web2: 192.168.0.7 (Nginx-Node2/Nginx-Web2)

3. Purpose: users access the Nginx server ("http://mongo.demo.com:8888"), and Nginx load-balances the requests to the Web1 and Web2 servers.

The nginx.conf configuration is annotated as follows:

events
{
    use epoll;
    worker_connections 65535;
}
http
{
    ## upstream is for load balancing; several scheduling algorithms are shown ##
    # Scheduling algorithm 1: round robin (the default). Requests are distributed to the backend servers in order; if a backend goes down it is removed automatically, so user access is unaffected
    upstream webhost {
        server 192.168.0.5:6666 ;
        server 192.168.0.7:6666 ;
    }
    # Scheduling algorithm 2: weight. Weights can be set according to machine capacity; the higher the weight, the larger the share of requests a server receives
    upstream webhost {
        server 192.168.0.5:6666 weight=2;
        server 192.168.0.7:6666 weight=3;
    }
    # Scheduling algorithm 3: ip_hash. Requests are distributed by a hash of the client IP, so visitors from the same IP always reach the same backend server, which effectively solves session sharing for dynamic pages
    upstream webhost {
        ip_hash;
        server 192.168.0.5:6666 ;
        server 192.168.0.7:6666 ;
    }
    # Scheduling algorithm 4: url_hash (historically a third-party module). Requests are distributed by a hash of the requested URL, so each URL is directed to the same backend server, which improves backend cache efficiency. Older nginx does not support url_hash natively; recent versions provide the built-in hash directive used below
    upstream webhost {
        server 192.168.0.5:6666 ;
        server 192.168.0.7:6666 ;
        hash $request_uri;
    }
    # Scheduling algorithm 5: fair (requires a third-party module). A smarter algorithm than the above: it balances load by page size and loading time, assigning requests to the backend with the shortest response time. nginx does not support fair natively; the upstream_fair module must be downloaded and compiled in
    #
    #Configuration of virtual host(Using scheduling algorithm 3:ip_hash)
    server
    {
        listen 80;
        server_name mongo.demo.com;
        # Enable reverse proxying for "/"
        location / {
            proxy_pass http://webhost;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            # The backend web servers can obtain the user's real IP from X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # The reverse proxy settings below are optional.
            proxy_set_header Host $host;
            client_max_body_size 10m; # Maximum request body size allowed from the client
            client_body_buffer_size 128k; # Maximum number of bytes buffered for the client request body
            proxy_connect_timeout 90; # Timeout for nginx to establish a connection to the backend server
            proxy_send_timeout 90; # Timeout for sending a request to the backend server
            proxy_read_timeout 90; # Timeout waiting for the backend response after the connection succeeds
            proxy_buffer_size 4k; # Buffer size for the first part of the backend response (the response headers)
            proxy_buffers 4 32k; # Number and size of buffers for the backend response; suits average pages under 32k
            proxy_busy_buffers_size 64k; # Buffer size under high load (typically proxy_buffers × 2)
            proxy_temp_file_write_size 64k;
            # Amount of data written to a temporary file at a time when a response does not fit in the buffers
        }
    }
}
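Of the algorithms above, nginx implements weighted round robin with a "smooth" variant that spreads the heavier server's turns evenly rather than in bursts. The sketch below reproduces the idea with the weights from the example; it is an illustration of the algorithm, not nginx's actual source:

```python
def smooth_weighted_rr(servers, n):
    """servers: list of (name, weight) pairs. Returns n picks using
    smooth weighted round robin."""
    total = sum(weight for _, weight in servers)
    current = {name: 0 for name, _ in servers}
    picks = []
    for _ in range(n):
        # Raise every server's current weight by its configured weight,
        # pick the highest, then lower the winner by the total weight.
        for name, weight in servers:
            current[name] += weight
        winner = max(current, key=current.get)
        current[winner] -= total
        picks.append(winner)
    return picks

# Weights 2 and 3, as in the upstream example above: over any 5
# consecutive requests the servers are hit 2 and 3 times respectively.
print(smooth_weighted_rr([("192.168.0.5", 2), ("192.168.0.7", 3)], 5))
# ['192.168.0.7', '192.168.0.5', '192.168.0.7', '192.168.0.5', '192.168.0.7']
```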

The load balancing operation is demonstrated as follows:

Operating object: 192.168.0.4 (nginx server)

# Create a folder to store configuration files
$ mkdir -p /opt/confs
$ vim /opt/confs/nginx.conf
# The editing contents are as follows:
events
{
  use epoll;
  worker_connections 65535;
}

http
{
    upstream webhost {
        ip_hash;
        server 192.168.0.5:6666 ;
        server 192.168.0.7:6666 ;
    }
    
    server
    {
        listen 80;
        server_name mongo.demo.com;
        location / {
            proxy_pass http://webhost;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            client_max_body_size 10m;
            client_body_buffer_size 128k;
            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;
            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
        }
    }
}
# Then save and exit
# Start the load balancing server 192.168.0.4 (Nginx-Server)
$ docker run -d -p 8888:80 --name nginx-server -v /opt/confs/nginx.conf:/etc/nginx/nginx.conf --restart always nginx

Operation object: 192.168.0.5 (nginx node1 / nginx web1)

# Create a folder to store the web page
$ mkdir -p /opt/html
$ vim /opt/html/index.html
# The editing contents are as follows:
<div>
  <h1>
    The host is 192.168.0.5(Docker02) - Node 1!
  </h1>
</div>
# Then save and exit
# Start 192.168.0.5 (Nginx-Node1/Nginx-Web1)
$ docker run -d -p 6666:80 --name nginx-node1 -v /opt/html:/usr/share/nginx/html --restart always nginx

Operation object: 192.168.0.7 (nginx node2 / nginx web2)

# Create a folder to store the web page
$ mkdir -p /opt/html
$ vim /opt/html/index.html
# The editing contents are as follows:
<div>
  <h1>
    The host is 192.168.0.7(Docker03) - Node 2!
  </h1>
</div>
# Then save and exit
# Start 192.168.0.7 (Nginx-Node2/Nginx-Web2)
$ docker run -d -p 6666:80 --name nginx-node2 -v /opt/html:/usr/share/nginx/html --restart always nginx

Test:

The domain is mongo.demo.com, and a Windows host is used to access the server. Add the entry "192.168.0.4 mongo.demo.com" to that host's hosts file (located at "C:\Windows\System32\drivers\etc"). You can then visit "http://mongo.demo.com:8888" from a browser on the Windows host. During access, Nginx computes an ip_hash of the visiting host's IP and load-balances the request to 192.168.0.5 (Nginx-Node1/Nginx-Web1) or 192.168.0.7 (Nginx-Node2/Nginx-Web2). If one of the web servers fails, the load balancer automatically forwards requests to the remaining healthy server.
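The ip_hash behaviour exercised in this test can be sketched as follows: for IPv4, nginx keys the hash on the first three octets of the client address, so all clients in the same /24 network reach the same backend. The md5-based hash function here is an illustrative stand-in for nginx's internal hash, not its real implementation:

```python
import hashlib

def ip_hash_pick(client_ip, servers):
    # nginx's ip_hash keys on the first three octets of an IPv4 address;
    # md5 here is an illustrative stand-in for nginx's internal hash.
    key = ".".join(client_ip.split(".")[:3])
    digest = hashlib.md5(key.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

servers = ["192.168.0.5:6666", "192.168.0.7:6666"]

# The same client always lands on the same backend...
assert ip_hash_pick("10.1.2.3", servers) == ip_hash_pick("10.1.2.3", servers)
# ...and so do clients that share the first three octets (same /24).
assert ip_hash_pick("10.1.2.3", servers) == ip_hash_pick("10.1.2.77", servers)
```

This also explains why repeated requests from the single Windows test host all land on the same node; only clients from different networks will exercise both backends.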

The following figure shows the access results for another demo set, where the container ports and IPs differ (all information was adjusted accordingly):

1.Nginx-Server: 192.168.2.129(Docker01);

2.Nginx-Node1: 192.168.2.56(Docker02);

3.Nginx-Node2: 192.168.2.77(Docker03);



Posted by spider661 on Wed, 01 Jun 2022 23:15:28 +0530