Detailed explanation of Nginx configuration and use

1. Common commands

To run the commands below, enter the sbin directory under the Nginx installation directory (or add it to your PATH so they can be run from anywhere); it contains the nginx executable.

1. Start Nginx
    ./nginx
2. Stop Nginx
    ./nginx -s stop
3. Reload the configuration (nginx.conf)
    ./nginx -s reload
4. Show the version number
    ./nginx -v

2. Nginx configuration file (nginx.conf)

2.1 overview

By default, when Nginx is installed on Linux, the configuration file is nginx.conf in the conf directory under the Nginx installation directory.

nginx.conf is mainly composed of three parts:

  • Global block
  • events block
  • http block

2.2 configuration file structure

As listed above, the file falls into a global block, an events block, and an http block; the annotated file in section 2.3 shows where each part sits.

2.3 overview of real configuration file

# Global block
------------------------------------------------------------------------------
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

------------------------------------------------------------------------------

# events block
events {
    worker_connections  1024;
}

# http block 
http {
------------------------------------------------------------------------------
# http global block
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;        
------------------------------------------------------------------------------    
# server block
server {
# server global block
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

# location block
        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
}
	
# Multiple server blocks can be configured	

}

2.4 global block

This is the content from the start of the configuration file up to the events block. It sets directives that affect the operation of the Nginx server as a whole; for example, the larger the value of worker_processes, the more concurrency the server can handle, though the practical limit still depends on the server's hardware.

2.5 events block

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to serialize the acceptance of connections across multiple worker processes, whether a worker may accept several new connections at once, which event-driven model to use for handling connection requests, and the maximum number of simultaneous connections each worker process supports.

The example above sets the maximum number of connections per worker process to 1024. This part of the configuration has a large impact on Nginx's performance, so it should be tuned to the actual deployment.
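For illustration, the events block is often tuned with a few more directives; a hedged sketch (the directive names are standard Nginx, the values are examples only):

```nginx
events {
    # Event-driven model; epoll is the usual choice on Linux
    use epoll;
    # Allow a worker to accept several new connections at once
    multi_accept on;
    # Maximum simultaneous connections per worker process
    worker_connections 1024;
}
```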

2.6 http block

It contains an http global block and one or more server blocks.

2.6.1 http global block

Directives in the http global block cover file inclusion, MIME type definitions, log format customization, connection timeouts, the cap on requests per connection, and so on.

2.6.2 server block

  • This part is closely tied to virtual hosts. From the user's point of view, a virtual host behaves exactly like an independent physical host; the technology exists to save hardware costs for Internet servers.
  • Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host.
  • Each server block in turn consists of a server global block and can contain multiple location blocks.

2.6.2.1 server global block

The most common settings here are the listen port and the name or IP of the virtual host.

  # This server block listens on port 80; any request arriving on port 80 is handled by it
  listen       80;
  # The name of the virtual host this server block represents
  server_name  localhost;
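For illustration, listen can also bind a specific address, and server_name accepts multiple names and wildcards; the extra hostnames below are hypothetical:

```nginx
server {
    # Port only: accept connections on all local addresses
    listen       80;
    # Address:port form (commented out): bind a single interface
    # listen       192.168.80.102:80;
    # Several names may be listed; wildcards are allowed
    server_name  localhost www.example.com *.example.com;
}
```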

2.6.2.2 location block

  • A server block can contain multiple location blocks.

  • Their main job is to apply specific processing depending on which pattern the requested path matches.

  • Based on the request string received by the Nginx server (e.g. server_name/uri), a location block matches the part after the virtual host name (the /uri part) and dispatches the request. Request routing, data caching, and response control are handled here, and many third-party modules are also configured in location blocks.

    If the request path is /, it is handled by this location block:

    location / {
        root   html;
        index  index.html index.htm;
    }

3. Reverse proxy

3.1 overview of forward proxy and reverse proxy

3.1.1 forward proxy

  • A forward proxy acts on behalf of the client and must be configured on the client side; the address being accessed is that of the real server.

3.1.2 reverse proxy

  • A reverse proxy acts on behalf of the server side. The client needs no configuration: it simply sends its request to the reverse proxy server, which forwards the request to the real server and returns the response to the client. The real server stays hidden, a bit like being behind a gateway.

3.1.3 differences and summary

The difference between a forward proxy and a reverse proxy

The most fundamental difference is what is being proxied:

  • A forward proxy represents the client: the proxy is configured on the client side, and the client's access path is still the target server.
  • A reverse proxy represents the real server: the client needs no configuration, its access path is the proxy server, and the proxy forwards the request to the real server.

3.2 configuration

3.2.1 application I

Goal: access http://192.168.80.102:80 (the Nginx home page) and have Nginx proxy the request to http://192.168.80.102:8080 (the Tomcat home page).

First, start a Tomcat server (Tomcat is already installed):

Enter the bin directory under the Tomcat installation directory and run ./startup.sh to start Tomcat.

Configure in the configuration file of Nginx

1. Create a new server block and configure listening port 80 in the server global block

2. Configure the address of the path request proxy to tomcat in the location block

The configuration below means: when http://192.168.80.102:80 is accessed, Nginx (listening on port 80) handles the request in this server block; the access path is then checked against the paths configured in the location blocks. Since / is configured, the request enters the / location, where proxy_pass forwards it to the specified address.

server {
        # Listen on port 80: requests arriving on port 80 enter this server block
        listen       80;
        # server_name is not used here, since the server is accessed by IP address
        server_name  localhost;

        # The path after location is the access path; requests for / are proxied to Tomcat
        location / {
            # proxy_pass is followed by the address of the proxied server
            proxy_pass http://192.168.80.102:8080;
        }
}

Test: visiting http://192.168.80.102:80 now shows the Tomcat page — Nginx is proxying to Tomcat, so the configuration works.
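A point worth noting beyond the listing above: with a bare proxy_pass, the backend sees the proxy rather than the original client. A common sketch that forwards the client information using standard proxy_set_header variables:

```nginx
location / {
    proxy_pass http://192.168.80.102:8080;
    # Pass the original Host header through to Tomcat
    proxy_set_header Host $host;
    # Pass the real client address so backend logs stay accurate
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```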

3.2.2 application II

Application I proxied the / path to a single specified server.

Application II implements:

  • Nginx listens on port 9001.
  • Visiting http://192.168.80.102:9001/edu (the Nginx address) is proxied to http://192.168.80.102:8081.
  • Visiting http://192.168.80.102:9001/vod is proxied to http://192.168.80.102:8082.

Start two Tomcat servers:

  • Their ports are 8081 and 8082 respectively.
  • Create an edu directory under webapps on the 8081 server and write a test.html there.
  • Create a vod directory under webapps on the 8082 server and write a test.html there.

Since the virtual machine's IP is 192.168.80.102, verify that http://192.168.80.102:8081/edu/test.html and http://192.168.80.102:8082/vod/test.html can both be accessed successfully.

Write Nginx configuration file

server {
        # Listen on port 9001
        listen       9001;
        # Paths matching /edu/ are proxied to 8081
        location ~ /edu/ {
            proxy_pass http://192.168.80.102:8081;
        }
        # Paths matching /vod/ are proxied to 8082
        location ~ /vod/ {
            proxy_pass http://192.168.80.102:8082;
        }
}

After testing, both paths are accessed successfully.

3.3location details

location matching supports exact (=), prefix, and regex (~, ~*) forms; the examples above used a plain prefix in 3.2.1 and a regex in 3.2.2.
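In brief, location supports several match modifiers, checked in order of exact match, ^~ prefix, regex, then longest plain prefix; a summary sketch (the paths are examples only):

```nginx
# =  exact match, checked first
location = /50x.html { root html; }
# ^~ prefix match; if it is the longest matching prefix, regex checks are skipped
location ^~ /static/ { root html; }
# ~  case-sensitive regex; ~* would be case-insensitive
location ~ \.php$ { proxy_pass http://127.0.0.1:8080; }
# plain prefix match, lowest priority
location / { root html; index index.html; }
```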

3.4 server_name: role and access flow

When a client accesses the server by domain name, the name is resolved to an IP and the request carries the domain name (in the Host header). When the request reaches Nginx, Nginx first matches the IP and port against the listen directives to select candidate server blocks; if several match, the Host header is compared with each server_name to choose the server block. Once the server block is chosen, the matching location inside it locates the requested resource.
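As an illustration of this flow, two server blocks can share one listen port and be told apart purely by server_name; the hostnames and paths below are hypothetical:

```nginx
server {
    listen       80;
    server_name  www.site-a.com;
    # Chosen when the Host header is www.site-a.com
    location / { root /data/site-a; index index.html; }
}
server {
    listen       80;
    server_name  www.site-b.com;
    # Chosen when the Host header is www.site-b.com
    location / { root /data/site-b; index index.html; }
}
```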


4. Load balancing

4.1 overview

Simply put: in a distributed scenario the original single server becomes a cluster, and incoming requests are distributed across its members. Nginx handles the forwarding: instead of accessing a server directly, clients access Nginx as a reverse proxy, and Nginx distributes the requests among the servers, achieving load balancing.

4.2 configuration

Goal:

Visiting http://192.168.80.102:80/edu/test.html causes Nginx to distribute the request between the two Tomcat servers on ports 8081 and 8082.

1. Start two Tomcats

Write a test.html under webapps/edu on each; the file contents may differ so the load-balancing effect is easy to see.

2. Configuration file

# Configured in the http global block
# upstream is a fixed keyword; the name myserver is custom
upstream myserver {
    server 192.168.80.102:8081;
    server 192.168.80.102:8082;
}

# server configuration
server {
    # Listen on port 80
    listen 80;
    # location block
    location / {
        # Reverse proxy to the two servers above, using the custom name
        proxy_pass http://myserver;
    }
}

Visiting http://192.168.80.102:80/edu/test.html is now distributed between the 8081 and 8082 servers — the test succeeds.

4.3 load balancing rules

4.3.1 round robin (default)

Requests are distributed to the backend servers one by one in order of arrival; if a backend server goes down, it is removed from the rotation automatically.
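Round-robin behavior can be tuned per server. A sketch using the standard max_fails, fail_timeout, and backup parameters (the 8083 backup server is hypothetical and the values are examples):

```nginx
upstream myserver {
    # Marked failed after 3 errors within 30s, then retried after 30s
    server 192.168.80.102:8081 max_fails=3 fail_timeout=30s;
    server 192.168.80.102:8082 max_fails=3 fail_timeout=30s;
    # Receives traffic only when the servers above are unavailable
    server 192.168.80.102:8083 backup;
}
```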

4.3.2 weight

weight defaults to 1; the higher a server's weight, the more requests it is assigned.

upstream myserver {
    server 192.168.80.102:8081 weight=1;
    server 192.168.80.102:8082 weight=2;
}
server {
    listen       80;
    location / {
        proxy_pass http://myserver;
    }
}

4.3.3 ip_hash

Requests are distributed by a hash of the client IP, so each visitor consistently reaches the same backend server; this helps solve the session problem.

# Configure the load-balanced servers and ports
upstream myserver {
    ip_hash;
    server 192.168.80.102:8081;
    server 192.168.80.102:8082;
}
server {
    listen       80;
    location / {
        proxy_pass http://myserver;
    }
}

4.3.4 fair

Requests are distributed according to the backend servers' response times; servers that respond faster are assigned requests first. Note that fair is not built into Nginx — it is provided by a third-party module that must be compiled in.

# Configure the load-balanced servers and ports
upstream myserver {
    fair;
    server 192.168.80.102:8081;
    server 192.168.80.102:8082;
}
server {
    listen       80;
    location / {
        proxy_pass http://myserver;
    }
}
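For reference, stock Nginx also ships a built-in policy with a similar goal: least_conn sends each request to the server with the fewest active connections, without needing a third-party module:

```nginx
upstream myserver {
    # Built-in least-connections balancing
    least_conn;
    server 192.168.80.102:8081;
    server 192.168.80.102:8082;
}
```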

5. Dynamic and static separation

5.1 overview

  • Deploy static resources (CSS, HTML, JS, etc.) separately from dynamic resources (JSPs, servlets). Static resources can be placed on a dedicated server, or directly on the server running the reverse proxy (Nginx), while dynamic resources are deployed on a backend server such as Tomcat.
  • When a request arrives, static resources are then served from the static resource server, while dynamic requests are forwarded to the backend server.
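The idea can be sketched in a single server block: static files served directly by Nginx with a browser cache lifetime, everything else proxied to the backend (the /static/ path, /data root, and 7-day expiry are illustrative assumptions):

```nginx
server {
    listen 80;
    # Static resources served directly by Nginx
    location /static/ {
        root    /data;
        # Let browsers cache static files for 7 days
        expires 7d;
    }
    # Dynamic requests forwarded to Tomcat
    location / {
        proxy_pass http://192.168.80.102:8080;
    }
}
```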

5.2 configuration

Preparation: under the Linux root directory /, create a staticResource directory containing two folders, www and image; create an okc.html under www and put a ttt.jpg under image.

Goal: http://192.168.80.102:80/www/okc.html and http://192.168.80.102:80/image/ttt.jpg can both be accessed successfully.

Configuration:

server {
        listen       80;
        # When the access path starts with /www/, this location handles it and looks for
        # okc.html under the www directory inside /staticResource — i.e. the file
        # actually served is /staticResource/www/okc.html
        location /www/ {
            root   /staticResource/;
            index  index.html index.htm;
        }
        # Same idea for images
        location /image/ {
            root   /staticResource/;
        }
}

Tested and successfully accessed

5.3 root vs alias: differences in the resolved file path

  • alias replaces the matched location prefix with the configured path; the prefix from the URL is not appended.
  • root appends the full URL path to the configured path.

Examples are as follows:

alias

location ^~ /sta/ {  
   alias /usr/local/nginx/html/static/;  
}
  • Request: http://test.com/sta/sta1.html
  • Actual access: /usr/local/nginx/html/static/sta1.html file

root

location ^~ /tea/ {  
   root /usr/local/nginx/html/;  
}
  • Request: http://test.com/tea/tea1.html
  • Actual access: /usr/local/nginx/html/tea/tea1.html file

6. High availability cluster

6.1 overview

An active/standby architecture:

  • When the primary server goes down, the configured standby server takes over automatically.
  • keepalived provides a virtual IP; externally we access the virtual IP, which is bound to the active and standby servers.

6.2 configuration

6.2.1 environment construction

Start two virtual machines and install Nginx and keepalived on each.

The IP addresses of the two virtual machines are 192.168.80.102 and 192.168.80.103 respectively.

Install keepalived directly using yum

yum install -y keepalived

By default it is installed under /etc, where a keepalived directory is generated containing a keepalived.conf file, which is the one configured below.

6.2.2 configuration
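A minimal keepalived.conf sketch for the master node, assuming the NIC is eth0 and using 192.168.80.200 as the virtual IP — the interface name, router ID, priority, and virtual IP are all assumptions to adapt to the environment:

```conf
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the standby machine
    interface eth0            # NIC that carries the virtual IP
    virtual_router_id 51      # must match on master and backup
    priority 100              # backup uses a lower value, e.g. 90
    advert_int 1              # VRRP advertisement interval (seconds)
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.80.200        # the virtual IP that clients access
    }
}
```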



Posted by theprofession on Sun, 07 Aug 2022 22:48:58 +0530