Tomcat is written in Java. It is a free, open-source Web application server and a core project of the Apache Software Foundation's Jakarta project, developed jointly by Apache, Sun and other companies and individuals.
Tomcat is a lightweight application server. It is widely used in small and medium-sized systems and in scenarios without heavy concurrent access, and it is the first choice for developing and debugging JSP programs. Like the Apache or Nginx web servers, Tomcat can serve HTML pages, but its ability to handle static HTML is far below theirs, so Tomcat usually runs on the back end as a Servlet and JSP container.
Tomcat consists of a series of components, three of which are core:
(1) Web container: provides the functions of a web server;
(2) Servlet container: named Catalina, used to process Servlet code;
(3) JSP container: translates JSP dynamic web pages into Servlet code.
Therefore, Tomcat is both a Web application server and a Servlet/JSP container. As a Servlet container, Tomcat handles client requests, passes them to the Servlet, and returns the Servlet's response to the client.
What is a servlet?
Servlet is short for Java Servlet and can be understood as a service connector: a server-side program written in Java that is independent of platform and protocol. Put simply, a servlet is a piece of middleware containing the interfaces and methods that connect the client to the database, making the creation of dynamic web pages possible.
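As a rough illustration (not part of the original deployment), a minimal servlet for Tomcat 9, which ships the javax.servlet API, could look like the sketch below; the class name and URL pattern are chosen only for this example.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

//Minimal servlet sketch: the container (Catalina) instantiates it, calls doGet() for each matching request, and destroys it on shutdown
@WebServlet("/hello")   //URL pattern chosen only for this example
public class HelloServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/html;charset=UTF-8");
        PrintWriter out = resp.getWriter();
        out.println("<html><body><h1>Hello from a servlet</h1></body></html>");
    }
}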
What is JSP?
JSP, short for Java Server Pages, is a technology for developing dynamic web pages. It uses JSP tags to insert Java code into HTML pages; the tags usually start with <% and end with %>.
A JSP is essentially a Java servlet and is mainly used to implement the user-interface part of a Java web application.
JSP obtains user input through web-page forms, accesses databases and other data sources, and then dynamically builds web pages.
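For illustration only, a minimal JSP page in the spirit described above might look like the following (the file name and text are examples); Tomcat's JSP container translates it into a servlet before running it.

<%@ page language="java" contentType="text/html;charset=UTF-8" %>
<html>
<body>
<h1>Server time</h1>
<%-- Java code embedded between <% and %> is executed on the server for each request --%>
<% out.println("Now: " + new java.util.Date()); %>
</body>
</html>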
Tomcat function component structure:
Tomcat has two core functional components: the Connector, which receives external requests and returns responses, and the Container, which processes the requests. The Connector and Container complement each other and together make up a basic web Service, and one Tomcat server can manage multiple Services.
● Connector: responsible for receiving and responding to external requests; it is the traffic hub between Tomcat and the outside world. It listens on a port, receives external requests, processes them and passes them to the Container for business handling, and finally returns the Container's result to the outside world.
● Container: responsible for internal processing of business logic. It is internally composed of four containers: Engine, Host, Context and Wrapper, which are used to manage and call Servlet related logic.
● Service: Web Service provided externally. It mainly includes two core components, Connector and Container, as well as other functional components. Tomcat can manage multiple services, which are independent of each other.
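These relationships are visible in conf/server.xml; the skeleton below is a simplified view close to the stock Tomcat 9 layout (attributes trimmed, AJP connector omitted), shown only to make the structure concrete.

<Server port="8005" shutdown="SHUTDOWN">
    <Service name="Catalina">                                      <!-- one Service -->
        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000" redirectPort="8443"/> <!-- Connector: receives requests -->
        <Engine name="Catalina" defaultHost="localhost">           <!-- Container: Engine > Host > Context > Wrapper -->
            <Host name="localhost" appBase="webapps"
                  unpackWARs="true" autoDeploy="true">
            </Host>
        </Engine>
    </Service>
</Server>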
Container structure analysis:
Each Service contains one Container, which in turn is made up of four sub-containers. Their functions are:
(1) Engine: the engine, used to manage multiple virtual hosts. A Service can have at most one Engine;
(2) Host: represents a virtual host, also called a site. A site can be added by configuring a Host;
(3) Context: represents a Web application, containing multiple Servlet wrappers;
(4) Wrapper: the wrapper, the lowest layer of the container hierarchy. Each Wrapper encapsulates one Servlet and is responsible for creating, executing and destroying its object instance.
Engine, Host, Context and Wrapper belong to parent-child relationship.
Within a Container, one Engine manages multiple virtual hosts, each virtual host manages multiple Web applications, and each Web application contains multiple Servlet wrappers.
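At the Wrapper level, each wrapper corresponds to one servlet declared by a Web application. A hedged sketch of such a declaration in WEB-INF/web.xml is shown below; the servlet and class names are placeholders for this example.

<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="4.0">
    <!-- one <servlet> plus its <servlet-mapping> is what a Wrapper encapsulates at runtime -->
    <servlet>
        <servlet-name>hello</servlet-name>
        <servlet-class>com.example.HelloServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>hello</servlet-name>
        <url-pattern>/hello</url-pattern>
    </servlet-mapping>
</web-app>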
Tomcat request process:
1. The user enters the web address in the browser, and the request is sent to the local port 8080, which is obtained by the Connector listening there;
2. The Connector sends the request to the Engine (Container) of the Service where it is located for processing, and waits for the Engine's response;
3. The request is passed layer by layer through the four containers Engine, Host, Context and Wrapper, and the corresponding business logic and data access are finally executed in the Servlet.
4. After execution, the response is returned layer by layer through the Context, Host and Engine containers, finally reaching the Connector, which sends it back to the client.
---------------------Tomcat service deployment-------------------------
Before deploying Tomcat, the JDK must be installed, because the JDK is the environment Tomcat needs in order to run.
1. Close the firewall and copy the packages required for installing Tomcat to the /opt directory
jdk-8u201-linux-x64.rpm
apache-tomcat-9.0.16.tar.gz

systemctl stop firewalld
systemctl disable firewalld
setenforce 0

2. Install the JDK
cd /opt
rpm -qpl jdk-8u201-linux-x64.rpm
rpm -ivh jdk-8u201-linux-x64.rpm
java -version

3. Set up the JDK environment variables
vim /etc/profile.d/java.sh
export JAVA_HOME=/usr/java/jdk1.8.0_201-amd64
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH

source /etc/profile.d/java.sh
java -version
---------------------------------Small knowledge-------------------------------------------------------------------
CLASSPATH: when compiling and running a Java program, the JRE searches the paths specified by this variable for the required class (.class) files.
JDK: Java Development Kit (the Java development tools)
JRE: Java Runtime Environment
JVM: Java Virtual Machine, which allows compiled Java class files to run on a variety of platforms.
First, use a text editor to write the Java source code, for example Hello.java;
On the command line, run: javac Hello.java to compile the source code and generate the .class bytecode file;
If compilation produces no error message, run: java Hello to execute the bytecode file; the JVM interprets and runs the bytecode and prints "Hello world!".
vim Hello.java
#Class and interface naming rules: names may contain upper- and lower-case English letters, digits, $ and _; they cannot be Java keywords and cannot start with a digit;
#When the name is a single word, capitalize its first letter; when it consists of multiple words, capitalize the first letter of every word, e.g. XxxYyyZzz (the "big hump" / upper camel case convention)
public class Hello {
    public static void main(String[] args){
        System.out.println("Hello world!");
    }
}

javac Hello.java
java Hello

4. Install and start Tomcat
cd /opt
tar zxvf apache-tomcat-9.0.16.tar.gz
mv apache-tomcat-9.0.16 /usr/local/tomcat

##Start Tomcat##
#Background start
/usr/local/tomcat/bin/startup.sh
or
/usr/local/tomcat/bin/catalina.sh start
#Foreground start
/usr/local/tomcat/bin/catalina.sh run

netstat -natp | grep 8080

Use a browser to access Tomcat's default home page: http://192.168.80.100:8080

5. Optimize Tomcat startup speed
The first time Tomcat is started, you may find that startup is very slow, taking tens of seconds by default. This can be fixed by changing a JDK parameter.
vim /usr/java/jdk1.8.0_201-amd64/jre/lib/security/java.security
--line 117--modify
securerandom.source=file:/dev/urandom
● Tomcat's slow startup is caused by blocking during random-number generation, and the blocking is caused by the size of the entropy pool.
● /dev/random: blocking. Reading it returns random data derived from entropy-pool noise; when the entropy pool is empty, reads from /dev/random block.
● /dev/urandom: a non-blocking random-number generator that reuses data in the entropy pool to produce pseudo-random data. Reads from /dev/urandom therefore never block, but the entropy of its output may be lower than that of /dev/random. It is suitable as a pseudo-random generator for low-strength passwords and is not recommended for generating high-strength, long-term passwords.
● The Linux kernel uses entropy to describe the randomness of data. Entropy is a physical quantity describing the degree of disorder of a system: the greater a system's entropy, the less ordered it is, i.e. the greater the uncertainty. In information theory, entropy represents the uncertainty of a symbol or system; the greater the entropy, the less useful information the system contains and the greater the uncertainty. A computer is itself a predictable system, so true random numbers cannot be generated by computer algorithms alone. However, the machine's environment is full of noise: hardware interrupt timing, the intervals between mouse clicks and so on are completely random and unpredictable in advance. The random-number generator in the Linux kernel uses this system noise to produce high-quality random-number sequences. The kernel maintains an entropy pool that collects ambient noise from device drivers and other sources; in theory, the data in the pool is completely random and can yield a truly random sequence. To track that randomness, the kernel estimates the randomness of data as it is added to the pool, a process called entropy estimation. The entropy estimate describes how many random bits the pool contains: the larger the estimate, the better the randomness of the data in the pool.
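Two related checks and alternatives may help here; the JAVA_OPTS placement shown is only one common convention, noted as an assumption rather than the only way to do it.

cat /proc/sys/kernel/random/entropy_avail    #show the kernel's current entropy estimate

#Alternative to editing java.security: point the JVM at the non-blocking source per Tomcat instance,
#e.g. via JAVA_OPTS in /usr/local/tomcat/bin/catalina.sh (the extra /./ works around an older JDK parsing quirk)
JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"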
/usr/local/tomcat/bin/shutdown.sh
/usr/local/tomcat/bin/startup.sh
ll /usr/local/tomcat/
------Description of main contents----------------------------------------------------------------------------------------------
● bin: stores the scripts for starting and stopping Tomcat, such as catalina.sh, startup.sh and shutdown.sh
● conf: stores the various Tomcat server configuration files, such as the main configuration file server.xml and the default application deployment descriptor web.xml
● lib: the jar package that stores the library files required for Tomcat operation. Generally, no changes are made
● logs: store the logs during Tomcat execution
● temp: store files generated during Tomcat operation
● webapps: the directory where Tomcat's default Web application project resources are stored
● work: the working directory of Tomcat, which stores Web application code generation and compilation files
---------------------Tomcat virtual host configuration-------------------------
In many cases a company has multiple projects to run, and generally it will not run multiple Tomcat services on one server, because that consumes too many system resources. In this situation you need Tomcat virtual hosts.
For example, two new domain names, www.kgc.com and www.benet.com, should serve different project content when accessed.
1. Create the kgc and benet project directories and files
mkdir /usr/local/tomcat/webapps/kgc
mkdir /usr/local/tomcat/webapps/benet
echo 'This is kgc page!' > /usr/local/tomcat/webapps/kgc/index.jsp
echo 'This is benet page!' > /usr/local/tomcat/webapps/benet/index.jsp

2. Modify the Tomcat main configuration file server.xml
vim /usr/local/tomcat/conf/server.xml
--line 165--insert
<Host name="www.kgc.com" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
	<Context docBase="/usr/local/tomcat/webapps/kgc" path="" reloadable="true" />
</Host>

<Host name="www.benet.com" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
	<Context docBase="/usr/local/tomcat/webapps/benet" path="" reloadable="true" />
</Host>
Host attributes
name: host name
appBase: Tomcat's application working directory, i.e. the directory where Web applications are stored; the relative path is webapps, the absolute path is /usr/local/tomcat/webapps
unpackWARs: whether to unpack WAR archives before running the webapps in this Host; defaults to true
autoDeploy: whether to automatically deploy application files placed in the appBase directory while Tomcat is running; defaults to true
xmlValidation: whether to validate the web.xml file with an XML validator
xmlNamespaceAware: whether to enable XML namespace awareness; set both this and xmlValidation to true to enable validation of the web.xml file
Context attributes
docBase: the storage location of the corresponding Web application; a relative path may also be used, starting from the path defined by appBase in the Host to which the Context belongs
path: the URI relative to the Web server root path; if it is empty (""), it means the root path of this webapp, /
reloadable: whether to allow reloading of the classes of the Web application associated with this Context; defaults to false
/usr/local/tomcat/bin/shutdown.sh
/usr/local/tomcat/bin/startup.sh
3. client browser access verification
echo "192.168.80.100 www.kgc.com www.benet.com" >> /etc/hosts
Browser access http://www.kgc.com:8080/ This is kgc page!
Browser access http://www.benet.com:8080/ This is benet page!
HTTP request process:
(1) The Connector listens on port 8080. Since the requested port matches the listening port, the Connector accepts the request.
(2) Because the Engine's default virtual host is www.kgc.com and that virtual host's directory is webapps, the request is mapped to the /usr/local/tomcat/webapps directory.
(3) The access path is the root path and the URI is empty, so the empty string is the Web application name (context). The request therefore resolves to the /usr/local/tomcat/webapps/kgc directory, where index.jsp is parsed and returned.
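If you prefer to check the Host matching from the command line instead of (or in addition to) editing /etc/hosts, a curl request with an explicit Host header against the same IP and port behaves the same way:

curl -H "Host: www.kgc.com" http://192.168.80.100:8080/      #should return: This is kgc page!
curl -H "Host: www.benet.com" http://192.168.80.100:8080/    #should return: This is benet page!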
---------------------Tomcat optimization-------------------------
The default configuration that ships with Tomcat is not suitable for a production environment: it may frequently hang and need to be restarted. Only through continuous load testing and tuning can it run with maximum efficiency and stability. Optimization covers three aspects: operating-system optimization (kernel parameters), Tomcat configuration-file parameters, and Java virtual machine (JVM) tuning.
##Tomcat configuration file parameter optimization##
Common optimization related parameters are as follows:
[redirectPort] if the protocol supported by a connector is HTTP, when receiving the HTTPS request from the client, it will be forwarded to port 8443 defined by this attribute.
[maxThreads] Tomcat uses threads to process each received request. This value represents the maximum number of threads that Tomcat can create, that is, the maximum number of concurrent connections supported. The default value is 200.
[minSpareThreads] the minimum number of idle threads, i.e. the number of threads initialized when Tomcat starts; this many idle threads are kept waiting even when there are no requests. The default value is 10.
[maxSpareThreads] the maximum number of standby threads. Once the created thread exceeds this value, Tomcat will close the socket threads that are no longer needed. The default value is -1 (unlimited). Generally, it is not required to specify.
[processorCache] the processor object cache, which improves performance for concurrent requests. The default value is 200. If no limit is desired it can be set to -1; generally it is set to the value of maxThreads or to -1.
[URIEncoding] specifies the URL encoding format of Tomcat container. Websites generally use UTF-8 as the default encoding.
[connectionTimeout] network connection timeout, in milliseconds. Setting it to 0 means never time out, which is risky. The default is usually 20000 milliseconds.
[enableLookups] whether to reverse query the domain name to return the host name of the remote host. The value is: true or false. If it is set to false, the IP address will be returned directly. In order to improve processing capacity, it should be set to false.
[disableUploadTimeout] whether to use the timeout mechanism during uploading. Should be set to true.
[connectionUploadTimeout] upload timeout. After all, file uploading may take more time. This can be adjusted according to your own business needs, so that the Servlet has a longer time to complete its execution. It will take effect only when it is used in conjunction with the previous parameter.
[acceptCount] specifies the maximum queue length of incoming connection requests when all available threads for processing requests are used. Requests exceeding this number will not be processed. The default is 100.
[maxKeepAliveRequests] specifies the maximum number of requests for a long connection. The default long connection is open. When set to 1, it means that the long connection is closed; When -1, there is no limit on the number of requests
[compression] whether to perform GZIP compression on the response data. Off: indicates that compression is prohibited; On: indicates that compression is allowed (the text will be compressed). force: indicates that compression is performed in all cases. The default value is off. Compressed data can effectively reduce the size of the page, generally by about 1/3, saving bandwidth.
[compressionMinSize] indicates the minimum value of the compression response. The message will be compressed only when the response message size is greater than this value. If the compression function is enabled, the default value is 2048.
[compressableMimeType] compression type, which specifies the types of files for data compression.
[noCompressionUserAgents= "gozilla, traviata"] for the following browsers, compression is not enabled
#If the static and dynamic separation processing has been performed, data such as static pages and pictures do not need to be processed in tomcat, and compression should not be configured in Tomcat.
The above are some commonly used configuration parameters. There are many other parameter settings that can be further optimized. The parameter attribute values of HTTP Connector and AJP Connector can be learned by referring to the detailed instructions in the official documents.
vim /usr/local/tomcat/conf/server.xml
......
--line 71--insert the following attributes into the existing Connector
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           minSpareThreads="50"
           enableLookups="false"
           disableUploadTimeout="true"
           acceptCount="300"
           maxThreads="500"
           processorCache="500"
           URIEncoding="UTF-8"
           compression="on"
           compressionMinSize="2048"
           compressableMimeType="text/html,text/xml,text/javascript,text/css,text/plain,image/gif,image/jpg,image/png"/>
---------------------Tomcat multi instance deployment---------------------
1. Install the JDK (same as before)

2. Install Tomcat
cd /opt
tar zxvf apache-tomcat-9.0.16.tar.gz
mkdir /usr/local/tomcat
mv apache-tomcat-9.0.16 /usr/local/tomcat/tomcat1
cp -a /usr/local/tomcat/tomcat1 /usr/local/tomcat/tomcat2

3. Configure the Tomcat environment variables
vim /etc/profile.d/tomcat.sh
#tomcat1
export CATALINA_HOME1=/usr/local/tomcat/tomcat1
export CATALINA_BASE1=/usr/local/tomcat/tomcat1
export TOMCAT_HOME1=/usr/local/tomcat/tomcat1
#tomcat2
export CATALINA_HOME2=/usr/local/tomcat/tomcat2
export CATALINA_BASE2=/usr/local/tomcat/tomcat2
export TOMCAT_HOME2=/usr/local/tomcat/tomcat2

source /etc/profile.d/tomcat.sh

4. Modify server.xml in tomcat2; the Tomcat instances must not use duplicate port numbers
vim /usr/local/tomcat/tomcat2/conf/server.xml
<Server port="8006" shutdown="SHUTDOWN">                            #Line 22, modify the Server port: default 8005 -> 8006
<Connector port="8081" protocol="HTTP/1.1"                          #Line 69, modify the HTTP/1.1 Connector port: default 8080 -> 8081
<Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />    #Line 116, modify the AJP/1.3 Connector port: default 8009 -> 8010
The first connector listens to port 8080 by default and is responsible for establishing HTTP connections. This connector is used when accessing the Web application of the Tomcat server through the browser.
The second connector listens to port 8009 by default and is responsible for establishing connections with other HTTP servers. This connector is required when integrating Tomcat with other HTTP servers.
5. Modify the startup.sh and shutdown.sh files to add the Tomcat environment variables
vim /usr/local/tomcat/tomcat1/bin/startup.sh
# -----------------------------------------------------------------------------
# Start Script for the CATALINA Server
# -----------------------------------------------------------------------------
##Add the following
export CATALINA_BASE=$CATALINA_BASE1
export CATALINA_HOME=$CATALINA_HOME1
export TOMCAT_HOME=$TOMCAT_HOME1

vim /usr/local/tomcat/tomcat1/bin/shutdown.sh
# -----------------------------------------------------------------------------
# Stop script for the CATALINA Server
# -----------------------------------------------------------------------------
export CATALINA_BASE=$CATALINA_BASE1
export CATALINA_HOME=$CATALINA_HOME1
export TOMCAT_HOME=$TOMCAT_HOME1

vim /usr/local/tomcat/tomcat2/bin/startup.sh
# -----------------------------------------------------------------------------
# Start Script for the CATALINA Server
# -----------------------------------------------------------------------------
export CATALINA_BASE=$CATALINA_BASE2
export CATALINA_HOME=$CATALINA_HOME2
export TOMCAT_HOME=$TOMCAT_HOME2

vim /usr/local/tomcat/tomcat2/bin/shutdown.sh
# -----------------------------------------------------------------------------
# Stop script for the CATALINA Server
# -----------------------------------------------------------------------------
export CATALINA_BASE=$CATALINA_BASE2
export CATALINA_HOME=$CATALINA_HOME2
export TOMCAT_HOME=$TOMCAT_HOME2

6. Start /bin/startup.sh in each Tomcat instance
/usr/local/tomcat/tomcat1/bin/startup.sh
/usr/local/tomcat/tomcat2/bin/startup.sh

netstat -natp | grep java
7. browser access test
http://192.168.80.101:8080
http://192.168.80.101:8081
---------------------Nginx+Tomcat load balancing, dynamic and static separation-------------------------
Nginx server: 192.168.80.10:80
Tomcat server 1: 192.168.80.100:8080
Tomcat server 2: 192.168.80.101:8080, 192.168.80.101:8081
1. deploy Nginx load balancer
systemctl stop firewalld
setenforce 0

yum -y install pcre-devel zlib-devel openssl-devel gcc gcc-c++ make

useradd -M -s /sbin/nologin nginx

cd /opt
tar zxvf nginx-1.12.0.tar.gz -C /opt/
cd nginx-1.12.0/
./configure \
--prefix=/usr/local/nginx \
--user=nginx \
--group=nginx \
--with-file-aio \                      #Enable asynchronous file I/O support
--with-http_stub_status_module \       #Enable status statistics
--with-http_gzip_static_module \       #Enable gzip static compression
--with-http_flv_module \               #Enable the flv module to provide pseudo-streaming support for flv video
--with-http_ssl_module \               #Enable the SSL module to provide SSL encryption
--with-stream                          #Enable the stream module to provide layer-4 scheduling
----------------------------------------------------------------------------------------------------------
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-file-aio --with-http_stub_status_module --with-http_gzip_static_module --with-http_flv_module --with-stream

make && make install

ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

vim /lib/systemd/system/nginx.service
[Unit]
Description=nginx
After=network.target
[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target

chmod 754 /lib/systemd/system/nginx.service
systemctl start nginx.service
systemctl enable nginx.service
2. deploy two Tomcat application servers
systemctl stop firewalld
setenforce 0

tar zxvf jdk-8u91-linux-x64.tar.gz -C /usr/local/

vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_91
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin:$PATH

source /etc/profile

tar zxvf apache-tomcat-8.5.16.tar.gz
mv /opt/apache-tomcat-8.5.16/ /usr/local/tomcat

/usr/local/tomcat/bin/shutdown.sh
/usr/local/tomcat/bin/startup.sh

netstat -ntap | grep 8080

3. Dynamic and static separation configuration
(1) Tomcat1 server configuration
mkdir /usr/local/tomcat/webapps/test
vim /usr/local/tomcat/webapps/test/index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
vim /usr/local/tomcat/conf/server.xml
#Since the Host name configured here stays localhost, delete the HOST configuration added earlier and point the default Host's Context at the test directory
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
	<Context docBase="/usr/local/tomcat/webapps/test" path="" reloadable="true" />
</Host>
/usr/local/tomcat/bin/shutdown.sh
/usr/local/tomcat/bin/startup.sh
(2) Tomcat2 server configuration
mkdir /usr/local/tomcat/tomcat1/webapps/test /usr/local/tomcat/tomcat2/webapps/test

vim /usr/local/tomcat/tomcat1/webapps/test/index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
<title>JSP test2 page</title>   #Specify as test2 page
</head>
<body>
<% out.println("Dynamic page 2,http://www.test2.com");%>
</body>
</html>

vim /usr/local/tomcat/tomcat1/conf/server.xml
#Delete the previous HOST configuration
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
	<Context docBase="/usr/local/tomcat/tomcat1/webapps/test" path="" reloadable="true" />
</Host>

/usr/local/tomcat/tomcat1/bin/shutdown.sh
/usr/local/tomcat/tomcat1/bin/startup.sh

vim /usr/local/tomcat/tomcat2/webapps/test/index.jsp
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
<title>JSP test3 page</title>   #Specify as test3 page
</head>
<body>
<% out.println("Dynamic page 3,http://www.test3.com");%>
</body>
</html>

vim /usr/local/tomcat/tomcat2/conf/server.xml
#Delete the previous HOST configuration
<Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false">
	<Context docBase="/usr/local/tomcat/tomcat2/webapps/test" path="" reloadable="true" />
</Host>

/usr/local/tomcat/tomcat2/bin/shutdown.sh
/usr/local/tomcat/tomcat2/bin/startup.sh
(3) Nginx server configuration
#Prepare static pages and static pictures
echo '<html><body><h1>This is a static page</h1></body></html>' > /usr/local/nginx/html/index.html

mkdir /usr/local/nginx/html/img
cp /root/game.jpg /usr/local/nginx/html/img

vim /usr/local/nginx/conf/nginx.conf
......
http {
......
	#gzip on;

	#Configure the server list for load balancing. The weight parameter indicates the weight: the higher the weight, the greater the probability of being assigned requests
	upstream tomcat_server {
		server 192.168.80.100:8080 weight=1;
		server 192.168.80.101:8080 weight=1;
		server 192.168.80.101:8081 weight=1;
	}

	server {
		listen 80;
		server_name www.kgc.com;
		charset utf-8;
		#access_log logs/host.access.log main;

		#Configure Nginx to handle dynamic page requests: forward .jsp requests to the Tomcat servers
		location ~ .*\.jsp$ {
			proxy_pass http://tomcat_server;
			#Let the back-end Web server obtain the real IP of the remote client
			##Set the HOST header (domain name, IP, port) of the request received by the back-end Web server. By default the HOST value is the hostname set by the proxy_pass directive. If the reverse proxy does not rewrite the request header, the back-end real server will think all requests come from the reverse proxy, and if the back end has an anti-attack policy, the proxy machine will be blocked.
			proxy_set_header HOST $host;
			##Assign $remote_addr to X-Real-IP so the back end obtains the client's source IP
			proxy_set_header X-Real-IP $remote_addr;
			##When nginx acts as a proxy, this list records the IPs of the client and each proxy machine the request passed through
			proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		}

		#Configure Nginx to handle static picture requests
		location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|css)$ {
			root /usr/local/nginx/html/img;
			expires 10d;
		}

		location / {
			root html;
			index index.html index.htm;
		}
......
	}
......
}
4. test effect
Test static page effect
Browser access http://192.168.80.10/
Browser access http://192.168.80.10/game.jpg
Test the load balancing effect by repeatedly refreshing the browser
Browser access http://192.168.80.10/index.jsp
Nginx load balancing modes (a configuration sketch follows this list):
● rr load balancing mode:
Each request is allocated to a different back-end server one by one in chronological order. If a node exceeds the maximum number of failures (max_fails, default 1) within the failure window (fail_timeout, default 10 seconds), its effective weight becomes 0 for that window, and it returns to normal once the window expires; if all nodes are down, all of them are restored to effective and probing continues. Generally speaking, rr distributes requests evenly according to weight.
● least_conn minimum connection:
Give priority to scheduling client requests to the server with the least current connections.
● ip_hash load balancing mode:
Each request is allocated according to the hash of the client IP, so each visitor always reaches the same back-end server, which solves the session problem. However, ip_hash can cause uneven load: some servers receive more requests and some fewer. Therefore ip_hash is not recommended; instead, session sharing on the back-end services can replace nginx's ip_hash (the back-end servers keep sessions synchronized through their own mechanisms).
● fair (third party) load balancing mode:
Requests are allocated according to the response time of the back-end server, and those with short response time are allocated first.
● url_hash (third party) load balancing mode:
Hashes the URI requested by the user. Similar to the ip_hash algorithm, it allocates each request according to the hash of the URL, so that each URL is directed to the same back-end server, but this can also cause uneven distribution. This mode works better when the back-end servers do caching.
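As the sketch referenced above, switching scheduling modes normally only means adding one directive at the top of the upstream block; the addresses below simply reuse the ones from this deployment.

upstream tomcat_server {
	#default is weighted round robin (rr); uncomment one of the following to switch modes
	#least_conn;
	#ip_hash;
	server 192.168.80.100:8080 weight=1;
	server 192.168.80.101:8080 weight=1;
	server 192.168.80.101:8081 weight=1;
}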
Nginx four layer proxy configuration:
./configure --with-stream
The stream block sits at the same level as the http block, so it is generally placed above the http section:
stream {
	upstream appserver {
		server 192.168.80.100:8080 weight=1;
		server 192.168.80.101:8080 weight=1;
		server 192.168.80.101:8081 weight=1;
	}
	server {
		listen 8080;
		proxy_pass appserver;
	}
}

http {
......