Environment configuration requirements:
Component installation package | Software name and version | Function
---|---|---
Java compile and run components | JDK 1.8.0_211 | Compiling and running the programs
elasticsearch | 7.1.1 | Log storage
ik | 7.1.1 | IK tokenizer (Chinese word segmentation plugin)
kibana | 7.1.1 | Graphical display of log data
logstash | 7.1.1 | Log processing
filebeat | 7.1.1 | Log collection
Note: elasticsearch, logstash, kibana, filebeat, and the ik plugin must all be installed at the same version.
1. Install elasticsearch
- Create es installation path
mkdir -p /data/nusp/es/{data,logs}
- Create the esUser user and grant it ownership of the es directory
groupadd esGroup
useradd -g esGroup esUser
chown -R esUser:esGroup /data/nusp/es
- As root, edit the configuration file /etc/security/limits.conf and add the following four lines at the end
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
- Edit the configuration file /etc/sysctl.conf and add the following
vm.overcommit_memory = 1
vm.max_map_count = 655360
- Execute sysctl -p to make the configuration take effect
sysctl -p
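To confirm the kernel and limit settings actually took effect, the values can be read back. A quick check (the limits from limits.conf only apply to sessions opened after the edit):

```shell
# Read back the kernel parameters applied via sysctl -p
sysctl vm.max_map_count        # expect: vm.max_map_count = 655360
sysctl vm.overcommit_memory    # expect: vm.overcommit_memory = 1
# File-descriptor limit as seen by a fresh esUser session
su - esUser -c 'ulimit -n'     # expect: 65536
```

If `ulimit -n` still shows the old value, log out and back in as esUser before starting elasticsearch.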
- Upload the es installation package to the es directory and unzip it
cd /data/nusp/es
tar -zxvf elasticsearch-7.1.1-linux-x86_64.tar.gz
- Modify the configuration file: enter the config directory under the es installation directory and edit the elasticsearch.yml file
vim /data/nusp/es/elasticsearch-7.1.1/config/elasticsearch.yml
cluster.name: elkbdp-cluster #cluster name
node.name: elk #node name
cluster.initial_master_nodes: ["elk"] #initial master-eligible nodes
path.data: /data/nusp/es/data #data storage path
path.logs: /data/nusp/es/logs #log storage path
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
network.host: 0.0.0.0 #listen on all interfaces so any IP can access
discovery.seed_hosts: ["192.168.11.11","192.168.11.12","192.168.11.13"] #addresses of the cluster nodes for discovery
discovery.zen.minimum_master_nodes: 2 #minimum number of master-eligible nodes required for a master election
http.cors.enabled: true
http.max_initial_line_length: "1024k"
http.max_header_size: "1024k"
- Modify the jvm.options file
cd /data/nusp/es/elasticsearch-7.1.1/config/
vim jvm.options
-Xms4g
-Xmx4g
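The 4 GB heap above is only an example. A common sizing rule (an addition here, not part of the original guide) is half of physical RAM, capped around 31 GB so compressed object pointers stay enabled. A small sketch that computes such a value on Linux:

```shell
# Suggest an elasticsearch heap size: half of physical RAM, capped at 31g
# (staying below ~32g keeps compressed object pointers enabled)
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
half_gb=$(( total_kb / 1024 / 1024 / 2 ))
heap_gb=$(( half_gb > 31 ? 31 : half_gb ))
[ "$heap_gb" -lt 1 ] && heap_gb=1
echo "-Xms${heap_gb}g -Xmx${heap_gb}g"
```

Whatever value is chosen, -Xms and -Xmx should always be set equal, as in the file above.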
- Install the ik tokenizer
Unzip the prepared ik tokenizer package and copy the files into the plugins/ik directory under the es installation directory. Create the directory if it does not exist; it must contain nothing besides the plugin files.
- Start the es service as esUser (elasticsearch refuses to start as root): enter the bin directory under the es installation directory and execute the following (runs in the background; if no errors appear, startup is complete and http://ip:9200 can be accessed).
./elasticsearch -d
- Test the es service: enter http://ip:9200 in a browser and press Enter; if startup succeeded, the node information JSON is displayed.
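The same check can be made from the command line (192.168.11.11 stands in for any of your node addresses):

```shell
# Node info and cluster health over the REST API
curl -s http://192.168.11.11:9200
curl -s http://192.168.11.11:9200/_cluster/health?pretty
# "status" should be green (or yellow on a single node) once es is up
```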
2. Configure TLS and Authentication
The following steps only need to be executed on one master node.
- Generate CA certificate
cd /data/nusp/es/elasticsearch-7.1.1/bin
./elasticsearch-certutil ca   # press Enter twice
./elasticsearch-certutil cert --ca elastic-stack-ca.p12   # press Enter three times
- Grant permissions (and copy the certificate file elastic-certificates.p12 to the other master nodes, granting the same permissions there).
cd /data/nusp/es/elasticsearch-7.1.1
mkdir config/certs
mv elastic-*.p12 config/certs/
chown -R esUser:esGroup config/certs/
- Modify the configuration file (add the ssl settings to the configuration file of every master node)
vim /data/nusp/es/elasticsearch-7.1.1/config/elasticsearch.yml
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate #certificate verification level
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
- Restart elasticsearch (10086 here is an example process id; replace it with the actual elasticsearch pid)
kill -9 10086
./elasticsearch -d
- Set the default passwords (enter y, then set passwords for the elastic, apm_system, kibana, logstash_system, beats_system, and remote_monitoring_user accounts in turn; for convenience, the unified password 10086 is used here)
bin/elasticsearch-setup-passwords interactive
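Once the passwords are set, anonymous requests are rejected. A quick authenticated check (using the example password 10086 from above):

```shell
# Without credentials this now returns a 401 security error
curl -s http://192.168.11.11:9200
# With the elastic superuser the request succeeds
curl -s -u elastic:10086 http://192.168.11.11:9200/_security/_authenticate?pretty
```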
- Configure kibana, modify the kibana.yml file, add username and password parameters (refer to the following)
3. Install Kibana
- Upload the kibana installation package to the es directory and unzip it
cd /data/nusp/es
tar -zxvf kibana-7.1.1-linux-x86_64.tar.gz
- Modify the Kibana configuration file and add the following
vim /data/nusp/es/kibana-7.1.1-linux-x86_64/config/kibana.yml
server.port: 5601 #listening port
server.host: "0.0.0.0" #bind address; 0.0.0.0 makes kibana reachable from outside
elasticsearch.hosts: ["http://192.168.11.11:9200","http://192.168.11.12:9200","http://192.168.11.13:9200"]
elasticsearch.username: "kibana"
elasticsearch.password: "10086"
- Start the kibana service
./bin/kibana &
- Verify by visiting http://192.168.11.11:5601 through a browser.
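Kibana readiness can also be checked without a browser, via its status API (same hypothetical node address as above):

```shell
# Overall state should report green once kibana is up and connected to es
curl -s http://192.168.11.11:5601/api/status
```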
- After logging in, verify the ik tokenizer just installed (for example in Kibana Dev Tools)
POST /_analyze
{
  "text": "I am Chinese"
}

IK tokenizer validation script:

POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "I am Chinese"
}
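The same comparison can be run from the shell; the first request uses the standard analyzer, the second the ik_max_word analyzer (credentials are the example password set earlier):

```shell
# Standard analyzer vs. ik_max_word on the same text
curl -s -u elastic:10086 -H 'Content-Type: application/json' \
  -X POST http://192.168.11.11:9200/_analyze \
  -d '{"text": "I am Chinese"}'
curl -s -u elastic:10086 -H 'Content-Type: application/json' \
  -X POST http://192.168.11.11:9200/_analyze \
  -d '{"analyzer": "ik_max_word", "text": "I am Chinese"}'
```

If the second request errors out, the ik plugin was not picked up; check the plugins/ik directory and restart es.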
4. Install logstash
- Unzip the installation package and grant ownership of the installation directory
cd /data/nusp/es
tar -xzvf logstash-7.1.1.tar.gz
chown -R esUser:esUser logstash-7.1.1
- Modify the logstash configuration: create a pipeline directory under logstash-7.1.1, copy the logstash-sample.conf file into it, and change the elasticsearch addresses in its output section.
mkdir -p /data/nusp/es/logstash-7.1.1/pipeline
cd /data/nusp/es/logstash-7.1.1/pipeline
cp /data/nusp/es/logstash-7.1.1/config/logstash-sample.conf .
vim logstash-sample.conf
input {
  beats {
    port => 5044
  }
}
filter {
}
output {
  elasticsearch {
    hosts => ["192.168.11.11","192.168.11.12","192.168.11.13"]
    index => "logstash-dev-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "10086"
  }
  stdout {
    codec => rubydebug
  }
}
- To start the logstash service, execute the startup command as the esUser user
su - esUser
cd /data/nusp/es/logstash-7.1.1/
./bin/logstash -f ./pipeline/logstash-sample.conf > /dev/null &
- Verify that it is working properly
cd /data/nusp/es/logstash-7.1.1/logs tail -f logstash-plain.log
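Before (or instead of) tailing the log, the pipeline file can be syntax-checked up front with logstash's --config.test_and_exit flag, which validates the configuration without starting the service:

```shell
cd /data/nusp/es/logstash-7.1.1
# Validates the pipeline file and exits; reports an OK result when the config is valid
./bin/logstash -f ./pipeline/logstash-sample.conf --config.test_and_exit
```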
5. Install filebeat
Continue writing tomorrow!