ELK + Filebeat + Redis to monitor nginx logs

1. Environment preparation

Background introduction:

Operations personnel need fine-grained visibility into system and business logs in order to analyze the state of the system and the business. Because logs are scattered across different servers, the traditional approach of logging in to each server in turn is cumbersome and inefficient, so we need a centralized log management tool that gathers logs from the different servers for analysis and display.

Introduction to ELK:
ELK is the abbreviation of three open source projects: Elasticsearch, Logstash, and Kibana. A fourth component, Filebeat, is a lightweight log collection and shipping agent. Filebeat uses few resources and is well suited to collecting logs on each server and forwarding them to Logstash; Elastic officially recommends it for this role.

Elasticsearch is an open source distributed search engine that collects, analyzes, and stores data. Its features include distributed operation, zero configuration, automatic discovery, automatic index sharding, an index replica mechanism, a RESTful interface, multiple data sources, and automatic search load balancing.

Logstash is a tool for collecting, parsing, and filtering logs, and it supports a large number of data acquisition methods. It typically works in a client/server architecture: the client side is installed on the hosts whose logs need to be collected, while the server side filters and transforms the logs received from each node and forwards them to Elasticsearch.

Kibana is also an open source, free tool. It provides a friendly web interface for log analysis on top of Logstash and Elasticsearch, helping to summarize, analyze, and search important log data.

Filebeat is a lightweight transport tool for forwarding and centralizing log data.

Redis is a key-value storage system. Like Memcached, it caches data in memory for efficiency, but it supports a richer set of value types: string, list, set, zset (sorted set), and hash. These types support push/pop, add/remove, intersection, union, difference, and other operations, all of which are atomic, and Redis can also sort on top of them in various ways. Unlike Memcached, Redis periodically writes updated data to disk or appends each modification to an append-only log file, and implements master-slave replication on that basis. In this setup, Redis acts as a buffer queue between Filebeat and Logstash.
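
Filebeat will push each log event onto a Redis list and Logstash will pop events off of it, so the list behaves as a FIFO queue. A minimal sketch of that pattern with redis-cli (the key name and payload are illustrative only):

[root@redis ~]# redis-cli RPUSH demoqueue '{"message":"log line 1"}' # Producer appends to the tail
(integer) 1
[root@redis ~]# redis-cli LLEN demoqueue # Current queue depth
(integer) 1
[root@redis ~]# redis-cli LPOP demoqueue # Consumer takes from the head
"{\"message\":\"log line 1\"}"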

Environment preparation:

[root@Kibana ~]# cat /etc/hosts # Configure hostname resolution
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.42.200 kibana
192.168.42.201 es
192.168.42.202 logstash
192.168.42.203 redis
192.168.42.204 nginx
[root@Kibana ~]# cat /etc/yum.repos.d/elk.repo # The Elastic repo; the other nodes' yum repos use the Tsinghua mirror as well
[elk]
name=elk
baseurl=https://mirrors.tuna.tsinghua.edu.cn/elasticstack/yum/elastic-7.x/
gpgcheck=0
enabled=1
[root@Kibana ~]# systemctl stop firewalld # Disable the firewall and SELinux
[root@Kibana ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@Kibana ~]# setenforce 0
[root@Kibana ~]# sed -i "s/enforcing/permissive/g" /etc/selinux/config

Required software:
nginx filebeat logstash elasticsearch kibana openjdk (or Oracle JDK)

# nginx server
[root@nginx ~]# dnf -y install nginx filebeat
# redis server
[root@redis ~]# dnf -y install redis
# logstash server
[root@Logstash ~]# dnf -y install logstash java-1.8.0-openjdk
# elasticsearch server
[root@ES ~]# dnf -y install elasticsearch java-1.8.0-openjdk
# kibana server
[root@Kibana ~]# dnf -y install kibana
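
The Elastic components should run the same version on every node; version skew between Beats, Logstash, and Elasticsearch can cause ingest failures. A quick sanity check on each host after installation:

[root@nginx ~]# rpm -q filebeat
[root@Logstash ~]# rpm -q logstash
[root@ES ~]# rpm -q elasticsearch
[root@Kibana ~]# rpm -q kibana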

2. Cluster setup

1. nginx server

[root@nginx ~]# systemctl start nginx    
[root@nginx ~]# systemctl enable nginx   
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
# After starting the service, access it from a browser at least once; otherwise there is no access log to ship
[root@nginx ~]# vim /etc/filebeat/filebeat.yml # Edit the configuration file
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/access.log

  # Add the fields that the Logstash filter (configured later) matches on
  fields:
    app: www
    type: nginx
  fields_under_root: true

  # Extension: log filtering
  # include_lines: ['test'] # If enabled, only lines containing the keyword "test" are collected

# Comment out the default Elasticsearch output (Filebeat allows only one output to be enabled)
#output.elasticsearch:
  # Array of hosts to connect to.
#  hosts: ["localhost:9200"]
# Add the Redis output instead
output.redis:
  # Array of hosts to connect to.
  hosts: ["192.168.42.203"] # The redis server's ip address
  password: "123456"
  key: "filebeattoredis"
  db: 0
  datatype: list
  
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
Save and exit.
[root@nginx ~]# systemctl start filebeat
[root@nginx ~]# systemctl enable filebeat
Synchronizing state of filebeat.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable filebeat
Created symlink /etc/systemd/system/multi-user.target.wants/filebeat.service → /usr/lib/systemd/system/filebeat.service.
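
Before relying on the pipeline, Filebeat's built-in self-tests can confirm that the configuration parses and that the configured output is reachable:

[root@nginx ~]# filebeat test config # Validates the syntax of filebeat.yml
[root@nginx ~]# filebeat test output # Attempts a connection to the configured output (where the output type supports testing)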

2. redis server

[root@redis ~]# vim /etc/redis.conf # Edit the configuration file
bind 0.0.0.0 # Listen on all interfaces so that filebeat can connect and write
requirepass 123456
Save and exit.
[root@redis ~]# systemctl restart redis   
[root@redis ~]# systemctl enable redis   
Created symlink /etc/systemd/system/multi-user.target.wants/redis.service → /usr/lib/systemd/system/redis.service.
[root@redis ~]# redis-cli -h 192.168.42.203 -a 123456 # Test redis service
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.42.203:6379> keys *
1) "filebeattoredis"
192.168.42.203:6379> llen filebeattoredis
(integer) 8 # Eight log entries are queued; access nginx from a browser first, or the key will not exist yet
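
Each queued entry is a JSON document produced by Filebeat. You can peek at the first one without consuming it:

192.168.42.203:6379> lrange filebeattoredis 0 0 # Show the first entry without removing it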

3. logstash server

[root@Logstash ~]# vim /etc/logstash/logstash.yml # Edit the configuration file
path.config: /etc/logstash/conf.d/	
[root@Logstash ~]# vim /etc/logstash/conf.d/logstash_nginx.conf
# Add the following
input {
    redis {
        host => "192.168.42.203"
        port => 6379
        password => "123456"
        db => 0
        data_type => "list"
        key => "filebeattoredis"
    }
}
filter {
  if [app] == "www" {
    if [type] == "nginx" {
      grok {
        match => {
          "message" => "%{IPV4:remote_addr} - (%{USERNAME:remote_user}|-) \[%{HTTPDATE:time_local}\] \"%{WORD:request_method} %{URIPATHPARAM:request_uri} HTTP/%{NUMBER:http_protocol}\" %{NUMBER:http_status} %{NUMBER:body_bytes_sent} \"%
{GREEDYDATA:http_referer}\" \"%{GREEDYDATA:http_user_agent}\" \"(%{IPV4:http_x_forwarded_for}|-)\""
        }
        overwrite => ["message"]
      }
      date {
          locale => "en"
          match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]
      }
      mutate {
          # Only takes effect if a geoip filter populates [geoip][coordinates]; harmless otherwise
          convert => ["[geoip][coordinates]", "float"]
      }
    }
  }
}

output {
  elasticsearch {
      hosts  => ["http://192.168.42.201:9200"]
      index  => "logstash-nginx-log-format-%{+YYYY.MM.dd}"
  }
  stdout{
  }
}
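
Before starting the service, the pipeline definition can be checked for syntax errors with Logstash's built-in config test (paths as installed by the RPM):

[root@Logstash ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash_nginx.conf --config.test_and_exit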
[root@Logstash ~]# systemctl start logstash
[root@Logstash ~]# systemctl enable logstash        
Created symlink /etc/systemd/system/multi-user.target.wants/logstash.service → /etc/systemd/system/logstash.service.

4. es server

[root@ES ~]# vim /etc/security/limits.conf
# Append to the end of the file
# Each entry has four fields: <domain> <type> <item> <value>
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 65536
# The RPM runs Elasticsearch as the "elasticsearch" user, so target that account
elasticsearch hard fsize unlimited
elasticsearch soft fsize unlimited
[root@ES ~]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf # Raise the mmap count limit that Elasticsearch requires
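[root@ES ~]# sysctl -p # Reload kernel parameters so the setting takes effect without a reboot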
[root@ES ~]# vim /etc/elasticsearch/elasticsearch.yml
# Modify to the following configuration
node.name: node-1 # Must match an entry in cluster.initial_master_nodes below
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1"] # With multiple es servers, list their ip addresses here
cluster.initial_master_nodes: ["node-1"]
Save and exit.
[root@ES ~]# systemctl start elasticsearch # The service takes a while to start
[root@ES ~]# systemctl enable elasticsearch
Synchronizing state of elasticsearch.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable elasticsearch
Created symlink /etc/systemd/system/multi-user.target.wants/elasticsearch.service → /usr/lib/systemd/system/elasticsearch.service.
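
Once the service is up, a quick query confirms that the node answers and shows cluster health (IP as configured in /etc/hosts above):

[root@ES ~]# curl http://192.168.42.201:9200/ # Returns node and version info as JSON
[root@ES ~]# curl http://192.168.42.201:9200/_cat/health?v # Status should be green or yellow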

Matters needing attention:
If Kibana keeps displaying "server is not ready yet":

[root@ES ~]# curl http://192.168.42.201:9200/_cat/indices?v
# Check whether a .kibana index already exists
# If so, delete it with the command below and then restart kibana
[root@ES ~]# curl -X DELETE http://localhost:9200/.kibana*

5. kibana server

[root@Kibana ~]# vim /etc/kibana/kibana.yml
# Modify to the following configuration
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://192.168.42.201:9200"] # The es server's ip address
logging.dest: /var/log/kibana.log # Log file location
i18n.locale: "zh-CN" # Set the interface language to Chinese
Save and exit.
[root@Kibana ~]# touch /var/log/kibana.log # Create log file
[root@Kibana ~]# chown kibana:kibana /var/log/kibana.log
[root@Kibana ~]# systemctl start kibana
[root@Kibana ~]# systemctl enable kibana
Synchronizing state of kibana.service with SysV service script with /usr/lib/systemd/systemd-sysv-install.
Executing: /usr/lib/systemd/systemd-sysv-install enable kibana
Created symlink /etc/systemd/system/multi-user.target.wants/kibana.service → /etc/systemd/system/kibana.service.
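
A quick check that the UI is listening before switching to the browser (Kibana can take a minute or two after start; any HTTP response means it is up):

[root@Kibana ~]# curl -sI http://192.168.42.200:5601 | head -1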

3. Testing the services

Note: access the nginx server from a browser several times first; otherwise there are no log entries to index.
Then open port 5601 of the kibana server in a browser.

Create an index pattern:
Click the three bars in the upper left corner -> scroll down to Stack Management -> Index Patterns -> Create index pattern (upper right corner) -> fill in "logstash-nginx-log-format*" as the index pattern name -> Next -> Time field -> @timestamp -> Create index pattern
View the index:
Click the three bars in the upper left corner -> Discover

You can then filter on whichever fields you want to inspect.
Create visualizations:
Click the three bars in the upper left corner -> Dashboard -> Create dashboard (upper right corner) -> Create visualization

Select fields to chart. For example, to view a page's UV (unique visitors per day), add a time field and an ip field.
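
As an end-to-end check, generate some traffic against nginx and confirm that the daily index's document count grows (IPs as configured in /etc/hosts above):

[root@nginx ~]# for i in $(seq 1 20); do curl -s http://192.168.42.204/ > /dev/null; done # Generate 20 hits
[root@ES ~]# curl 'http://192.168.42.201:9200/_cat/indices/logstash-nginx-log-format-*?v' # docs.count should increase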

At this point, the cluster is set up.
