Password configuration and common operations for EFK on Kubernetes


Set password for EFK


Configure elasticsearch


Set up security

# Exec into the elk pod from the cloud side (e.g. with kubectl exec)

# Edit the configuration file of elasticsearch
vim /etc/elasticsearch/elasticsearch.yml

# Add the following two lines at the end
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

# Add the following line to allow regular expressions in painless scripts (used later from kibana)
script.painless.regex.enabled: true

#exit the container
exit

Restart the elk container

docker ps
docker restart <elk container ID>

Set the password

# Exec into the elk pod again and run this command to start setting the passwords
/opt/elasticsearch/bin/elasticsearch-setup-passwords interactive

# Enter y first, then type the chosen password for each user; here it is set to "josepha0509"
Initiating the setup of passwords for reserved users elastic,apm_system,kibana,kibana_system,logstash_system,beats_system,remote_monitoring_user.
You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N]y


Enter password for [elastic]:
Reenter password for [elastic]:
Enter password for [apm_system]:
Reenter password for [apm_system]:
Enter password for [kibana_system]:
Reenter password for [kibana_system]:
Enter password for [logstash_system]:
Reenter password for [logstash_system]:
Enter password for [beats_system]:
Reenter password for [beats_system]:
Enter password for [remote_monitoring_user]:
Reenter password for [remote_monitoring_user]:

Configure Kibana

# Edit kibana's configuration file
vim /opt/kibana/config/kibana.yml

# Use the Chinese locale for the kibana UI
i18n.locale: "zh-CN"

# Credentials matching the password set in elasticsearch
elasticsearch.username: "elastic"
elasticsearch.password: "josepha0509"

exit

Restart the container

docker restart <elk container ID>

Rewrite the startup script of elasticsearch and kibana

Once the elk container is restarted, the passwords and other settings configured for elasticsearch and kibana are cleared. A workaround is to rewrite the elk startup script; the default script is /usr/local/bin/start.sh inside the elk container.

Add the elasticsearch and kibana configuration steps to a copy of the script named start_edited.sh, then edit the elk startup YAML in the k8s cluster: mount start_edited.sh into the container and override the elk image's entrypoint so that it runs the edited script. The passwords are then configured automatically on every start.
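A minimal sketch of that change, assuming start_edited.sh has been published as a ConfigMap named elk-start (the ConfigMap name, labels, mount path, and image tag here are illustrative, not taken from the original setup):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elk
spec:
  selector:
    matchLabels:
      app: elk
  template:
    metadata:
      labels:
        app: elk
    spec:
      containers:
        - name: elk
          image: sebp/elk:7.16.3          # illustrative image tag
          # Override the image entrypoint with the edited startup script
          command: ["/bin/bash", "/usr/local/bin/start_edited.sh"]
          volumeMounts:
            - name: start-script
              mountPath: /usr/local/bin/start_edited.sh
              subPath: start_edited.sh
      volumes:
        - name: start-script
          configMap:
            name: elk-start
            defaultMode: 0755             # make the script executable
```

Because the script is mounted from a ConfigMap, editing the ConfigMap and restarting the pod is enough to change the startup behavior.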



Configure filebeat


Set the password to be consistent with elasticsearch


Enter the filebeat container, either through docker on the edge node or through kubectl on the cloud side

The default configuration file is /usr/share/filebeat/filebeat.yml

Edit the configuration file and add the password

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["x.x.x.x:30092"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "josepha0509"


Restart the filebeat container

docker restart <filebeat container ID>

Rewrite the timestamp

If the log's original timestamp is a plain number accurate only to the second, filebeat cannot parse it directly. It can be converted in the processors section of filebeat's configuration file; the script processor there supports JavaScript.

The default configuration file is /usr/share/filebeat/filebeat.yml

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

  - script:
      lang: javascript
      id: my_filter
      tag: enable
      source: >
        function process(event) {
            // The message begins with a seconds-resolution unix timestamp
            var str = event.Get("message");
            var unixTimestamp = str.split(" ")[0];
            // Convert seconds to milliseconds and build a Date for the
            // timestamp processor below
            var time = new Date(unixTimestamp * 1000);
            event.Put("start_time", time);
        }

  - timestamp:
      # format time value to timestamp
      field: start_time
      layouts:
        - '2006-01-02T15:04:05Z'
        - '2006-01-02T15:04:05.999Z'
        - '2006-01-02T15:04:05.999-07:00'
      test:
        - '2019-06-22T16:33:51Z'
        - '2019-11-18T04:59:51.123Z'
        - '2020-08-03T07:10:20.123456+02:00'
        
        
  - drop_fields:
      fields: [start_time]
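The script processor's conversion can be sanity-checked outside filebeat with plain JavaScript; event.Get and event.Put are filebeat-specific, so a standalone function (a sketch, with a made-up log line) stands in here:

```javascript
// Mirrors the script processor above: take the seconds-resolution unix
// timestamp at the start of the message and turn it into a Date.
function toStartTime(message) {
    var unixTimestamp = message.split(" ")[0]; // e.g. "1647230400"
    return new Date(unixTimestamp * 1000);     // string is coerced; ms since epoch
}

// 1647230400 s -> 2022-03-14T04:00:00.000Z
console.log(toStartTime("1647230400 user login successful").toISOString());
```

The resulting ISO-8601 string is exactly the '2006-01-02T15:04:05.999Z' layout that the timestamp processor is configured to accept.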



Some common operations

Dynamic mapping

When data is added to es without an explicit mapping, a dynamic mapping is created from the data and each field's type is assigned automatically

# In the kibana console, view the properties of each field
GET <index name>/_mapping

Customize a dynamic template for the index myindex: whenever a string value is encountered, it is mapped as keyword instead of text, so it will not be analyzed (no word segmentation).

PUT /myindex
{
    "mappings": {
        "dynamic_templates": [
            {
                "strings_as_keywords": {
                    "match_mapping_type": "string",
                    "mapping": {
                        "type": "keyword"
                    }
                }
            }
        ]
    }
}
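With the template in place, a string field in an indexed document (a made-up example) gets the keyword type, which can be confirmed from the mapping:

```
POST /myindex/_doc
{
  "status": "login successful"
}

GET /myindex/_mapping
```

The returned mapping should show the status field with "type": "keyword" rather than the default text mapping with a keyword subfield.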



Queries

Word matching

Use match_phrase to match: the query text is analyzed, but it must appear in the document as a contiguous phrase. Here the phrase "login successful" is searched.

If match is used instead, the query is split into individual terms and a document matching any of them is returned (in the original Chinese logs, the phrase 登录成功 would be segmented into the single characters 登, 录, 成, 功).

If term is used, neither the query nor the stored value is analyzed. term is stricter, roughly the == of programming languages: a document is retrieved if and only if the (unanalyzed, e.g. keyword) field is exactly equal to "login successful".

GET filebeat-7.16.3-2022.03.14-000001/_search
{
  "query": {
    "match_phrase": {
      "message": "login successful"
    }
  }
}
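For comparison, a term query on the same phrase. Against an analyzed text field like message it would have to equal a single token exactly, so term lookups usually target a keyword field, such as the message.keyword subfield (assuming the default dynamic mapping created one):

```
GET filebeat-7.16.3-2022.03.14-000001/_search
{
  "query": {
    "term": {
      "message.keyword": "login successful"
    }
  }
}
```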

Regular expression matching

The regexp query works on analyzed terms. If the matched field is of type text, the regular expression is tested against each individual term, not the whole field value. For a keyword field, however, the regular expression must match the entire field value, because keyword fields are not analyzed.

GET myindex/_search
{
  "query": {
    "regexp": {
      "message": "regex"
    }
  }
}



painless

Painless is a scripting language with Java-like syntax; it can be used in kibana through the script field of a query or update

Regular expressions

Extract the x, y and z values from the log message with a regular expression and assign them to newly created x, y and z fields

POST myindex/_update_by_query
{
  "script": {
    "lang": "painless",
    "source": """
      Matcher matcher = /Position now: pictureId = 31, x : (.*), y : (.*), z : (.*), timestamp = (.*)/.matcher(ctx._source.message);
      if(matcher.find())
      {
        ctx._source.x = Double.parseDouble(matcher.group(1));
        ctx._source.y = Double.parseDouble(matcher.group(2));
        ctx._source.z = Double.parseDouble(matcher.group(3));
        
      }
    """
  }
}
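Painless regexes follow Java syntax, and this particular pattern is valid JavaScript as well, so it can be sanity-checked outside the cluster against a sample line (the log line below is made up to fit the pattern):

```javascript
// The same capture groups as in the painless script above.
var pattern = /Position now: pictureId = 31, x : (.*), y : (.*), z : (.*), timestamp = (.*)/;
var line = "Position now: pictureId = 31, x : 1.5, y : -2.25, z : 0.75, timestamp = 1647230400";

var m = line.match(pattern);
if (m) {
    var x = parseFloat(m[1]); // 1.5
    var y = parseFloat(m[2]); // -2.25
    var z = parseFloat(m[3]); // 0.75
    console.log(x, y, z);
}
```

If the regex fails to match here, it will also fail in the painless script, so this is a cheap way to debug the pattern before running _update_by_query.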

Tags: ElasticSearch Kubernetes Cloud Native Container

Posted by eddie_twoFingers on Thu, 06 Oct 2022 09:58:30 +0530