[HDFS Chapter 10] DataNode related concepts

Promise me to do one thing at a time


DataNode working mechanism

  1. On a DataNode, a block is stored on disk as two files: one holds the data itself, and the other holds the metadata, i.e. the block length, the block checksum, and the timestamp.
  2. After a DataNode starts, it registers with the NameNode; once registration succeeds, it reports all of its block information to the NameNode periodically (every hour by default).
  3. A heartbeat is sent every 3 seconds, and the heartbeat response carries any commands the NameNode has for the DataNode, such as copying a block to another machine or deleting a block. If no heartbeat is received from a DataNode for an extended period (10 minutes + 30 seconds with the default parameters, see the timeout section below), the node is considered unavailable.
  4. Machines can safely join or leave the cluster while it is running [commissioning new nodes and decommissioning old nodes are covered later in this chapter]; a quick way to inspect the registered nodes is shown right below.
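
A quick way to see the result of registration and heartbeats is the dfsadmin report, which lists every live DataNode together with its capacity and the time of its last heartbeat. A minimal sketch (the hostname is just the one used throughout this chapter):

[zhutiansama@hadoop102 hadoop-2.7.2]$ hdfs dfsadmin -report
# For each live DataNode, prints its configured capacity, DFS used,
# remaining space and "Last contact" (the time of the last heartbeat)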

Data integrity

When a DataNode reads a block, it computes the checksum. If the computed checksum differs from the value recorded when the block was created, the block is corrupted, and the client instead reads a replica of the block from another DataNode. Each DataNode also periodically re-verifies the checksums of its blocks after they are written.
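
From the client side, block health can be checked with fsck. A minimal sketch (the path / is just an example); the report includes corrupt blocks and the DataNodes that hold each replica:

[zhutiansama@hadoop102 hadoop-2.7.2]$ hdfs fsck / -files -blocks -locations
# Lists each file with its blocks and replica locations, plus a summary
# of corrupt, missing and under-replicated blocks for the whole namespace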

DataNode timeout parameter settings

These parameters are configured in hdfs-site.xml. The NameNode declares a DataNode dead after TimeOut = 2 × dfs.namenode.heartbeat.recheck-interval + 10 × dfs.heartbeat.interval, which is 10 minutes + 30 seconds with the values below.

<property>
    <name>dfs.namenode.heartbeat.recheck-interval</name>
    <value>300000</value> <!-- unit: milliseconds -->
</property>
<property>
    <name>dfs.heartbeat.interval</name>
    <value>3</value> <!-- unit: seconds -->
</property>
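
To confirm what the configuration resolves to, the values can be read back with hdfs getconf. A minimal sketch using the two property names above (the printed numbers simply echo the values configured here):

[zhutiansama@hadoop102 hadoop-2.7.2]$ hdfs getconf -confKey dfs.namenode.heartbeat.recheck-interval
300000
[zhutiansama@hadoop102 hadoop-2.7.2]$ hdfs getconf -confKey dfs.heartbeat.interval
3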

Commissioning new data nodes

Technical background

As the company's business volume grows, the existing data nodes can no longer meet its storage requirements, so new nodes must be added dynamically.

If you are using cloud servers, create a new instance; if you are running your own virtual machines, clone one.

The following demonstrates adding a node by cloning a virtual machine.

1. Environment preparation

(1) Clone a new host, hadoop105, from the hadoop104 host
(2) Modify its IP address and hostname
(3) Delete the files persisted by the original HDFS file system (/opt/module/hadoop-2.7.2/data and logs)
(4) Source the configuration file: source /etc/profile (a command sketch for steps (2) to (4) follows)
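
A minimal command sketch for steps (2) to (4), assuming a CentOS 7 style system; hostnamectl and the interface file name ifcfg-ens33 are assumptions about the OS and network card, so adapt them to your environment:

[zhutiansama@hadoop105 ~]$ sudo hostnamectl set-hostname hadoop105             # step (2): hostname
[zhutiansama@hadoop105 ~]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-ens33  # step (2): static IP
[zhutiansama@hadoop105 hadoop-2.7.2]$ rm -rf data/ logs/                       # step (3): old HDFS data
[zhutiansama@hadoop105 ~]$ source /etc/profile                                 # step (4): environment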

2. Steps to commission the new node

(1) Start the DataNode and NodeManager directly and the node joins the cluster; it's that simple
[zhutiansama@hadoop105 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
[zhutiansama@hadoop105 hadoop-2.7.2]$ sbin/yarn-daemon.sh start nodemanager

(2) Upload a file on hadoop105 to verify that the new node works
[zhutiansama@hadoop105 hadoop-2.7.2]$ hadoop fs -put /opt/module/hadoop-2.7.2/LICENSE.txt / 
(3) If the data is unbalanced, you can rebalance the cluster with the following command
[zhutiansama@hadoop102 sbin]$ ./start-balancer.sh
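
The balancer also accepts a threshold, the maximum allowed gap (in percentage points) between any DataNode's disk usage and the cluster average. A sketch with an explicit threshold (10 is also the default):

[zhutiansama@hadoop102 sbin]$ ./start-balancer.sh -threshold 10
# Moves blocks between DataNodes until every node's utilisation is
# within 10 percentage points of the average utilisation of the cluster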

Decommission old data nodes [whitelist]

Key point: hosts added to the whitelist are allowed to connect to the NameNode; any host not in the whitelist is excluded from the cluster, which is how a node is retired with this method.

The steps to configure the whitelist are as follows:

(1) Create a dfs.hosts file in the /opt/module/hadoop-2.7.2/etc/hadoop directory on the NameNode host

[zhutiansama@hadoop102 hadoop]$ pwd
/opt/module/hadoop-2.7.2/etc/hadoop
[zhutiansama@hadoop102 hadoop]$ touch dfs.hosts
[zhutiansama@hadoop102 hadoop]$ vi dfs.hosts
 Add the following hostnames (do not add hadoop105):
hadoop102
hadoop103
hadoop104

(2) Add the dfs.hosts property to the hdfs-site.xml configuration file of the NameNode

<property>
    <name>dfs.hosts</name>
    <value>/opt/module/hadoop-2.7.2/etc/hadoop/dfs.hosts</value>
</property>

(3) Distribute the configuration file

[zhutiansama@hadoop102 hadoop]$ xsync hdfs-site.xml

(4) Refresh NameNode

[zhutiansama@hadoop102 hadoop-2.7.2]$ hdfs dfsadmin -refreshNodes

(5) Update the ResourceManager node

[zhutiansama@hadoop102 hadoop-2.7.2]$ yarn rmadmin -refreshNodes

(6) Check the result in the NameNode web UI (by default http://hadoop102:50070)

If the data is unbalanced, you can use the start-balancer.sh command again

Decommission old data nodes [blacklist]

The procedure is the same as above, except that instead of the whitelist you configure a blacklist file, dfs.hosts.exclude, through the dfs.hosts.exclude property.

Every host listed in the blacklist is decommissioned: its blocks are first replicated to other nodes, and once decommissioning completes the node is removed from the cluster.
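
For reference, a sketch of the blacklist configuration mirroring the whitelist steps above; the file path simply follows the same convention as dfs.hosts, and after editing it the configuration is distributed and the NameNode and ResourceManager are refreshed exactly as before:

<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/module/hadoop-2.7.2/etc/hadoop/dfs.hosts.exclude</value>
</property>

[zhutiansama@hadoop102 hadoop]$ touch dfs.hosts.exclude
[zhutiansama@hadoop102 hadoop]$ vi dfs.hosts.exclude
 Add the hostname to decommission, for example hadoop105
[zhutiansama@hadoop102 hadoop-2.7.2]$ hdfs dfsadmin -refreshNodes
[zhutiansama@hadoop102 hadoop-2.7.2]$ yarn rmadmin -refreshNodes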

DataNode multi-directory configuration

The multiple directories are not replicas of each other: each directory stores a different subset of the node's blocks, which is useful when you do not want to put all the data on one disk or in one directory.

The specific configuration in hdfs-site.xml is as follows:

<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///${hadoop.tmp.dir}/dfs/data1,file:///${hadoop.tmp.dir}/dfs/data2</value>
</property>
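
The new directories take effect once the DataNode process is restarted; a minimal sketch using the same daemon scripts as above:

[zhutiansama@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh stop datanode
[zhutiansama@hadoop102 hadoop-2.7.2]$ sbin/hadoop-daemon.sh start datanode
# After the restart, each configured directory stores its own subset of blocks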
