An easy way to set up a ZooKeeper server

What is ZooKeeper

ZooKeeper is a top-level Apache project that provides efficient, highly available coordination services for distributed applications, offering infrastructure such as data publishing/subscription, load balancing, naming services, distributed coordination/notification, and distributed locks. Thanks to its convenient API, excellent performance, and good stability, ZooKeeper is widely used in large distributed systems such as Hadoop, HBase, Kafka, and Dubbo.

ZooKeeper has three modes of operation: standalone mode, pseudo-cluster mode, and cluster mode.

  • Standalone mode: This mode is generally used for development and testing. On the one hand, such environments rarely have many machines to spare; on the other hand, day-to-day development and debugging do not need strong availability guarantees.

  • Cluster mode: A ZooKeeper cluster usually consists of a group of machines; in general, three or more machines can form a usable ZooKeeper cluster. Every machine in the cluster keeps the current server state in memory and stays in communication with the others.

  • Pseudo-cluster mode: A special cluster mode in which all servers of the cluster are deployed on a single machine. When you have a fairly powerful machine at hand, running it in standalone mode would waste resources, so ZooKeeper lets you start several ZooKeeper service instances on one machine on different ports, presenting the characteristics of a cluster to the outside world.

ZooKeeper basics

Roles in Zookeeper:

  • Leader: responsible for initiating and deciding votes and for updating the system state.

  • Follower: receives client requests, returns results to the client, and takes part in voting during elections.

  • Observer: accepts client connections and forwards write requests to the leader, but does not take part in voting; observers exist only to scale the cluster and increase read throughput.

 

Zookeeper's data model

  • The namespace is a hierarchical directory structure that follows conventional file-system naming rules, similar to Linux paths.

  • Each node in ZooKeeper is called a znode and is identified by a unique path.

  • A znode can hold both data and child nodes, but znodes of the EPHEMERAL type cannot have children.

  • The data in a znode can have multiple versions; a query or conditional update on a path therefore needs to carry the expected version (see the sketch after this list).

  • Client applications can set watches on nodes to be notified of changes.

  • Znodes do not support partial reads or writes; the data is always read and written in full.
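
To make the versioning above concrete, here is a minimal sketch using the standard org.apache.zookeeper Java client. The connection string, the /demo path, and its payload are illustrative assumptions, not values taken from this article:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

public class VersionedDataDemo {
    public static void main(String[] args) throws Exception {
        // Wait for the session to be established before issuing requests.
        CountDownLatch connected = new CountDownLatch(1);
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> {
            if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                connected.countDown();
            }
        });
        connected.await();

        // Create a znode holding some data (the /demo path is illustrative).
        zk.create("/demo", "v1".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // Read it back; the Stat object carries the current data version.
        Stat stat = new Stat();
        byte[] data = zk.getData("/demo", false, stat);
        System.out.println(new String(data) + " @ version " + stat.getVersion());

        // Conditional write: succeeds only while the version still matches,
        // otherwise a KeeperException.BadVersionException is thrown.
        zk.setData("/demo", "v2".getBytes(), stat.getVersion());

        zk.close();
    }
}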

 

Node characteristics of ZooKeeper

ZooKeeper nodes have a lifecycle that depends on the node type. By how long they live, nodes can be divided into persistent nodes (PERSISTENT), ephemeral nodes (EPHEMERAL), sequential nodes (SEQUENTIAL), and non-sequential nodes (the default).

Once a persistent node is created, it stays in ZooKeeper until it is explicitly deleted; it does not disappear when the session of the client that created it expires. An ephemeral node, by contrast, is removed automatically as soon as the creating client's session ends.
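
A minimal sketch of the two lifetimes, assuming an already-connected org.apache.zookeeper.ZooKeeper handle; the /app and /app/worker-1 paths are illustrative:

import org.apache.zookeeper.*;

public class NodeLifetimeDemo {
    // Assumes zk is an already-connected ZooKeeper handle; paths are illustrative.
    static void createNodes(ZooKeeper zk) throws KeeperException, InterruptedException {
        // A persistent node survives the client session and stays until it is deleted.
        zk.create("/app", "cfg".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // An ephemeral node is tied to this session and is removed automatically
        // when the session expires or the client closes the connection.
        zk.create("/app/worker-1", "alive".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Ephemeral nodes cannot have children; the following would fail with
        // KeeperException.NoChildrenForEphemeralsException:
        // zk.create("/app/worker-1/child", new byte[0],
        //         ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }
}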

Application scenarios for Zookeeper

ZooKeeper is a highly available framework for distributed data management and system coordination. Built on an atomic broadcast protocol in the Paxos family (ZAB), it guarantees strong consistency of data in a distributed environment, which is also what lets ZooKeeper solve so many distributed problems.

It is worth noting that ZooKeeper was not designed for these scenarios from the start; rather, many developers have worked out these typical usage patterns from the characteristics of the framework, using the APIs (or primitives) it provides.

Data Publishing and Subscription (Configuration Center)

The publish/subscribe model, the so-called configuration center, works as its name implies: publishers write data to ZooKeeper nodes and subscribers retrieve it dynamically, giving you centralized management and dynamic updating of configuration information. Global configuration, the service address lists of service frameworks, and similar data are ideal candidates.

Configuration used across an application can be managed centrally on ZooKeeper. In the usual pattern, the application fetches the configuration once at startup and registers a Watcher on the node; every subsequent configuration change then triggers a real-time notification to the subscribed clients, so they always hold the latest configuration.
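
The following sketch shows that pattern with the plain ZooKeeper Java client; the /app/config path is a hypothetical configuration node that is assumed to exist already:

import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

public class ConfigWatcher implements Watcher {
    // The configuration node is a hypothetical path assumed to exist already.
    private static final String CONFIG_PATH = "/app/config";
    private final ZooKeeper zk;

    ConfigWatcher(ZooKeeper zk) { this.zk = zk; }

    // Fetch the configuration and (re-)register the watch. ZooKeeper watches are
    // one-shot, so the watch must be set again after every notification.
    byte[] readConfig() throws KeeperException, InterruptedException {
        return zk.getData(CONFIG_PATH, this, new Stat());
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                byte[] latest = readConfig();        // pull the updated configuration
                System.out.println("config updated: " + new String(latest));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}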

In a distributed search service, for example, the meta-information of the index and the state of the machines in the server cluster are stored on designated ZooKeeper nodes for the individual clients to subscribe to.

Distributed Log Collection System

The core job of such a system is to collect logs scattered across different machines. The collector usually assigns collection task units per application, so you create a node P on ZooKeeper whose path is the application name and register every machine IP of that application as a child node of P. Whenever machines are added or removed, the collector is notified in real time and can adjust the task assignment, as sketched below.
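
A rough sketch of both sides of that arrangement with the plain Java client; the /logcollect/order-service path (assumed to exist already) and the helper names are hypothetical:

import java.util.List;
import org.apache.zookeeper.*;

public class LogCollectRegistry {
    // Node P for one application; the path is assumed to exist already.
    private static final String APP_PATH = "/logcollect/order-service";

    // Each machine of the application registers itself as an ephemeral child of P,
    // so its entry disappears automatically when the machine goes offline.
    static void registerMachine(ZooKeeper zk, String ip)
            throws KeeperException, InterruptedException {
        zk.create(APP_PATH + "/" + ip, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }

    // The collector reads the machine list and re-registers a child watch; it is
    // called again whenever the children of P change (machines added or removed).
    static void refreshAssignments(ZooKeeper zk)
            throws KeeperException, InterruptedException {
        List<String> machines = zk.getChildren(APP_PATH, event -> {
            if (event.getType() == Watcher.Event.EventType.NodeChildrenChanged) {
                try {
                    refreshAssignments(zk);          // watches are one-shot: re-watch
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        System.out.println("collect logs from: " + machines);
        // ...re-balance collection tasks across these machines here...
    }
}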

Some information in a system needs to be obtained dynamically, and there is also the question of how to modify it manually. The usual approach is to expose an interface, such as a JMX interface, for reading runtime information. With ZooKeeper you do not have to implement such a mechanism yourself; simply store the information on a designated ZooKeeper node.

Note: The scenarios above share an implicit assumption - the volume of data is small, although the data may change quite frequently.

Load Balancing

Load balancing here means soft load balancing. In a distributed environment, to ensure high availability, the same application or the same service provider is usually deployed several times as peer instances, and a consumer has to pick one of these peer servers to execute the business logic. A typical example is producer and consumer load balancing in message middleware.

Naming Service

Naming services are another common scenario in distributed systems. With a naming service, a client application can look up the address, the provider, and other information of a resource or service by a given name. The named entities are usually machines in a cluster, addresses of provided services, remote objects, and so on; collectively we can simply call them names. A common example is the service address list in a distributed service framework: by calling the node-creation API provided by ZooKeeper, it is easy to create a globally unique path that can serve as a name, as sketched below.
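
A minimal sketch of that idea: a sequential znode gives every caller a globally unique path. The /services/order parent path is an illustrative assumption and must already exist:

import org.apache.zookeeper.*;

public class NamingServiceDemo {
    // Creates a globally unique name under a service path. The /services/order
    // parent is an illustrative assumption and must already exist.
    static String registerName(ZooKeeper zk, byte[] serviceAddress)
            throws KeeperException, InterruptedException {
        // With PERSISTENT_SEQUENTIAL, ZooKeeper appends a monotonically increasing,
        // zero-padded counter, e.g. /services/order/name-0000000003, so the returned
        // path is unique across all clients.
        return zk.create("/services/order/name-", serviceAddress,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
    }
}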

The Alibaba Group's open source distributed service framework Dubbo uses ZooKeeper as its naming service to maintain a global list of service addresses. In the implementation of Dubbo:

  • When a service provider starts, it writes its URL address under the node /dubbo/${serviceName}/providers on ZooKeeper; this completes publication of the service.

  • When a service consumer starts, it subscribes to the provider URL addresses under /dubbo/${serviceName}/providers and writes its own URL into the /dubbo/${serviceName}/consumers directory.

Note: all addresses registered with ZooKeeper are ephemeral nodes, which ensures that service providers and consumers automatically become aware of changes in the available resources.

In addition, Dubbo offers monitoring at service granularity, implemented by subscribing to the information of all providers and consumers under the /dubbo/${serviceName} directory.

Distributed Notification/Coordination

ZooKeeper's Watcher registration and asynchronous notification mechanism is well suited to notification and coordination between different systems in a distributed environment, and it enables real-time handling of data changes. Typically, different systems all register for the same znode on ZooKeeper and watch it for changes (both to the znode's own data and to its children); when one system updates the znode, the other systems receive notifications and react accordingly.

Another example is heartbeat detection: the monitoring system and the monitored system are not linked to each other directly but are instead both associated with a node on ZooKeeper, which greatly reduces the coupling between them.

Another pattern is system scheduling: a system consists of a console and a push system, and the console's job is to direct the push system to carry out the corresponding push work. Some of the operations an administrator performs in the console actually just modify the state of certain nodes on ZooKeeper; ZooKeeper notifies the push system, which has registered a Watcher on those nodes, and the push system then performs the corresponding push task.

Another pattern is work reporting, similar to a task distribution system: after a subtask starts, it registers an ephemeral node on ZooKeeper and periodically reports its progress by writing it back to that node, so the task manager can follow the task's progress in real time. A sketch of this pattern follows.
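
A small sketch of the reporting side, assuming a /tasks parent node already exists; the path, interval, and progress format are illustrative:

import org.apache.zookeeper.*;

public class ProgressReporter {
    // A subtask registers an ephemeral node and periodically writes its progress back.
    // The /tasks parent is assumed to exist; interval and format are illustrative.
    static void report(ZooKeeper zk, String taskId) throws Exception {
        String node = "/tasks/" + taskId;
        zk.create(node, "0%".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        for (int pct = 10; pct <= 100; pct += 10) {
            Thread.sleep(1000);                      // stands in for a slice of real work
            // Version -1 means "overwrite regardless of the current data version".
            zk.setData(node, (pct + "%").getBytes(), -1);
        }
        // When the subtask process exits, its session ends and the ephemeral node
        // disappears, so the task manager also notices that the worker is gone.
    }
}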

Distributed Lock

Distributed locks rely mainly on ZooKeeper's strong data consistency. Lock services fall into two categories: those that keep exclusive ownership and those that control ordering.

Keeping exclusive ownership means that of all the clients trying to acquire the lock, only one can succeed. The usual approach is to treat a znode on ZooKeeper as the lock and acquire it by creating the znode: every client tries to create the /distribute_lock node, and the client whose create succeeds owns the lock (see the sketch below).
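
A minimal sketch of the exclusive lock, using an ephemeral node so the lock is released automatically if the holder crashes; error handling and timeouts are omitted:

import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.*;

public class ExclusiveLock {
    private static final String LOCK_PATH = "/distribute_lock";

    // Blocks until this client manages to create /distribute_lock, i.e. owns the lock.
    static void lock(ZooKeeper zk) throws KeeperException, InterruptedException {
        while (true) {
            try {
                // Ephemeral, so the lock is released automatically if the holder crashes.
                zk.create(LOCK_PATH, new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return;                              // create succeeded: we hold the lock
            } catch (KeeperException.NodeExistsException e) {
                // Someone else holds the lock: wait for the node to be deleted, then retry.
                CountDownLatch released = new CountDownLatch(1);
                Watcher watcher = event -> {
                    if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                        released.countDown();
                    }
                };
                if (zk.exists(LOCK_PATH, watcher) != null) {
                    released.await();
                }
            }
        }
    }

    static void unlock(ZooKeeper zk) throws KeeperException, InterruptedException {
        zk.delete(LOCK_PATH, -1);                    // releasing = deleting the znode
    }
}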

Controlling ordering means that all the clients trying to acquire the lock will eventually get to run, but in a globally defined order. The setup is similar to the above, except that /distribute_lock already exists in advance and each client creates a temporary sequential node beneath it (specified through the node type CreateMode.EPHEMERAL_SEQUENTIAL). The parent node (/distribute_lock) maintains a sequence counter that guarantees the order in which children are created, which yields a global ordering of the clients.

  1. Since children of the same parent cannot have the same name, successfully creating a znode under the lock node means the lock has been acquired. The other clients register a listener on this znode, and whenever it is deleted they are notified and try to lock again.

  2. Create a temporary sequential node: each request creates a node under the lock node. Because the nodes are sequential, the client with the smallest sequence number holds the lock; when the lock is released, the client with the next sequence number is notified and acquires it. A sketch of this approach follows the list.
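
A sketch of the sequential variant; /distribute_lock is assumed to exist in advance, as the text above requires, and each waiter watches only its immediate predecessor rather than the whole directory:

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.*;

public class SequentialLock {
    private static final String LOCK_ROOT = "/distribute_lock";   // must exist in advance

    // The client whose node has the smallest sequence number holds the lock; every
    // other client waits for the node immediately in front of its own to disappear.
    static String lock(ZooKeeper zk) throws KeeperException, InterruptedException {
        String mine = zk.create(LOCK_ROOT + "/lock-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String myName = mine.substring(LOCK_ROOT.length() + 1);

        while (true) {
            List<String> children = zk.getChildren(LOCK_ROOT, false);
            Collections.sort(children);              // sequence numbers sort lexicographically
            int index = children.indexOf(myName);
            if (index == 0) {
                return mine;                         // smallest sequence number: lock acquired
            }
            // Watch only the immediate predecessor to avoid waking every waiter at once.
            String previous = LOCK_ROOT + "/" + children.get(index - 1);
            CountDownLatch gone = new CountDownLatch(1);
            if (zk.exists(previous, event -> gone.countDown()) != null) {
                gone.await();
            }
        }
    }

    static void unlock(ZooKeeper zk, String mine)
            throws KeeperException, InterruptedException {
        zk.delete(mine, -1);                         // wakes up the next client in line
    }
}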

Distributed Queue

As for queues, there are two kinds: a regular first-in-first-out queue, and a queue that waits until all of its members have gathered before executing in order. The first kind works on the same basic principle as the ordering scenario described above for the distributed lock service, so it is not repeated here.

The second kind builds on the FIFO queue. Usually a /queue/num node is created in advance under the /queue znode with a value of n (or n is written directly to /queue) to indicate the queue size. Each time a member joins the queue, the clients check whether the queue size has been reached and whether execution can begin.

A typical scenario is a distributed environment in which a large task, Task A, can only proceed once many subtasks are finished (or ready). Whenever one of the subtasks completes (becomes ready), it creates its own temporary sequential node (CreateMode.EPHEMERAL_SEQUENTIAL) under /taskList; when the number of child nodes under /taskList reaches the specified count, the next step can proceed, as sketched below.
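
A minimal sketch of the gathering pattern; the /taskList path is assumed to exist and the fixed threshold stands in for the count n that the article stores in a /queue/num-style node:

import java.util.List;
import org.apache.zookeeper.*;

public class GatheringBarrier {
    // /taskList is assumed to exist; REQUIRED stands in for the count n that the
    // article stores in a /queue/num-style node.
    private static final String TASK_LIST = "/taskList";
    private static final int REQUIRED = 5;

    // A subtask announces that it is ready by adding a temporary sequential child.
    static void reportReady(ZooKeeper zk, String subTaskId)
            throws KeeperException, InterruptedException {
        zk.create(TASK_LIST + "/" + subTaskId + "-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
    }

    // The coordinator counts the children each time they change; the watch is
    // one-shot, so the method re-registers itself on every notification.
    static void checkReady(ZooKeeper zk) throws KeeperException, InterruptedException {
        List<String> ready = zk.getChildren(TASK_LIST, event -> {
            try {
                checkReady(zk);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        if (ready.size() >= REQUIRED) {
            System.out.println("all subtasks ready, Task A can proceed");
        }
    }
}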

Clustering with docker-compose

We have described quite a few ZooKeeper scenarios above; now let's learn how to build a ZooKeeper cluster, and then put those scenarios into practice.

The directory structure of the files is as follows:

├── docker-compose.yml

Write docker-compose.yml file

The docker-compose.yml file is as follows:

version: '3.4'

services:
  zoo1:
    image: zookeeper
    restart: always
    hostname: zoo1
    ports:
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo2:
    image: zookeeper
    restart: always
    hostname: zoo2
    ports:
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo3:
    image: zookeeper
    restart: always
    hostname: zoo3
    ports:
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181

In this configuration file, Docker runs three ZooKeeper images and, through the ports field, binds the local ports 2181, 2182, and 2183 to port 2181 of the corresponding containers.

ZOO_MY_ID and ZOO_SERVERS are the two environment variables needed to set up a ZooKeeper cluster. ZOO_MY_ID identifies the server's id, an integer between 1 and 255 that must be unique within the cluster. ZOO_SERVERS is the list of servers in the cluster.

When you execute docker-compose up in the directory where docker-compose.yml resides, you can see the startup log.

Connect ZooKeeper

Once the cluster is started, we can connect to ZooKeeper and operate on its nodes.

  1. First you need to download ZooKeeper.

  2. Unzip it.

  3. Enter its conf/ directory and rename zoo_sample.cfg to zoo.cfg.

Configuration file description

# The number of milliseconds of each tick
# tickTime: client-server heartbeat interval
# The interval at which ZooKeeper servers, or a client and a server, exchange heartbeats; one heartbeat is sent every tickTime. tickTime is in milliseconds.
tickTime=2000

# The number of ticks that the initial
# synchronization phase can take
# initLimit:LF initial communication time limit
# The maximum number of heartbeats (tickTime intervals) that a follower server (F) in the cluster may take to complete its initial connection to the leader server (L).
initLimit=5

# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# syncLimit:LF Synchronization Communication Time Limit
# The maximum number of heartbeats (tickTime intervals) allowed between a request and its acknowledgement when follower servers and the leader server communicate.
syncLimit=2

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# dataDir: Data file directory
# The directory where ZooKeeper stores its snapshot data; by default, the transaction logs that record data writes are kept here as well.
dataDir=/data/soft/zookeeper-3.4.12/data


# dataLogDir: Log file directory
# The directory where Zookeeper saves log files.
dataLogDir=/data/soft/zookeeper-3.4.12/logs

# the port at which the clients will connect
# clientPort: Client Connection Port
# The port on which the client connects to the Zookeeper server, which Zookeeper listens for and accepts client access requests.
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1


# Server name and address: Cluster information (server number, server address, LF communication port, election port)
# This configuration item is written in a special format with the following rules:

# server.N=YYY:A:B

# N is the server number, YYY is the IP address of the server, A is the LF communication port used to exchange information between the server and the leader of the cluster, and B is the election port used by the servers to communicate with one another when a new leader is elected (when the leader goes down, the remaining servers talk to each other to select a new leader). Generally, port A is the same on every server in the cluster, and so is port B. In a pseudo cluster, however, the IP addresses are all the same, so each server must use a different port A and port B.

You can leave zoo.cfg unmodified and use the default configuration. Next, execute the command ./zkCli.sh -server 127.0.0.1:2181 in the bin/ directory of the unzipped package to connect.

Welcome to ZooKeeper!
2020-06-01 15:03:52,512 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1025] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
JLine support is enabled
2020-06-01 15:03:52,576 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@879] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2020-06-01 15:03:52,599 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100001140080000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: 127.0.0.1:2181(CONNECTED) 0]

Next, you can use commands to view nodes:

  • Use the ls command to see what ZooKeeper currently contains. Command: ls /

[zk: 127.0.0.1:2181(CONNECTED) 10] ls /

  • Create a new znode, zk, with an associated string. Command: create /zk myData

[zk: 127.0.0.1:2181(CONNECTED) 11] create /zk myData

Created /zk
[zk: 127.0.0.1:2181(CONNECTED) 12] ls /
[zk, zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 13]

  • Get the znode zk. Command: get /zk

[zk: 127.0.0.1:2181(CONNECTED) 13] get /zk
myData
cZxid = 0x400000008
ctime = Mon Jun 01 15:07:50 CST 2020
mZxid = 0x400000008
mtime = Mon Jun 01 15:07:50 CST 2020
pZxid = 0x400000008
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 6
numChildren = 0

  • Delete the znode zk. Command: delete /zk

[zk: 127.0.0.1:2181(CONNECTED) 14] delete /zk
[zk: 127.0.0.1:2181(CONNECTED) 15] ls /
[zookeeper]

Since space is limited, the following articles will implement the ZooKeeper scenarios described above in code, one by one.

You can pull the project directly from GitHub and start it in two steps:

  1. Pull the item from GitHub.

  2. Execute the docker-compose up command in the ZooKeeper folder.
