Building a Large-Scale Log System with an ELK Cluster: 3. Kafka Cluster Setup

Author: 小六的昵称已被使用 | Published 2019-08-13 10:13

Environment

[14:23:11 root@elk-01 ~ $]cat /etc/centos-release
CentOS Linux release 7.6.1810 (Core)

apache-zookeeper-3.5.5-bin.tar.gz
kafka_2.12-2.2.1.tgz

Step 1: Configure ZooKeeper

Official site: https://zookeeper.apache.org/

Download: https://www.apache.org/dyn/closer.cgi/zookeeper/

Admin guide: https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_systemReq

Three servers is the recommended minimum ensemble size; an odd number of servers works best, and each member should run on a separate machine.

1. Install the JDK (on all nodes)

For a server with 4 GB of RAM, set the maximum JVM heap to 3 GB, leaving headroom for the OS.
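
A minimal sketch of both points, assuming OpenJDK 8 from the CentOS base repository is acceptable. ZooKeeper's zkEnv.sh sources conf/java.env if it exists, so the heap cap can go there; the 3g value just follows the sizing rule above, and the file can only be created after ZooKeeper is unpacked in the next step.

## Install OpenJDK 8 (any JDK 8 should work for ZooKeeper 3.5 and Kafka 2.2)
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version

## Cap the ZooKeeper heap at 3 GB on a 4 GB machine (run after step 2 extracts ZooKeeper)
cat <<\EOF > /home/apache-zookeeper-3.5.5-bin/conf/java.env
export SERVER_JVMFLAGS="-Xmx3g"
EOF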

2. Download and install ZooKeeper (on all nodes)

## Download and extract
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.5.5/apache-zookeeper-3.5.5-bin.tar.gz
tar vxzf apache-zookeeper-3.5.5-bin.tar.gz
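
The commands in step 5 use absolute paths under /home, so either run the commands above from /home or move the extracted directory there (an assumption about your working directory):

## Only needed if the archive was extracted somewhere other than /home
mv apache-zookeeper-3.5.5-bin /home/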

3. Edit the configuration file (on all nodes; the content is identical on all 3)

cat <<\EOF > apache-zookeeper-3.5.5-bin/conf/zoo.cfg
## Heartbeat interval in milliseconds
tickTime=2000

## Time allowed for followers to connect and sync to the leader on startup, as a multiple of tickTime
## The connection fails once this limit is exceeded
## If more than half of the followers fail to sync within this window, the leader gives up leadership and a new election is held
## Default: 10
initLimit=10

## Maximum number of heartbeats (multiples of tickTime) a request/response between a follower and the leader may take
## If a follower cannot talk to the leader within this window, the follower is dropped and its clients reconnect to another server
## Default: 5
syncLimit=5

## Directory for snapshot files
## The stock example puts this under /tmp; do not use /tmp in practice, it is only an example
dataDir=/home/zookeeperData

## Client port
clientPort=2181

## Maximum number of client connections from a single IP (default 60)
#maxClientCnxns=60

## Number of snapshot files to retain (default 3)
#autopurge.snapRetainCount=3
## Autopurge interval in hours; 0 disables automatic purging (used together with the setting above)
#autopurge.purgeInterval=1

## Ensemble members
server.1=192.168.50.14:2888:3888
server.2=192.168.50.15:2888:3888
server.3=192.168.50.16:2888:3888
EOF
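
Because the file is identical on all three nodes, it can be written once and pushed out; a sketch assuming root SSH access between the nodes and the install path used above:

## Copy the config to the other two ensemble members
for ip in 192.168.50.15 192.168.50.16; do
    scp /home/apache-zookeeper-3.5.5-bin/conf/zoo.cfg root@$ip:/home/apache-zookeeper-3.5.5-bin/conf/
done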

4. Create the data directory and write the myid file

The myid file consists of a single line containing only this server's ID. The ID must be unique within the ensemble and lie between 1 and 255.

## Node 1:
mkdir /home/zookeeperData
echo 1 > /home/zookeeperData/myid
cat /home/zookeeperData/myid

## Node 2:
mkdir /home/zookeeperData
echo 2 > /home/zookeeperData/myid
cat /home/zookeeperData/myid

## Node 3:
mkdir /home/zookeeperData
echo 3 > /home/zookeeperData/myid
cat /home/zookeeperData/myid

5. Start the service (on all nodes)

## Start
/home/apache-zookeeper-3.5.5-bin/bin/zkServer.sh start

## Check service status
/home/apache-zookeeper-3.5.5-bin/bin/zkServer.sh status

## Check listening ports
netstat -antp | grep -E '2181|2888|3888'

## Check the process
ps -ef | grep zookeeper | grep -v grep

## Connect to the cluster with the CLI client
/home/apache-zookeeper-3.5.5-bin/bin/zkCli.sh -server 127.0.0.1:2181
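
With all three nodes started, zkServer.sh status should report Mode: leader on exactly one node and Mode: follower on the other two. The same information is available over the wire via the srvr four-letter command (assuming nc is installed; srvr is the only four-letter command whitelisted by default in 3.5.x):

## Query the local server; the output includes its Mode (leader or follower)
echo srvr | nc 127.0.0.1 2181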

Step 2: Deploy Kafka

Download: http://kafka.apache.org/downloads

1. Download and install

wget http://mirror.bit.edu.cn/apache/kafka/2.2.1/kafka_2.12-2.2.1.tgz
tar vxzf kafka_2.12-2.2.1.tgz
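
As with ZooKeeper, the remaining steps reference /home/kafka_2.12-2.2.1, so move the extracted directory if it is not already under /home:

## Only needed if the archive was extracted somewhere other than /home
mv kafka_2.12-2.2.1 /home/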

2. Edit the configuration file

The file below differs from the stock server.properties in 5 places (each marked MODIFIED).

broker.id and listeners must be adjusted on each node.

cp kafka_2.12-2.2.1/config/server.properties kafka_2.12-2.2.1/config/server.properties.bak
cat <<\EOF > kafka_2.12-2.2.1/config/server.properties
############################# Server Basics #############################
## Broker ID; must be a unique integer on each server
broker.id=0

############################# Socket Server Settings #############################
## MODIFIED
## Listener configuration
listeners=PLAINTEXT://192.168.50.14:9092

## Number of threads for receiving requests from and sending responses to the network
num.network.threads=3

## Number of threads for processing requests, possibly including disk I/O
num.io.threads=8

## Socket send buffer
socket.send.buffer.bytes=102400

## Socket receive buffer
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################
## Log (data) directory
log.dirs=/tmp/kafka-logs

## Default number of log partitions per topic
num.partitions=1

## MODIFIED
## Number of threads per data directory used for log recovery at startup and flushing at shutdown
## Increasing this value is recommended if the data directories are on a RAID array
## Default: 1
num.recovery.threads.per.data.dir=5


############################# Internal Topic Settings  #############################
## MODIFIED
## Replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
## For anything beyond development testing, a value greater than 1 is recommended for availability, e.g. 3
## Number of replicas for the offsets topic
## Default: 1
offsets.topic.replication.factor=3
## Number of replicas for the transaction state log
## Default: 1
transaction.state.log.replication.factor=3
## Overrides min.insync.replicas for the transaction topic
## Default: 1
transaction.state.log.min.isr=3


############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000


############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000


############################# Zookeeper #############################
## MODIFIED
## ZooKeeper connection string
zookeeper.connect=192.168.50.14:2181,192.168.50.15:2181,192.168.50.16:2181

## ZooKeeper connection timeout in milliseconds
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################
## MODIFIED
## Delay before the first consumer rebalance in a new group; 0 is convenient for development
## The official recommendation for production is 3 seconds
## Value in milliseconds
group.initial.rebalance.delay.ms=3000
EOF

## On each node, adjust broker.id and listeners to match that node
vim kafka_2.12-2.2.1/config/server.properties
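
Editing by hand works, but the two per-node values can also be patched with sed; a sketch for node 2 (192.168.50.15), assuming the file currently holds the node-1 values written above:

## Example for node 2: give it a unique broker.id and its own listener address
sed -i 's/^broker.id=0/broker.id=1/' kafka_2.12-2.2.1/config/server.properties
sed -i 's#^listeners=PLAINTEXT://192.168.50.14:9092#listeners=PLAINTEXT://192.168.50.15:9092#' kafka_2.12-2.2.1/config/server.properties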

3. Create a startup script

## Create the startup script
echo '/home/kafka_2.12-2.2.1/bin/kafka-server-start.sh /home/kafka_2.12-2.2.1/config/server.properties &' > /home/kafka_start.sh

## Start Kafka
bash /home/kafka_start.sh

## Check the listening port
netstat -antp | grep 9092
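
Once all three brokers are up, a quick end-to-end smoke test using the stock CLI tools (the topic name test is arbitrary; Kafka 2.2 accepts --bootstrap-server for kafka-topics.sh):

## Create a topic replicated across all three brokers
/home/kafka_2.12-2.2.1/bin/kafka-topics.sh --create --bootstrap-server 192.168.50.14:9092 --replication-factor 3 --partitions 3 --topic test

## Confirm the partition leaders and replicas
/home/kafka_2.12-2.2.1/bin/kafka-topics.sh --describe --bootstrap-server 192.168.50.14:9092 --topic test

## Produce a few lines (Ctrl+C to exit), then read them back
/home/kafka_2.12-2.2.1/bin/kafka-console-producer.sh --broker-list 192.168.50.14:9092 --topic test
/home/kafka_2.12-2.2.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.50.14:9092 --topic test --from-beginning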

Step 3: Install the Kafka management tool Kafka Manager

GitHub: https://github.com/yahoo/kafka-manager

Releases: https://github.com/yahoo/kafka-manager/releases

1. Install dependencies

cat <<\EOF >/etc/yum.repos.d/bintray-sbt-rpm.repo
#bintray--sbt-rpm - packages by  from Bintray
[bintray--sbt-rpm]
name=bintray--sbt-rpm
baseurl=https://sbt.bintray.com/rpm
gpgcheck=0
repo_gpgcheck=0
enabled=1
EOF

yum install -y sbt

2. Download and build

## Download and extract
wget https://github.com/yahoo/kafka-manager/archive/2.0.0.2.zip -O kafka-manager-2.0.0.2.zip
unzip kafka-manager-2.0.0.2.zip
cd kafka-manager-2.0.0.2

## Build
./sbt clean dist

## Inspect the build output
cd target/universal/
ll

## Extract the freshly built package
unzip kafka-manager-2.0.0.2.zip

## Enter the newly extracted directory and edit the config file
cd kafka-manager-2.0.0.2
vim conf/application.conf

## Comment out this line:
## kafka-manager.zkhosts="kafka-manager-zookeeper:2181"
## and add this line:
kafka-manager.zkhosts="192.168.50.14:2181,192.168.50.15:2181,192.168.50.16:2181"

## Start
bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=8080

## Check the process
jps

    [16:51:50 root@kibana ~ $]jps
    56136 Jps
    54795 ProdServerStart
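
The start command above stays attached to the terminal; a common workaround (not from the original article) is to background it with nohup so it survives logout:

## Keep Kafka Manager running after the session ends
nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=8080 > /home/kafka-manager.log 2>&1 &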

3. Log in to the management page and add the cluster

## 1. Open the management page
http://192.168.50.17:8080

## Click Add Cluster
    Cluster Name: anything you like
    Cluster Zookeeper Hosts: 192.168.50.14:2181,192.168.50.15:2181,192.168.50.16:2181
    Kafka Version: choose the closest match to your broker version

## Click Save
