
ELK Installation and Configuration

Author: 怪咖_OOP | Published 2018-07-17 17:19

Elasticsearch

Download: https://www.elastic.co/cn/downloads/elasticsearch

Configuration (config/elasticsearch.yml):

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# Cluster name; defaults to "elasticsearch". Nodes with the same cluster name
# discover each other (via the discovery settings below) and join the same
# cluster, so the name is what distinguishes multiple clusters sharing one
# network segment.
cluster.name: my-application

#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# Node name.
node.name: slave-1
# Whether this node is *eligible* to be elected master (eligibility only; it is
# not guaranteed to become master). Defaults to true. The first node started in
# the cluster becomes master; if it goes down, a new master is elected.
node.master: true
# Whether this node stores index data. Defaults to true.
node.data: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# Directory where Elasticsearch stores its data.
path.data: /path/to/data
#
# Path to log files:
#
# Directory where Elasticsearch writes its logs.
path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
# 
# This machine's LAN IP.
network.host: 192.168.1.26
#
# Set a custom port for HTTP:
#
http.port: 9200 
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# List of hosts a newly started node contacts for unicast discovery (mainly
# needed when nodes sit on different network segments).
discovery.zen.ping.unicast.hosts: ["192.168.1.200"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
# Minimum number of master-eligible nodes that must be visible before a master
# can be elected, used to prevent split brain. Set it to the majority:
# (master-eligible nodes / 2) + 1, e.g. 2 for a three-node cluster. The value 1
# below is only appropriate while there is a single master-eligible node.
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# Allow cross-origin (CORS) HTTP requests.
http.cors.enabled: true
http.cors.allow-origin: "*"
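The split-brain comment above gives the quorum formula (master-eligible nodes / 2 + 1). A quick shell sketch of the arithmetic; the node count of 3 is only an example value, not something from this cluster:

```shell
# Quorum needed to safely elect a master: (master-eligible nodes / 2) + 1.
# With 3 master-eligible nodes, minimum_master_nodes should be 2.
master_eligible=3   # example; substitute your real master-eligible node count
quorum=$(( master_eligible / 2 + 1 ))
echo "minimum_master_nodes should be: $quorum"
```

With 3 nodes this prints 2; with 5 nodes it would print 3, so a minority partition can never elect its own master.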

Troubleshooting:

[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
For [1], edit /etc/security/limits.conf and add:
* soft nofile 65536
* hard nofile 65536
Also edit /etc/security/limits.d/*-nproc.conf and add:
*          soft    nproc     65536
For [2], edit /etc/sysctl.conf, add vm.max_map_count=262144, then apply it with sysctl -p
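Before restarting Elasticsearch it is worth confirming the new limits actually took effect. A minimal check, assuming a Linux host; the thresholds are the ones from the two error messages above:

```shell
# Compare current kernel limits against the minimums Elasticsearch demands.
required_map_count=262144
required_nofile=65536

current_map_count=$(cat /proc/sys/vm/max_map_count)
current_nofile=$(ulimit -Hn)   # hard open-file limit for this shell session

echo "vm.max_map_count:     $current_map_count (need >= $required_map_count)"
echo "max file descriptors: $current_nofile (need >= $required_nofile)"
```

Note that the limits.conf changes only apply to new login sessions, so log out and back in (or restart the service) before re-checking.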

Logstash

Download: https://www.elastic.co/downloads/logstash

Configuration:

logstash.conf
input {
    
    # tcp {
    #     port => 5599
    #     type => ofbank-mining
    # }
    
    # Filebeat input
    beats {
        port => 5599
    }
}

filter {

    grok {
        match => [
        "message","%{TIMESTAMP_ISO8601:time} (?<code_info>[a-zA-Z0-9.]+):%{BASE10NUM:code_line} (?:[a-zA-Z0-9.()*]+) >>> %{WORD:log_lavel} %{BASE10NUM} %{GREEDYDATA:content}"
        ]
        overwrite => ["message"]
    }

    date {
        match => ["time","yyyy-MM-dd HH:mm:ss"]
    }

    mutate {
        remove_field=> ["@version","TYPE","ACTION"]
    }
}

output {

    if [fields][log_topics] == "ofbank_mining_web" {
        elasticsearch {
            hosts => ["192.168.1.160:9200","192.168.1.81:9200"]
            index => "ofbank-mining-log-%{+YYYY-MM-dd}"
            manage_template => false
            template_name => "ofbank"
            template => "/home/elasticsearch/logstash/template/ofbank-template.json"
            template_overwrite => true
        }
    }
    
    
}
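For reference, the grok pattern in the filter above expects log lines shaped like the hypothetical example below (the class name, line number, and message are made up for illustration). A rough shell sketch pulling out the same pieces grok captures as "time" and "log_lavel", using the same character classes:

```shell
# A made-up log line in the layout the grok pattern matches:
#   <ISO8601 time> <class>:<line> <method> >>> <level> <number> <message>
line='2018-07-17 17:19:00 com.example.Miner:42 run() >>> INFO 1 worker started'

# Timestamp: same shape as TIMESTAMP_ISO8601 for "yyyy-MM-dd HH:mm:ss" logs.
time=$(echo "$line" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}')

# Log level: the WORD token that follows the ">>>" separator.
level=$(echo "$line" | sed -E 's/.* >>> ([A-Za-z]+) .*/\1/')

echo "time=$time level=$level"
```

If your application logs deviate from this layout, the grok match fails and the event is tagged _grokparsefailure, so it pays to verify the format first.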

ofbank-template.json (the template file referenced by the elasticsearch output above)
{
  "crawl": {
    "template": "app-*",
    "settings": {
      "index.number_of_shards": 3,
      "number_of_replicas": 1
    },
    "mappings": {
      "logs": {
        "properties": {
          "time": {
            "type": "string",
            "index": "not_analyzed"
          },
          "code_info": {
            "type": "string",
            "index": "not_analyzed"
          },
          "code_line": {
            "type": "string",
            "index": "not_analyzed"
          },
          "log_lavel": {
            "type": "string",
            "index": "not_analyzed"
          },
          "content": {
            "type": "string"
          }
        }
      }
    }
  }
}

Filebeat Configuration

filebeat.prospectors:

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # Path of the log file to read
    - /mnt/logs/mining-10010.log

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    log_topics: ofbank_mining_web



#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3

#================================ Outputs =====================================


output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.160:5599"]

Kibana Configuration (config/kibana.yml)

server.port: 5601
# Local IP
server.host: "192.168.1.81"
elasticsearch.url: "http://192.168.1.81:9200"

This is an original article; please credit the author @怪咖 when reposting.

QQ:208275451

Original link: https://www.haomeiwen.com/subject/hzvwpftx.html