
Kibana & Elasticsearch Failure

Opening Kibana in a browser at http://192.168.30.101:5601 returned the following error:

{"statusCode":500,"error":"Internal Server Error","message":"[parent] Data too large, data for [<http_request>] would be [1048857026/1000.2mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1048856512/1000.2mb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=514/514b, accounting=6729208/6.4mb]: [circuit_breaking_exception] [parent] Data too large, data for [<http_request>] would be [1048857026/1000.2mb], which is larger than the limit of [1020054732/972.7mb], real usage: [1048856512/1000.2mb], new bytes reserved: [514/514b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=514/514b, accounting=6729208/6.4mb], with { bytes_wanted=1048857026 & bytes_limit=1020054732 & durability=\"PERMANENT\" }"}

Following advice found online, I modified the ES settings:

$ curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '{ "persistent" : { "indices.breaker.fielddata.limit" : "40%" } }'

Naturally, this had no effect: the error shows the [parent] breaker tripping, not the fielddata breaker, whose usage is 0b. ^_^;

Checking the ES logs showed frequent GC activity, which suggested the node was running out of memory:

$ docker logs elasticsearch_es_1
...
{"type": "server", "timestamp": "2022-10-05T13:36:44,578Z", "level": "WARN", "component": "o.e.m.j.JvmGcMonitorService", "cluster.name": "docker-cluster", "node.name": "e4278024af86", "message": "[gc][6782493] overhead, spent [3.4s] collecting in the last [4.3s]", "cluster.uuid": "oEk4KD4VQFmYlI5O0_2yZA", "node.id": "7rELm_mLS4WLUnyIioVlEQ"  }
...
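Heap pressure can also be confirmed directly from the node stats API. A quick check, assuming the default port 9200 from the compose file below:

```shell
# heap_used_percent pinned near 100 confirms the heap is exhausted.
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty' | grep heap_used_percent

# The breaker counters show how often each circuit breaker has tripped.
curl -s 'http://localhost:9200/_nodes/stats/breaker?pretty'
```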

I then checked the startup configuration file and found the heap was only 1 GB, so I increased the memory:

$ more docker-compose.yaml 
version: '3'
services:
  es:
    image: elasticsearch:7.7.0
    environment:
      ES_JAVA_OPTS: -Xms1g -Xmx1g
      discovery.type: single-node
    volumes:
      - ./es-data:/usr/share/elasticsearch/data
      - ./es-plugins:/usr/share/elasticsearch/plugins
    ports:
      - "9200:9200"
      - "9300:9300"

  logstash:
    image: logstash:7.7.0
    volumes:
      - ./logstash-pipeline:/usr/share/logstash/pipeline
      - ./logstash-data:/usr/share/logstash/data
    ports:
      - "5000:5000"

Recreate the container:

$ docker-compose up -d
elasticsearch_logstash_1 is up-to-date
Recreating elasticsearch_es_1 ... done

After waiting a little while, everything was back to normal.
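Recovery can be verified without reloading the browser, using the host IP from the original error. The Kibana `/api/status` endpoint is a standard health check; these commands assume the services are reachable at that address:

```shell
# Cluster health should report green or yellow once ES is back up.
curl -s 'http://192.168.30.101:9200/_cluster/health?pretty'

# Kibana should answer 200 instead of the earlier 500.
curl -s -o /dev/null -w '%{http_code}\n' 'http://192.168.30.101:5601/api/status'
```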

Related content

· ELK

Last modified by "jilili" at 2022-10-05 13:53:57