
How to prevent BufferOverflowError


Source: socprime.com – Author: Oleh P.

In this guide, I will show you how to prevent BufferOverflowError when you ingest logs from Kafka or in_tail and your output can’t connect to OpenSearch/Elasticsearch. If the connection to OpenSearch/Elasticsearch fails from time to time, you can tune the Fluentd buffer in the output so that it stops pulling logs from the input source and waits until Fluentd can send data again.

Here is what you need to do:

Step 1. Flushing Parameters:

1. flush_mode: set flush_mode to interval (the flushing parameters are combined in a sketch after this list). From the Fluentd docs:

     flush_mode
       Default: default
       Supported types: default, lazy, interval, immediate
         interval flushes per flush_interval
         immediate flushes just after event arrives

2. flush_interval: you can skip this parameter if you are okay with receiving logs every 60 seconds. Otherwise, set flush_interval to a custom time in seconds:

     flush_interval
       The interval between buffer chunk flushes.
       Default: 60

3. overflow_action: set overflow_action to block

     block
       This mode stops input plugin thread until buffer full issue is resolved. This action is good for batch-like use-cases. This is mainly for in_tail plugin. Other input plugins, e.g. socket-based plugin, don't assume this action.

       We do not recommend using block action to avoid BufferOverflowError. Please consider improving destination settings to resolve BufferOverflowError or use @ERROR label for routing overflowed events to another backup destination (or secondary with lower retry_limit). If you hit BufferOverflowError frequently, it means your destination capacity is insufficient for your traffic.
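Taken together, the flushing parameters from this step sit in the <buffer> section of your output. A minimal sketch (5s is an example value; the inline comments are mine):

     <buffer>
       @type memory
       flush_mode       interval   # flush on a schedule instead of the default behavior
       flush_interval   5s         # example value; the default is 60 seconds
       overflow_action  block      # pause the input (e.g. in_tail) while the buffer is full
     </buffer>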

Step 2. Retries Parameters:

1. retry_randomize: if you are okay with a randomized retry interval, you can skip this parameter. Otherwise, set retry_randomize to false (the retry parameters are combined in a sketch after this list):

     retry_randomize
       If true, the output plugin will retry after a randomized interval not to do burst retries.
       Default: true

2. retry_max_interval: set retry_max_interval to a custom time in seconds:

     retry_max_interval
       The maximum interval (seconds) for exponential backoff between retries while failing.
       Default: nil
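Combined, the retry parameters from this step go into the same <buffer> section. A sketch (300 seconds is an example cap):

     <buffer>
       retry_randomize     false   # retry on a fixed exponential-backoff schedule
       retry_max_interval  300     # never wait more than 300 seconds between retries
     </buffer>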

Step 3. Buffering Parameters:

You can skip this parameter if you are okay with the default chunk_limit_size. Otherwise, set chunk_limit_size to a value in megabytes:

     chunk_limit_size [size]
       Default: 8MB (memory) / 256MB (file)
       The max size of each chunk: events will be written into chunks until the size of a chunk reaches this limit.
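If you use a file buffer instead of a memory buffer, note the larger 256MB default above. A sketch that caps chunks at 20 MB (the path is a placeholder):

     <buffer>
       @type file
       path              /var/log/fluent/buffer   # placeholder path for chunk files
       chunk_limit_size  20m                      # cap each chunk at 20 MB
     </buffer>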

My configuration:

     <buffer>
       @type memory
       flush_mode          interval
       flush_interval      5s
       chunk_limit_size    20m
       overflow_action     block   # https://docs.fluentd.org/output#overflow_action
       retry_randomize     false
       retry_max_interval  300
     </buffer>
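For context, this <buffer> section lives inside the <match> block of your output plugin. A minimal sketch, assuming the fluent-plugin-opensearch output and placeholder tag/host values:

     <match app.**>                  # hypothetical tag pattern
       @type opensearch              # assumes fluent-plugin-opensearch is installed
       host opensearch.example.com   # placeholder host
       port 9200
       <buffer>
         @type memory
         flush_mode          interval
         flush_interval      5s
         chunk_limit_size    20m
         overflow_action     block
         retry_randomize     false
         retry_max_interval  300
       </buffer>
     </match>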

Original Post URL: https://socprime.com/blog/how-to-prevent-bufferoverflowerror/

Category & Tags: Blog, Knowledge Bits, Elasticsearch, OpenSearch
