Error in modules manager: modules management requires 'filebeat.config.modules.path' setting


Contents

  1. [Filebeat stable/filebeat] Modules Don't Enable #16621
  2. [Filebeat-8.0]: module system is configured but has no enabled fileset: error on running filebeat setup command. #29175
  3. Error in modules manager: modules management requires 'filebeat.config.modules.path' setting
  4. Configure modules in the modules.d directory
  5. Configure modules in the filebeat.yml file
  6. Filebeat should not setup ML modules if the index pattern does not exist #11349
  7. Filebeat NGINX module 7.10.0 Upgrade Errors #22567
  8. Collecting Nginx Logs with Filebeat Modules
  9. Do you really understand Filebeat modules?
  10. How to install and configure Filebeat - Lightweight Log Forwarder

[Filebeat stable/filebeat] Modules Don’t Enable #16621

Describe the bug
When trying to use the filebeat modules, they aren’t enabled.

Which chart:
stable/elastic-stack
stable/filebeat:7.0.1

What happened:
Cannot enable modules unless manually exec’ed into the pod.

What you expected to happen:
To be able to enable modules using helm values.

How to reproduce it (as minimally and precisely as possible):
Include the following in your values.yaml file

Anything else we need to know:
I have tried to set this underneath config instead of overrideConfig (example below)

Which resulted in the following behavior when exec’ed into the pod


I was just investigating the use of modules and found that in the source Docker image the modules are all named .yml.disabled:

There is one workaround I am sure will work (but haven’t tried it) and one that I think will work (but also have not tried):

  1. create a custom container with the modules you want enabled (99% sure this will work)
  2. specify a custom command via .Values.command to rename any .disabled files to be not disabled

I am not sure if #2 is the official, correct way of doing this, but it would be nice to get an option in values.yaml to list the modules to be enabled, with the startup process taking care of the rename (if such an option exists, I couldn't find it). The process would have to account for someone naming a module file that doesn't exist (or mistyping one), which I think is best handled by allowing the container to start and just emitting a message that the module file was not found.
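A minimal sketch of workaround #2, assuming the chart's command value replaces the container entrypoint and that the image keeps its module configs under /usr/share/filebeat/modules.d (both are assumptions, untested):

command:
  - /bin/sh
  - -c
  - |
    # Hypothetical values.yaml override: enable the listed modules by renaming
    # their .disabled configs; warn (but keep starting) if a name is mistyped.
    for m in nginx system; do
      mv "/usr/share/filebeat/modules.d/$m.yml.disabled" \
         "/usr/share/filebeat/modules.d/$m.yml" 2>/dev/null \
        || echo "module config $m not found"
    done
    exec filebeat -e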

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

This issue is being automatically closed due to inactivity.


Source

[Filebeat-8.0]: module system is configured but has no enabled fileset: error on running filebeat setup command. #29175

Kibana version: 8.0 Snapshot Kibana Cloud-qa environment

Host OS: Ubuntu 20 (.tar), CentOS 8 (.rpm), Debian 10 (.deb) and Mac

Build details:

Steps to reproduce:

  1. Download and extract the Filebeat artifact.
  2. Update filebeat.yml:
  3. Run: ./filebeat modules enable system
  4. Run: ./filebeat setup -e
  5. Observe the error below (module system is configured but has no enabled fileset).

Expected Result:
Running the filebeat setup -e command should produce no error, and Filebeat should show data under the Discover tab.

NOTE:

  • We are unable to get Filebeat data on Ubuntu 20 (.tar), CentOS 8 (.rpm), Debian 10 (.deb) and Mac hosts.


Pinging @elastic/elastic-agent (Team:Elastic-Agent)

Reviewed & mentioned to @andresrc

This is expected behaviour. From 8.0 on, all filesets are disabled by default and users have to enable them manually. An error is returned when nothing is enabled, to let users know they may have forgotten to turn on modules/filesets.

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

I talked with @amolnater-qasource offline, but I am sharing it here as well. To avoid this error, you have to enable the syslog and auth filesets in the file modules.d/system.yml:
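The omitted snippet presumably looks like the following (a reconstruction based on the replies below; custom var.paths settings omitted):

- module: system
  # Syslog fileset
  syslog:
    enabled: true
  # Auth(orization) logs fileset
  auth:
    enabled: true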

Hi @kvch,
Thanks for sharing the update.
After setting both syslog and auth to true under modules.d/system.yml, we are successfully able to get data under the Discover tab.

QUERY:
Further, under the Discover tab we got a new index; could you please confirm whether this is an issue or expected?

We have only enabled the system module.

@EricDavisX
We have updated our test content for Filebeat installation as per this update.

Please let us know if anything else is required.
Thanks

@ruflin Is this new "apm--transaction…" template expected?

@simitt @sqren You might be able to help with the above? I assume this is coming from APM?


Hi @amolnater-qasource, can you do a Filebeat docs check to see if it was updated to indicate this new expectation, plus general info on how to update it to 'see' any data come in? If it needs it, we can log a separate docs ticket and reference this.

Yes, it's automatically created when the APM UI app is opened.
It's possible to disable this.

Hi @sqren
We haven't accessed the APM UI and haven't done anything related to APM.
These datasets were there after running Filebeat with both the syslog and auth filesets from system.yml.
You can refer to the screenshot at: #29175 (comment)
agent.type = filebeat.

Is it expected to get this even for Filebeat, or is it an issue?

@EricDavisX
Under the docs, yes, it is mentioned that we need to enable the required filesets.

However, up to 7.16 we never enabled these manually, as these filesets get enabled by default on running ./filebeat modules enable system (and likewise for any module).
On 8.0 they are set to false even after enabling system; the user has to enable them manually, as confirmed at #29175 (comment)
It might be confusing the first time.

Further, for data streams the docs only mention the filebeat-* index.
On confirmation from @sqren we will log the required ticket.

Is it expected to get this even for Filebeat, or is it an issue?

I don't see any problems here. The user can delete the data view (index pattern) manually and stop it from being created again by setting xpack.apm.autocreateApmIndexPattern: false.
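That setting belongs in kibana.yml (an assumption based on the xpack.apm.* prefix; verify for your version):

# kibana.yml: stop Kibana from auto-creating the APM index pattern
xpack.apm.autocreateApmIndexPattern: false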

Thanks @sqren for sharing the feedback.

  • The docs do have these updates for 8.0.
  • There is no discussion of the new index ("apm--transaction…") in the docs.
  • However, as confirmed above, it is not an issue that a new index is created alongside filebeat-*.

Could you please confirm whether any action is required here, or whether we should mark this as done?

Excellent summary. @amolnater-qasource please scan through the docs repo and put a ticket in if we don't see one for the docs; then we can call it done. 🙂

Source

Error in modules manager: modules management requires 'filebeat.config.modules.path' setting

Using Filebeat modules is optional. You may decide to configure inputs manually if you’re using a log type that isn’t supported, or you want to use a different setup.

Filebeat modules provide a quick way to get started processing common log formats. They contain default configurations, Elasticsearch ingest pipeline definitions, and Kibana dashboards to help you implement and deploy a log monitoring solution.

You can configure modules in the modules.d directory (recommended), or in the Filebeat configuration file.

Before running Filebeat with modules enabled, make sure you also set up the environment to use Kibana dashboards. See Quick start: installation and configuration for more information.

On systems with POSIX file permissions, all Beats configuration files are subject to ownership and file permission checks. For more information, see Config File Ownership and Permissions.

Configure modules in the modules.d directory

The modules.d directory contains default configurations for all the modules available in Filebeat. To enable or disable specific module configurations under modules.d, run the modules enable or modules disable command. For example:
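For example (module names are illustrative; the same commands appear in the walkthroughs later in this document):

filebeat modules enable nginx
filebeat modules disable nginx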

The default configurations assume that your data is in the location expected for your OS and that the behavior of the module is appropriate for your environment. To change the default behavior, configure variable settings. For a list of available settings, see the documentation under Modules.

For advanced use cases, you can also override input settings.

You can enable modules at runtime by using the --modules flag. This is useful if you're getting started and want to try things out. Any modules specified at the command line are loaded along with any modules that are enabled in the configuration file or modules.d directory. If there's a conflict, the configuration specified at the command line is used.
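For instance, a sketch that enables two modules for a single run:

./filebeat -e --modules nginx,system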

Configure modules in the filebeat.yml file

When possible, you should use the config files in the modules.d directory.

However, configuring modules directly in the config file is a practical approach if you have upgraded from a previous version of Filebeat and don’t want to move your module configs to the modules.d directory. You can continue to configure modules in the filebeat.yml file, but you won’t be able to use the modules command to enable and disable configurations because the command requires the modules.d layout.

To enable specific modules in the filebeat.yml config file, add entries to the filebeat.modules list. Each entry in the list begins with a dash (-) and is followed by settings for that module.

The following example shows a configuration that runs the nginx , mysql , and system modules:
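In outline, the elided example looks like this (the settings under each module are omitted):

filebeat.modules:
- module: nginx
- module: mysql
- module: system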

Source

Filebeat should not setup ML modules if the index pattern does not exist #11349

  • Version: 7.0.0-RC1
  • Operating System: x86_64 x86_64 x86_64 GNU/Linux
  • Steps to Reproduce:
  1. deploy elasticsearch from tar.gz
  2. deploy kibana from tar.gz
  3. ./filebeat setup --machine-learning

On a clean installation, if you run ./filebeat setup --machine-learning first, before any other filebeat setup step, then the ML jobs created are invalid. They have their datafeed config set to indices: INDEX_PATTERN. This string is meant to be substituted with the Kibana index pattern filebeat-*, which is not possible as it does not yet exist.

If the index pattern exists prior to running ml setup, then the ML jobs are created properly.

The following setup methods work, as the index pattern is created before ML setup.

I suspect this problem is not limited to RC1 and has existed in 6.x time frame.

On a more conceptual note, perhaps we could consider removing the ./filebeat setup --machine-learning command line option for the following reasons:

  1. ML jobs are already created during module setup, which is more targeted for specific named modules.
  2. I think the use case behind running ./filebeat setup --machine-learning would be to add ML jobs to an existing deployment that is already collecting data using Filebeat. However, this can already be done from the wizard inside the ML Kibana app.
  3. ./filebeat setup --machine-learning does not allow you to pick specific modules. As it stands today, filebeat setup will create both nginx and apache ML jobs, and it is unlikely both are required.


I agree with removing --machine-learning from the flags.

However, the third reason you mentioned is not true. It is possible to pick a specific module using the --modules flag. So ./filebeat setup --machine-learning --modules nginx sets up only the nginx ML module.

Thanks for the clarification. I can confirm that if ./filebeat setup --machine-learning --modules nginx is run before the index pattern filebeat-* exists, the jobs created are still invalid due to the datafeed config having indices: INDEX_PATTERN set.

I suggest adding a deprecation warning in 7.x and deleting the flag in 8.0. So in the future, ML would be set up only when the user runs ./filebeat setup.

The main problem is the index pattern, plus the ML code in Kibana defaulting to INDEX_PATTERN_NAME if the index pattern does not yet exist. Filebeat installs the index pattern only if dashboards are installed.

Filebeat calls a Kibana API to install the machine learning jobs; it does not hold the jobs itself. When running filebeat setup without dashboards available for install (or with them disabled altogether), it also fails, as job setup is part of filebeat setup. The extra flags like --machine-learning and others are mostly used to selectively overwrite/change resources later.

I wonder if the index pattern name is indeed required to create the data feeds or not.
Based on this the kibana code should:

  • return an error if the index pattern does not exist but is required,
  • or just use the correct index names as told by the Beat, if an index pattern is not really required to set up a data feed (not sure if the API supports this).

@jgowdyelastic ^ Can you comment on this please?

If the index pattern is a must, then Beats should ensure that the index pattern exists before attempting to install machine learning jobs.

The setup endpoint could be changed to allow it to work if an index pattern hasn’t been created. This would allow the datafeed to be created correctly in this situation.
However, the reason the index pattern check is there in the first place is because the index pattern id is needed for custom urls and kibana saved objects which are also created by the ml module.
So in this situation the custom urls and dashboards created by the ML module will be broken.

I think the suggestion "return an error if the index pattern does not exist, but is required" is something we should do on the ML side. If the module requires the index pattern ID because it contains custom URLs or saved objects which use it, then it should return an error.
And the datafeed should always replace INDEX_PATTERN_NAME even if there is no index pattern ID.

This change will not fix the issue with beats, because all of the Beats ML modules contain custom URLs and saved objects and so a valid index pattern is required.

I think the suggestion "return an error if the index pattern does not exist, but is required" is something we should do on the ML side.

+1
I'm fine with always assuming that the index pattern ID is required, just in case users try to create dashboards after ML. But in this case we should still return an error.

By the way, I can easily work around the error in ES security by adding read rights to an imaginary (non-existent) index named INDEX_PATTERN_NAME. This should not be possible.

This change will not fix the issue with beats, because all of the Beats ML modules contain custom URLs and saved objects and so a valid index pattern is required.

Which is OK. At least we would have a proper error message explaining the actual problem. On the Beats side we can still consider separating the index pattern installation from the dashboards and requiring users to run it beforehand. In the end, filebeat setup shall only be used to update the stack with recent changes; the initial setup requires filebeat setup as is.

Source



Filebeat NGINX module 7.10.0 Upgrade Errors #22567


For confirmed bugs, please report:

  • Version: 7.10.0
  • Operating System: ECK 1.3
  • Steps to Reproduce: Upgrade filebeat with nginx module from 7.9.3 to 7.10.0

While upgrading Filebeat from 7.9.3 to 7.10.0, which leverages the nginx module, the deployment was failing. After checking the log file I had the errors below:


Pinging @elastic/integrations-services (Team:Services)

filebeat.yml looks like this:

Linking some issues provided by @sophiec20 in relation to this issue:

Using the elastic user still keeps getting the same error logs.

@kvch Adding --dashboards --index-management to the filebeat setup -e command in our Helm configuration, and leaving the modules configured in the yml file, did the job!

What I'm now not sure about is how to load the ML jobs for nginx, unless they are pre-loaded!
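One hedged option (untested here) is the setup flag combination discussed in issue #11349 earlier in this document:

./filebeat setup -e --modules nginx --machine-learning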

@christophercutajar filebeat setup -e --modules nginx --dashboards --index-management didn't help in our case (Kubernetes 1.16 cluster, ingress-nginx v0.40.2); we actually also tried upgrading to 7.10.1, but without luck. While checking events on the Discover tab I don't see any hits with event.module:nginx as there used to be in 7.9.3.

BTW the dashboards were recreated in Kibana, but now [Filebeat Nginx] Overview ECS gives errors like Saved field "source.geo.location" is invalid for use with the "Geohash" aggregation. Please select a new field. and Saved field "user_agent.version" is invalid for use with the "Terms" aggregation. Please select a new field. This leads to the assumption that the dashboard piece inside the Filebeat module directory is not compatible with the latest Elasticsearch version.

Still the same issue with 7.10.2: loading dashboards with Filebeat (filebeat setup -e --modules nginx --dashboards --index-management) did NOT help.

Same for me, any update?

What version are you running @roysG?

filebeat version 7.13.3 (amd64), libbeat 7.13.3 [3ddad4c built 2021-07-02 12:11:38 +0000 UTC]

Source

Collecting Nginx Logs with Filebeat Modules

1. Why use modules to collect logs

Modules are just a small feature of Filebeat. For services such as MySQL and Redis that only write plain-text logs, Filebeat on its own cannot convert the collected logs into JSON, so fine-grained statistics are not possible.

Logstash can convert a plain log into JSON, but the configuration is complicated and error-prone.

To remove this inconvenience, Elastic introduced the Filebeat modules feature, which turns the log transformations for common services into templates: you only need to enable a template and configure the log path, and ordinary text logs are converted into JSON output.

How log collection with modules works:

1. Enable the module, then modify the main configuration file to add the module file path.

2. Modify the module's configuration file to specify which logs to collect.

3. Enable the module's built-in Kibana dashboards.

2. Enabling Filebeat modules

2.1. Modify the configuration file to specify the modules path

You don't need to restart Filebeat after setting the modules path.

1. Modify the configuration file
[root@host ~]# vim /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

 2. See which modules are enabled; modules shown under Disabled are turned off
[root@host ~]# filebeat modules list
Enabled:

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
suricata
system
traefik 

2.2. Enable the nginx module

1. Enable the nginx module
[root@host ~]# filebeat modules enable nginx
Enabled nginx

 2. View the list of enabled modules
[root@host ~]# filebeat modules list
Enabled:
nginx

 3. Check the changed file in the modules.d directory; it has been renamed from .disabled to .yml
[root@host ~]# ll /etc/filebeat/modules.d/nginx*
-rw-r--r-- 1 root root 369 Jan 24 2019 /etc/filebeat/modules.d/nginx.yml
In fact, filebeat modules enable nginx does nothing more than rename the file; renaming it by hand has the same effect.
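In other words, the manual equivalent is simply:

mv /etc/filebeat/modules.d/nginx.yml.disabled /etc/filebeat/modules.d/nginx.yml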

2.3. Configure nginx to use the ordinary log format

[root@host ~]# vim /etc/nginx/nginx.conf
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

[root@host ~]# systemctl reload nginx

[root@host ~]# tail -f /var/log/nginx/www_access.log
192.168.81.210 - - [21/Jan/2021:15:46:49 +0800] "GET / HTTP/1.1" 200 10 "-" "curl/7.29.0" "-"

2.4. Enable the plugins required by the nginx module on the ES cluster

This must be done on all ES nodes.

1. Install the plugin
/usr/share/elasticsearch/bin/elasticsearch-plugin install  file:///root/ingest-user-agent-6.6.0.zip 
-> Downloading file:///root/ingest-user-agent-6.6.0.zip
[=================================================] 100%   
-> Installed ingest-user-agent


/usr/share/elasticsearch/bin/elasticsearch-plugin install  file:///root/ingest-geoip-6.6.0.zip 
-> Downloading file:///root/ingest-geoip-6.6.0.zip
[=================================================] 100%   
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@     WARNING: plugin requires additional permissions     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
* java.lang.RuntimePermission accessDeclaredMembers
* java.lang.reflect.ReflectPermission suppressAccessChecks
See http://docs.oracle.com/javase/8/docs/technotes/guides/security/permissions.html
for descriptions of what these permissions allow and the associated risks.

Continue with installation? [y/N]y
-> Installed ingest-geoip

 2. Restart ES
systemctl restart elasticsearch

Aside: other ES plugin operations

1. View the plugin list
[root@host ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin list
ik
ingest-geoip
ingest-user-agent

2. Remove a plugin
[root@host ~]# /usr/share/elasticsearch/bin/elasticsearch-plugin remove ingest-user-agent
-> removing [ingest-user-agent]...

2.5. Configure the nginx module configuration file to collect the nginx logs

The module will format the collected nginx logs and eventually convert them to JSON.

Official configuration reference: https://www.elastic.co/guide/en/beats/filebeat/6.6/filebeat-module-nginx.html

Modify the configuration file
[root@host ~]# vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/var/log/nginx/www_access.log"]

  # Error logs
  error:
    enabled: true

 2. Restart Filebeat
[root@host ~]# systemctl restart filebeat

2.6. View the index data in ES

Since we didn't specify an index name, the default filebeat-xxx index is used.

Viewing the data confirms it is indeed in JSON format.
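A quick way to verify from the shell (assuming Elasticsearch is reachable on 192.168.81.210:9200, the address used elsewhere in this article):

curl -s 'http://192.168.81.210:9200/_cat/indices?v' | grep filebeat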

2.7. Associate the ES index in Kibana

Click Management > Index Patterns > Create index pattern.

Creation succeeds.

2.8. View the collected logs in JSON format in Kibana

Click Discover, select the index, and open a single document to see the detailed JSON.

The JSON collected by the module is very detailed, much richer than what plain Filebeat collects on its own.

2.9. Enable the Kibana dashboards of the nginx module

Filebeat modules ship with rich built-in dashboards; you only need to enable them.

2.9.1. Configure FileBeat to connect Kibana

Configure FileBeat to connect kibana
[[email protected] ~]# vim /etc/filebeat/filebeat.yml
setup.kibana:
  host: "192.168.81.210:5601"

 2. Open graphic display
[[email protected] ~]# filebeat setup -e

2.9.2. View the visualizations in Kibana

Click Visualize and search for the graphs to view; we search for nginx.

2.9.3. Click Dashboard to view the aggregated graphics

Click Dashboard and search for nginx. Overview, Access and Error are all nice dashboards, and you can also study how the charts inside them are drawn.

Overview dashboard

Access and Error dashboard

3. Shortcomings of collecting logs with modules

  • Collecting with modules writes straight to the default index name, which is not ideal.
  • Access and error logs end up in the same index, which is not ideal either.

To address these two issues, you can use Filebeat's matching rules to create an index named after the log: when the log file is the XXX log, create an XXX index. Since routing is based on the index name, this also perfectly separates the access and error logs into their own indices.

3.1. Use the module to collect logs and create a custom index

In section 2, we already collected the nginx logs with the module, but the index created is the default one, which is still not satisfactory.

In the official manual, the module configuration file has no parameter for a match condition to create an index.

However, we can use Filebeat's matching rules to specify the index name, for example an XXX index when the log is the XXX log.

Filebeat can match on any field of the event. Since the module configuration file cannot set tags, we match on a field of the JSON data; the log file name is obviously the best field to match on.

Based on the log name, Filebeat can then also perfectly solve the problem of the access and error logs sharing one index.

3.1.1. Configure Filebeat to match the path specified by the module and create an index

Just add a when match condition in the Filebeat configuration file (the index entries sit under output.elasticsearch.indices):

1. Configure Filebeat
[root@host ~]# vim /etc/filebeat/filebeat.yml
output.elasticsearch:
  indices:
    - index: "nginx-www-access-%{+yyyy.MM.dd}"
      when.contains:
        source: "/var/log/nginx/www_access.log"

 2. Restart Filebeat
[root@host ~]# systemctl restart filebeat

3.1.2. Check whether ES has an index with the name we specified

Before checking, use the ab command to generate some access logs.

ab -c 100 -n 1000 http://www.jiangxl.com/

The index has been generated successfully.

3.1.3. Associate the index in Kibana and view the collected data

Associate the ES index.

The collected logs are also in JSON format.

4. Fix the dashboards not displaying after customizing the module index

Since we customized the index name, and the dashboards are written against the default index name, they can no longer display data; we just need to change the index referenced in the dashboards to our own.

1. Back up the module's Kibana assets to another path
[root@host ~]# mkdir /data/kibana_module/kibana_module_nginx
[root@host ~]# cp -r /usr/share/filebeat/kibana/ /data/kibana_module/kibana_module_nginx


 2. Keep only the templates for nginx
[root@host ~]# cd /data/kibana_module/kibana_module_nginx/6
[root@host /data/kibana_module/kibana_module_nginx/6]# find dashboard/ -type f ! -name "*nginx*" |xargs rm -rf


 3. Modify the index name in the template files
 First check what would really be modified, then apply it with the -i parameter
[root@host /data/kibana_module/kibana_module_nginx/6]# sed -n 's#filebeat-*#nginx-*#gp' dashboard/Filebeat-nginx-overview.json

[root@host /data/kibana_module/kibana_module_nginx/6]# sed -i 's#filebeat-*#nginx-*#g' Filebeat-nginx-logs.json
[root@host /data/kibana_module/kibana_module_nginx/6]# sed -i 's#filebeat-*#nginx-*#g' Filebeat-nginx-overview.json

[root@host /data/kibana_module/kibana_module_nginx/6]# sed -i 's#filebeat-*#nginx-*#g' index-pattern/filebeat.json


 4. Point filebeat setup at the new dashboard directory
 Delete the original dashboards in Kibana before importing
[root@host ~]# filebeat setup --dashboards -E setup.dashboards.directory=/data/kibana_module/kibana_module_nginx
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards

The dashboards are now displayed correctly.

5. Troubleshooting

5.1. filebeat modules list cannot be used

If the error below appears, it means the configuration file does not specify the path to the module files.

[root@host /etc/filebeat]# filebeat modules list
Error in modules manager: modules management requires 'filebeat.config.modules.path' setting

Solution: set the module file path in the configuration file.
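The missing setting is the same one configured in section 2.1:

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml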

5.2. Error when enabling the nginx module

Symptom: after configuring the nginx module, startup fails.

The error is as follows:

2021-01-21T15:55:12.326+0800 ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(http://192.168.81.210:9200)): Connection marked as failed because the onConnect callback failed: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:

Solution:

sudo bin/elasticsearch-plugin install ingest-user-agent
sudo bin/elasticsearch-plugin install ingest-geoip

5.3. Error when loading the module dashboards

Symptom: running filebeat setup -e fails with the following error:

2021-01-21T20:14:10.888+0800 ERROR instance/beat.go:911 Exiting: fail to create the Kibana loader: Error creating Kibana client: Error creating Kibana client: fail to get the Kibana version: HTTP GET request to /api/status fails: fail to execute the HTTP GET request: Get http://localhost:5601/api/status: dial tcp [::1]:5601: connect: connection refused. Response:

Solution: no Kibana address is configured in Filebeat, so Filebeat assumes Kibana is on localhost:5601. Kibana, however, usually listens on an external address so that it is reachable from outside, in which case the localhost connection is refused. You must therefore configure the real Kibana address in Filebeat.
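That is, set the Kibana host in /etc/filebeat/filebeat.yml, exactly as in section 2.9.1:

setup.kibana:
  host: "192.168.81.210:5601"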

I am trying to visualize sample data in Kibana on Windows. I followed the link to the Security Analytics section to set up Elasticsearch, Kibana and Filebeat.
Link to installation

I have installed Elasticsearch and Kibana, and have been able to successfully launch both. The description in the link states to configure the filebeat.yml file.

I have configured the filebeat.yml file as follows

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Mutiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "elastic"
  password: "n2yHQc8Cp1K2iRrOrNcV"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

After running the command .\filebeat -e -modules=system --setup, Filebeat starts, successfully connecting to Elasticsearch and loading the Kibana dashboards.

But when I click on the dashboard section in Kibana, the Filebeat process exits with an error message saying "Exiting: Error in initing prospector: No paths were defined for prospector accessing config".


Am I doing something wrong? How can this issue be rectified?

Here is the filtered version of the filebeat config file:

 filebeat.prospectors:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "n2yHQc8Cp1K2iRrOrNcV"

Russian Blogs

Do you really understand Filebeat modules?

Definition

Filebeat modules simplify the collection, parsing, and visualization of common log formats. A typical module (say, for nginx logs) consists of one or more filesets (for nginx: access and error).

A fileset contains the following:

  • A Filebeat input configuration, which contains the default paths where the log files are found. These default paths depend on the operating system. The Filebeat configuration is also responsible for stitching multiline events together when needed.
  • An Elasticsearch ingest pipeline definition used to parse the log lines.
  • Field definitions used to configure Elasticsearch with the correct type for each field. They also contain a short description of each field.
  • Sample Kibana dashboards for visualizing the log files.

Filebeat automatically tunes these configurations to your environment and loads them into the corresponding Elastic Stack components. Filebeat ships with a set of prebuilt modules you can use to quickly implement and deploy a log monitoring solution, complete with sample dashboards and data visualizations, which saves configuration time. The modules support common log formats such as nginx, Apache2, and MySQL.

Official documentation:

        https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-module-nginx.html

Requirement

For example, we have the following requirement: we need to ship nginx access logs to Elasticsearch with fine-grained control. In other words, we need to segment the nginx log: fields such as remote_addr, upstream_response_time, and so on must be cut out into JSON.

Analysis

For this requirement, we naturally think of the following solutions:

(1) Define the access log format as logstash_json, so the data arrives in ES already in JSON form:

log_format logstash_json '{  "timestamp": "$time_local", '
                         '"remote_addr": "$remote_addr", '
                         '"status": "$status", '
                         '"request_time": "$request_time", '
                         '"upstream_response_time": "$upstream_response_time", '
                         '"body_bytes_sent":"$body_bytes_sent", '
                         '"request": "$request", '
                         '"http_referrer": "$http_referer", '
                         '"upstream_addr": "$upstream_addr", '
                         '"http_x_real_ip": "$http_x_real_ip", '
                         '"http_x_forwarded_for": "$http_x_forwarded_for", '
                         '"http_user_agent": "$http_user_agent",'
                         '"request_filename": "$request_filename" }';

(2) We can implement it with the Grok filter in Logstash.

Both approaches above are widely used by us as well, but today we will use Filebeat's built-in nginx module directly.

Solution

Commonly used Filebeat module commands:

./filebeat modules list # show all modules
./filebeat modules -h # show help for the modules command
./filebeat -h # show general help
./filebeat modules enable nginx # enable the specified module
./filebeat -e # run in the foreground

When we enable the nginx module, the .disabled suffix under modules.d is removed automatically.

[user@host modules.d]$ pwd
/opt/application/elk/test-filebeat/modules.d
[user@host modules.d]$ ls
apache.yml.disabled         googlecloud.yml.disabled  kibana.yml.disabled    netflow.yml.disabled     redis.yml.disabled
auditd.yml.disabled         haproxy.yml.disabled      logstash.yml.disabled  nginx.yml                santa.yml.disabled
cisco.yml.disabled          icinga.yml.disabled       mongodb.yml.disabled   osquery.yml.disabled     suricata.yml.disabled
coredns.yml.disabled        iis.yml.disabled          mssql.yml.disabled     panw.yml.disabled        system.yml.disabled
elasticsearch.yml.disabled  iptables.yml.disabled     mysql.yml.disabled     postgresql.yml.disabled  traefik.yml.disabled
envoyproxy.yml.disabled     kafka.yml.disabled        nats.yml.disabled      rabbitmq.yml.disabled    zeek.yml.disabled
[user@host modules.d]$

Edit nginx.yml and specify the path of the nginx logs to collect:

# Module: nginx
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.3/filebeat-module-nginx.html

- module: nginx
  # Access logs
  access:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/home/data/logs/www.qq.cn/*.access.log"]

  # Error logs
  error:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths

Trying the first approach:

We send the data collected by Filebeat to Logstash and check on standard output whether it is parsed properly. The Logstash configuration is as follows:

input {
    beats {
        port => "5044"
    }
}


filter {

}

output {
        stdout { codec => rubydebug } 

}

The result is as follows:

{
       "message" => "[02/Jan/2020:20:32:24 +0800] 192.168.106.162 200 0.000 - 10 "GET / HTTP/1.1" "-" - - "-"  curl/7.29.0" /home/data/webroot/www.qq.cn/xxooo.html ",
         "event" => {
          "module" => "nginx",
         "dataset" => "nginx.access",
        "timezone" => "+08:00"
    },
    "@timestamp" => 2020-04-09T08:13:09.487Z,
         "input" => {
        "type" => "log"
    },
          "host" => {
        "name" => "me03"
    },
       "fileset" => {
        "name" => "access"
    },
         "agent" => {
            "hostname" => "me03",
                  "id" => "2046309e-6157-4114-890a-65dc8142264b",
        "ephemeral_id" => "b6e4a023-cc9a-4257-9d45-7f710d2d3425",
             "version" => "7.3.0",
                "type" => "filebeat"
    },
           "log" => {
          "file" => {
            "path" => "/home/data/logs/www.qq.cn/www.qq.cn.access.log"
        },
        "offset" => 4702
    },
           "ecs" => {
        "version" => "1.0.1"
    },
      "@version" => "1",
          "tags" => [
        [0] "beats_input_codec_plain_applied"
    ],
       "service" => {
        "type" => "nginx"
    }
}

The result: the log was not properly cut into JSON.

Trying the second approach:

Filebeat outputs directly to Elasticsearch, configured as follows:

## Output to ES
output:
  elasticsearch:
    hosts: ["localhost:9200"]

Then run ./filebeat setup for it to take effect:

[user@host test-filebeat]$ ./filebeat setup
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Loaded machine learning job configurations
Loaded Ingest pipelines

Взглянем на индекс ElasticSeach:

Инструменты разработчика запрашивают данные ElasticSearch:

Создайте шаблон индекса, используйте filebeat- * и запросите результат индекса в Kibana:

Было обнаружено, что журнал Nginx не был разрезан на json.

Давайте посмотрим на приборную панель Kibana:

Картина рынка такова:

Этот рынок, как мы догадались, был создан с использованием настройки ./filebeat.

Conclusion

From the attempts above we can conclude: Filebeat provides a set of prebuilt modules, but they are not a cure-all — they will not necessarily parse your particular log format into the JSON structure you need.
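
If a module's ingest pipeline does not match your log_format, one option is to do the parsing in Logstash instead. A minimal sketch, assuming the access log uses the standard combined format (for a custom log_format you would write your own grok pattern):

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}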

How to install and configure Filebeat - Lightweight Log Forwarder

Over the last few years, I've been working with Filebeat – it's one of the best lightweight log/data forwarders for production applications.

Consider a scenario in which you have to transfer logs from a client location to a central location for analysis. Splunk is one alternative for forwarding logs, but in my opinion it's far too costly.

That's where Filebeat comes into the picture. It's lightweight, simple, easy to set up, memory-efficient, and fast. Filebeat is a product of Elastic.co.

It's robust and doesn't miss a beat – it guarantees delivery of logs.

It's ready for all types of container environments:

  • Kubernetes
  • Docker

With a simple one-line command, Filebeat handles collection, parsing, and visualization of logs from any of the environments below:

  • Apache
  • NGINX
  • System
  • MySQL
  • Apache2
  • Auditd
  • Elasticsearch
  • haproxy
  • Icinga
  • IIS
  • Iptables
  • Kafka
  • Kibana
  • Logstash
  • MongoDB
  • Osquery
  • PostgreSQL
  • Redis
  • Suricata
  • Traefik
  • And more…

One of the best lightweight log file shippers

Filebeat comes with internal modules (auditd, Apache, NGINX, System, MySQL, and more) that simplify the collection, parsing, and visualization of common log formats down to a single command.
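
To see which modules are available and which are currently enabled, use the bundled CLI from the Filebeat directory:

./filebeat modules list

The output is split into Enabled and Disabled sections.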

How to Install Filebeat on Linux environment?

If you have any of the questions below, you are in the right place:

  • Getting Started With Filebeat
  • A Filebeat Tutorial: Getting Started
  • Install, Configure, and Use FileBeat – Elasticsearch
  • Filebeat setup and configuration example
  • How To Install Elasticsearch, Logstash?
  • How to Install Elastic Stack on Ubuntu?

Step-1) Installation

Download and extract the Filebeat binary using the command below.

Linux environment:

root@localhost:~# curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-linux-x86_64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 11.1M  100 11.1M    0     0  13.2M      0 --:--:-- --:--:-- --:--:-- 13.2M

root@localhost:~# tar xzvf filebeat-6.7.0-linux-x86_64.tar.gz
root@localhost:~# cd filebeat-6.7.0-linux-x86_64/

root@localhost:~/filebeat-6.7.0-linux-x86_64# pwd
/root/filebeat-6.7.0-linux-x86_64

root@localhost:~/filebeat-6.7.0-linux-x86_64# ls -ltra
total 36720
-rw-r--r--  1 root root    13675 Mar 21 14:30 LICENSE.txt
-rw-r--r--  1 root root   163444 Mar 21 14:30 NOTICE.txt
drwxr-xr-x  4 root root     4096 Mar 21 14:31 kibana
drwxr-xr-x  2 root root     4096 Mar 21 14:33 modules.d
drwxr-xr-x 21 root root     4096 Mar 21 14:33 module
-rw-r--r--  1 root root   146747 Mar 21 14:33 fields.yml
-rw-------  1 root root     7714 Mar 21 14:33 filebeat.yml
-rw-r--r--  1 root root    69996 Mar 21 14:33 filebeat.reference.yml
-rwxr-xr-x  1 root root 37161549 Mar 21 14:34 filebeat
-rw-r--r--  1 root root      802 Mar 21 14:35 README.md
-rw-r--r--  1 root root       41 Mar 21 14:35 .build_hash.txt
drwx------  9 root root     4096 Mar 30 13:46 ..
drwxr-xr-x  5 root root     4096 Mar 30 13:46 .

Mac Download:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-darwin-x86_64.tar.gz
tar xzvf filebeat-6.7.0-darwin-x86_64.tar.gz

RPM Download:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.7.0-x86_64.rpm
sudo rpm -vi filebeat-6.7.0-x86_64.rpm
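
Note that with the rpm (and deb) packages, Filebeat is installed as a system service: the configuration lives in /etc/filebeat/filebeat.yml and the process is managed with systemd instead of being run from an extracted directory:

sudo systemctl enable filebeat
sudo systemctl start filebeat

The rest of this tutorial uses the tar.gz layout, so adjust paths accordingly if you installed a package.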

Step-2) Configure filebeat.yml config file

Check out the filebeat.yml file – it's the Filebeat configuration file.

Here is its default content.

root@localhost:~/filebeat-6.7.0-linux-x86_64# cat filebeat.yml 
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Enable ILM (beta) to use index lifecycle management instead of daily indices.
  #ilm.enabled: false

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:

Open the filebeat.yml file and set up your log file location:
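
For example, to ship a single application log, flip enabled to true and point paths at your file — the path below is purely illustrative:

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log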

Step-3) Send log to ElasticSearch

Make sure you have started ElasticSearch locally before running Filebeat. I’ll publish an article later today on how to install and run ElasticSearch locally with simple steps.

Here is a filebeat.yml file configuration for ElasticSearch.

ElasticSearch runs on port 9200.

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

And you are all set.
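
Before starting Filebeat, you can also confirm that Elasticsearch is actually reachable (assuming a default local install):

curl http://localhost:9200

It should respond with a small JSON document describing the cluster.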

Step-4) Run Filebeat

bash-3.2$ sudo chown root filebeat.yml 
bash-3.2$ sudo ./filebeat -e

Execute the above two commands from the Filebeat root directory and you should see Filebeat startup logs as below.

root@localhost:/user/crunchify/filebeat-6.6.2-linux-x86_64# sudo chown root filebeat.yml 
root@localhost:/user/crunchify/filebeat-6.6.2-linux-x86_64# sudo ./filebeat -e
2019-03-30T14:52:02.608Z	INFO	instance/beat.go:616	Home path: [/user/crunchify/filebeat-6.6.2-linux-x86_64] Config path: [/user/crunchify/filebeat-6.6.2-linux-x86_64] Data path: [/user/crunchify/filebeat-6.6.2-linux-x86_64/data] Logs path: [/user/crunchify/filebeat-6.6.2-linux-x86_64/logs]
2019-03-30T14:52:02.608Z	INFO	instance/beat.go:623	Beat UUID: da7e202d-d480-42df-907a-1073b19c8e2d
2019-03-30T14:52:02.609Z	INFO	[seccomp]	seccomp/seccomp.go:116	Syscall filter successfully installed
2019-03-30T14:52:02.609Z	INFO	[beat]	instance/beat.go:936	Beat info	{"system_info": {"beat": {"path": {"config": "/user/crunchify/filebeat-6.6.2-linux-x86_64", "data": "/user/crunchify/filebeat-6.6.2-linux-x86_64/data", "home": "/user/crunchify/filebeat-6.6.2-linux-x86_64", "logs": "/user/crunchify/filebeat-6.6.2-linux-x86_64/logs"}, "type": "filebeat", "uuid": "da7e202d-d480-42df-907a-1073b19c8e2d"}}}
2019-03-30T14:52:02.609Z	INFO	[beat]	instance/beat.go:945	Build info	{"system_info": {"build": {"commit": "1eea934ce81be553337f2828bd12131896fea8e4", "libbeat": "6.6.2", "time": "2019-03-06T14:17:59.000Z", "version": "6.6.2"}}}
2019-03-30T14:52:02.609Z	INFO	[beat]	instance/beat.go:948	Go runtime info	{"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.10.8"}}}
2019-03-30T14:52:02.611Z	INFO	[beat]	instance/beat.go:952	Host info	{"system_info": {"host": {"architecture":"x86_64","boot_time":"2019-01-15T18:44:58Z","containerized":false,"name":"localhost","ip":["127.0.0.1/8","::1/128","50.116.13.161/24","192.168.177.126/17","2600:3c01::f03c:91ff:fe17:4534/64","fe80::f03c:91ff:fe17:4534/64"],"kernel_version":"4.18.0-13-generic","mac":["f2:3c:91:17:45:34"],"os":{"family":"debian","platform":"ubuntu","name":"Ubuntu","version":"18.10 (Cosmic Cuttlefish)","major":18,"minor":10,"patch":0,"codename":"cosmic"},"timezone":"UTC","timezone_offset_sec":0,"id":"1182104d1089460dbcc0c94ff1954c8c"}}}
2019-03-30T14:52:02.611Z	INFO	[beat]	instance/beat.go:981	Process info	{"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend","audit_read"],"ambient":null}, "cwd": "/user/crunchify/filebeat-6.6.2-linux-x86_64", "exe": "/user/crunchify/filebeat-6.6.2-linux-x86_64/filebeat", "name": "filebeat", "pid": 20394, "ppid": 20393, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2019-03-30T14:52:01.740Z"}}}
2019-03-30T14:52:02.611Z	INFO	instance/beat.go:281	Setup Beat: filebeat; Version: 6.6.2
2019-03-30T14:52:05.613Z	INFO	add_cloud_metadata/add_cloud_metadata.go:319	add_cloud_metadata: hosting provider type not detected.
2019-03-30T14:52:05.614Z	INFO	elasticsearch/client.go:165	Elasticsearch url: http://localhost:9200
2019-03-30T14:52:05.615Z	INFO	[publisher]	pipeline/module.go:110	Beat name: localhost
2019-03-30T14:52:05.615Z	INFO	instance/beat.go:403	filebeat start running.
2019-03-30T14:52:05.615Z	INFO	registrar/registrar.go:134	Loading registrar data from /user/crunchify/filebeat-6.6.2-linux-x86_64/data/registry
2019-03-30T14:52:05.615Z	INFO	[monitoring]	log/log.go:117	Starting metrics logging every 30s
2019-03-30T14:52:05.616Z	INFO	registrar/registrar.go:141	States Loaded from registrar: 0
2019-03-30T14:52:05.616Z	INFO	crawler/crawler.go:72	Loading Inputs: 1
2019-03-30T14:52:05.616Z	INFO	log/input.go:138	Configured paths: [/crunchify/tutorials/log/crunchify-filebeat-test.log]
2019-03-30T14:52:05.616Z	INFO	input/input.go:114	Starting input of type: log; ID: 7740765267175828127 
2019-03-30T14:52:05.617Z	INFO	crawler/crawler.go:106	Loading and starting Inputs completed. Enabled inputs: 1
2019-03-30T14:52:05.617Z	INFO	cfgfile/reload.go:150	Config reloader started
2019-03-30T14:52:05.617Z	INFO	cfgfile/reload.go:205	Loading of config files completed.
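
The chown above is needed because Filebeat refuses to load a configuration file that is not owned by the user running it. If you would rather not change ownership, the permission check can be relaxed:

./filebeat -e --strict.perms=false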

Step-5) Result

The next step is to check how logs are flowing into Elasticsearch and how to visualize them. We will publish a detailed tutorial on that very soon. Stay tuned.
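
A quick way to confirm events are arriving is to list the Filebeat indices (again assuming a default local Elasticsearch):

curl 'http://localhost:9200/_cat/indices/filebeat-*?v'

Each day's events land in a filebeat-<version>-<date> index unless you have changed the default index settings.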

What's next? Set up Elasticsearch

How to Install and Configure Elasticsearch on your Dev/Production environment?


Join the Discussion

If you liked this article, please share it on social media. If you still have questions about the article, leave us a comment.
