Merge branch 'master' of https://github.com/Cyb3rWard0g/HELK into scripts-helk_install_and_update

updates_os_and_scripts
neu5ron 2020-01-22 12:08:29 -05:00
commit a73a37672f
263 changed files with 10159 additions and 52 deletions


@@ -1,13 +1,14 @@
# HELK [Alpha]
# HELK
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![GitHub issues-closed](https://img.shields.io/github/issues-closed/Cyb3rward0g/HELK.svg)](https://GitHub.com/Cyb3rWard0g/HELK/issues?q=is%3Aissue+is%3Aclosed)
[![Twitter](https://img.shields.io/twitter/follow/THE_HELK.svg?style=social&label=Follow)](https://twitter.com/THE_HELK)
[![Open Source Love](https://badges.frapsoft.com/os/v1/open-source.png?v=103)](https://github.com/ellerbrock/open-source-badges/)
[![stability-alpha](https://img.shields.io/badge/stability-alpha-f4d03f.svg)](https://github.com/mkenney/software-guides/blob/master/STABILITY-BADGES.md#alpha)
The Hunting ELK or simply the HELK is one of the first open source hunt platforms with advanced analytics capabilities such as SQL declarative language, graphing, structured streaming, and even machine learning via Jupyter notebooks and Apache Spark over an ELK stack. This project was developed primarily for research, but due to its flexible design and core components, it can be deployed in larger environments with the right configurations and scalable infrastructure.
![alt text](resources/images/HELK_Design.png "HELK Infrastructure")
![alt text](docs/content/images/HELK-Design.png "HELK Infrastructure")
# Goals
@@ -20,58 +21,11 @@ The Hunting ELK or simply the HELK is one of the first open source hunt platform
The project is currently in an alpha stage, which means that the code and the functionality are still changing. We haven't yet tested the system with large data sources and in many scenarios. We invite you to try it and welcome any feedback.
# HELK Features
## Docs:
* **Kafka:** A distributed publish-subscribe messaging system that is designed to be fast, scalable, fault-tolerant, and durable.
* **Elasticsearch:** A highly scalable open-source full-text search and analytics engine.
* **Logstash:** A data collection engine with real-time pipelining capabilities.
* **Kibana:** An open source analytics and visualization platform designed to work with Elasticsearch.
* **ES-Hadoop:** An open-source, stand-alone, self-contained, small library that allows Hadoop jobs (whether using Map/Reduce or libraries built upon it such as Hive, Pig or Cascading, or newer libraries like Apache Spark) to interact with Elasticsearch.
* **Spark:** A fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.
* **GraphFrames:** A package for Apache Spark which provides DataFrame-based Graphs.
* **Jupyter Notebook:** An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.
* **KSQL:** Confluent KSQL is the open source streaming SQL engine that enables real-time data processing against Apache Kafka®. It provides an easy-to-use yet powerful interactive SQL interface for stream processing on Kafka, without the need to write code in a programming language such as Java or Python.
* **Elastalert:** ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest in Elasticsearch data.
* **Sigma:** Sigma is a generic and open signature format that allows you to describe relevant log events in a straightforward manner.
* [Introduction](https://thehelk.com/introduction.html)
* [Installation](https://thehelk.com/installation.html)
# Getting Started
## WIKI
* [Introduction](https://github.com/Cyb3rWard0g/HELK/wiki)
* [Architecture Overview](https://github.com/Cyb3rWard0g/HELK/wiki/Architecture-Overview)
* [Kafka](https://github.com/Cyb3rWard0g/HELK/wiki/Kafka)
* [Logstash](https://github.com/Cyb3rWard0g/HELK/wiki/Logstash)
* [Elasticsearch](https://github.com/Cyb3rWard0g/HELK/wiki/Elasticsearch)
* [Kibana](https://github.com/Cyb3rWard0g/HELK/wiki/Kibana)
* [Spark](https://github.com/Cyb3rWard0g/HELK/wiki/Spark)
* [Installation](https://github.com/Cyb3rWard0g/HELK/wiki/Installation)
## (Docker) Accessing the HELK's Images
By default, HELK's containers run in the background (detached). You can see all your docker containers by running the following command:
```
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a97bd895a2b3 cyb3rward0g/helk-spark-worker:2.3.0 "./spark-worker-entr…" About an hour ago Up About an hour 0.0.0.0:8082->8082/tcp helk-spark-worker2
cbb31f688e0a cyb3rward0g/helk-spark-worker:2.3.0 "./spark-worker-entr…" About an hour ago Up About an hour 0.0.0.0:8081->8081/tcp helk-spark-worker
5d58068aa7e3 cyb3rward0g/helk-kafka-broker:1.1.0 "./kafka-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:9092->9092/tcp helk-kafka-broker
bdb303b09878 cyb3rward0g/helk-kafka-broker:1.1.0 "./kafka-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:9093->9093/tcp helk-kafka-broker2
7761d1e43d37 cyb3rward0g/helk-nginx:0.0.2 "./nginx-entrypoint.…" About an hour ago Up About an hour 0.0.0.0:80->80/tcp helk-nginx
ede2a2503030 cyb3rward0g/helk-jupyter:0.32.1 "./jupyter-entrypoin…" About an hour ago Up About an hour 0.0.0.0:4040->4040/tcp, 0.0.0.0:8880->8880/tcp helk-jupyter
ede19510e959 cyb3rward0g/helk-logstash:6.2.4 "/usr/local/bin/dock…" About an hour ago Up About an hour 5044/tcp, 9600/tcp helk-logstash
e92823b24b2d cyb3rward0g/helk-spark-master:2.3.0 "./spark-master-entr…" About an hour ago Up About an hour 0.0.0.0:7077->7077/tcp, 0.0.0.0:8080->8080/tcp helk-spark-master
6125921b310d cyb3rward0g/helk-kibana:6.2.4 "./kibana-entrypoint…" About an hour ago Up About an hour 5601/tcp helk-kibana
4321d609ae07 cyb3rward0g/helk-zookeeper:3.4.10 "./zookeeper-entrypo…" About an hour ago Up About an hour 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp helk-zookeeper
9cbca145fb3e cyb3rward0g/helk-elasticsearch:6.2.4 "/usr/local/bin/dock…" About an hour ago Up About an hour 9200/tcp, 9300/tcp helk-elasticsearch
```
Then pick the container you want to access and run the following commands:
```
sudo docker exec -ti <container-name> bash
root@ede2a2503030:/opt/helk/scripts#
```
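If you need to script against the container list, the NAMES column can be pulled out of `docker ps`-style output with standard text tools. This is only a sketch; the sample lines below mirror the listing above, and in practice you would pipe `sudo docker ps` in:

```shell
# Extract container names (the last column) from `docker ps`-style output.
# The sample input here mirrors the listing shown earlier.
docker_ps_output='CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a97bd895a2b3 cyb3rward0g/helk-spark-worker:2.3.0 spark-worker-entr 1h Up 8082/tcp helk-spark-worker2
7761d1e43d37 cyb3rward0g/helk-nginx:0.0.2 nginx-entrypoint 1h Up 80/tcp helk-nginx'
names=$(printf '%s\n' "$docker_ps_output" | awk 'NR>1 {print $NF}')
printf '%s\n' "$names"
```

Note that `docker ps --format '{{.Names}}'` prints the names directly, with no parsing needed.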
# Resources
* [Welcome to HELK! : Enabling Advanced Analytics Capabilities](https://cyberwardog.blogspot.com/2018/04/welcome-to-helk-enabling-advanced_9.html)

docs/CNAME Normal file

@@ -0,0 +1 @@
thehelk.com

docs/Gemfile Executable file

@@ -0,0 +1,21 @@
source 'https://rubygems.org'
group :jekyll_plugins do
gem 'github-pages'
gem 'jekyll-feed', '~> 0.6'
# Textbook plugins
gem 'jekyll-redirect-from'
gem 'jekyll-scholar'
end
# Windows does not include zoneinfo files, so bundle the tzinfo-data gem
gem 'tzinfo-data', platforms: [:mingw, :mswin, :x64_mingw, :jruby]
# Performance-booster for watching directories on Windows
gem 'wdm', '~> 0.1.0' if Gem.win_platform?
# Development tools
gem 'guard', '~> 2.14.2'
gem 'guard-jekyll-plus', '~> 2.0.2'
gem 'guard-livereload', '~> 2.5.2'

docs/Gemfile.lock Executable file

@@ -0,0 +1,307 @@
GEM
remote: https://rubygems.org/
specs:
activesupport (4.2.11.1)
i18n (~> 0.7)
minitest (~> 5.1)
thread_safe (~> 0.3, >= 0.3.4)
tzinfo (~> 1.1)
addressable (2.7.0)
public_suffix (>= 2.0.2, < 5.0)
bibtex-ruby (4.4.7)
latex-decode (~> 0.0)
citeproc (1.0.9)
namae (~> 1.0)
citeproc-ruby (1.1.10)
citeproc (~> 1.0, >= 1.0.9)
csl (~> 1.5)
coderay (1.1.2)
coffee-script (2.4.1)
coffee-script-source
execjs
coffee-script-source (1.11.1)
colorator (1.1.0)
commonmarker (0.17.13)
ruby-enum (~> 0.5)
concurrent-ruby (1.1.5)
csl (1.5.0)
namae (~> 1.0)
csl-styles (1.0.1.9)
csl (~> 1.0)
dnsruby (1.61.3)
addressable (~> 2.5)
em-websocket (0.5.1)
eventmachine (>= 0.12.9)
http_parser.rb (~> 0.6.0)
ethon (0.12.0)
ffi (>= 1.3.0)
eventmachine (1.2.7)
execjs (2.7.0)
faraday (0.17.0)
multipart-post (>= 1.2, < 3)
ffi (1.11.1)
formatador (0.2.5)
forwardable-extended (2.6.0)
gemoji (3.0.1)
github-pages (202)
activesupport (= 4.2.11.1)
github-pages-health-check (= 1.16.1)
jekyll (= 3.8.5)
jekyll-avatar (= 0.6.0)
jekyll-coffeescript (= 1.1.1)
jekyll-commonmark-ghpages (= 0.1.6)
jekyll-default-layout (= 0.1.4)
jekyll-feed (= 0.11.0)
jekyll-gist (= 1.5.0)
jekyll-github-metadata (= 2.12.1)
jekyll-mentions (= 1.4.1)
jekyll-optional-front-matter (= 0.3.0)
jekyll-paginate (= 1.1.0)
jekyll-readme-index (= 0.2.0)
jekyll-redirect-from (= 0.14.0)
jekyll-relative-links (= 0.6.0)
jekyll-remote-theme (= 0.4.0)
jekyll-sass-converter (= 1.5.2)
jekyll-seo-tag (= 2.5.0)
jekyll-sitemap (= 1.2.0)
jekyll-swiss (= 0.4.0)
jekyll-theme-architect (= 0.1.1)
jekyll-theme-cayman (= 0.1.1)
jekyll-theme-dinky (= 0.1.1)
jekyll-theme-hacker (= 0.1.1)
jekyll-theme-leap-day (= 0.1.1)
jekyll-theme-merlot (= 0.1.1)
jekyll-theme-midnight (= 0.1.1)
jekyll-theme-minimal (= 0.1.1)
jekyll-theme-modernist (= 0.1.1)
jekyll-theme-primer (= 0.5.3)
jekyll-theme-slate (= 0.1.1)
jekyll-theme-tactile (= 0.1.1)
jekyll-theme-time-machine (= 0.1.1)
jekyll-titles-from-headings (= 0.5.1)
jemoji (= 0.10.2)
kramdown (= 1.17.0)
liquid (= 4.0.0)
listen (= 3.1.5)
mercenary (~> 0.3)
minima (= 2.5.0)
nokogiri (>= 1.10.4, < 2.0)
rouge (= 3.11.0)
terminal-table (~> 1.4)
github-pages-health-check (1.16.1)
addressable (~> 2.3)
dnsruby (~> 1.60)
octokit (~> 4.0)
public_suffix (~> 3.0)
typhoeus (~> 1.3)
guard (2.14.2)
formatador (>= 0.2.4)
listen (>= 2.7, < 4.0)
lumberjack (>= 1.0.12, < 2.0)
nenv (~> 0.1)
notiffany (~> 0.0)
pry (>= 0.9.12)
shellany (~> 0.0)
thor (>= 0.18.1)
guard-compat (1.2.1)
guard-jekyll-plus (2.0.2)
guard (~> 2.10, >= 2.10.3)
guard-compat (~> 1.1)
jekyll (>= 1.0.0)
guard-livereload (2.5.2)
em-websocket (~> 0.5)
guard (~> 2.8)
guard-compat (~> 1.0)
multi_json (~> 1.8)
html-pipeline (2.12.0)
activesupport (>= 2)
nokogiri (>= 1.4)
http_parser.rb (0.6.0)
i18n (0.9.5)
concurrent-ruby (~> 1.0)
jekyll (3.8.5)
addressable (~> 2.4)
colorator (~> 1.0)
em-websocket (~> 0.5)
i18n (~> 0.7)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 2.0)
kramdown (~> 1.14)
liquid (~> 4.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (>= 1.7, < 4)
safe_yaml (~> 1.0)
jekyll-avatar (0.6.0)
jekyll (~> 3.0)
jekyll-coffeescript (1.1.1)
coffee-script (~> 2.2)
coffee-script-source (~> 1.11.1)
jekyll-commonmark (1.3.1)
commonmarker (~> 0.14)
jekyll (>= 3.7, < 5.0)
jekyll-commonmark-ghpages (0.1.6)
commonmarker (~> 0.17.6)
jekyll-commonmark (~> 1.2)
rouge (>= 2.0, < 4.0)
jekyll-default-layout (0.1.4)
jekyll (~> 3.0)
jekyll-feed (0.11.0)
jekyll (~> 3.3)
jekyll-gist (1.5.0)
octokit (~> 4.2)
jekyll-github-metadata (2.12.1)
jekyll (~> 3.4)
octokit (~> 4.0, != 4.4.0)
jekyll-mentions (1.4.1)
html-pipeline (~> 2.3)
jekyll (~> 3.0)
jekyll-optional-front-matter (0.3.0)
jekyll (~> 3.0)
jekyll-paginate (1.1.0)
jekyll-readme-index (0.2.0)
jekyll (~> 3.0)
jekyll-redirect-from (0.14.0)
jekyll (~> 3.3)
jekyll-relative-links (0.6.0)
jekyll (~> 3.3)
jekyll-remote-theme (0.4.0)
addressable (~> 2.0)
jekyll (~> 3.5)
rubyzip (>= 1.2.1, < 3.0)
jekyll-sass-converter (1.5.2)
sass (~> 3.4)
jekyll-scholar (5.16.0)
bibtex-ruby (~> 4.0, >= 4.0.13)
citeproc-ruby (~> 1.0)
csl-styles (~> 1.0)
jekyll (~> 3.0)
jekyll-seo-tag (2.5.0)
jekyll (~> 3.3)
jekyll-sitemap (1.2.0)
jekyll (~> 3.3)
jekyll-swiss (0.4.0)
jekyll-theme-architect (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-cayman (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-dinky (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-hacker (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-leap-day (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-merlot (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-midnight (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-minimal (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-modernist (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-primer (0.5.3)
jekyll (~> 3.5)
jekyll-github-metadata (~> 2.9)
jekyll-seo-tag (~> 2.0)
jekyll-theme-slate (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-tactile (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-theme-time-machine (0.1.1)
jekyll (~> 3.5)
jekyll-seo-tag (~> 2.0)
jekyll-titles-from-headings (0.5.1)
jekyll (~> 3.3)
jekyll-watch (2.2.1)
listen (~> 3.0)
jemoji (0.10.2)
gemoji (~> 3.0)
html-pipeline (~> 2.2)
jekyll (~> 3.0)
kramdown (1.17.0)
latex-decode (0.3.1)
liquid (4.0.0)
listen (3.1.5)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
ruby_dep (~> 1.2)
lumberjack (1.0.13)
mercenary (0.3.6)
method_source (0.9.2)
mini_portile2 (2.4.0)
minima (2.5.0)
jekyll (~> 3.5)
jekyll-feed (~> 0.9)
jekyll-seo-tag (~> 2.1)
minitest (5.13.0)
multi_json (1.14.1)
multipart-post (2.1.1)
namae (1.0.1)
nenv (0.3.0)
nokogiri (1.10.4)
mini_portile2 (~> 2.4.0)
notiffany (0.1.3)
nenv (~> 0.1)
shellany (~> 0.0)
octokit (4.14.0)
sawyer (~> 0.8.0, >= 0.5.3)
pathutil (0.16.2)
forwardable-extended (~> 2.6)
pry (0.12.2)
coderay (~> 1.1.0)
method_source (~> 0.9.0)
public_suffix (3.1.1)
rb-fsevent (0.10.3)
rb-inotify (0.10.0)
ffi (~> 1.0)
rouge (3.11.0)
ruby-enum (0.7.2)
i18n
ruby_dep (1.5.0)
rubyzip (2.0.0)
safe_yaml (1.0.5)
sass (3.7.4)
sass-listen (~> 4.0.0)
sass-listen (4.0.0)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
sawyer (0.8.2)
addressable (>= 2.3.5)
faraday (> 0.8, < 2.0)
shellany (0.0.1)
terminal-table (1.8.0)
unicode-display_width (~> 1.1, >= 1.1.1)
thor (0.20.3)
thread_safe (0.3.6)
typhoeus (1.3.1)
ethon (>= 0.9.0)
tzinfo (1.2.5)
thread_safe (~> 0.1)
unicode-display_width (1.6.0)
PLATFORMS
ruby
DEPENDENCIES
github-pages
guard (~> 2.14.2)
guard-jekyll-plus (~> 2.0.2)
guard-livereload (~> 2.5.2)
jekyll-feed (~> 0.6)
jekyll-redirect-from
jekyll-scholar
tzinfo-data
BUNDLED WITH
1.17.2

docs/Guardfile Executable file

@@ -0,0 +1,8 @@
guard 'jekyll-plus', serve: true do
watch /.*/
ignore /^_site/
end
guard 'livereload' do
watch /.*/
end

docs/Makefile Executable file

@@ -0,0 +1,34 @@
.PHONY: help book clean serve
help:
@echo "Please use 'make <target>' where <target> is one of:"
@echo " install to install the necessary dependencies for jupyter-book to build"
@echo " book to convert the content/ folder into Jekyll markdown in _build/"
@echo " clean to clean out site build files"
@echo " runall to run all notebooks in-place, capturing outputs with the notebook"
@echo " serve to serve the repository locally with Jekyll"
@echo " build to build the site HTML and store in _site/"
@echo " site to build the site HTML, store in _site/, and serve with Jekyll"
install:
jupyter-book install ./
book:
jupyter-book build ./
runall:
jupyter-book run ./content
clean:
python scripts/clean.py
serve:
bundle exec guard
build:
jupyter-book build ./ --overwrite
site: build
bundle exec jekyll build
touch _site/.nojekyll


@@ -0,0 +1,56 @@
---
---
@inproceedings{holdgraf_evidence_2014,
address = {Brisbane, Australia},
title = {Evidence for {Predictive} {Coding} in {Human} {Auditory} {Cortex}},
booktitle = {International {Conference} on {Cognitive} {Neuroscience}},
publisher = {Frontiers in Neuroscience},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Knight, Robert T.},
year = {2014}
}
@article{holdgraf_rapid_2016,
title = {Rapid tuning shifts in human auditory cortex enhance speech intelligibility},
volume = {7},
issn = {2041-1723},
url = {http://www.nature.com/doifinder/10.1038/ncomms13654},
doi = {10.1038/ncomms13654},
number = {May},
journal = {Nature Communications},
author = {Holdgraf, Christopher Ramsay and de Heer, Wendy and Pasley, Brian N. and Rieger, Jochem W. and Crone, Nathan and Lin, Jack J. and Knight, Robert T. and Theunissen, Frédéric E.},
year = {2016},
pages = {13654},
file = {Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:C\:\\Users\\chold\\Zotero\\storage\\MDQP3JWE\\Holdgraf et al. - 2016 - Rapid tuning shifts in human auditory cortex enhance speech intelligibility.pdf:application/pdf}
}
@inproceedings{holdgraf_portable_2017,
title = {Portable learning environments for hands-on computational instruction using container-and cloud-based technology to teach data science},
volume = {Part F1287},
isbn = {978-1-4503-5272-7},
doi = {10.1145/3093338.3093370},
abstract = {© 2017 ACM. There is an increasing interest in learning outside of the traditional classroom setting. This is especially true for topics covering computational tools and data science, as both are challenging to incorporate in the standard curriculum. These atypical learning environments offer new opportunities for teaching, particularly when it comes to combining conceptual knowledge with hands-on experience/expertise with methods and skills. Advances in cloud computing and containerized environments provide an attractive opportunity to improve the efficiency and ease with which students can learn. This manuscript details recent advances towards using commonly-available cloud computing services and advanced cyberinfrastructure support for improving the learning experience in bootcamp-style events. We cover the benefits (and challenges) of using a server hosted remotely instead of relying on student laptops, discuss the technology that was used in order to make this possible, and give suggestions for how others could implement and improve upon this model for pedagogy and reproducibility.},
booktitle = {{ACM} {International} {Conference} {Proceeding} {Series}},
author = {Holdgraf, Christopher Ramsay and Culich, A. and Rokem, A. and Deniz, F. and Alegro, M. and Ushizima, D.},
year = {2017},
keywords = {Teaching, Bootcamps, Cloud computing, Data science, Docker, Pedagogy}
}
@article{holdgraf_encoding_2017,
title = {Encoding and decoding models in cognitive electrophysiology},
volume = {11},
issn = {16625137},
doi = {10.3389/fnsys.2017.00061},
abstract = {© 2017 Holdgraf, Rieger, Micheli, Martin, Knight and Theunissen. Cognitive neuroscience has seen rapid growth in the size and complexity of data recorded from the human brain as well as in the computational tools available to analyze this data. This data explosion has resulted in an increased use of multivariate, model-based methods for asking neuroscience questions, allowing scientists to investigate multiple hypotheses with a single dataset, to use complex, time-varying stimuli, and to study the human brain under more naturalistic conditions. These tools come in the form of “Encoding” models, in which stimulus features are used to model brain activity, and “Decoding” models, in which neural features are used to generate a stimulus output. Here we review the current state of encoding and decoding models in cognitive electrophysiology and provide a practical guide toward conducting experiments and analyses in this emerging field. Our examples focus on using linear models in the study of human language and audition. We show how to calculate auditory receptive fields from natural sounds as well as how to decode neural recordings to predict speech. The paper aims to be a useful tutorial to these approaches, and a practical introduction to using machine learning and applied statistics to build models of neural activity. The data analytic approaches we discuss may also be applied to other sensory modalities, motor systems, and cognitive systems, and we cover some examples in these areas. In addition, a collection of Jupyter notebooks is publicly available as a complement to the material covered in this paper, providing code examples and tutorials for predictive modeling in python. The aim is to provide a practical understanding of predictive modeling of human brain data and to propose best-practices in conducting these analyses.},
journal = {Frontiers in Systems Neuroscience},
author = {Holdgraf, Christopher Ramsay and Rieger, J.W. and Micheli, C. and Martin, S. and Knight, R.T. and Theunissen, F.E.},
year = {2017},
keywords = {Decoding models, Encoding models, Electrocorticography (ECoG), Electrophysiology/evoked potentials, Machine learning applied to neuroscience, Natural stimuli, Predictive modeling, Tutorials}
}
@book{ruby,
title = {The Ruby Programming Language},
author = {Flanagan, David and Matsumoto, Yukihiro},
year = {2008},
publisher = {O'Reilly Media}
}


@@ -0,0 +1,130 @@
---
title: |-
Elasticsearch
pagenum: 2
prev_page:
url: /installation.html
next_page:
url: /architecture/logstash.html
suffix: .md
search: elasticsearch helk set heap memory docker mbs config available gbs jvm example file not else usr share own using options edit following restart value cluster xmsg xmxg esjavaopts environment yml bash size ram perform functions keep aggregations run amount important variables therefore note max only lines always above add under license basic scripts uses order various list track such data perfect however ways logic below shown gb server getting settings sure min same also restarting temporarily service database coming back online here should wanted services option kibana analysis rebuild container elastic need entrypoint name xpack soft hard compose f build
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Elasticsearch</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p><img src="../../images/ELASTICSEARCH-Design.png"></p>
<h2 id="HELK's-Elasticsearch-Heap-Size">HELK's Elasticsearch Heap Size<a class="anchor-link" href="#HELK's-Elasticsearch-Heap-Size"> </a></h2><p>Elasticsearch uses heap, which can more specifically be referred to as memory/RAM, in order to perform various functions.<br>
Some of the things this heap/memory is used for are as follows (keep in mind this is not an exhaustive list):</p>
<ul>
<li>Keep track of indexes</li>
<li>When aggregations are run such as calculating sums, mathematical variations, sub aggregations of aggregations, etc..</li>
<li>When certain searches are run</li>
<li>Keep track of offsets of the tokens/terms of indexed values (aka events/logs/data)</li>
</ul>
<p>As you can see, the amount of heap available is important to a healthy setup. The HELK installation process tries to set the "perfect" amount of heap, but there are thousands of variables in all the different ways people use and install HELK.<br>
Therefore we are unable to account for them all, our logic will never be perfect, and it may not work best for you. However, you can set your own heap, and we describe the logic used if you choose to let HELK determine what to set it to.</p>
<p>Heap is set in one of four ways, as detailed below.</p>
<h3 id="1)-Allow-HELK-to-calculate-how-much-to-assign.">1) Allow HELK to calculate how much to assign.<a class="anchor-link" href="#1)-Allow-HELK-to-calculate-how-much-to-assign."> </a></h3><p>This is based on the available memory and variables shown in the code block below.<br>
It is very important to note that this is <code>available memory</code>, not the amount of memory the host has.<br>
An example shows why this is critical to understand: if you have a 100GB RAM server but the server is actively using 90GB of RAM, then you will NOT get the max 31GB heap/memory for elasticsearch. In this example you would actually end up with roughly 3GB of heap, because with only 10GB of available/free memory, locking up all of the remaining memory could cause drastic issues!</p>
<pre><code>if available memory &gt;= 1000 MBs and &lt;= 5999 MBs:
then set to 2000 MBs
else if available memory &gt;= 6000 MBs and &lt;= 8999 MBs:
then set to 3200 MBs
else if available memory &gt;= 9000 MBs and &lt;= 12999 MBs:
then set to 5000 MBs
else if available memory &gt;= 13000 MBs and &lt;= 16000 MBs:
then set to 7100 MBs
else:
if available memory &gt;= 31 GBs:
then set to 31 GBs
else:
set to available memory in GBs</code></pre>
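The rules above can be sketched as a small bash function. This mirrors the pseudocode, not necessarily HELK's exact installer code; input and output are in MB:

```shell
# Sketch of the heap-sizing rules above (input and output in MB).
helk_heap_mb() {
  local avail_mb=$1
  if   [ "$avail_mb" -ge 1000 ]  && [ "$avail_mb" -le 5999 ];  then echo 2000
  elif [ "$avail_mb" -ge 6000 ]  && [ "$avail_mb" -le 8999 ];  then echo 3200
  elif [ "$avail_mb" -ge 9000 ]  && [ "$avail_mb" -le 12999 ]; then echo 5000
  elif [ "$avail_mb" -ge 13000 ] && [ "$avail_mb" -le 16000 ]; then echo 7100
  elif [ "$avail_mb" -ge 31000 ]; then echo 31000   # capped at 31 GB
  else echo "$avail_mb"                             # fall back to what is free
  fi
}

helk_heap_mb 9500   # prints 5000
```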
<h3 id="2)-Set-your-own-heap">2) Set your own heap<a class="anchor-link" href="#2)-Set-your-own-heap"> </a></h3><p>In order to define your own heap settings, in the file <code>HELK/docker/helk-elasticsearch/config/jvm.options</code>
edit the following two lines that begin with</p>
<p><code>#-Xms</code><br>
<code>#-Xmx</code></p>
<p>Then make sure to restart elasticsearch.<br>
<strong>Always set the min and max JVM heap size to the same value<br>
Also, you will be restarting elasticsearch. Therefore your cluster will temporarily be down as the elasticsearch service/database is coming back online</strong></p>
<p>Here is an example of how to perform the above:</p>
<pre><code># Edit the jvm options file
sudo nano HELK/docker/helk-elasticsearch/config/jvm.options
# After editing, the two lines mentioned above should look something
# like the following if you wanted to set the heap to 16GB
-Xms16g
-Xmx16g
# Restart elasticsearch
docker restart helk-elasticsearch</code></pre>
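The same edit can be done non-interactively. The sketch below works on a scratch copy so it is safe to run anywhere; in practice you would point <code>JVM_OPTS</code> at <code>HELK/docker/helk-elasticsearch/config/jvm.options</code> and restart elasticsearch afterwards as shown above:

```shell
# Demo on a scratch file; point JVM_OPTS at the real jvm.options in practice.
JVM_OPTS=$(mktemp)
printf '%s\n' '## JVM configuration' '-Xms4g' '-Xmx4g' > "$JVM_OPTS"

# Set min and max heap to the same value (16g here), as recommended above.
sed -i -e 's/^-Xms.*/-Xms16g/' -e 's/^-Xmx.*/-Xmx16g/' "$JVM_OPTS"
grep -E '^-Xm[sx]' "$JVM_OPTS"
```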
<h3 id="3)-Add-ES_JAVA_OPTS-to-the-docker-config-file">3) Add <code>ES_JAVA_OPTS</code> to the docker config file<a class="anchor-link" href="#3)-Add-ES_JAVA_OPTS-to-the-docker-config-file"> </a></h3><p>Which docker config file to use is shown later.<br>
You will add this value under <code>services.helk-elasticsearch.environment</code>.
For example, if you used the option for ELK + Kafka with no license and no alerting, and you wanted to set the heap to 16GB,<br>
you would edit <code>HELK/docker/helk-kibana-analysis-basic.yml</code> and add the following line under the environment section:<br>
<code>- "ES_JAVA_OPTS=-Xms16g -Xmx16g"</code></p>
<p>Then make sure to rebuild the elasticsearch docker container.<br>
<strong>Always set the min and max JVM heap size to the same value<br>
Also, you will be restarting elasticsearch. Therefore your cluster will temporarily be down as the elasticsearch service/database is coming back online</strong><br>
<strong>Note if you are using (elastic) license you will need to set your ELASTIC_PASSWORD and KIBANA_UI_PASSWORD variables (and logstash password if applicable)</strong></p>
<p>Here is how to perform the above:</p>
<pre><code># Example config (only the beginning lines are shown). Note: these settings may not match your config exactly; the important thing is to place the value under the environment section
version: '3.5'
services:
helk-elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.1
container_name: helk-elasticsearch
secrets:
- source: elasticsearch.yml
target: /usr/share/elasticsearch/config/elasticsearch.yml
volumes:
- esdata:/usr/share/elasticsearch/data
- ./helk-elasticsearch/scripts:/usr/share/elasticsearch/scripts
- ./helk-elasticsearch/config/jvm.options:/usr/share/elasticsearch/config/jvm.options
entrypoint: /usr/share/elasticsearch/scripts/elasticsearch-entrypoint.sh
environment:
- cluster.name=helk-cluster
- node.name=helk-1
- xpack.license.self_generated.type=basic
- xpack.security.enabled=false
- "ES_JAVA_OPTS= -Xms16g -Xmx16g"
ulimits:
memlock:
soft: -1
hard: -1
nproc: 20480
nofile:
soft: 160000
hard: 160000
restart: always
networks:
helk:
# Rebuild the elasticsearch docker container
`docker-compose -f HELK/docker/helk-kibana-analysis-basic.yml up --build -d`</code></pre>
<h3 id="4)-Set-at-run-time-using-custom-bash-variable">4) Set at run time using a custom bash variable<a class="anchor-link" href="#4)-Set-at-run-time-using-custom-bash-variable"> </a></h3><p>Set a bash variable such as:</p>
<div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">ES_JAVA_OPTS</span><span class="o">=</span><span class="s2">&quot;-Xms16g -Xmx16g&quot;</span>
</pre></div>
<p>Then run the following using your own docker config file.</p>
<div class="highlight"><pre><span></span>docker-compose -f <span class="nv">$PlaceDockerConfigFileNameHere</span> up --build -d
</pre></div>
<p><strong>Only use this option if you explicitly need to. Please know what you're getting into ;)</strong></p>
</div>
</div>
</div>
</div>
</main>

docs/_build/architecture/kibana.html vendored Normal file

@@ -0,0 +1,56 @@
---
title: |-
Kibana
pagenum: 4
prev_page:
url: /architecture/logstash.html
next_page:
url: /how-to/docker/docker.html
suffix: .md
search: logs kibana img src images png endpoint winevent overview helk monitoring sysmon elasticsearch logstash docker security right additionally currently dashboards globaldashboard networkdashboard sysmondashboard tail usr share config kibanalogs log design visualize discover sure being sent least windows events helks ip preferred browser dont away update picker top include farther back window just started sending wait minute check again creates automatically index patterns sets default application system powershell wmiactivity discovery comes views x pack basic free license initial nodes troubleshooting apart running ps follow located example exec f times not working because still starting ran into error
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Kibana</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p><img src="../../images/KIBANA-Design.png"></p>
<h2 id="Visualize-your-logs">Visualize your logs<a class="anchor-link" href="#Visualize-your-logs"> </a></h2><h3 id="Discover">Discover<a class="anchor-link" href="#Discover"> </a></h3><p>Make sure you have logs being sent to your HELK first (at least Windows Security and Sysmon events). Then, go to <code>https://&lt;HELK's IP&gt;</code> in your preferred browser. If you don't see logs right away, update your time picker (in the top right) to cover a window farther back. Additionally, if you just started sending logs, wait a minute and check again.</p>
<p>Currently, HELK automatically creates 7 index patterns for you and sets <strong>logs-endpoint-winevent-sysmon-*</strong> as your default one:</p>
<ul>
<li>"logs-*"</li>
<li>"logs-endpoint-winevent-sysmon-*"</li>
<li>"logs-endpoint-winevent-security-*"</li>
<li>"logs-endpoint-winevent-application-*"</li>
<li>"logs-endpoint-winevent-system-*"</li>
<li>"logs-endpoint-winevent-powershell-*"</li>
<li>"logs-endpoint-winevent-wmiactivity-*"</li>
</ul>
<p><img src="../../images/KIBANA-Discovery.png"></p>
<h2 id="Dashboards">Dashboards<a class="anchor-link" href="#Dashboards"> </a></h2><p>Currently, the HELK comes with 3 dashboards:</p>
<h3 id="Global_Dashboard">Global_Dashboard<a class="anchor-link" href="#Global_Dashboard"> </a></h3><p><img src="../../images/KIBANA-GlobalDashboard.png"></p>
<h3 id="Network_Dashboard">Network_Dashboard<a class="anchor-link" href="#Network_Dashboard"> </a></h3><p><img src="../../images/KIBANA-NetworkDashboard.png"></p>
<h3 id="Sysmon_Dashboard">Sysmon_Dashboard<a class="anchor-link" href="#Sysmon_Dashboard"> </a></h3><p><img src="../../images/KIBANA-SysmonDashboard.png"></p>
<h2 id="Monitoring-Views-(x-Pack-Basic-Free-License)">Monitoring Views (x-Pack Basic Free License)<a class="anchor-link" href="#Monitoring-Views-(x-Pack-Basic-Free-License)"> </a></h2><h3 id="Kibana-Initial-Overview">Kibana Initial Overview<a class="anchor-link" href="#Kibana-Initial-Overview"> </a></h3><p><img src="../../images/MONITORING-Kibana-Overview.png"></p>
<h3 id="Elasticsearch-Overview">Elasticsearch Overview<a class="anchor-link" href="#Elasticsearch-Overview"> </a></h3><p><img src="../../images/MONITORING-Elasticsearch-Overview.png"></p>
<h3 id="Logstash-Overview">Logstash Overview<a class="anchor-link" href="#Logstash-Overview"> </a></h3><p><img src="../../images/MONITORING-Logstash-Overview.png"></p>
<p><img src="../../images/MONITORING-Logstash-Nodes-Overview.png"></p>
<h2 id="Troubleshooting">Troubleshooting<a class="anchor-link" href="#Troubleshooting"> </a></h2><p>Apart from running <code>docker ps</code> and <code>docker logs --follow --tail 25 helk-kibana</code>, you can also look at the logs located at <code>/usr/share/kibana/config/kibana_logs.log</code>.</p>
<p>Example: <code>docker exec helk-kibana tail -f /usr/share/kibana/config/kibana_logs.log</code></p>
<p>Often, Kibana will not be "working" simply because Elasticsearch is still starting up or has run into an error.</p>
</div>
</div>
</div>
</div>
</main>

docs/_build/architecture/logstash.html vendored Normal file

@ -0,0 +1,33 @@
---
title: |-
Logstash
pagenum: 3
prev_page:
url: /architecture/elasticsearch.html
next_page:
url: /architecture/kibana.html
suffix: .md
search: logstash img src images design png
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Logstash</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p><img src="../../images/LOGSTASH-Design.png"></p>
</div>
</div>
</div>
</div>
</main>


@ -0,0 +1,109 @@
---
title: |-
Export Docker Images locally
pagenum: 6
prev_page:
url: /how-to/docker/docker.html
next_page:
url: /how-to/docker/docker-load-images.html
suffix: .md
search: helk docker tar ago hours root cybrwardg tcp sudo spark feb ksql save o home rw mb logstash kibana elastic co elasticsearch images jupyter elastalert kafka zookeeper worker master days confluentinc cp server cli broker nginx system months isolated bash export internet files image command usr share locally list non via id created ps entr where planning install run another access built downloaded load those available repository tag size efaeccd gb f bdcebaf efbbee ba cffcbeee fafc weeks befce abbdae bbdb fcde db containers running container status ports names decdcf bin sh ecc etc confluent dock dcc entrypoint edd cadba
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Export Docker Images locally</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>If the system where you are planning to install HELK is isolated from the Internet, you can run HELK on another system that has Internet access, export the built/downloaded images to .tar files, and then load those image files on the isolated system.</p>
<ul>
<li>List all the images available in the non-isolated system via the docker images command</li>
</ul>
<div class="highlight"><pre><span></span>sudo docker images
</pre></div>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cyb3rward0g/helk-jupyter 0.1.1 efa46ecc8d32 2 days ago 2.18GB
confluentinc/cp-ksql-server 5.1.2 f57298019757 6 days ago 514MB
confluentinc/cp-ksql-cli 5.1.2 bd411ce0ba9f 6 days ago 510MB
docker.elastic.co/logstash/logstash 6.6.1 3e7fbb7964ee 11 days ago 786MB
docker.elastic.co/kibana/kibana 6.6.1 b94222148a00 11 days ago 710MB
docker.elastic.co/elasticsearch/elasticsearch 6.6.1 c6ffcb0ee97e 11 days ago 842MB
cyb3rward0g/helk-elastalert 0.2.1 569f588a22fc 3 weeks ago 758MB
cyb3rward0g/helk-kafka-broker 2.1.0 7b3e7f9ce732 2 months ago 388MB
cyb3rward0g/helk-zookeeper 2.1.0 abb732da3e50 2 months ago 388MB
cyb3rward0g/helk-spark-worker 2.4.0 b1545b0582db 2 months ago 579MB
cyb3rward0g/helk-spark-master 2.4.0 70fc61de3445 2 months ago 579MB
cyb3rward0g/helk-nginx 0.0.7 280d044b6719 6 months ago 329MB</code></pre>
<ul>
<li>List all the containers running in the non-isolated system via the docker ps command</li>
</ul>
<div class="highlight"><pre><span></span>sudo docker ps
</pre></div>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de048c88dc7f confluentinc/cp-ksql-cli:5.1.2 "/bin/sh" 6 hours ago Up 6 hours helk-ksql-cli
69e06070c14c confluentinc/cp-ksql-server:5.1.2 "/etc/confluent/dock…" 6 hours ago Up 6 hours 0.0.0.0:8088-&gt;8088/tcp helk-ksql-server
d57967977c9c cyb3rward0g/helk-kafka-broker:2.1.0 "./kafka-entrypoint.…" 6 hours ago Up 6 hours 0.0.0.0:9092-&gt;9092/tcp helk-kafka-broker
4889e917d76d cyb3rward0g/helk-spark-worker:2.4.0 "./spark-worker-entr…" 6 hours ago Up 6 hours helk-spark-worker
c0a29d8b18a7 cyb3rward0g/helk-nginx:0.0.7 "/opt/helk/scripts/n…" 6 hours ago Up 6 hours 0.0.0.0:80-&gt;80/tcp, 0.0.0.0:443-&gt;443/tcp helk-nginx
6a887d693a31 cyb3rward0g/helk-elastalert:0.2.1 "./elastalert-entryp…" 6 hours ago Up 6 hours helk-elastalert
a32be7a399c7 cyb3rward0g/helk-zookeeper:2.1.0 "./zookeeper-entrypo…" 6 hours ago Up 6 hours 2181/tcp, 2888/tcp, 3888/tcp helk-zookeeper
c636a8a1e8f7 cyb3rward0g/helk-spark-master:2.4.0 "./spark-master-entr…" 6 hours ago Up 6 hours 7077/tcp, 0.0.0.0:8080-&gt;8080/tcp helk-spark-master
ef1b8d8015ab cyb3rward0g/helk-jupyter:0.1.1 "./jupyter-entrypoin…" 6 hours ago Up 6 hours 8000/tcp helk-jupyter
bafeeb1587cf docker.elastic.co/logstash/logstash:6.6.1 "/usr/share/logstash…" 6 hours ago Up 6 hours 0.0.0.0:5044-&gt;5044/tcp, 0.0.0.0:8531-&gt;8531/tcp, 9600/tcp helk-logstash
29b57e5c71e5 docker.elastic.co/kibana/kibana:6.6.1 "/usr/share/kibana/s…" 6 hours ago Up 6 hours 5601/tcp helk-kibana
48499aa83917 docker.elastic.co/elasticsearch/elasticsearch:6.6.1 "/usr/share/elastics…" 6 hours ago Up 6 hours 9200/tcp, 9300/tcp helk-elasticsearch</code></pre>
<ul>
<li>Export images as tar files:</li>
</ul>
<div class="highlight"><pre><span></span>sudo docker save -o /home/helk/helk-ksql-cli.tar confluentinc/cp-ksql-cli:5.1.2
sudo docker save -o /home/helk/helk-ksql-server.tar confluentinc/cp-ksql-server:5.1.2
sudo docker save -o /home/helk/helk-kafka-broker.tar cyb3rward0g/helk-kafka-broker:2.1.0
sudo docker save -o /home/helk/helk-spark-worker.tar cyb3rward0g/helk-spark-worker:2.4.0
sudo docker save -o /home/helk/helk-nginx.tar cyb3rward0g/helk-nginx:0.0.7
sudo docker save -o /home/helk/helk-elastalert.tar cyb3rward0g/helk-elastalert:0.2.1
sudo docker save -o /home/helk/helk-zookeeper.tar cyb3rward0g/helk-zookeeper:2.1.0
sudo docker save -o /home/helk/helk-spark-master.tar cyb3rward0g/helk-spark-master:2.4.0
sudo docker save -o /home/helk/helk-logstash.tar docker.elastic.co/logstash/logstash:6.6.1
sudo docker save -o /home/helk/helk-kibana.tar docker.elastic.co/kibana/kibana:6.6.1
sudo docker save -o /home/helk/helk-elasticsearch.tar docker.elastic.co/elasticsearch/elasticsearch:6.6.1
sudo docker save -o /home/helk/helk-jupyter.tar cyb3rward0g/helk-jupyter:0.1.1
</pre></div>
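<p>Typing out one <code>docker save</code> line per image is error-prone; the loop below derives the same commands from the image list itself. This is a sketch, not part of HELK: the <code>/home/helk/</code> destination matches the example above, and the commands are echoed first so you can review them before removing the <code>echo</code> to execute.</p>

```shell
# Sketch: generate one `docker save` command per local image.
# Echoes the commands for review; drop `echo` to actually run them.
for image in $(docker images --format '{{.Repository}}:{{.Tag}}' 2>/dev/null); do
  # Flatten e.g. cyb3rward0g/helk-nginx:0.0.7 -> helk-nginx-0.0.7.tar
  file="$(basename "$image" | tr ':' '-').tar"
  echo sudo docker save -o "/home/helk/$file" "$image"
done
```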
<ul>
<li>Check that the images exist locally:</li>
</ul>
<div class="highlight"><pre><span></span>ls -l
</pre></div>
<pre><code>total 10810584
drwxrwxr-x 9 helk helk 4096 Feb 24 21:01 HELK
-rw------- 1 root root 778629632 Feb 25 03:07 helk-elastalert.tar
-rw------- 1 root root 854236160 Feb 25 03:12 helk-elasticsearch.tar
-rw------- 1 root root 2254629888 Feb 25 03:14 helk-jupyter.tar
-rw------- 1 root root 395871744 Feb 25 03:04 helk-kafka-broker.tar
-rw------- 1 root root 767277568 Feb 25 03:11 helk-kibana.tar
-rw------- 1 root root 521177600 Feb 25 03:00 helk-ksql-cli.tar
-rw------- 1 root root 525901824 Feb 25 03:02 helk-ksql-server.tar
-rw------- 1 root root 810578944 Feb 25 03:09 helk-logstash.tar
-rw------- 1 root root 335945728 Feb 25 03:06 helk-nginx.tar
-rw------- 1 root root 587616768 Feb 25 03:08 helk-spark-master.tar
-rw------- 1 root root 587616768 Feb 25 03:05 helk-spark-worker.tar
-rw------- 1 root root 395854848 Feb 25 03:08 helk-zookeeper.tar
helk@ubuntu:~$</code></pre>
</div>
</div>
</div>
</div>
</main>


@ -0,0 +1,113 @@
---
title: |-
Load Local Docker Images
pagenum: 7
prev_page:
url: /how-to/docker/docker-export-images.html
next_page:
url: /how-to/logstash/logstash.html
suffix: .md
search: mb kb helk loading layer tar ago docker images cybrwardg load ksql elasticsearch days spark s months bash logstash loaded image elastic co isolated system f tmp worker server cli elastalert jupyter kibana gb done check nginx kafka broker master zookeeper sudo ba confluentinc cp local followed document export locally should ready into where cannot access dockerhub registry copy home scp exist ls commands input fddce fbe ccddaa cdfde ab fffef cffbcf adfdc cce fdecbc bbdae affdeb ccbfe dbd adcf aaff fdbc beced cbe afd eec fbd faacff cbcbe cdaf ecdbbfdc edccf efbcd via command repository tag id created size
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Load Local Docker Images</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>If you followed <a href="/docker-load-images">this document</a> to export your docker images locally, you should be ready to load them into an isolated system that cannot access the Docker Hub registry.</p>
<ul>
<li>Copy images to the isolated (10.0.10.102) system</li>
</ul>
<div class="highlight"><pre><span></span><span class="k">for</span> f in /home/helk/*.tar<span class="p">;</span> <span class="k">do</span> scp <span class="nv">$f</span> helk@10.0.10.102:/tmp/<span class="p">;</span> <span class="k">done</span>
</pre></div>
<pre><code>helk-spark-worker.tar 100% 560MB 24.4MB/s 00:23
helk-ksql-server.tar 100% 502MB 29.5MB/s 00:17
helk-logstash.tar 100% 773MB 28.6MB/s 00:27
helk-ksql-cli.tar 100% 497MB 21.6MB/s 00:23
helk-elasticsearch.tar 100% 815MB 29.1MB/s 00:28</code></pre>
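<p>Large transfers over scp can occasionally corrupt an archive. A quick integrity check (a sketch, not part of the HELK docs) is to hash the files on both ends and compare:</p>

```shell
# Sketch: print a checksum per transferred archive; run this against
# the source directory (/home/helk) and the destination (/tmp) and
# compare the resulting hashes.
for f in /tmp/*.tar; do
  [ -e "$f" ] && sha256sum "$f"
done || true
```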
<ul>
<li>Check if images exist in the isolated system</li>
</ul>
<div class="highlight"><pre><span></span>ls /tmp/
</pre></div>
<pre><code>helk-elastalert.tar helk-jupyter.tar
helk-kibana.tar helk-ksql-server.tar helk-nginx.tar
helk-spark-worker.tar helk-elasticsearch.tar
helk-kafka-broker.tar helk-ksql-cli.tar helk-logstash.tar
helk-spark-master.tar helk-zookeeper.tar</code></pre>
<ul>
<li>Load the images with the docker load command:</li>
</ul>
<div class="highlight"><pre><span></span><span class="k">for</span> i in /tmp/*.tar<span class="p">;</span> <span class="k">do</span> sudo docker load --input <span class="nv">$i</span><span class="p">;</span> <span class="k">done</span>
</pre></div>
<pre><code>f49017d4d5ce: Loading layer [==================================================&gt;] 85.96MB/85.96MB
8f2b771487e9: Loading layer [==================================================&gt;] 15.87kB/15.87kB
ccd4d61916aa: Loading layer [==================================================&gt;] 10.24kB/10.24kB
c01d74f99de4: Loading layer [==================================================&gt;] 5.632kB/5.632kB
268a067217b5: Loading layer [==================================================&gt;] 3.072kB/3.072kB
831fff32e4f2: Loading layer [==================================================&gt;] 65.02kB/65.02kB
c89f4fbc01f8: Loading layer [==================================================&gt;] 103.4MB/103.4MB
adfd094c5517: Loading layer [==================================================&gt;] 3.245MB/3.245MB
c73538215c3e: Loading layer [==================================================&gt;] 567.6MB/567.6MB
080f01d1ecbc: Loading layer [==================================================&gt;] 13.31kB/13.31kB
60bbd38a907e: Loading layer [==================================================&gt;] 3.584kB/3.584kB
9affd17eb100: Loading layer [==================================================&gt;] 5.632kB/5.632kB
0561c04cbf7e: Loading layer [==================================================&gt;] 7.168kB/7.168kB
ba0201512417: Loading layer [==================================================&gt;] 18.29MB/18.29MB
Loaded image: cyb3rward0g/helk-elastalert:0.2.1
071d8bd76517: Loading layer [==================================================&gt;] 210.2MB/210.2MB
a175339dcf83: Loading layer [==================================================&gt;] 310.5MB/310.5MB
9a70a6f483f7: Loading layer [==================================================&gt;] 95.68MB/95.68MB
f4db77828c81: Loading layer [==================================================&gt;] 311.3kB/311.3kB
be48c67e9d13: Loading layer [==================================================&gt;] 237.5MB/237.5MB
432cb712190e: Loading layer [==================================================&gt;] 7.68kB/7.68kB
a512981fd597: Loading layer [==================================================&gt;] 9.728kB/9.728kB
Loaded image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
49778752e7ec: Loading layer [==================================================&gt;] 394.9MB/394.9MB
5f3913b1d541: Loading layer [==================================================&gt;] 1.667GB/1.667GB
77fa3a9c5ff6: Loading layer [==================================================&gt;] 7.168kB/7.168kB
cbc15b984e03: Loading layer [==================================================&gt;] 10.24kB/10.24kB
38c44d7a52f6: Loading layer [==================================================&gt;] 5.12kB/5.12kB
0ec2dbbfd6c7: Loading layer [==================================================&gt;] 3.584kB/3.584kB
Loaded image: cyb3rward0g/helk-jupyter:0.1.1
4e31d8c1cf96: Loading layer [==================================================&gt;] 203.1MB/203.1MB
efb23c49455d: Loading layer [==================================================&gt;] 11.26kB/11.26kB</code></pre>
<ul>
<li>Check that the images were loaded via the docker images command:</li>
</ul>
<div class="highlight"><pre><span></span>sudo docker images
</pre></div>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
cyb3rward0g/helk-jupyter 0.1.1 efa46ecc8d32 2 days ago 2.18GB
confluentinc/cp-ksql-server 5.1.2 f57298019757 6 days ago 514MB
confluentinc/cp-ksql-cli 5.1.2 bd411ce0ba9f 6 days ago 510MB
docker.elastic.co/logstash/logstash 6.6.1 3e7fbb7964ee 11 days ago 786MB
docker.elastic.co/kibana/kibana 6.6.1 b94222148a00 11 days ago 710MB
docker.elastic.co/elasticsearch/elasticsearch 6.6.1 c6ffcb0ee97e 11 days ago 842MB
cyb3rward0g/helk-elastalert 0.2.1 569f588a22fc 3 weeks ago 758MB
cyb3rward0g/helk-kafka-broker 2.1.0 7b3e7f9ce732 2 months ago 388MB
cyb3rward0g/helk-zookeeper 2.1.0 abb732da3e50 2 months ago 388MB
cyb3rward0g/helk-spark-worker 2.4.0 b1545b0582db 2 months ago 579MB
cyb3rward0g/helk-spark-master 2.4.0 70fc61de3445 2 months ago 579MB
cyb3rward0g/helk-nginx 0.0.7 280d044b6719 6 months ago 329MB
helk@helk:~$</code></pre>
</div>
</div>
</div>
</div>
</main>

docs/_build/how-to/docker/docker.html vendored Normal file

@ -0,0 +1,22 @@
---
title: |-
Docker
pagenum: 5
prev_page:
url: /architecture/kibana.html
next_page:
url: /how-to/docker/docker-export-images.html
suffix: .md
search: docker
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Docker</div>
</div>
</main>


@ -0,0 +1,49 @@
---
title: |-
Check Kafka Topic Ingestion
pagenum: 2
prev_page:
url: /installation.html
next_page:
url:
suffix: .md
search: kafka broker helk kafkacat bash consumer topic container console sh winlogbeat following sudo docker exec ti run script opt bin bootstrap server beginning apache install github com edenhill check ingestion few ways accomplish helks access running command available simply without interactive shell generic non jvm producer think netcat instructions repo b t c references example org quickstart quickstartconsume
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Check Kafka Topic Ingestion</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>There are a few ways you can accomplish this.</p>
<h2 id="HELK's-Kafka-broker-container">HELK's Kafka broker container<a class="anchor-link" href="#HELK's-Kafka-broker-container"> </a></h2><p>Access your Kafka broker container by running the following command:</p>
<div class="highlight"><pre><span></span>sudo docker <span class="nb">exec</span> -ti helk-kafka-broker bash
</pre></div>
<p>Run the kafka-console-consumer.sh script available in the container:</p>
<div class="highlight"><pre><span></span>/opt/helk/kafka/bin/kafka-console-consumer.sh --bootstrap-server helk-kafka-broker:9092 --topic winlogbeat --from-beginning
</pre></div>
<p>Or simply run the script without an interactive shell:</p>
<div class="highlight"><pre><span></span>sudo docker <span class="nb">exec</span> -ti helk-kafka-broker /opt/helk/kafka/bin/kafka-console-consumer.sh --bootstrap-server helk-kafka-broker:9092 --topic winlogbeat --from-beginning
</pre></div>
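<p>When you only need to confirm that events are flowing, streaming the whole topic is unnecessary; the standard <code>--max-messages</code> flag of kafka-console-consumer.sh exits after a fixed number of records. A sketch; the <code>|| true</code> simply keeps a stopped container or missing topic from aborting a calling script:</p>

```shell
# Sketch: read only the first 5 events from the topic, then exit.
sudo docker exec -i helk-kafka-broker \
  /opt/helk/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server helk-kafka-broker:9092 \
  --topic winlogbeat --from-beginning --max-messages 5 || true
```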
<h2 id="Kafkacat">Kafkacat<a class="anchor-link" href="#Kafkacat"> </a></h2><p>Kafkacat is a generic non-JVM producer and consumer for Apache Kafka &gt;=0.8; think of it as a netcat for Kafka. You can install it by following the <a href="https://github.com/edenhill/kafkacat#install">instructions</a> in the Kafkacat repo.</p>
<div class="highlight"><pre><span></span>kafkacat -b <span class="m">10</span>.0.10.100:9092 -t winlogbeat -C
</pre></div>
<h2 id="References">References<a class="anchor-link" href="#References"> </a></h2><ul>
<li><a href="https://kafka.apache.org/quickstart#quickstart_consume">Kafka Consumer Example</a></li>
<li><a href="https://github.com/edenhill/kafkacat">Kafkacat</a></li>
</ul>
</div>
</div>
</div>
</div>
</main>


@ -0,0 +1,49 @@
---
title: |-
Check Kafka Topic Ingestion
pagenum: 11
prev_page:
url: /how-to/kafka/kafka.html
next_page:
url: /how-to/kafka/kafka-update-ip.html
suffix: .md
search: kafka broker helk kafkacat bash consumer topic container console sh winlogbeat following sudo docker exec ti run script opt bin bootstrap server beginning apache install github com edenhill check ingestion few ways accomplish helks access running command available simply without interactive shell generic non jvm producer think netcat instructions repo b t c references example org quickstart quickstartconsume
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Check Kafka Topic Ingestion</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>There are a few ways you can accomplish this.</p>
<h2 id="HELK's-Kafka-broker-container">HELK's Kafka broker container<a class="anchor-link" href="#HELK's-Kafka-broker-container"> </a></h2><p>Access your Kafka broker container by running the following command:</p>
<div class="highlight"><pre><span></span>sudo docker <span class="nb">exec</span> -ti helk-kafka-broker bash
</pre></div>
<p>Run the kafka-console-consumer.sh script available in the container:</p>
<div class="highlight"><pre><span></span>/opt/helk/kafka/bin/kafka-console-consumer.sh --bootstrap-server helk-kafka-broker:9092 --topic winlogbeat --from-beginning
</pre></div>
<p>Or simply run the script without an interactive shell:</p>
<div class="highlight"><pre><span></span>sudo docker <span class="nb">exec</span> -ti helk-kafka-broker /opt/helk/kafka/bin/kafka-console-consumer.sh --bootstrap-server helk-kafka-broker:9092 --topic winlogbeat --from-beginning
</pre></div>
<h2 id="Kafkacat">Kafkacat<a class="anchor-link" href="#Kafkacat"> </a></h2><p>Kafkacat is a generic non-JVM producer and consumer for Apache Kafka &gt;=0.8; think of it as a netcat for Kafka. You can install it by following the <a href="https://github.com/edenhill/kafkacat#install">instructions</a> in the Kafkacat repo.</p>
<div class="highlight"><pre><span></span>kafkacat -b <span class="m">10</span>.0.10.100:9092 -t winlogbeat -C
</pre></div>
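<p>If the topic already holds millions of events, consuming from the beginning is slow. Kafkacat's <code>-o</code> flag (offset; negative values count back from the end) combined with <code>-e</code> (exit once the end of the partition is reached) lets you peek at just the tail. The command is built and echoed here as a dry run with a placeholder broker address; drop the <code>echo</code> to execute it:</p>

```shell
# Sketch: consume only the 10 most recent messages, then exit.
BROKER="10.0.10.100:9092"   # placeholder - use your own broker address
TOPIC="winlogbeat"
echo kafkacat -b "$BROKER" -t "$TOPIC" -C -o -10 -e
```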
<h2 id="References">References<a class="anchor-link" href="#References"> </a></h2><ul>
<li><a href="https://kafka.apache.org/quickstart#quickstart_consume">Kafka Consumer Example</a></li>
<li><a href="https://github.com/edenhill/kafkacat">Kafkacat</a></li>
</ul>
</div>
</div>
</div>
</div>
</main>


@ -0,0 +1,57 @@
---
title: |-
Update Kafka Broker IP
pagenum: 12
prev_page:
url: /how-to/kafka/kafka-topic-ingestion.html
next_page:
url: /how-to/ksql/ksql.html
suffix: .md
search: not broker kafka warn controller id targetbrokerid connection node established available org apache clients networkclient docker update environment variable advertisedlistener helk re system containers bash just compose create ip deployment hosting entire itself distributed across systems export simply run same used build new value assigned sudo e f kibana notebook analysis basic yml d restart container creating still show messages ones below
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Update Kafka Broker IP</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>For the Docker deployment, you first have to update the environment variable ADVERTISED_LISTENER. You can do this on the system hosting the entire HELK, or on the Kafka broker itself if you distributed your Docker containers across other systems.</p>
<div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">ADVERTISED_LISTENER</span><span class="o">=</span><span class="m">10</span>.0.10.104
</pre></div>
<p>Then, run docker-compose the same way it was used to build the HELK. This will re-create the containers with the new value assigned to the environment variable <code>ADVERTISED_LISTENER</code>.</p>
<div class="highlight"><pre><span></span>sudo -E docker-compose -f helk-kibana-notebook-analysis-basic.yml up -d
</pre></div>
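<p>Instead of exporting the variable in your shell each time, you can also pin it in the compose file itself. The fragment below is hypothetical (the service and key names may differ in your HELK version); check the broker service definition in your helk-kibana-notebook-analysis-basic.yml before applying it:</p>

```yaml
# Hypothetical fragment - match it against your own compose file
services:
  helk-kafka-broker:
    environment:
      - ADVERTISED_LISTENER=10.0.10.104
```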
<p>Simply restarting your containers will not update the environment variable in the Kafka broker; you have to re-create the container. Without re-creating the broker, you would still see messages like the ones below:</p>
<pre><code>[2019-01-25 05:35:21,026] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:24,194] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:27,362] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:30,530] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:33,698] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:36,866] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:40,034] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:43,238] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:46,306] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:49,382] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:52,450] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:55,522] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:35:58,594] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:36:01,714] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:36:04,770] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:36:08,450] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-01-25 05:36:11,650] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (/10.0.10.104:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)</code></pre>
</div>
</div>
</div>
</div>
</main>

docs/_build/how-to/kafka/kafka.html vendored Normal file

@ -0,0 +1,22 @@
---
title: |-
Kafka
pagenum: 10
prev_page:
url: /how-to/logstash/logstash-create-plugins-offline.html
next_page:
url: /how-to/kafka/kafka-topic-ingestion.html
suffix: .md
search: kafka
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Kafka</div>
</div>
</main>


@ -0,0 +1,184 @@
---
title: |-
Deploy KSQL Locally
pagenum: 14
prev_page:
url: /how-to/ksql/ksql.html
next_page:
url: /how-to/winlogbeat/winlogbeat.html
suffix: .md
search: kafka ksql confluent server stop control center run start tar x class gz bash src consumer options help cli connect console false avro bin schema registry reset producer zookeeper rest query value v download mqtt topics configs file output limit helk self managed software platform format io lib cd ls acls api streams service verifiable security broker replica perf test metrics partitions log robertos mbp wardog config configfile h outputformat row streamedqueryrowlimit timeout streamedquerytimeoutms list available tabular optional maximum streamed queries must fall following range option command line arguments address streaming sql engine apache copyright inc located having trouble type
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Deploy KSQL Locally</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>You can use the KSQL CLI to connect to the HELK's KSQL server from a different system. You will have to download the self-managed Confluent Platform and then run KSQL.</p>
<ul>
<li>Download the self-managed software Confluent platform in a .tar.gz format from: <a href="https://www.confluent.io/download/#popup_form_3109">https://www.confluent.io/download/#popup_form_3109</a></li>
<li>Decompress the folder:</li>
</ul>
<div class="highlight"><pre><span></span>tar -xvzf confluent-5.1.2-2.11.tar.gz
</pre></div>
<pre><code>x confluent-5.1.2/
x confluent-5.1.2/src/
x confluent-5.1.2/src/avro-cpp-1.8.0-confluent5.1.2.tar.gz
x confluent-5.1.2/src/librdkafka-0.11.6-confluent5.1.2.tar.gz
x confluent-5.1.2/src/confluent-libserdes-5.1.2.tar.gz
x confluent-5.1.2/src/avro-c-1.8.0-confluent5.1.2.tar.gz
x confluent-5.1.2/lib/</code></pre>
<ul>
<li>Access the KSQL scripts:</li>
</ul>
<div class="highlight"><pre><span></span><span class="nb">cd</span> confluent-5.1.2
ls
</pre></div>
<pre><code>README bin etc lib logs share src</code></pre>
<div class="highlight"><pre><span></span><span class="nb">cd</span> bin/
ls
</pre></div>
<pre><code>confluent kafka-acls kafka-mirror-maker kafka-server-stop schema-registry-start
confluent-hub kafka-api-start kafka-mqtt-run-class kafka-streams-application-reset schema-registry-stop
confluent-rebalancer kafka-avro-console-consumer kafka-mqtt-start kafka-topics schema-registry-stop-service
connect-distributed kafka-avro-console-producer kafka-mqtt-stop kafka-verifiable-consumer security-plugins-run-class
connect-standalone kafka-broker-api-versions kafka-preferred-replica-election kafka-verifiable-producer sr-acl-cli
control-center-3_0_0-reset kafka-configs kafka-producer-perf-test ksql support-metrics-bundle
control-center-3_0_1-reset kafka-console-consumer kafka-reassign-partitions ksql-datagen windows
control-center-console-consumer kafka-console-producer kafka-replica-verification ksql-print-metrics zookeeper-security-migration
control-center-export kafka-consumer-groups kafka-rest-run-class ksql-run-class zookeeper-server-start
control-center-reset kafka-consumer-perf-test kafka-rest-start ksql-server-start zookeeper-server-stop
control-center-run-class kafka-delegation-tokens kafka-rest-stop ksql-server-stop zookeeper-shell
control-center-set-acls kafka-delete-records kafka-rest-stop-service ksql-stop
control-center-start kafka-dump-log kafka-run-class replicator
control-center-stop kafka-log-dirs kafka-server-start schema-registry-run-class
Robertos-MBP:bin wardog$</code></pre>
<ul>
<li>Check the options for KSQL:</li>
</ul>
<div class="highlight"><pre><span></span>./ksql --help
</pre></div>
<pre><code>NAME
ksql - KSQL CLI
SYNOPSIS
ksql [ --config-file &lt;configFile&gt; ] [ {-h | --help} ]
[ --output &lt;outputFormat&gt; ]
[ --query-row-limit &lt;streamedQueryRowLimit&gt; ]
[ --query-timeout &lt;streamedQueryTimeoutMs&gt; ] [--] &lt;server&gt;
OPTIONS
--config-file &lt;configFile&gt;
A file specifying configs for Ksql and its underlying Kafka Streams
instance(s). Refer to KSQL documentation for a list of available
configs.
-h, --help
Display help information
--output &lt;outputFormat&gt;
The output format to use (either 'JSON' or 'TABULAR'; can be
changed during REPL as well; defaults to TABULAR)
--query-row-limit &lt;streamedQueryRowLimit&gt;
An optional maximum number of rows to read from streamed queries
This options value must fall in the following range: value &gt;= 1
--query-timeout &lt;streamedQueryTimeoutMs&gt;
An optional time limit (in milliseconds) for streamed queries
This options value must fall in the following range: value &gt;= 1
--
This option can be used to separate command-line options from the
list of arguments (useful when arguments might be mistaken for
command-line options)
&lt;server&gt;
The address of the Ksql server to connect to (ex:
http://confluent.io:9098)
This option may occur a maximum of 1 times
Robertos-MBP:bin wardog$</code></pre>
<ul>
<li>Connect to the HELK KSQL server. You just need to point to the IP address of your HELK Docker environment over port 8088:</li>
</ul>
<div class="highlight"><pre><span></span>./ksql http://192.168.64.138:8088
</pre></div>
<pre><code> ===========================================
= _ __ _____ ____ _ =
= | |/ // ____|/ __ \| | =
= | ' /| (___ | | | | | =
= | &lt; \___ \| | | | | =
= | . \ ____) | |__| | |____ =
= |_|\_\_____/ \___\_\______| =
= =
= Streaming SQL Engine for Apache Kafka® =
===========================================
Copyright 2017-2018 Confluent Inc.
CLI v5.1.2, Server v5.1.0 located at http://192.168.64.138:8088
Having trouble? Type 'help' (case-insensitive) for a rundown of how things work!
ksql&gt;</code></pre>
<ul>
<li>Verify that you can see the topics available in the HELK Kafka broker</li>
</ul>
<pre><code>
ksql&gt; SHOW TOPICS;
Kafka Topic | Registered | Partitions | Partition Replicas | Consumers | ConsumerGroups
-----------------------------------------------------------------------------------------
filebeat | false | 1 | 1 | 0 | 0
SYSMON_JOIN | false | 1 | 1 | 0 | 0
winlogbeat | false | 1 | 1 | 0 | 0
winsecurity | false | 1 | 1 | 0 | 0
winsysmon | false | 1 | 1 | 0 | 0
-----------------------------------------------------------------------------------------
ksql&gt;</code></pre>
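<p>From this prompt, a topic can also be turned into a queryable stream. The statements below are an illustrative sketch, not part of HELK: the stream name and columns are placeholders that must match your actual event schema.</p>

```sql
-- Peek at raw messages flowing through a topic (Ctrl+C to stop)
PRINT 'winsysmon' FROM BEGINNING;

-- Register a stream over the topic so it can be queried with SQL;
-- the columns below are placeholders and must match your event schema
CREATE STREAM winsysmon_raw (event_id INTEGER, process_name VARCHAR)
  WITH (KAFKA_TOPIC='winsysmon', VALUE_FORMAT='JSON');

-- Pull a few values to confirm data is flowing
SELECT process_name FROM winsysmon_raw LIMIT 5;
```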
</div>
</div>
</div>
</div>
</main>

22
docs/_build/how-to/ksql/ksql.html vendored Normal file
View File

@ -0,0 +1,22 @@
---
title: |-
KSQL
pagenum: 13
prev_page:
url: /how-to/kafka/kafka-update-ip.html
next_page:
url: /how-to/ksql/ksql-deploy-locally.html
suffix: .md
search: ksql
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">KSQL</div>
</div>
</main>

View File

@ -0,0 +1,62 @@
---
title: |-
Create Plugins Offline Package
pagenum: 9
prev_page:
url: /how-to/logstash/logstash.html
next_page:
url: /how-to/kafka/kafka.html
suffix: .md
search: logstash offline filter helk plugins system bash package zip installed docker plugin usr share installing internet export access where already successfully container sudo prepare bin kafka input codec install copy isolated create extra still being following steps zipped loaded does not stuck remember need exec ti using script pack translate dns cidr geoip dissect output alter fingerprint prune gziplines netflow environment dedot wmi clone created command file local cp e bust authorized ssh scp home should able
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Create Plugins Offline Package</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>If you are installing HELK and the helk-logstash extra plugins are still being installed over the Internet, you can use the following steps to export them as a zipped offline package, which can then be loaded onto a system that has no Internet access and is stuck installing plugins.</p>
<p>Remember that you will need to do this on a system where HELK is already installed and the plugins were installed successfully.</p>
<ul>
<li>Access your helk-logstash Docker container on the system where HELK was already installed successfully:</li>
</ul>
<div class="highlight"><pre><span></span>sudo docker <span class="nb">exec</span> -ti helk-logstash bash
</pre></div>
<pre><code>bash-4.2$</code></pre>
<ul>
<li>Use the logstash-plugin script to prepare and export the plugins offline package:</li>
</ul>
<div class="highlight"><pre><span></span>bin/logstash-plugin prepare-offline-pack logstash-filter-translate logstash-filter-dns logstash-filter-cidr logstash-filter-geoip logstash-filter-dissect logstash-output-kafka logstash-input-kafka logstash-filter-alter logstash-filter-fingerprint logstash-filter-prune logstash-codec-gzip_lines logstash-codec-netflow logstash-filter-i18n logstash-filter-environment logstash-filter-de_dot logstash-input-wmi logstash-filter-clone
</pre></div>
<pre><code>Offline package created at: /usr/share/logstash/logstash-offline-plugins-6.6.1.zip
You can install it with this command
bin/logstash-plugin install file:///usr/share/logstash/logstash-offline-plugins-6.6.1.zip</code></pre>
<ul>
<li>Copy the offline package from your helk-logstash container to your local system</li>
</ul>
<div class="highlight"><pre><span></span>sudo docker cp helk-logstash:/usr/share/logstash/logstash-offline-plugins-6.6.1.zip .
</pre></div>
<ul>
<li>Copy the logstash-offline-plugins-6.6.1.zip to the OFFLINE-ISOLATED (e.g. 10.0.10.102) system. You must be authorized to SSH to it.</li>
</ul>
<div class="highlight"><pre><span></span>scp logstash-offline-plugins-6.6.1.zip helk@10.0.10.102:/home/helk/
</pre></div>
<p>Now you should be able to use it in the offline-isolated HELK system</p>
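The final step on the offline-isolated system mirrors the install command printed by logstash-plugin earlier; a minimal sketch, assuming the package landed in /home/helk as in the scp step above:

```shell
# On the offline-isolated HELK host: copy the package into the
# helk-logstash container and install it from the local file
sudo docker cp /home/helk/logstash-offline-plugins-6.6.1.zip helk-logstash:/usr/share/logstash/
sudo docker exec -ti helk-logstash \
  bin/logstash-plugin install file:///usr/share/logstash/logstash-offline-plugins-6.6.1.zip
```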
</div>
</div>
</div>
</div>
</main>

View File

@ -0,0 +1,22 @@
---
title: |-
Logstash
pagenum: 8
prev_page:
url: /how-to/docker/docker-load-images.html
next_page:
url: /how-to/logstash/logstash-create-plugins-offline.html
suffix: .md
search: logstash
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Logstash</div>
</div>
</main>

View File

@ -0,0 +1,45 @@
---
title: |-
Check Winlogbeat Shipping
pagenum: 16
prev_page:
url: /how-to/winlogbeat/winlogbeat.html
next_page:
url:
suffix: .md
search: winlogbeat logs running shipping contain shown below should installed service within programdata manually notepad powershell output img src images kafka producer png check believe not being sent helk couple things going document stick looking only itself sending properly therefore issue somewhere else thus consult wiki additional resources located executable directory exe path viewing view simply command such get content c tail wait log information verbiage successfully published events similar
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Check Winlogbeat Shipping</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>If you believe logs are not being sent to HELK from winlogbeat, there are a couple of things that could be going on. This document looks only at winlogbeat itself. If your logs contain what is shown below, then winlogbeat is shipping the logs properly; the issue is likely somewhere else, and you should consult the wiki for additional resources.</p>
<h2 id="Installed-as-a-Service">Installed as a Service<a class="anchor-link" href="#Installed-as-a-Service"> </a></h2><p>If winlogbeat is installed as a service then the logs will be located within:</p>
<pre><code>%PROGRAMDATA%\winlogbeat\logs\winlogbeat</code></pre>
<h2 id="Manually-Running-the-Executable">Manually Running the Executable<a class="anchor-link" href="#Manually-Running-the-Executable"> </a></h2><p>If you are manually running winlogbeat then the logs will be within the directory where you are running winlogbeat.exe, at the path</p>
<pre><code>.\logs\winlogbeat</code></pre>
<h2 id="Viewing-Logs">Viewing Logs<a class="anchor-link" href="#Viewing-Logs"> </a></h2><p>To view the logs you can use Notepad or Notepad++, or a PowerShell command such as</p>
<div class="highlight"><pre><span></span>Get-Content C:\ProgramData\winlogbeat\logs\winlogbeat -Tail 10 -Wait
</pre></div>
<h2 id="Log-Output">Log Output<a class="anchor-link" href="#Log-Output"> </a></h2><p>Your logs should contain information with the verbiage <code>successfully published #NUMBER events</code>, similar to the output shown below</p>
<p><img src="../../images/KAFKA-producer1.png"></p>
<p><img src="../../images/KAFKA-producer2.png"></p>
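<p>As a programmatic alternative to eyeballing the log, the published-event counts can be summed with a few lines of Python. This is a minimal sketch; the log line format is assumed from the output described above.</p>

```python
import re

# Matches winlogbeat log lines such as:
#   "2019-03-16T17:14:08 INFO successfully published 25 events"
PUBLISHED_RE = re.compile(r"successfully published (\d+) events")

def count_published(log_lines):
    """Sum the event counts from 'successfully published N events' lines."""
    total = 0
    for line in log_lines:
        match = PUBLISHED_RE.search(line)
        if match:
            total += int(match.group(1))
    return total

# Illustrative sample lines; in practice, pass in open("winlogbeat").readlines()
sample = [
    "2019-03-16T17:14:03 INFO successfully published 25 events",
    "2019-03-16T17:14:05 INFO retrying connection to kafka",
    "2019-03-16T17:14:08 INFO successfully published 10 events",
]
print(count_published(sample))  # -> 35
```

A total of zero after winlogbeat has been running for a while is a strong hint that events are not being shipped.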
</div>
</div>
</div>
</div>
</main>

View File

@ -0,0 +1,22 @@
---
title: |-
Winlogbeat
pagenum: 15
prev_page:
url: /how-to/ksql/ksql-deploy-locally.html
next_page:
url: /how-to/winlogbeat/winlogbeat-shipping.html
suffix: .md
search: winlogbeat
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Winlogbeat</div>
</div>
</main>

BIN
docs/_build/images/logo/favicon.ico vendored Normal file
BIN
docs/_build/images/logo/logo.png vendored Normal file
284
docs/_build/installation.html vendored Normal file
View File

@ -0,0 +1,284 @@
---
title: |-
Installation
pagenum: 1
prev_page:
url: /introduction.html
next_page:
url: /architecture/elasticsearch.html
suffix: .md
search: helk info docker installation spark cybrwardg mb kibana pulling ksql jupyter server e elasticsearch t o kb kafka tcp xx creating elastalert minutes gib set master done bash script following elastic logstash zookeeper ago p pluginsservice loaded module version access gb elk hour mib url option log nginx worker install co running password logs broker b des ip ngnix sudo default want usr share true using vm run current basic hunting build file confluentinc cp cli useconcmarksweepgc node ubuntu github ce compose sure bit available includes helks helkinstall license ui local monitor container n name data however work centos supported
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Installation</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<h2 id="Requirements-(Please-Read-Carefully)">Requirements (Please Read Carefully)<a class="anchor-link" href="#Requirements-(Please-Read-Carefully)"> </a></h2><h3 id="Operating-System-&amp;-Docker:">Operating System &amp; Docker:<a class="anchor-link" href="#Operating-System-&amp;-Docker:"> </a></h3><ul>
<li>Ubuntu 18.04 (preferred). However, Ubuntu 16 will work. CentOS is not fully supported, but some have been able to get it to work; documentation is yet to come, so use CentOS at your own risk for now. You may open a GitHub issue, but we can't promise we can help.</li>
<li>HELK uses the official Docker Community Edition (CE) bash script (Edge Version) to install Docker for you. The Docker CE Edge script supports the following distros: ubuntu, debian, raspbian, centos, and fedora.</li>
<li>You can see the specific distro versions supported in the script here.</li>
<li>If you have Docker &amp; Docker-Compose already installed in your system, make sure you uninstall them to avoid old incompatible versions. Let HELK use the official Docker CE Edge script execution to install Docker.</li>
</ul>
<h3 id="Processor/OS-Architecture:">Processor/OS Architecture:<a class="anchor-link" href="#Processor/OS-Architecture:"> </a></h3><ul>
<li>64-bit also known as x64, x86_64, AMD64 or Intel 64.</li>
<li>FYI: old processors don't support SSE3 instructions to start ML (Machine Learning) on elasticsearch. Since version 6.1 Elastic has been compiling the ML programs on the assumption that SSE4.2 instructions are available (See: <a href="https://github.com/Cyb3rWard0g/HELK/issues/321">https://github.com/Cyb3rWard0g/HELK/issues/321</a> and <a href="https://discuss.elastic.co/t/failed-to-start-machine-learning-on-elasticsearch-7-0-0/178216/7">https://discuss.elastic.co/t/failed-to-start-machine-learning-on-elasticsearch-7-0-0/178216/7</a>)</li>
</ul>
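<p>On a Linux host you can check whether the CPU advertises SSE4.2 before installing; a minimal sketch:</p>

```shell
# Elastic ML (6.1+) assumes SSE4.2 support; check the CPU flags on Linux
if grep -q -m1 'sse4_2' /proc/cpuinfo; then
  echo "SSE4.2: supported"
else
  echo "SSE4.2: NOT supported - Elasticsearch ML will fail to start"
fi
```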
<h3 id="Cores:">Cores:<a class="anchor-link" href="#Cores:"> </a></h3><p>Minimum of 4 cores (whether logical or physical)</p>
<h3 id="Network-Connection:-NAT-or-Bridge">Network Connection: NAT or Bridge<a class="anchor-link" href="#Network-Connection:-NAT-or-Bridge"> </a></h3><ul>
<li>IP version 4 address. IPv6 has not been tested yet.</li>
<li>Internet access</li>
<li>If using a proxy, documentation is yet to come, so use a proxy at your own risk. However, open a GitHub issue and we will try to help until it is officially documented/supported.</li>
<li>If using a VM then NAT or Bridge will work.</li>
<li>List of required domains/IPs will be listed in future documentation.</li>
</ul>
<h3 id="RAM:">RAM:<a class="anchor-link" href="#RAM:"> </a></h3><p>There are four options, and the following are minimum requirements (include more if you are able).</p>
<ul>
<li>Option 1: 5GB includes KAFKA + KSQL + ELK + NGINX.</li>
<li>Option 2: 5GB includes KAFKA + KSQL + ELK + NGINX + ELASTALERT.</li>
<li>Option 3: 7GB includes KAFKA + KSQL + ELK + NGINX + SPARK + JUPYTER.</li>
<li>Option 4: 8GB includes KAFKA + KSQL + ELK + NGINX + SPARK + JUPYTER + ELASTALERT.</li>
</ul>
<h3 id="Disk:">Disk:<a class="anchor-link" href="#Disk:"> </a></h3><p>25GB for testing purposes and 100GB+ for production (minimum)</p>
<h3 id="Applications:">Applications:<a class="anchor-link" href="#Applications:"> </a></h3><ul>
<li>Docker: 18.06.1-ce+ &amp; Docker-Compose (HELK INSTALLS THIS FOR YOU)</li>
<li>Winlogbeat running on your endpoints or centralized WEF server (that your endpoints are forwarding to).</li>
<li>You can install Winlogbeat by following one of @Cyb3rWard0g's posts here.</li>
<li>Use the Winlogbeat config recommended by the HELK, since it uses the Kafka output plugin and already points to the right ports with recommended options. You will just have to add your HELK's IP address.</li>
</ul>
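<p>The minimums above (4 cores, 5-8GB RAM depending on build option, 25GB+ disk) can be sanity-checked on the host before running the installer; a minimal sketch for Linux:</p>

```shell
# Compare host resources against the HELK minimums from this section
cores=$(nproc)
mem_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "Cores: ${cores} (minimum 4)"
echo "Memory: ${mem_mb} MB (minimum ~5000 for build option 1)"
echo "Disk: ${disk_gb} GB free on / (25 GB testing, 100 GB+ production)"
```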
<h2 id="HELK-Download">HELK Download<a class="anchor-link" href="#HELK-Download"> </a></h2><p>Run the following commands to clone the HELK repo via git.</p>
<div class="highlight"><pre><span></span>git clone https://github.com/Cyb3rWard0g/HELK.git
</pre></div>
<p>Change your current directory location to the new HELK directory, and run the helk_install.sh bash script as root.</p>
<div class="highlight"><pre><span></span><span class="nb">cd</span> HELK/docker
sudo ./helk_install.sh
</pre></div>
<h2 id="HELK-Install">HELK Install<a class="anchor-link" href="#HELK-Install"> </a></h2><p>In order to make the installation of the HELK easy for everyone, the project comes with an install script named helk_install.sh. This script builds and runs everything you need for HELK automatically. During the installation process, the script will allow you to set up the following:</p>
<ul>
<li>Set the HELK's option. For this document we are going to use option 2 (ELK + KSQL + Elastalert + Spark + Jupyter)</li>
<li>Set the Kibana User's password. Default user is helk</li>
<li>Set the HELK's IP. By default the script offers your host IP address; confirm it or enter a different one. Press [Return] or let the script continue on its own (30-second sleep).</li>
<li>Set the HELK's License Subscription. By default the HELK has the basic subscription selected. You can set it to trial if you want. If you want to learn more about subscriptions go here<ul>
<li>If the license is set to trial, HELK asks you to set the password for the elastic account.</li>
</ul>
</li>
</ul>
<pre><code>**********************************************
** HELK - THE HUNTING ELK **
** **
** Author: Roberto Rodriguez (@Cyb3rWard0g) **
** HELK build version: v0.1.7-alpha02262019 **
** HELK ELK version: 6.6.1 **
** License: GPL-3.0 **
**********************************************
[HELK-INSTALLATION-INFO] HELK being hosted on a Linux box
[HELK-INSTALLATION-INFO] Available Memory: 12463 MBs
[HELK-INSTALLATION-INFO] You're using ubuntu version xenial
*****************************************************
* HELK - Docker Compose Build Choices *
*****************************************************
1. KAFKA + KSQL + ELK + NGNIX + ELASTALERT
2. KAFKA + KSQL + ELK + NGNIX + ELASTALERT + SPARK + JUPYTER
Enter build choice [ 1 - 2]: 2
[HELK-INSTALLATION-INFO] HELK build set to 2
[HELK-INSTALLATION-INFO] Set HELK elastic subscription (basic or trial): basic
[HELK-INSTALLATION-INFO] Set HELK IP. Default value is your current IP: 192.168.64.138
[HELK-INSTALLATION-INFO] Set HELK Kibana UI Password: hunting
[HELK-INSTALLATION-INFO] Verify HELK Kibana UI Password: hunting
[HELK-INSTALLATION-INFO] Docker already installed
[HELK-INSTALLATION-INFO] Making sure you assigned enough disk space to the current Docker base directory
[HELK-INSTALLATION-INFO] Available Docker Disk: 67 GBs
[HELK-INSTALLATION-INFO] Installing docker-compose..
[HELK-INSTALLATION-INFO] Checking local vm.max_map_count variable and setting it to 4120294
[HELK-INSTALLATION-INFO] Building &amp; running HELK from helk-kibana-notebook-analysis-basic.yml file..
[HELK-INSTALLATION-INFO] Waiting for some services to be up .....
....
......</code></pre>
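<p>One of the settings the installer adjusts, vm.max_map_count, can also be inspected (or raised manually on hosts where the script cannot) with a couple of commands. Elasticsearch requires at least 262144; the installer output above shows it being set to a larger value.</p>

```shell
# Read the current value (Elasticsearch requires at least 262144;
# the HELK installer raises it automatically)
cat /proc/sys/vm/max_map_count
# To raise it manually for the current boot:
#   sudo sysctl -w vm.max_map_count=262144
```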
<h2 id="Monitor-HELK-installation-Logs-(Always)">Monitor HELK installation Logs (Always)<a class="anchor-link" href="#Monitor-HELK-installation-Logs-(Always)"> </a></h2><p>Once the installation kicks in, it will start showing you pre-defined messages about the installation, but not all the details of what is actually happening in the background. It is designed that way to keep your main screen clean and let you know where it is in the installation process.</p>
<p>What I recommend doing all the time is opening another shell and monitoring the HELK installation logs with the tail command, pointed at the /var/log/helk-install.log file that the helk_install script creates as soon as it runs. This log file is available on your local host even if you are deploying the HELK via Docker (I want to make sure it is clear that it is a local file).</p>
<div class="highlight"><pre><span></span>tail -f /var/log/helk-install.log
</pre></div>
<pre><code>Creating network "docker_helk" with driver "bridge"
Creating volume "docker_esdata" with local driver
Pulling helk-elasticsearch (docker.elastic.co/elasticsearch/elasticsearch:6.6.1)...
6.6.1: Pulling from elasticsearch/elasticsearch
Pulling helk-kibana (docker.elastic.co/kibana/kibana:6.6.1)...
6.6.1: Pulling from kibana/kibana
Pulling helk-logstash (docker.elastic.co/logstash/logstash:6.6.1)...
6.6.1: Pulling from logstash/logstash
Pulling helk-jupyter (cyb3rward0g/helk-jupyter:0.1.2)...
0.1.2: Pulling from cyb3rward0g/helk-jupyter
Pulling helk-nginx (cyb3rward0g/helk-nginx:0.0.7)...
0.0.7: Pulling from cyb3rward0g/helk-nginx
Pulling helk-spark-master (cyb3rward0g/helk-spark-master:2.4.0-a)...
2.4.0-a: Pulling from cyb3rward0g/helk-spark-master
Pulling helk-spark-worker (cyb3rward0g/helk-spark-worker:2.4.0-a)...
2.4.0-a: Pulling from cyb3rward0g/helk-spark-worker
Pulling helk-zookeeper (cyb3rward0g/helk-zookeeper:2.1.0)...
2.1.0: Pulling from cyb3rward0g/helk-zookeeper
Pulling helk-kafka-broker (cyb3rward0g/helk-kafka-broker:2.1.0)...
2.1.0: Pulling from cyb3rward0g/helk-kafka-broker
Pulling helk-ksql-server (confluentinc/cp-ksql-server:5.1.2)...
5.1.2: Pulling from confluentinc/cp-ksql-server
Pulling helk-ksql-cli (confluentinc/cp-ksql-cli:5.1.2)...
5.1.2: Pulling from confluentinc/cp-ksql-cli
Pulling helk-elastalert (cyb3rward0g/helk-elastalert:0.2.1)...
0.2.1: Pulling from cyb3rward0g/helk-elastalert
Creating helk-elasticsearch ... done
Creating helk-kibana ... done
Creating helk-logstash ... done
Creating helk-spark-master ... done
Creating helk-elastalert ... done
Creating helk-zookeeper ... done
Creating helk-jupyter ... done
Creating helk-spark-worker ... done
Creating helk-kafka-broker ... done
Creating helk-nginx ... done
Creating helk-ksql-server ... done
Creating helk-ksql-cli ... done</code></pre>
<p>Once you see that the containers have been created you can check all the containers running by executing the following:</p>
<div class="highlight"><pre><span></span>sudo docker ps
</pre></div>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
968576241e9c confluentinc/cp-ksql-server:5.1.2 "/etc/confluent/dock…" 28 minutes ago Up 26 minutes 0.0.0.0:8088-&gt;8088/tcp helk-ksql-server
154593559d13 cyb3rward0g/helk-kafka-broker:2.1.0 "./kafka-entrypoint.…" 28 minutes ago Up 26 minutes 0.0.0.0:9092-&gt;9092/tcp helk-kafka-broker
d883541a64f1 cyb3rward0g/helk-nginx:0.0.7 "/opt/helk/scripts/n…" About an hour ago Up 26 minutes 0.0.0.0:80-&gt;80/tcp, 0.0.0.0:443-&gt;443/tcp helk-nginx
527ef236543a cyb3rward0g/helk-spark-worker:2.4.0-a "./spark-worker-entr…" About an hour ago Up 26 minutes helk-spark-worker
27cfaf7a8e84 cyb3rward0g/helk-jupyter:0.1.2 "./jupyter-entrypoin…" About an hour ago Up 26 minutes 8000/tcp, 8888/tcp helk-jupyter
75002248e916 cyb3rward0g/helk-zookeeper:2.1.0 "./zookeeper-entrypo…" About an hour ago Up 26 minutes 2181/tcp, 2888/tcp, 3888/tcp helk-zookeeper
ee0120167ffa cyb3rward0g/helk-elastalert:0.2.1 "./elastalert-entryp…" About an hour ago Up 26 minutes helk-elastalert
4dc2722cdd53 cyb3rward0g/helk-spark-master:2.4.0-a "./spark-master-entr…" About an hour ago Up 26 minutes 7077/tcp, 0.0.0.0:8080-&gt;8080/tcp helk-spark-master
9c1eb230b0ff docker.elastic.co/logstash/logstash:6.6.1 "/usr/share/logstash…" About an hour ago Up 26 minutes 0.0.0.0:5044-&gt;5044/tcp, 0.0.0.0:8531-&gt;8531/tcp, 9600/tcp helk-logstash
f018f16d9792 docker.elastic.co/kibana/kibana:6.6.1 "/usr/share/kibana/s…" About an hour ago Up 26 minutes 5601/tcp helk-kibana
6ec5779e9e01 docker.elastic.co/elasticsearch/elasticsearch:6.6.1 "/usr/share/elastics…" About an hour ago Up 26 minutes 9200/tcp, 9300/tcp helk-elasticsearch</code></pre>
<p>If you want to monitor the resources being utilized (Memory, CPU, etc), you can run the following:</p>
<div class="highlight"><pre><span></span>sudo docker stats --all
</pre></div>
<pre><code>CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
ba46d256ee18 helk-ksql-cli 0.00% 0B / 0B 0.00% 0B / 0B 0B / 0B 0
968576241e9c helk-ksql-server 1.43% 242MiB / 12.62GiB 1.87% 667kB / 584kB 96.1MB / 73.7kB 29
154593559d13 helk-kafka-broker 2.83% 318.7MiB / 12.62GiB 2.47% 1.47MB / 1.6MB 50.7MB / 2.01MB 67
d883541a64f1 helk-nginx 0.10% 3.223MiB / 12.62GiB 0.02% 14.7MB / 14.8MB 9.35MB / 12.3kB 5
527ef236543a helk-spark-worker 0.43% 177.7MiB / 12.62GiB 1.38% 19.5kB / 147kB 37.1MB / 32.8kB 28
27cfaf7a8e84 helk-jupyter 0.12% 45.42MiB / 12.62GiB 0.35% 1.64kB / 0B 66.3MB / 733kB 9
75002248e916 helk-zookeeper 0.26% 62.6MiB / 12.62GiB 0.48% 150kB / 118kB 2.75MB / 172kB 23
ee0120167ffa helk-elastalert 2.60% 40.97MiB / 12.62GiB 0.32% 12MB / 17.4MB 38.3MB / 8.19kB 1
4dc2722cdd53 helk-spark-master 0.50% 187.2MiB / 12.62GiB 1.45% 148kB / 17.8kB 52.3MB / 32.8kB 28
9c1eb230b0ff helk-logstash 15.96% 1.807GiB / 12.62GiB 14.32% 871kB / 110MB 165MB / 2.95MB 62
f018f16d9792 helk-kibana 2.73% 179.1MiB / 12.62GiB 1.39% 3.71MB / 17.6MB 250MB / 4.1kB 13
6ec5779e9e01 helk-elasticsearch 12.56% 2.46GiB / 12.62GiB 19.50% 130MB / 15.8MB 293MB / 226MB 61</code></pre>
<p>You should also monitor the logs of each container while they are being initialized:</p>
<p>Just run the following:</p>
<div class="highlight"><pre><span></span>sudo docker logs --follow helk-elasticsearch
</pre></div>
<pre><code>[HELK-ES-DOCKER-INSTALLATION-INFO] Setting ES_JAVA_OPTS to -Xms1200m -Xmx1200m -XX:-UseConcMarkSweepGC -XX:-UseCMSInitiatingOccupancyOnly -XX:+UseG1GC
[HELK-ES-DOCKER-INSTALLATION-INFO] Setting Elastic license to basic
[HELK-ES-DOCKER-INSTALLATION-INFO] Running docker-entrypoint script..
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[2019-03-16T17:13:58,710][INFO ][o.e.e.NodeEnvironment ] [helk-1] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [60.7gb], net total_space [72.7gb], types [ext4]
[2019-03-16T17:13:58,722][INFO ][o.e.e.NodeEnvironment ] [helk-1] heap size [1.1gb], compressed ordinary object pointers [true]
[2019-03-16T17:13:58,728][INFO ][o.e.n.Node ] [helk-1] node name [helk-1], node ID [En7HptZKTNmv4R6-Qb99UA]
[2019-03-16T17:13:58,729][INFO ][o.e.n.Node ] [helk-1] version[6.6.1], pid[12], build[default/tar/1fd8f69/2019-02-13T17:10:04.160291Z], OS[Linux/4.4.0-116-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/11.0.1/11.0.1+13]
[2019-03-16T17:13:58,734][INFO ][o.e.n.Node ] [helk-1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-7720073513605769733, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -Djava.locale.providers=COMPAT, -XX:UseAVX=2, -Des.cgroups.hierarchy.override=/, -Xms1200m, -Xmx1200m, -XX:-UseConcMarkSweepGC, -XX:-UseCMSInitiatingOccupancyOnly, -XX:+UseG1GC, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=default, -Des.distribution.type=tar]
[2019-03-16T17:14:03,510][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [aggs-matrix-stats]
[2019-03-16T17:14:03,517][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [analysis-common]
[2019-03-16T17:14:03,517][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [ingest-common]
[2019-03-16T17:14:03,517][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [lang-expression]
[2019-03-16T17:14:03,517][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [lang-mustache]
[2019-03-16T17:14:03,518][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [lang-painless]
[2019-03-16T17:14:03,518][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [mapper-extras]
[2019-03-16T17:14:03,518][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [parent-join]
[2019-03-16T17:14:03,518][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [percolator]
[2019-03-16T17:14:03,519][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [rank-eval]
[2019-03-16T17:14:03,519][INFO ][o.e.p.PluginsService ] [helk-1] loaded module [reindex]
..
....</code></pre>
<p>For the other containers, simply replace <code>helk-elasticsearch</code> with the specific container's name:</p>
<div class="highlight"><pre><span></span>sudo docker logs --follow &lt;container name&gt;
</pre></div>
<p>Remember that you can also get a shell inside your Docker containers by running the following command:</p>
<div class="highlight"><pre><span></span>sudo docker <span class="nb">exec</span> -ti helk-elasticsearch bash
</pre></div>
<pre><code>root@7a9d6443a4bf:/opt/helk/scripts#</code></pre>
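<p>If you are not sure which container names are available on your system (the exact set can vary between HELK versions and install options), you can list the running containers first and read the names from the last column:</p>
<div class="highlight"><pre><span></span>sudo docker ps
</pre></div>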
<h2 id="Final-Details">Final Details<a class="anchor-link" href="#Final-Details"> </a></h2><p>Once your HELK installation finishes, you will be presented with the information you need to access the HELK and all of its components.</p>
<p>You will get the following information:</p>
<pre><code>***********************************************************************************
** [HELK-INSTALLATION-INFO] HELK WAS INSTALLED SUCCESSFULLY **
** [HELK-INSTALLATION-INFO] USE THE FOLLOWING SETTINGS TO INTERACT WITH THE HELK **
***********************************************************************************
HELK KIBANA URL: https://192.168.64.138
HELK KIBANA USER: helk
HELK KIBANA PASSWORD: hunting
HELK SPARK MASTER UI: http://192.168.64.138:8080
HELK JUPYTER SERVER URL: http://192.168.64.138/jupyter
HELK JUPYTER CURRENT TOKEN: e8e83f5c9fe93882a970ce352d566adfb032b0975549449c
HELK ZOOKEEPER: 192.168.64.138:2181
HELK KSQL SERVER: 192.168.64.138:8088
IT IS HUNTING SEASON!!!!!</code></pre>
<table>
<thead><tr>
<th style="text-align:left">Type</th>
<th style="text-align:left">Description</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align:left">HELK KIBANA URL</td>
<td style="text-align:left">URL to access the Kibana server. Copy it and paste it in your browser to access Kibana. Make sure you use https, since Kibana runs behind NGINX on port 443 with a self-signed certificate</td>
</tr>
<tr>
<td style="text-align:left">HELK KIBANA USER &amp; PASSWORD</td>
<td style="text-align:left">Credentials used to access Kibana</td>
</tr>
<tr>
<td style="text-align:left">HELK SPARK MASTER UI</td>
<td style="text-align:left">URL to access the Spark Master server (Spark Standalone). That server manages the Spark Workers used during execution of code by Jupyter Notebooks. The Spark Master acts as a proxy to the Spark Workers and the applications running on them</td>
</tr>
<tr>
<td style="text-align:left">HELK JUPYTER SERVER URL</td>
<td style="text-align:left">URL to access the Jupyter notebook server.</td>
</tr>
<tr>
<td style="text-align:left">HELK JUPYTER CURRENT TOKEN</td>
<td style="text-align:left">Jupyter token to log in instead of providing a password</td>
</tr>
<tr>
<td style="text-align:left">ZOOKEEPER</td>
<td style="text-align:left">Address of the ZooKeeper instance that manages the Kafka cluster</td>
</tr>
<tr>
<td style="text-align:left">KSQL SERVER</td>
<td style="text-align:left">URL to access the KSQL server and run SQL queries against the data in the Kafka brokers</td>
</tr>
</tbody>
</table>
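<p>For example, once everything is up you could point the KSQL command line client at the KSQL server address shown above and list the Kafka topics it can see (the IP below is the example address from this walkthrough; yours will differ, and the <code>ksql</code> CLI is assumed to be installed on the machine you run it from):</p>
<div class="highlight"><pre><span></span>ksql http://192.168.64.138:8088
</pre></div>
<pre><code>ksql&gt; SHOW TOPICS;</code></pre>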
</div>
</div>
</div>
</div>
</main>

docs/_build/introduction.html vendored Normal file

@@ -0,0 +1,80 @@
---
title: |-
Introduction
pagenum: 0
prev_page:
url:
next_page:
url: /installation.html
suffix: .md
search: open com source img helk spark data src href svg class left github cybrwardg twitter apache shields io license hunting jupyter notebooks elasticsearch engine thehelk badges analytics capabilities such sql streaming scalable platform features kafka allows general build gnu gpl badge issues follow style v blob master stability div images design elk hunt language structured via research community share basics system designed fast text real hadoop libraries cluster provides java python code ksql processing elastalert sigma neuron adding integration integrate add introduction www org licenses gplv blue q isaissueisaclosedimg closed thehelkimg sociallabel ellerbrock frapsoft os mkenney software guides md alphaimg
comment: "***PROGRAMMATICALLY GENERATED, DO NOT EDIT. SEE ORIGINAL FILES IN /content***"
---
<main class="jupyter-page">
<div id="page-info"><div id="page-title">Introduction</div>
</div>
<div class="jb_cell">
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p><a href="https://www.gnu.org/licenses/gpl-3.0"><img src="https://img.shields.io/badge/License-GPLv3-blue.svg" class="left"></a>
<a href="https://GitHub.com/Cyb3rWard0g/HELK/issues?q=is%3Aissue+is%3Aclosed"><img src="https://img.shields.io/github/issues-closed/Cyb3rward0g/HELK.svg" class="left"></a>
<a href="https://twitter.com/THE_HELK"><img src="https://img.shields.io/twitter/follow/THE_HELK.svg?style=social&label=Follow" class="left"></a>
<a href="https://github.com/ellerbrock/open-source-badges/"><img src="https://badges.frapsoft.com/os/v3/open-source.svg?v=103" class="left"></a>
<a href="https://github.com/mkenney/software-guides/blob/master/STABILITY-BADGES.md#alpha"><img src="https://img.shields.io/badge/stability-alpha-f4d03f.svg" class="left"></a></p>
<div style="clear:both;"></div><p><img src="images/HELK-Design.png"></p>
<p>The Hunting ELK or simply the HELK is one of the first open source hunt platforms with advanced analytics capabilities such as SQL declarative language, graphing, structured streaming, and even machine learning via Jupyter notebooks and Apache Spark over an ELK stack. This project was developed primarily for research, but due to its flexible design and core components, it can be deployed in larger environments with the right configurations and scalable infrastructure.</p>
<h2 id="Goals">Goals<a class="anchor-link" href="#Goals"> </a></h2><ul>
<li>Provide an open source hunting platform to the community and share the basics of Threat Hunting.</li>
<li>Expedite the time it takes to deploy a hunt platform.</li>
<li>Improve the testing and development of hunting use cases in an easier and more affordable way.</li>
<li>Enable Data Science capabilities while analyzing data via Apache Spark, GraphFrames &amp; Jupyter Notebooks.</li>
</ul>
<h2 id="Main-Features">Main Features<a class="anchor-link" href="#Main-Features"> </a></h2><ul>
<li><strong>Kafka</strong>: A distributed publish-subscribe messaging system that is designed to be fast, scalable, fault-tolerant, and durable.</li>
<li><strong>Elasticsearch</strong>: A highly scalable open-source full-text search and analytics engine.</li>
<li><strong>Logstash</strong>: A data collection engine with real-time pipelining capabilities.</li>
<li><strong>Kibana</strong>: An open source analytics and visualization platform designed to work with Elasticsearch.</li>
<li><strong>ES-Hadoop</strong>: An open-source, stand-alone, self-contained, small library that allows Hadoop jobs (whether using Map/Reduce or libraries built upon it such as Hive, Pig or Cascading, or newer libraries like Apache Spark) to interact with Elasticsearch.</li>
<li><strong>Spark</strong>: A fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala, Python and R, and an optimized engine that supports general execution graphs.</li>
<li><strong>Jupyter Notebooks</strong>: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text.</li>
</ul>
<h2 id="Optional-Features">Optional Features<a class="anchor-link" href="#Optional-Features"> </a></h2><ul>
<li><strong>KSQL</strong>: Confluent KSQL is the open source, streaming SQL engine that enables real-time data processing against Apache Kafka®. It provides an easy-to-use, yet powerful interactive SQL interface for stream processing on Kafka, without the need to write code in a programming language such as Java or Python.</li>
<li><strong>Elastalert</strong>: ElastAlert is a simple framework for alerting on anomalies, spikes, or other patterns of interest from data in Elasticsearch.</li>
<li><strong>Sigma</strong>: Sigma is a generic and open signature format that allows you to describe relevant log events in a straightforward manner.</li>
</ul>
<h2 id="Author">Author<a class="anchor-link" href="#Author"> </a></h2><ul>
<li>Roberto Rodriguez <a href="https://twitter.com/Cyb3rWard0g">@Cyb3rWard0g</a> <a href="https://twitter.com/THE_HELK">@THE_HELK</a></li>
</ul>
<h2 id="Current-Committers">Current Committers<a class="anchor-link" href="#Current-Committers"> </a></h2><ul>
<li>Nate Guagenti <a href="https://twitter.com/neu5ron">@neu5ron</a></li>
</ul>
<h2 id="Contributing">Contributing<a class="anchor-link" href="#Contributing"> </a></h2><p>There are a few things that I would like to accomplish with the HELK, as shown in the To-Do list below. I would love to make the HELK a stable build for everyone in the community. If you are interested in making this build a more robust one and adding some cool features to it, PLEASE feel free to submit a pull request. #SharingIsCaring</p>
<h1 id="TO-Do">To-Do<a class="anchor-link" href="#TO-Do"> </a></h1><ul>
<li>[ ] Kubernetes Cluster Migration</li>
<li>[ ] OSQuery Data Ingestion</li>
<li>[ ] MITRE ATT&amp;CK mapping to logs or dashboards</li>
<li>[ ] Cypher for Apache Spark Integration (Adding option for Zeppelin Notebook)</li>
<li>[ ] Test and integrate neo4j spark connectors with build</li>
<li>[ ] Add more network data sources (e.g., Bro)</li>
<li>[ ] Research &amp; integrate spark structured direct streaming</li>
<li>[ ] Packer Images</li>
<li>[ ] Terraform integration (AWS, Azure, GC)</li>
<li>[ ] Add more Jupyter Notebooks to teach the basics</li>
<li>[ ] Auditd Beat integration</li>
</ul>
<h2 id="License:-GPL-3.0">License: GPL-3.0<a class="anchor-link" href="#License:-GPL-3.0"> </a></h2><p><a href="https://github.com/Cyb3rWard0g/HELK/blob/master/LICENSE"> HELK's GNU General Public License</a></p>
</div>
</div>
</div>
</div>
</main>

docs/_config.yml Executable file

@@ -0,0 +1,165 @@
# Welcome to Jekyll!
#
# This config file is meant for settings that affect your whole blog, values
# which you are expected to set up once and rarely edit after that. If you find
# yourself editing this file very often, consider using Jekyll's data files
# feature for the data you need to update frequently.
#
# For technical reasons, this file is *NOT* reloaded automatically when you use
# 'bundle exec jekyll serve'. If you change this file, please restart the server process.
# Site settings
# These are used to personalize your new site. If you look in the HTML files,
# you will see them accessed via {{ site.title }}, {{ site.email }}, and so on.
# You can create any custom variable you would like, and they will be accessible
# in the templates via {{ site.myvariable }}.
#######################################################################################
# Jekyll site settings
title: HELK
author: Roberto Rodriguez
email: #myemail
description: The HELK Docs.
baseurl: / # the subpath of your site, e.g. /blog. If there is no subpath for your site, use an empty string ""
url: https://thehelk.com # the base hostname & protocol for your site, e.g. http://example.com
#######################################################################################
# Jupyter Book settings
# Main page settings
footer_text: This page was created by <a href="https://twitter.com/Cyb3rWard0g">Roberto Rodriguez @Cyb3rWard0g </a>
# Sidebar settings
show_sidebar: true # Show the sidebar. Only set to false if you only wish to host a single page.
collapse_inactive_chapters: true # Whether to collapse the inactive chapters in the sidebar
collapse_inactive_sections: true # Whether to collapse the sub-sections within a non-active section in the sidebar
textbook_logo: images/logo/logo.png # A logo to be displayed at the top of your textbook sidebar. Should be square
textbook_logo_link: https://thehelk.com # A link for the logo.
sidebar_footer_text: Powered by <a href="https://jupyterbook.org">Jupyter Book</a>
number_toc_chapters: false # Whether to add numbers to chapters in your Table of Contents. If true, you can control this at the Chapter level in _data/toc.yml
# Search settings
search_max_words_in_content: 100 # In the search function, use at most this many words (too many words will make search slow)
# Controlling page information
page_titles: infer # Either `None`, `infer`, or `toc`
page_authors: infer # Either `None` or `infer`
filename_title_split_character: _ # If inferring titles based on filename, split on this character.
# Math settings
number_equations: false # Whether to automatically number all block equations with MathJax
#######################################################################################
# Interact link settings
# General interact settings
use_jupyterlab: false # If 'true', interact links will use JupyterLab as the interface
# Jupyterhub link settings
use_jupyterhub_button: false # If 'true', display a button that will direct users to a JupyterHub (that you provide)
jupyterhub_url: '' # The URL for your JupyterHub. If no URL, use ""
jupyterhub_interact_text: Interact # The text that interact buttons will contain.
# Binder link settings
use_binder_button: false # If 'true', add a binder button for interactive links
binderhub_url: # The URL for your BinderHub. If no URL, use ""
binder_repo_base: # The site on which the textbook repository is hosted
binder_repo_org: # The username or organization that owns this repository
binder_repo_name: # The name of the repository on the web
binder_repo_branch: # The branch on which your textbook is hosted.
binderhub_interact_text: # The text that interact buttons will contain.
# Thebelab settings
use_thebelab_button: false # If 'true', display a button to allow in-page running code cells with Thebelab
thebelab_button_text: Thebelab # The text to display inside the Thebelab initialization button
codemirror_theme: abcdef # Theme for codemirror cells, for options see https://codemirror.net/doc/manual.html#config
# nbinteract settings
use_show_widgets_button: true # If 'true', display a button to allow in-page running code cells with nbinteract
# Download settings
use_download_button: true # If 'true', display a button to download a zip file for the notebook
download_button_text: Download # The text that download buttons will contain
download_page_header: Made with Jupyter Book # A header that will be displayed at the top of any PDF-printed page
#######################################################################################
# Jupyter book extensions and additional features
# Bibliography and citation settings. See https://github.com/inukshuk/jekyll-scholar#configuration for options
scholar:
style: apa
#######################################################################################
# Option to add a Google analytics tracking code
# Navigate to https://analytics.google.com, add a new property for your jupyter book and copy the tracking id here.
#google_analytics:
# mytrackingcode: UA-52617120-7
#######################################################################################
# Jupyter book settings you probably don't need to change
google_analytics:
mytrackingcode: ''
#######################################################################################
# Jupyter book settings you probably don't need to change
content_folder_name: content # The folder where your raw content (notebooks/markdown files) are located
images_url: /assets/images # Path to static image files
css_url: /assets/css # Path to static CSS files
js_url: /assets/js # Path to JS files
custom_static_url: /assets/custom # Path to user's custom CSS/JS files
#######################################################################################
# Jekyll build settings (only modify if you know what you're doing)
# Site settings
defaults:
- scope:
path: ''
values:
layout: default
toc: true
toc_label: ' On this page'
toc_icon: list-ul
excerpt: ''
favicon_path: images/logo/favicon.ico
# Markdown Processing
markdown: kramdown
kramdown:
input: GFM
syntax_highlighter: rouge
sass:
style: compressed
collections:
build:
output: true
permalink: /:path.html
# Exclude from processing.
# The following items will not be processed, by default. Create a custom list
# to override the default setting.
exclude:
- scripts/
- Gemfile
- Gemfile.lock
- node_modules
- vendor/bundle/
- vendor/cache/
- vendor/gems/
- vendor/ruby/
plugins:
- jekyll-redirect-from
- jekyll-scholar
# Jupyter Book version - DO NOT CHANGE THIS. It is generated when a new book is created
jupyter_book_version: 0.6.4

docs/_data/toc.yml Normal file

@@ -0,0 +1,29 @@
- url: /introduction
- url: /installation
- title: GitHub Repository
url: https://github.com/Cyb3rWard0g/HELK
external: true
- divider: true
- header: Architecture
- url: /architecture/elasticsearch
- url: /architecture/logstash
- url: /architecture/kibana
- divider: true
- header: How-To
- url: /how-to/docker/docker
sections:
- url: /how-to/docker/docker-export-images
- url: /how-to/docker/docker-load-images
- url: /how-to/logstash/logstash
sections:
- url: /how-to/logstash/logstash-create-plugins-offline
- url: /how-to/kafka/kafka
sections:
- url: /how-to/kafka/kafka-topic-ingestion
- url: /how-to/kafka/kafka-update-ip
- url: /how-to/ksql/ksql
sections:
- url: /how-to/ksql/ksql-deploy-locally
- url: /how-to/winlogbeat/winlogbeat
sections:
- url: /how-to/winlogbeat/winlogbeat-shipping

docs/_includes/buttons.html Executable file

@@ -0,0 +1,9 @@
<div class="buttons">
{% include buttons/download.html %}
{% if page.interact_link %}
{% include buttons/thebelab.html %}
{% include buttons/nbinteract.html %}
{% include buttons/binder.html %}
{% include buttons/jupyterhub.html %}
{% endif %}
</div>


@@ -0,0 +1,14 @@
{% if site.use_binder_button %}
{% if site.use_jupyterlab %}
{% assign binder_interact_prefix="urlpath=lab/tree/" %}
{% else %}
{% assign binder_interact_prefix="filepath=" %}
{% endif %}
{% capture interact_url_binder %}v2/gh/{{ site.binder_repo_org }}/{{ site.binder_repo_name }}/{{ site.binder_repo_branch }}?{{ binder_interact_prefix }}{{ page.interact_link | url_encode }}{% endcapture %}
{% capture interact_icon_binder %}{{ site.images_url | relative_url }}/logo_binder.svg{% endcapture %}
<a href="{{ site.binderhub_url }}/{{ interact_url_binder }}"><button class="interact-button" id="interact-button-binder"><img class="interact-button-logo" src="{{ interact_icon_binder }}" alt="Interact" />{{ site.binderhub_interact_text }}</button></a>
{%- endif %}


@@ -0,0 +1,13 @@
{% if site.use_download_button -%}
<div class="download-buttons-dropdown">
<button id="dropdown-button-trigger" class="interact-button"><img src="{{ site.images_url | relative_url }}/download-solid.svg" alt="Download" /></button>
<div class="download-buttons">
{% if page.interact_link -%}
<a href="{{ page.interact_link | relative_url }}" download>
<button id="interact-button-download" class="interact-button">{{ page.suffix | capitalize }}</button>
</a>
{% endif %}
<a id="interact-button-print"><button id="interact-button-download" class="interact-button">.pdf</button></a>
</div>
</div>
{%- endif %}


@@ -0,0 +1,13 @@
{% if site.use_jupyterhub_button %}
{% if site.use_jupyterlab %}
{% assign hub_app="lab" %}
{% else %}
{% assign hub_app="notebook" %}
{% endif %}
{% capture interact_url_jupyterhub %}hub/user-redirect/git-pull?repo={{ site.binder_repo_base }}/{{ site.binder_repo_org }}/{{ site.binder_repo_name }}&amp;branch={{ site.binder_repo_branch }}&amp;subPath={{ page.interact_link | url_encode }}&amp;app={{ hub_app }}{% endcapture %}
{% capture interact_icon_jupyterhub %}{{ site.images_url | relative_url }}/logo_jupyterhub.svg{% endcapture %}
<a href="{{ site.jupyterhub_url }}/{{ interact_url_jupyterhub }}"><button class="interact-button" id="interact-button-jupyterhub"><img class="interact-button-logo" src="{{ interact_icon_jupyterhub }}" alt="Interact" />{{ site.jupyterhub_interact_text }}</button></a>
{% endif %}


@@ -0,0 +1,3 @@
{% if site.use_show_widgets_button and page.has_widgets -%}
<button id="interact-button-show-widgets" class="interact-button js-nbinteract-widget">Show Widgets</button>
{% endif %}


@@ -0,0 +1,3 @@
{% if site.use_thebelab_button -%}
<button id="interact-button-thebelab" class="interact-button">{{ site.thebelab_button_text }}</button>
{% endif %}

docs/_includes/css_entry.scss Executable file

@@ -0,0 +1,18 @@
@import 'inuitcss/settings/settings.core';
@import 'settings/settings.global.scss';
@import 'inuitcss/tools/tools.font-size';
@import 'inuitcss/tools/tools.clearfix';
@import 'inuitcss/tools/tools.hidden';
@import 'inuitcss/tools/tools.mq';
@import 'inuitcss/elements/elements.page';
@import 'inuitcss/elements/elements.headings';
@import 'inuitcss/elements/elements.images';
@import 'inuitcss/elements/elements.tables';
@import 'elements/elements.typography';
@import 'elements/elements.syntax-highlighting';
@import 'elements/elements.tables';
@import 'elements/elements.links';
@import 'components/components.textbook__page';

docs/_includes/fb_tags.html Executable file

@@ -0,0 +1,7 @@
<meta property="og:url" content="{{ page.url | replace:'index.html','' | prepend: site.baseurl | prepend: site.url | relative_url }}" />
<meta property="og:type" content="article" />
<meta property="og:title" content="{% if page.title %}{{ page.title | escape }}{% else %}{{ site.title | escape }}{% endif %}" />
<meta property="og:description" content="{{ page.content | strip_html | strip_newlines | truncate: 160 }}" />
<meta property="og:image" content="{{ site.textbook_logo | absolute_url }}" />
<meta name="twitter:card" content="summary">

docs/_includes/footer.html Executable file

@@ -0,0 +1,3 @@
<footer>
<p class="footer">{{ site.footer_text }}</p>
</footer>


@@ -0,0 +1,11 @@
{% if site.google_analytics.mytrackingcode %}
<!-- Global site tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id={{ site.google_analytics.mytrackingcode }}"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', '{{ site.google_analytics.mytrackingcode }}');
</script>
{% endif %}

docs/_includes/head.html Executable file

@@ -0,0 +1,89 @@
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width,minimum-scale=1">
<title>{% if page.title %}{{ page.title | escape }}{% else %}{{ site.title | escape }}{% endif %}</title>
<meta name="description" content="{{ page.content | strip_html | strip_newlines | truncate: 160 }}">
<link rel="canonical" href="{{ page.url | replace:'index.html','' | prepend: site.baseurl | prepend: site.url | relative_url }}">
<link rel="alternate" type="application/rss+xml" title="{{ site.title }}" href="{{ "/feed.xml" | prepend: site.baseurl | prepend: site.url | relative_url }}">
{% include fb_tags.html %}
<script type="application/ld+json">
{% include metadata.json %}
</script>
<link rel="stylesheet" href="{{ site.css_url | relative_url }}/styles.css">
<!-- <link rel="manifest" href="/manifest.json"> -->
<!-- <link rel="mask-icon" href="/safari-pinned-tab.svg" color="#efae0a"> -->
<meta name="msapplication-TileColor" content="#da532c">
<meta name="msapplication-TileImage" content="/mstile-144x144.png">
<meta name="theme-color" content="#233947">
<!-- Favicon -->
<link rel="shortcut icon" type="image/x-icon" href="{{ site.favicon_path | relative_url }}">
<!-- MathJax Config -->
{% include mathjax.html %}
<!-- DOM updating function -->
<script src="{{ site.js_url | relative_url }}/page/dom-update.js"></script>
<!-- Selectors for elements on the page -->
<script src="{{ site.js_url | relative_url }}/page/documentSelectors.js"></script>
<!-- Define some javascript variables that will be useful in other javascript -->
<script>
const site_basename = '{{ site.baseurl | strip / }}';
</script>
<!-- Add AnchorJS to let headers be linked -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/anchor-js/4.2.0/anchor.min.js" async></script>
<script src="{{ site.js_url | relative_url }}/page/anchors.js" async></script>
<!-- Include Turbolinks to make page loads fast -->
<!-- https://github.com/turbolinks/turbolinks -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/turbolinks/5.2.0/turbolinks.js" async></script>
<meta name="turbolinks-cache-control" content="no-cache">
<!-- Load nbinteract for widgets -->
{% include js/nbinteract.html %}
<!-- Load Thebelab for interactive widgets -->
{% include js/thebelab.html %}
<!-- Load the auto-generating TOC (non-async otherwise the TOC won't load w/ turbolinks) -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/tocbot/4.8.1/tocbot.min.js" async></script>
<script src="{{ site.js_url | relative_url }}/page/tocbot.js"></script>
<!-- Google analytics -->
{% include google_analytics.html %}
<!-- Clipboard copy button -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/clipboard.js/2.0.4/clipboard.min.js" async></script>
<!-- Load custom website scripts -->
<script src="{{ site.js_url | relative_url }}/scripts.js" async></script>
<!-- Load custom user CSS and JS -->
<script src="{{ site.custom_static_url | relative_url }}/custom.js" async></script>
<link rel="stylesheet" href="{{ site.custom_static_url | relative_url }}/custom.css">
<!-- Update interact links w/ REST param, is defined in includes so we can use templates -->
{% include js/interact-update.html %}
<!-- Lunr search code - will only be executed on the /search page -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/lunr.js/2.3.6/lunr.min.js" async></script>
<script>{% include search/lunr/lunr-en.js %}</script>
<!-- Load JS that depends on site variables -->
<script src="{{ site.js_url | relative_url }}/page/copy-button.js" async></script>
<!-- Hide cell code -->
<script src="{{ site.js_url | relative_url }}/page/hide-cell.js" async></script>
<!-- Printing the screen -->
{% include js/print.html %}
</head>


@@ -0,0 +1,142 @@
{% if site.use_jupyterhub_button or site.use_binder_button %}
<script>
/**
* To auto-embed hub URLs in interact links if given in a RESTful fashion
*/
function getJsonFromUrl(url) {
var query = url.split('?');
if (query.length < 2) {
// No queries so just return false
return false;
}
query = query[1];
// Collect REST params into a dictionary
var result = {};
query.split("&").forEach(function(part) {
var item = part.split("=");
result[item[0]] = decodeURIComponent(item[1]);
});
return result;
}
function dict2param(dict) {
params = Object.keys(dict).map(function(k) {
return encodeURIComponent(k) + '=' + encodeURIComponent(dict[k])
});
return params.join('&')
}
// Parse a Binder URL, converting it to the string needed for JupyterHub
function binder2Jupyterhub(url) {
newUrl = {};
parts = url.split('v2/gh/')[1];
// Grab the base repo information
repoinfo = parts.split('?')[0];
var [org, repo, ref] = repoinfo.split('/');
newUrl['repo'] = ['https://github.com', org, repo].join('/');
newUrl['branch'] = ref
// Grab extra parameters passed
params = getJsonFromUrl(url);
if (params['filepath'] !== undefined) {
newUrl['subPath'] = params['filepath']
}
return dict2param(newUrl);
}
// Filter out potentially unsafe characters to prevent xss
function safeUrl(url)
{
return String(encodeURIComponent(url))
.replace(/&/g, '&amp;')
.replace(/"/g, '&quot;')
.replace(/'/g, '&#39;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;');
}
function addParamToInternalLinks(hub) {
var links = document.querySelectorAll("a").forEach(function(link) {
var href = link.href;
// If the link is an internal link...
if (href.search("{{ site.url }}") !== -1 || href.startsWith('/') || href.search("127.0.0.1:") !== -1) {
// Assume we're an internal link, add the hub param to it
var params = getJsonFromUrl(href);
if (params !== false) {
// We have REST params, so append a new one
params['jupyterhub'] = hub;
} else {
// Create the REST params
params = {'jupyterhub': hub};
}
// Update the link
var newHref = href.split('?')[0] + '?' + dict2param(params);
link.setAttribute('href', decodeURIComponent(newHref));
}
});
return false;
}
// Update interact links
function updateInteractLink() {
// hack to make this work since it expects a ? in the URL
rest = getJsonFromUrl("?" + location.search.substr(1));
jupyterHubUrl = rest['jupyterhub'];
var hubType = null;
var hubUrl = null;
if (jupyterHubUrl !== undefined) {
hubType = 'jupyterhub';
hubUrl = jupyterHubUrl;
}
if (hubType !== null) {
// Sanitize the hubUrl
hubUrl = safeUrl(hubUrl);
// Add HTTP text if omitted
if (hubUrl.indexOf('http') < 0) {hubUrl = 'http://' + hubUrl;}
var interactButtons = document.querySelectorAll("button.interact-button")
var lastButton = interactButtons[interactButtons.length-1];
var link = lastButton.parentElement;
// If we've already run this, skip the link updating
if (link.nextElementSibling !== null) {
return;
}
// Update the link and add context div
var href = link.getAttribute('href');
if (lastButton.id === 'interact-button-binder') {
// If binder links exist, we need to re-work them for jupyterhub
if (hubUrl.indexOf('http%3A%2F%2Flocalhost') > -1) {
// If localhost, assume we're working from a local Jupyter server and remove `/hub`
first = [hubUrl, 'git-sync'].join('/')
} else {
first = [hubUrl, 'hub', 'user-redirect', 'git-sync'].join('/')
}
href = first + '?' + binder2Jupyterhub(href);
} else {
// If interact button isn't binderhub, assume it's jupyterhub
// If JupyterHub links, we only need to replace the hub url
href = href.replace("{{ site.jupyterhub_url }}", hubUrl);
if (hubUrl.indexOf('http%3A%2F%2Flocalhost') > -1) {
// Assume we're working from a local Jupyter server and remove `/hub`
href = href.replace("/hub/user-redirect", "");
}
}
link.setAttribute('href', decodeURIComponent(href));
// Add text after interact link saying where we're launching
hubUrlNoHttp = decodeURIComponent(hubUrl).replace('http://', '').replace('https://', '');
link.insertAdjacentHTML('afterend', '<div class="interact-context">on ' + hubUrlNoHttp + '</div>');
// Update internal links so we retain the hub url
addParamToInternalLinks(hubUrl);
}
}
runWhenDOMLoaded(updateInteractLink)
document.addEventListener('turbolinks:load', updateInteractLink)
</script>
{% endif %}


@@ -0,0 +1,33 @@
{% if site.use_show_widgets_button and page.has_widgets %}
<!-- Include nbinteract for interactive widgets -->
<script src="https://unpkg.com/nbinteract-core" async></script>
<script>
let interact
const initializeNbinteract = () => {
// If NbInteract hasn't loaded, wait one second and try again
if (window.NbInteract === undefined) {
setTimeout(initializeNbinteract, 1000)
return
}
if (interact === undefined) {
console.log('Initializing nbinteract...')
interact = new window.NbInteract({
baseUrl: 'https://mybinder.org',
spec: '{{ site.binder_repo_org }}/{{ site.binder_repo_name }}/{{ site.binder_repo_branch }}',
provider: 'gh',
})
window.interact = interact
} else {
console.log("nbinteract already initialized...")
}
interact.prepare()
}
// Initialize nbinteract
initFunction(initializeNbinteract);
</script>
{% endif %}
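`initializeNbinteract` above uses a poll-and-retry pattern: if the async `nbinteract-core` script has not defined `window.NbInteract` yet, it reschedules itself. The same pattern as a generic helper (names here are illustrative, not part of nbinteract's API):

```javascript
// Retry until `isReady` passes, then run `onReady`. Mirrors the
// setTimeout-based polling in initializeNbinteract, with an added
// attempt cap so a script that never loads cannot retry forever.
function whenReady(isReady, onReady, intervalMs = 1000, attemptsLeft = 10) {
  if (isReady()) {
    onReady()
  } else if (attemptsLeft > 0) {
    setTimeout(() => whenReady(isReady, onReady, intervalMs, attemptsLeft - 1), intervalMs)
  }
}
```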

docs/_includes/js/print.html Executable file

@@ -0,0 +1,32 @@
<!-- Include print.js for printable/PDF output -->
<script src="https://printjs-4de6.kxcdn.com/print.min.js" async></script>
<script>
printContent = () => {
// MathJax displays a second version of any math for assistive devices etc.
// This prevents double-rendering in the PDF output.
    var ignoreAssistList = [];
    document.querySelectorAll('.MathJax_Display span.MJX_Assistive_MathML').forEach((element, index) => {
      var thisId = 'MathJax-assistive-' + index.toString();
      element.setAttribute('id', thisId);
      ignoreAssistList.push(thisId);
    });
// Print the actual content object
printJS({
printable: 'textbook_content',
type: 'html',
css: "{{ site.css_url | relative_url }}/styles.css",
style: "#textbook_content {padding-top: 40px};",
scanStyles: false,
targetStyles: ["*"],
ignoreElements: ignoreAssistList,
documentTitle: "{{ site.download_page_header }}"
})
};
initPrint = () => {
document.querySelector('#interact-button-print').addEventListener('click', printContent)
}
initFunction(initPrint)
</script>
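The loop above assigns each assistive MathJax span a predictable id and collects those ids for print.js, whose `ignoreElements` option takes a list of element ids to skip. The id-assignment step as a standalone function (illustrative name, not part of the file):

```javascript
// Give each assistive MathJax node a predictable id and return the list
// of ids, so the printer can exclude them and avoid double-rendered math.
function collectAssistiveIds(elements) {
  const ignoreAssistList = []
  elements.forEach((element, index) => {
    const thisId = 'MathJax-assistive-' + index.toString()
    element.setAttribute('id', thisId)
    ignoreAssistList.push(thisId)
  })
  return ignoreAssistList
}
```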

@@ -0,0 +1,27 @@
{% if site.use_thebelab_button -%}
<script>
/**
* Set up thebelab button for code blocks
*/
const thebelabCellButton = id =>
`<a id="thebelab-cell-button-${id}" class="btn thebebtn o-tooltip--left" data-tooltip="Interactive Mode">
<img src="{{ site.images_url | relative_url }}/edit-button.svg" alt="Start thebelab interactive mode">
</a>`
const addThebelabButtonToCodeCells = () => {
const codeCells = document.querySelectorAll('div.input_area > div.highlight:not(.output) pre')
codeCells.forEach((codeCell, index) => {
const id = codeCellId(index)
codeCell.setAttribute('id', id)
if (document.getElementById("thebelab-cell-button-" + id) == null) {
codeCell.insertAdjacentHTML('afterend', thebelabCellButton(id));
}
})
}
initFunction(addThebelabButtonToCodeCells);
</script>
{% endif %}
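The `getElementById` check above makes the augmentation idempotent: re-running it (for example after a turbolinks page restore) adds at most one button per code cell. A minimal standalone sketch of that guard, with a Map standing in for the DOM lookup (names are illustrative):

```javascript
// Insert a button for `id` only if one is not already registered.
// `registry` plays the role of document.getElementById in the real code.
function addButtonOnce(registry, id, makeButton) {
  const buttonId = 'thebelab-cell-button-' + id
  if (!registry.has(buttonId)) {
    registry.set(buttonId, makeButton(buttonId))
  }
  return registry.size
}
```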

@@ -0,0 +1,32 @@
<script type="text/x-thebe-config">
{%- if page.kernel_name %}
{% assign kernelName = page.kernel_name %}
{% else %}
{% assign kernelName = "python3" %}
{% endif -%}
{%- if page.kernel_name %}
{% if page.kernel_name contains "python" %}
{% assign cm_language="python" %}
{% else %}
      {% assign cm_language = page.kernel_name %}
{% endif %}
{% else %}
{% assign cm_language="python" %}
{% endif -%}
{
requestKernel: true,
binderOptions: {
repo: "{{ site.binder_repo_org }}/{{ site.binder_repo_name }}",
ref: "{{ site.binder_repo_branch }}",
},
codeMirrorConfig: {
theme: "{{ site.codemirror_theme }}",
mode: "{{ cm_language }}"
},
kernelOptions: {
kernelName: "{{ kernelName }}",
path: "{{ page.kernel_path }}"
}
}
</script>
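The Liquid tags above implement a two-level fallback in the template layer: the kernel name defaults to `python3`, and the CodeMirror mode is `python` for any python-flavored kernel, otherwise the kernel name itself. The same logic as a plain function, for illustration only:

```javascript
// Mirror of the Liquid fallback logic in the thebe config template.
function thebeDefaults(kernelName) {
  const name = kernelName || 'python3'
  // Any kernel name containing "python" maps CodeMirror to its python
  // mode; otherwise the kernel name is used as the mode directly.
  const cmMode = name.indexOf('python') > -1 ? 'python' : name
  return { kernelName: name, cmMode }
}
```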
