HELK 6.2.2 - 022518

HELK Design
+ Moved everything to a docker-compose approach for a more modular design.
+ Separated the HELK into 3 services:
++ helk-elk, helk-kafka, helk-analytics
+ Updated Design picture to show WEF ideas and also show Jupyter Lab integrations.

HELK Docker-Compose
+ Added an ESDATA volume to keep logs after containers get stopped
+ Services restart automatically after reboot
+ Created a blank env file for the Kafka service. This allows the host to pass its own local IP to Kafka, which is needed for the advertised listener configs on each broker.
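The env-file idea above can be sketched as follows. This is a minimal sketch, not the actual HELK script: the variable name KAFKA_BROKER_IP is an assumption — check the helk-kafka image for the name it actually reads.

```shell
# Populate the blank env file with the host's first local IP so Kafka
# brokers can advertise a listener address reachable from outside docker.
# (KAFKA_BROKER_IP is a hypothetical variable name for illustration.)
HOST_IP=$(hostname -I 2>/dev/null | awk '{print $1}')
echo "KAFKA_BROKER_IP=${HOST_IP:-127.0.0.1}" > helk.env
cat helk.env
```

docker-compose then injects this file into the Kafka container via the `env_file: ./helk.env` entry shown in the compose file below.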

HELK-ELK Version
- Updated to 6.2.2

Elasticsearch
- Added the local docker network as part of the network.host option. This allows the HELK-ELK service to publish its docker-local IP to the other services/images in the docker-compose environment.
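A minimal elasticsearch.yml sketch of this idea — the exact values HELK uses are not shown in this commit, so `_local_`/`_site_` are assumptions based on Elasticsearch's special network.host values:

```yaml
# elasticsearch.yml — sketch only; _local_ binds loopback, _site_ binds the
# site-local (docker bridge) address so other compose services can reach it
network.host: [_local_, _site_]
```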

Logstash
+ Minimal updates to certain configs (mainly renaming files and replacing certain strings)

Kibana
+ enableExternalUrls set to true for Vega visualizations that need external libraries.
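In kibana.yml this corresponds to the Vega plugin setting below (setting name per Kibana 6.x; verify against the image's config):

```yaml
# kibana.yml — allow Vega visualizations to load data/libraries from external URLs
vega.enableExternalUrls: true
```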

Spark - Analytics
+ Renamed service to Analytics
+ Integrated Apache Toree to allow Scala kernel in Jupyter
+ Pyspark, Scala and SQL are now available in Jupyter

Jupyter
+ JupyterLab has been enabled
keyword-vs-text-changes
Roberto Rodriguez 2018-02-25 02:59:44 -05:00
parent d623246f4c
commit 063e5835ec
74 changed files with 877 additions and 513 deletions

@ -44,10 +44,10 @@ The project is currently in an alpha stage, which means that the code and the fu
* [Spark](https://github.com/Cyb3rWard0g/HELK/wiki/Spark)
* [Installation](https://github.com/Cyb3rWard0g/HELK/wiki/Installation)
## (Docker) Accessing the HELK's container
By default, the HELK's container is run in the background. Therefore, you will have to access your docker container by running the following commands:
## (Docker) Accessing the HELK's Containers
By default, the HELK's containers run in the background (detached). Therefore, you will have to access your docker containers by running the following commands:
```
sudo docker exec -ti helk bash
sudo docker exec -ti <container-name> bash
root@7a9d6443a4bf:/opt/helk/scripts#
```
@ -71,12 +71,10 @@ There are a few things that I would like to accomplish with the HELK as shown in
- [X] Add Jupyter Notebook on the top of Spark
- [X] Kafka Integration
- [ ] Create Jupyter Notebooks showing how to use Spark & GraphFrames
- [ ] Enhance elasticsearch configuration to make it more scalable
- [ ] MITRE ATT&CK mapping to logs or dashboards
- [ ] Cypher for Apache Spark Integration (Might have to switch from Jupyter to Zeppelin Notebook)
- [ ] Somehow integrate neo4j spark connectors with build
- [ ] Install Elastalert
- [ ] Create Elastalert rules
- [ ] Install Elastalert & Create Rules
- [ ] Nxlog parsers (Logstash Filters)
- [ ] Add more network data sources (e.g., Bro)

docker-compose.yml Normal file
@ -0,0 +1,68 @@
version: '3.2'
services:
helk-elk:
image: cyb3rward0g/helk-elk:6.2.2
container_name: helk-elk
volumes:
- esdata:/var/lib/elasticsearch
environment:
- bootstrap.memory_lock=true
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "80:80"
- "5044:5044"
- "9000:9000"
- "8082:8082"
restart: always
networks:
helk:
ipv4_address: 172.18.0.2
aliases:
- helk_elk.hunt.local
helk-kafka:
image: cyb3rward0g/helk-kafka:1.0.0
container_name: helk-kafka
env_file: ./helk.env
ports:
- "2181:2181"
- "9092:9092"
- "9093:9093"
- "9094:9094"
restart: always
depends_on:
- helk-elk
networks:
helk:
ipv4_address: 172.18.0.3
aliases:
- helk_kafka.hunt.local
helk-analytics:
image: cyb3rward0g/helk-analytics:0.0.1
container_name: helk-analytics
ports:
- "8880:8880"
- "4040:4040"
restart: always
depends_on:
- helk-elk
networks:
helk:
ipv4_address: 172.18.0.4
aliases:
- helk_analytics.hunt.local
networks:
helk:
driver: bridge
ipam:
config:
- subnet: 172.18.0.0/16
volumes:
esdata:
driver: local

helk-analytics/Dockerfile Normal file
@ -0,0 +1,98 @@
# HELK script: HELK Analytics Dockerfile
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
FROM phusion/baseimage
LABEL maintainer="Roberto Rodriguez @Cyb3rWard0g"
LABEL description="Dockerfile base for HELK Analytics."
ENV DEBIAN_FRONTEND noninteractive
# *********** Installing Prerequisites ***************
# -qq : No output except for errors
RUN echo "[HELK-DOCKER-INSTALLATION-INFO] Updating Ubuntu base image.." \
&& apt-get update -qq \
&& echo "[HELK-DOCKER-INSTALLATION-INFO] Extracting templates from packages.." \
&& apt-get install -qqy \
openjdk-8-jre-headless \
wget \
sudo \
nano \
python3-pip \
python-tk \
unzip \
zip \
locales
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
RUN apt-get -qy clean \
autoremove \
&& rm -rf /var/lib/apt/lists/*
# *********** Upgrading PIP ***************
RUN pip3 install --upgrade pip
# *********** Installing HELK python packages ***************
RUN pip3 install \
pandas==0.22.0 \
jupyter \
jupyterhub==0.8.1 \
jupyterlab==0.31.8 \
https://dist.apache.org/repos/dist/dev/incubator/toree/0.2.0/snapshots/dev1/toree-pip/toree-0.2.0.dev1.tar.gz
RUN pip3 install scipy==1.0.0 \
scikit-learn==0.19.1 \
nltk==3.2.5 \
matplotlib==2.1.2 \
seaborn==0.8.1 \
datasketch==1.2.5 \
keras==2.1.3 \
pyflux==0.4.15 \
imbalanced-learn==0.3.2 \
lime==0.1.1.29 \
bokeh==0.12.14
# *********** Creating the right directories ***************
RUN bash -c 'mkdir -pv /opt/helk/{scripts,training,es-hadoop,spark,packages}'
# *********** Adding HELK scripts and files to Container ***************
ADD scripts/analytics-entrypoint.sh /opt/helk/scripts/
RUN chmod +x /opt/helk/scripts/analytics-entrypoint.sh
ADD training/ /opt/helk/training/
# *********** Install ES-Hadoop ***************
RUN wget https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-6.2.2.zip -P /opt/helk/es-hadoop/ \
&& unzip /opt/helk/es-hadoop/*.zip -d /opt/helk/es-hadoop/ \
&& rm /opt/helk/es-hadoop/*.zip
# *********** Install Spark ***************
ENV ANALYTIC_LOGS_PATH=/var/log/analytics
RUN wget -qO- http://mirrors.gigenet.com/apache/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz | sudo tar xvz -C /opt/helk/spark/ \
&& mkdir -v $ANALYTIC_LOGS_PATH
ADD spark/.bashrc ~/.bashrc
ADD spark/log4j.properties /opt/helk/spark/spark-2.2.1-bin-hadoop2.7/conf/
ADD spark/spark-defaults.conf /opt/helk/spark/spark-2.2.1-bin-hadoop2.7/conf/
ADD analytics-init /etc/init.d/analytics
# Adding SPARK environment variables
ENV SPARK_HOME=/opt/helk/spark/spark-2.2.1-bin-hadoop2.7
ENV PATH=$SPARK_HOME/bin:$PATH
#ENV SPARK_OPTS="--driver-java-options=-Xms1024M --driver-java-options=-Xmx2096M --driver-java-options=-Dlog4j.logLevel=info --master=local[4]"
ENV SPARK_OPTS="--master local[*]"
ENV PYSPARK_PYTHON=/usr/bin/python3
ENV PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter
ENV PYSPARK_DRIVER_PYTHON_OPTS="lab --no-browser --ip=* --port=8880 --allow-root"
# *********** Attaching Toree Kernel to Jupyter ***************
RUN jupyter toree install --spark_home=$SPARK_HOME --interpreters=Scala,SQL
# *********** Update Jupyter PySpark Kernel *************
ADD jupyter/pyspark_kernel.json /usr/local/share/jupyter/kernels/python3/kernel.json
# *********** RUN HELK ***************
EXPOSE 4040 8880
WORKDIR "/opt/helk/scripts/"
ENTRYPOINT ["./analytics-entrypoint.sh"]

@ -1,21 +1,21 @@
#!/bin/bash
# Init script for spark
# /etc/init.d/analytics -- startup script for Analytics
# Maintained by Roberto Rodriguez @Cyb3rWard0g
# Reference:
# https://github.com/elastic/logstash/blob/master/distribution/rpm/src/main/packaging/init.d/logstash
# https://github.com/spujadas/elk-docker/blob/master/logstash-init
### BEGIN INIT INFO
# Provides: spark
# Provides: analytics
# Required-Start:
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: spark service
# Short-Description: analytic service
### END INIT INFO
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
NAME=spark
NAME=analytics
DEFAULT=/etc/default/$NAME
export PATH
@ -32,15 +32,15 @@ if [ -r /etc/default/rcS ]; then
fi
SPARK_HOME=/opt/helk/spark/spark-2.2.1-bin-hadoop2.7
SPARK_CONSOLE_PYSPARK_LOG=/var/log/spark/spark_pyspark.log
SPARK_EXEC=$SPARK_HOME/bin/pyspark
SPARK_CONFIG="2>&1 >> $SPARK_CONSOLE_PYSPARK_LOG 2>&1"
SPARK_USER=root
SPARK_GROUP=root
SPARK_NICE=""
SERVICE_NAME="spark"
SERVICE_DESCRIPTION="spark"
SPARK_PIDFILE=/var/run/spark.pid
ANALYTIC_CONSOLE_LOG=/var/log/analytics/analytics.log
ANALYTIC_EXEC=$SPARK_HOME/bin/pyspark
ANALYTIC_OPTS=">> $ANALYTIC_CONSOLE_LOG 2>&1"
ANALYTIC_USER=root
ANALYTIC_GROUP=root
ANALYTIC_NICE=""
SERVICE_NAME="analytics"
SERVICE_DESCRIPTION="analytics"
ANALYTIC_PIDFILE=/var/run/analytics.pid
# End of variables that can be overwritten in $DEFAULT
@ -49,39 +49,37 @@ if [ -f "$DEFAULT" ]; then
. "$DEFAULT"
fi
# Adding SPARK location
export SPARK_HOME=/opt/helk/spark/spark-2.2.1-bin-hadoop2.7
# Spark Variables
export PATH=$SPARK_HOME/bin:$PATH
# Adding Jupyter Notebook Integration
export SPARK_OPTS="--master local[*]"
export PYSPARK_PYTHON=/usr/bin/python3
export PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=8880 --allow-root"
export PYSPARK_PYTHON=/usr/bin/python
export PYSPARK_DRIVER_PYTHON_OPTS="lab --no-browser --ip=* --port=8880 --allow-root"
[ -z "$SPARK_NICE" ] && SPARK_NICE=0
[ -z "$ANALYTIC_NICE" ] && ANALYTIC_NICE=0
if [ ! -x "$SPARK_EXEC" ]; then
echo "The spark startup script does not exists or it is not executable, tried: $SPARK_EXEC"
if [ ! -x "$ANALYTIC_EXEC" ]; then
echo "The analytics startup script does not exist or is not executable, tried: $ANALYTIC_EXEC"
exit 1
fi
start() {
echo "Starting $NAME"
if [ -n "$SPARK_PIDFILE" ] && [ ! -e "$SPARK_PIDFILE" ]; then
touch "$SPARK_PIDFILE" && chown $SPARK_USER:$SPARK_GROUP "$SPARK_PIDFILE"
if [ -n "$ANALYTIC_PIDFILE" ] && [ ! -e "$ANALYTIC_PIDFILE" ]; then
touch "$ANALYTIC_PIDFILE" && chown $ANALYTIC_USER:$ANALYTIC_GROUP "$ANALYTIC_PIDFILE"
fi
# Start Service
nice -n$SPARK_NICE chroot --userspec $SPARK_USER:$SPARK_GROUP / sh -c "
nice -n$ANALYTIC_NICE chroot --userspec $ANALYTIC_USER:$ANALYTIC_GROUP / sh -c "
cd /opt/helk
exec $SPARK_EXEC $SPARK_CONFIG
exec $ANALYTIC_EXEC $SPARK_OPTS $ANALYTIC_OPTS
" &
# Generate the pidfile from here. If we instead made the forked process
# generate it there will be a race condition between the pidfile writing
# and a process possibly asking for status.
echo $! > $SPARK_PIDFILE
echo $! > $ANALYTIC_PIDFILE
echo "$NAME started."
return 0
@ -90,7 +88,7 @@ start() {
stop() {
# Try a few times to kill TERM the program
if status; then
pid=$(cat "$SPARK_PIDFILE")
pid=$(cat "$ANALYTIC_PIDFILE")
echo "Killing $NAME (pid $pid) with SIGTERM"
kill -TERM $pid
# Wait for it to exit.
@ -103,14 +101,14 @@ stop() {
echo "$NAME stop failed; still running."
else
echo "$NAME stopped."
rm -f $SPARK_PIDFILE
rm -f $ANALYTIC_PIDFILE
fi
fi
}
status() {
if [ -f "$SPARK_PIDFILE" ] ; then
pid=$(cat "$SPARK_PIDFILE")
if [ -f "$ANALYTIC_PIDFILE" ] ; then
pid=$(cat "$ANALYTIC_PIDFILE")
if kill -0 $pid > /dev/null 2> /dev/null; then
# process by this pid is running.
# It may not be our pid, but that's what you get with just pidfiles.
@ -129,8 +127,8 @@ status() {
force_stop() {
if status; then
stop
status && kill -KILL $(cat "$SPARK_PIDFILE")
rm -f $SPARK_PIDFILE
status && kill -KILL $(cat "$ANALYTIC_PIDFILE")
rm -f $ANALYTIC_PIDFILE
fi
}
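The pidfile-based status check that the init script above relies on can be sketched on its own. This is a simplified illustration of the same pattern, not the script's exact code:

```shell
# Return success if the process recorded in the pidfile is still alive.
# kill -0 sends no signal; it only checks that the PID exists.
status_pidfile() {
  pidfile="$1"
  [ -f "$pidfile" ] || return 1
  pid=$(cat "$pidfile")
  kill -0 "$pid" 2>/dev/null
}

# demo: record the current shell's PID, then check it
echo $$ > /tmp/demo.pid
status_pidfile /tmp/demo.pid && echo "running"
```

As the script's own comment notes, the pidfile is written by the parent rather than the forked process to avoid a race between the write and an early status query.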

@ -0,0 +1,11 @@
{
"argv": [
"python3",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "PySpark",
"language": "python"
}

@ -0,0 +1,24 @@
#!/bin/sh
# HELK script: analytics-entrypoint.sh
# HELK script description: Restart HELK Analytic services
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
# Start graceful termination of HELK services that might be running before running the entrypoint script.
_term() {
echo "Terminating HELK analytics services"
service analytics stop
exit 0
}
trap _term SIGTERM
# Removing PID files just in case the graceful termination fails
rm -f /var/run/analytics.pid
echo "[HELK-DOCKER-INSTALLATION-INFO] Starting analytic services.."
service analytics start
sleep 5
echo "[HELK-DOCKER-INSTALLATION-INFO] Pushing analytic Logs to console.."
tail -f /var/log/analytics/analytics.log

@ -117,10 +117,10 @@ if ! shopt -oq posix; then
fi
# Adding SPARK location
export SPARK_HOME=/opt/helk/spark/spark-2.2.1-bin-hadoop2.7
export PATH=$SPARK_HOME/bin:$PATH
#export SPARK_HOME=/opt/helk/spark/spark-2.2.1-bin-hadoop2.7
#export PATH=$SPARK_HOME/bin:$PATH
# Adding Jupyter Notebook Integration
export PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=8880 --allow-root"
export PYSPARK_PYTHON=/usr/bin/python
#export PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter
#export PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=8880 --allow-root"
#export PYSPARK_PYTHON=/usr/bin/python

@ -31,6 +31,6 @@
# https://graphframes.github.io/quick-start.html
# https://spark-packages.org/package/graphframes/graphframes
spark.jars /opt/helk/es-hadoop/elasticsearch-hadoop-6.2.0/dist/elasticsearch-hadoop-6.2.0.jar
spark.jars /opt/helk/es-hadoop/elasticsearch-hadoop-6.2.2/dist/elasticsearch-hadoop-6.2.2.jar
spark.jars.packages graphframes:graphframes:0.5.0-spark2.1-s_2.11,org.apache.spark:spark-sql-kafka-0-10_2.11:2.2.1,databricks:spark-sklearn:0.2.3
spark.python.profile true
#spark.python.profile true

@ -31,7 +31,7 @@
" <div>\n",
" <p><b>SparkContext</b></p>\n",
"\n",
" <p><a href=\"http://192.168.1.243:4040\">Spark UI</a></p>\n",
" <p><a href=\"http://172.18.0.4:4040\">Spark UI</a></p>\n",
"\n",
" <dl>\n",
" <dt>Version</dt>\n",
@ -66,45 +66,49 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(u'EyaM12ABGZB0cH7uy-kS',\n",
" {u'@timestamp': u'2018-01-08T20:19:23.195Z',\n",
"(u'2053487453',\n",
" {u'@meta': {u'log': {u'timestamp': u'2018-02-20T17:16:29.294Z'}},\n",
" u'@timestamp': u'2018-02-20T17:16:29.299Z',\n",
" u'@version': u'1',\n",
" u'beat': {u'hostname': u'WD-HR001',\n",
" u'name': u'WD-HR001',\n",
" u'version': u'6.0.0'},\n",
" u'computer_name': u'WD-HR001.wardog.com',\n",
" u'event': {u'creationtime': {u'utc': u'2018-01-08 20:19:22.978'}},\n",
" u'event_id': 9,\n",
" u'host': u'WD-HR001',\n",
" u'action': u'processaccess',\n",
" u'beat': {u'hostname': u'DESKTOP-29DJI4T',\n",
" u'name': u'DESKTOP-29DJI4T',\n",
" u'version': u'6.1.2'},\n",
" u'computer_name': u'DESKTOP-29DJI4T',\n",
" u'event_id': 10,\n",
" u'level': u'Information',\n",
" u'log_name': u'Microsoft-Windows-Sysmon/Operational',\n",
" u'opcode': u'Info',\n",
" u'process': {u'guid': u'{DBA5A4A0-2F96-5A50-0000-00106D560100}',\n",
" u'id': 1428,\n",
" u'name': u'C:\\\\Windows\\\\System32\\\\svchost.exe'},\n",
" u'process_id': 2216,\n",
" u'process': {u'calltrace': u'C:\\\\WINDOWS\\\\SYSTEM32\\\\ntdll.dll+a0344|C:\\\\WINDOWS\\\\System32\\\\KERNELBASE.dll+3dc5d|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+ec56d|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+ec490|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+6a4fdd|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+6a6b42|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+69e6da|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+69caa0|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+202a6|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+77f76e|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+471b|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+2b5e|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+98d9c|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+11125f|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\Definition Updates\\\\{14FF058F-73CB-47DF-835D-8B578620CD35}\\\\mpengine.dll+110ef6|C:\\\\ProgramData\\\\Microsoft\\\\Windows 
Defender\\\\platform\\\\4.12.17007.18011-0\\\\mpsvc.dll+117f8|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\platform\\\\4.12.17007.18011-0\\\\mprtp.dll+12113|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\platform\\\\4.12.17007.18011-0\\\\mprtp.dll+33318|C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\platform\\\\4.12.17007.18011-0\\\\mpclient.dll+7cb40|C:\\\\WINDOWS\\\\SYSTEM32\\\\ntdll.dll+362a1|C:\\\\WINDOWS\\\\SYSTEM32\\\\ntdll.dll+346fe|C:\\\\WINDOWS\\\\System32\\\\KERNEL32.DLL+11fe4|C:\\\\WINDOWS\\\\SYSTEM32\\\\ntdll.dll+6efc1',\n",
" u'grantedaccess': u'0x1000',\n",
" u'guid': u'{A98268C1-E251-5A83-0000-0010F50A0200}',\n",
" u'id': 2120,\n",
" u'path': u'C:\\\\ProgramData\\\\Microsoft\\\\Windows Defender\\\\platform\\\\4.12.17007.18011-0\\\\MsMpEng.exe',\n",
" u'target': {u'guid': u'{A98268C1-E251-5A83-0000-0010B4120200}',\n",
" u'id': 2152,\n",
" u'path': u'C:\\\\Program Files\\\\winlogbeat\\\\winlogbeat.exe'},\n",
" u'threadid': 3900},\n",
" u'process_id': 1896,\n",
" u'provider_guid': u'{5770385F-C22A-43E0-BF4C-06F5698FFBD9}',\n",
" u'rawaccess': {u'read': {u'device': u'\\\\Device\\\\HarddiskVolume2'}},\n",
" u'record_number': u'1006036',\n",
" u'record_number': u'9876601',\n",
" u'source_name': u'Microsoft-Windows-Sysmon',\n",
" u'subject': {u'user': {u'domain': u'NT AUTHORITY',\n",
" u'name': u'SYSTEM',\n",
" u'sid': u'S-1-5-18'}},\n",
" u'tags': (u'beats_input_codec_plain_applied', u'_grokparsefailure'),\n",
" u'task': u'RawAccessRead detected (rule: RawAccessRead)',\n",
" u'thread_id': 3548,\n",
" u'task': u'Process accessed (rule: ProcessAccess)',\n",
" u'thread_id': 3460,\n",
" u'type': u'wineventlog',\n",
" u'user': {u'type': u'User'},\n",
" u'version': 2})"
" u'version': 3})"
]
},
"execution_count": 2,
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
}
@ -114,7 +118,10 @@
" inputFormatClass=\"org.elasticsearch.hadoop.mr.EsInputFormat\",\n",
" keyClass=\"org.apache.hadoop.io.NullWritable\",\n",
" valueClass=\"org.elasticsearch.hadoop.mr.LinkedMapWritable\",\n",
" conf={ \"es.resource\" : \"logs-endpoint-winevent-sysmon-*/doc\" })\n",
" conf={ \n",
" \"es.resource\" : \"logs-endpoint-winevent-sysmon-*/doc\",\n",
" \"es.nodes\" : \"172.18.0.2\"\n",
" })\n",
"es_rdd.first()"
]
},
@ -127,49 +134,45 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(u'tiZ912ABGZB0cH7uLt6P',\n",
" {u'@timestamp': u'2018-01-08T20:31:24.258Z',\n",
"(u'2852068011',\n",
" {u'@timestamp': u'2018-02-21T00:08:12.345Z',\n",
" u'@version': u'1',\n",
" u'beat': {u'hostname': u'WD-DC001',\n",
" u'name': u'WD-DC001',\n",
" u'version': u'6.1.1'},\n",
" u'computer_name': u'WD-DC001.wardog.com',\n",
" u'beat': {u'hostname': u'DESKTOP-29DJI4T',\n",
" u'name': u'DESKTOP-29DJI4T',\n",
" u'version': u'6.1.2'},\n",
" u'computer_name': u'DESKTOP-29DJI4T',\n",
" u'event_data': {},\n",
" u'event_id': 4703,\n",
" u'host': u'WD-DC001',\n",
" u'event_id': 4616,\n",
" u'keywords': (u'Audit Success',),\n",
" u'level': u'Information',\n",
" u'log_name': u'Security',\n",
" u'message': u'A token right was adjusted.\\n\\nSubject:\\n\\tSecurity ID:\\t\\tS-1-5-18\\n\\tAccount Name:\\t\\tWD-DC001$\\n\\tAccount Domain:\\t\\tWARDOG\\n\\tLogon ID:\\t\\t0x3E7\\n\\nTarget Account:\\n\\tSecurity ID:\\t\\tS-1-0-0\\n\\tAccount Name:\\t\\tWD-DC001$\\n\\tAccount Domain:\\t\\tWARDOG\\n\\tLogon ID:\\t\\t0x3E7\\n\\nProcess Information:\\n\\tProcess ID:\\t\\t0xe8\\n\\tProcess Name:\\t\\tC:\\\\Windows\\\\System32\\\\svchost.exe\\n\\nEnabled Privileges:\\n\\t\\t\\tSeAssignPrimaryTokenPrivilege\\n\\t\\t\\tSeIncreaseQuotaPrivilege\\n\\t\\t\\tSeSecurityPrivilege\\n\\t\\t\\tSeTakeOwnershipPrivilege\\n\\t\\t\\tSeLoadDriverPrivilege\\n\\t\\t\\tSeSystemtimePrivilege\\n\\t\\t\\tSeBackupPrivilege\\n\\t\\t\\tSeRestorePrivilege\\n\\t\\t\\tSeShutdownPrivilege\\n\\t\\t\\tSeSystemEnvironmentPrivilege\\n\\t\\t\\tSeUndockPrivilege\\n\\t\\t\\tSeManageVolumePrivilege\\n\\nDisabled Privileges:\\n\\t\\t\\t-',\n",
" u'message': u'The system time was changed.\\n\\nSubject:\\n\\tSecurity ID:\\t\\tS-1-5-18\\n\\tAccount Name:\\t\\tDESKTOP-29DJI4T$\\n\\tAccount Domain:\\t\\tWORKGROUP\\n\\tLogon ID:\\t\\t0x3E7\\n\\nProcess Information:\\n\\tProcess ID:\\t0x834\\n\\tName:\\t\\tC:\\\\Program Files\\\\VMware\\\\VMware Tools\\\\vmtoolsd.exe\\n\\nPrevious Time:\\t\\t\\u200e2018\\u200e-\\u200e02\\u200e-\\u200e20T17:16:32.271066000Z\\nNew Time:\\t\\t\\u200e2018\\u200e-\\u200e02\\u200e-\\u200e21T00:08:12.117000000Z\\n\\nThis event is generated when the system time is changed. It is normal for the Windows Time Service, which runs with System privilege, to change the system time on a regular basis. Other system time changes may be indicative of attempts to tamper with the computer.',\n",
" u'opcode': u'Info',\n",
" u'process': {u'id': 0, u'name': u'C:\\\\Windows\\\\System32\\\\svchost.exe'},\n",
" u'process': {u'id': 0,\n",
" u'path': u'C:\\\\Program Files\\\\VMware\\\\VMware Tools\\\\vmtoolsd.exe'},\n",
" u'process_id': 4,\n",
" u'provider_guid': u'{54849625-5478-4994-A5BA-3E3B0328C30D}',\n",
" u'record_number': u'508597',\n",
" u'record_number': u'11346',\n",
" u'source_name': u'Microsoft-Windows-Security-Auditing',\n",
" u'tags': (u'beats_input_codec_plain_applied',),\n",
" u'task': u'Token Right Adjusted Events',\n",
" u'thread_id': 4800,\n",
" u'subject': {u'logon': {u'id': u'0x3e7'},\n",
" u'user': {u'domain': u'WORKGROUP',\n",
" u'name': u'DESKTOP-29DJI4T$',\n",
" u'sid': u'S-1-5-18'}},\n",
" u'system': {u'newtime': u'2018-02-21T00:08:12.117000000Z',\n",
" u'previoustime': u'2018-02-20T17:16:32.271066000Z'},\n",
" u'task': u'Security State Change',\n",
" u'thread_id': 4300,\n",
" u'type': u'wineventlog',\n",
" u'user': {u'domain': u'WARDOG',\n",
" u'logon': {u'id': u'0x3e7'},\n",
" u'name': u'WD-DC001$',\n",
" u'sid': u'S-1-5-18',\n",
" u'target': {u'disabledprivilegelist': u'-',\n",
" u'domain': u'WARDOG',\n",
" u'enabledprivilegelist': u'SeAssignPrimaryTokenPrivilege\\n\\t\\t\\tSeIncreaseQuotaPrivilege\\n\\t\\t\\tSeSecurityPrivilege\\n\\t\\t\\tSeTakeOwnershipPrivilege\\n\\t\\t\\tSeLoadDriverPrivilege\\n\\t\\t\\tSeSystemtimePrivilege\\n\\t\\t\\tSeBackupPrivilege\\n\\t\\t\\tSeRestorePrivilege\\n\\t\\t\\tSeShutdownPrivilege\\n\\t\\t\\tSeSystemEnvironmentPrivilege\\n\\t\\t\\tSeUndockPrivilege\\n\\t\\t\\tSeManageVolumePrivilege',\n",
" u'logonid': u'0x3e7',\n",
" u'name': u'WD-DC001$',\n",
" u'sid': u'S-1-0-0'}}})"
" u'version': 1})"
]
},
"execution_count": 3,
"execution_count": 19,
"metadata": {},
"output_type": "execute_result"
}
@ -179,7 +182,10 @@
" inputFormatClass=\"org.elasticsearch.hadoop.mr.EsInputFormat\",\n",
" keyClass=\"org.apache.hadoop.io.NullWritable\",\n",
" valueClass=\"org.elasticsearch.hadoop.mr.LinkedMapWritable\",\n",
" conf={ \"es.resource\" : \"logs-endpoint-winevent-security-*/doc\" })\n",
" conf={ \n",
" \"es.resource\" : \"logs-endpoint-winevent-security-*/doc\",\n",
" \"es.nodes\" : \"172.18.0.2\"\n",
" })\n",
"es_rdd.first()"
]
},
@ -192,7 +198,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
@ -261,7 +267,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
@ -269,6 +275,7 @@
" .builder \\\n",
" .appName(\"HELK\") \\\n",
" .config(\"es.read.field.as.array.include\", \"tags\") \\\n",
" .config(\"es.nodes\",\"172.18.0.2:9200\") \\\n",
" .getOrCreate()"
]
},
@ -281,7 +288,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
@ -290,7 +297,7 @@
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 16,
"metadata": {},
"outputs": [
{
@ -307,43 +314,10 @@
" | |-- version: string (nullable = true)\n",
" |-- computer_name: string (nullable = true)\n",
" |-- destination: struct (nullable = true)\n",
" | |-- hostnameid: string (nullable = true)\n",
" | |-- ip: string (nullable = true)\n",
" | |-- port: struct (nullable = true)\n",
" | | |-- number: integer (nullable = true)\n",
" | |-- userid: string (nullable = true)\n",
" |-- event_data: struct (nullable = true)\n",
" | |-- ActiveProfile: string (nullable = true)\n",
" |-- event_id: long (nullable = true)\n",
" |-- externaldevice: struct (nullable = true)\n",
" | |-- classid: string (nullable = true)\n",
" | |-- classname: string (nullable = true)\n",
" | |-- compatibleids: string (nullable = true)\n",
" | |-- description: string (nullable = true)\n",
" | |-- id: string (nullable = true)\n",
" | |-- locationinformation: string (nullable = true)\n",
" | |-- vendorids: string (nullable = true)\n",
" |-- filtering: struct (nullable = true)\n",
" | |-- action: string (nullable = true)\n",
" | |-- calloutkey: string (nullable = true)\n",
" | |-- calloutname: string (nullable = true)\n",
" | |-- changetype: string (nullable = true)\n",
" | |-- conditions: string (nullable = true)\n",
" | |-- id: string (nullable = true)\n",
" | |-- key: string (nullable = true)\n",
" | |-- layerid: string (nullable = true)\n",
" | |-- layerkey: string (nullable = true)\n",
" | |-- layername: string (nullable = true)\n",
" | |-- name: string (nullable = true)\n",
" | |-- providerkey: string (nullable = true)\n",
" | |-- providername: string (nullable = true)\n",
" | |-- type: string (nullable = true)\n",
" | |-- weight: string (nullable = true)\n",
" |-- firewall: struct (nullable = true)\n",
" | |-- ruleattr: string (nullable = true)\n",
" | |-- ruleid: string (nullable = true)\n",
" | |-- rulename: string (nullable = true)\n",
" |-- host: string (nullable = true)\n",
" |-- impersonationlevel: string (nullable = true)\n",
" |-- keywords: string (nullable = true)\n",
" |-- level: string (nullable = true)\n",
@ -360,66 +334,20 @@
" | |-- type: string (nullable = true)\n",
" | |-- virtualaccount: string (nullable = true)\n",
" |-- message: string (nullable = true)\n",
" |-- network: struct (nullable = true)\n",
" | |-- direction: string (nullable = true)\n",
" | |-- filterrtid: string (nullable = true)\n",
" | |-- layername: string (nullable = true)\n",
" | |-- layerrtid: string (nullable = true)\n",
" | |-- protocol: string (nullable = true)\n",
" |-- object: struct (nullable = true)\n",
" | |-- access: struct (nullable = true)\n",
" | | |-- listrequested: string (nullable = true)\n",
" | | |-- maskrequested: string (nullable = true)\n",
" | | |-- reason: string (nullable = true)\n",
" | | |-- transactionid: string (nullable = true)\n",
" | |-- additionalinfo: string (nullable = true)\n",
" | |-- additionalinfo2: string (nullable = true)\n",
" | |-- handleid: string (nullable = true)\n",
" | |-- name: string (nullable = true)\n",
" | |-- newsddl: string (nullable = true)\n",
" | |-- oldsddl: string (nullable = true)\n",
" | |-- operationtype: string (nullable = true)\n",
" | |-- privilegelist: string (nullable = true)\n",
" | |-- properties: string (nullable = true)\n",
" | |-- resourceattributes: string (nullable = true)\n",
" | |-- restrictedsidcount: string (nullable = true)\n",
" | |-- server: string (nullable = true)\n",
" | |-- type: string (nullable = true)\n",
" |-- opcode: string (nullable = true)\n",
" |-- proces: struct (nullable = true)\n",
" | |-- tokenelevationtype: string (nullable = true)\n",
" |-- process: struct (nullable = true)\n",
" | |-- handleid: string (nullable = true)\n",
" | |-- id: integer (nullable = true)\n",
" | |-- mandatorylevel: string (nullable = true)\n",
" | |-- name: string (nullable = true)\n",
" | |-- parent: struct (nullable = true)\n",
" | | |-- id: integer (nullable = true)\n",
" | | |-- name: string (nullable = true)\n",
" | |-- status: string (nullable = true)\n",
" | |-- path: string (nullable = true)\n",
" | |-- target: struct (nullable = true)\n",
" | | |-- handleid: string (nullable = true)\n",
" | | |-- id: integer (nullable = true)\n",
" | |-- terminalsessionid: integer (nullable = true)\n",
" |-- process_id: long (nullable = true)\n",
" |-- provider_guid: string (nullable = true)\n",
" |-- record_number: string (nullable = true)\n",
" |-- service: struct (nullable = true)\n",
" | |-- name: string (nullable = true)\n",
" | |-- privilegelist: string (nullable = true)\n",
" | |-- ticket: struct (nullable = true)\n",
" | | |-- id: string (nullable = true)\n",
" | | |-- name: string (nullable = true)\n",
" | | |-- preauthtype: string (nullable = true)\n",
" | | |-- requested: string (nullable = true)\n",
" | | |-- status: string (nullable = true)\n",
" |-- share: struct (nullable = true)\n",
" | |-- localpath: string (nullable = true)\n",
" | |-- name: string (nullable = true)\n",
" | |-- relativetargetname: string (nullable = true)\n",
" |-- source: struct (nullable = true)\n",
" | |-- hostname: string (nullable = true)\n",
" | |-- hostnameinfo: string (nullable = true)\n",
" | |-- ip: string (nullable = true)\n",
" | |-- port: struct (nullable = true)\n",
" | | |-- number: integer (nullable = true)\n",
@ -434,25 +362,11 @@
" |-- system: struct (nullable = true)\n",
" | |-- newtime: timestamp (nullable = true)\n",
" | |-- previoustime: timestamp (nullable = true)\n",
" |-- tags: array (nullable = true)\n",
" | |-- element: string (containsNull = true)\n",
" |-- task: string (nullable = true)\n",
" |-- task_name: string (nullable = true)\n",
" |-- task_newcontent: string (nullable = true)\n",
" |-- thread_id: long (nullable = true)\n",
" |-- ticket: struct (nullable = true)\n",
" | |-- encryptiontype: string (nullable = true)\n",
" | |-- options: string (nullable = true)\n",
" |-- type: string (nullable = true)\n",
" |-- user: struct (nullable = true)\n",
" | |-- access: struct (nullable = true)\n",
" | | |-- reason: string (nullable = true)\n",
" | |-- domain: string (nullable = true)\n",
" | |-- explicit: struct (nullable = true)\n",
" | | |-- domain: string (nullable = true)\n",
" | | |-- logonguid: string (nullable = true)\n",
" | | |-- name: string (nullable = true)\n",
" | |-- groupmembership: string (nullable = true)\n",
" | |-- logon: struct (nullable = true)\n",
" | | |-- guid: string (nullable = true)\n",
" | | |-- id: string (nullable = true)\n",
@ -461,20 +375,7 @@
" | |-- networkaccount: struct (nullable = true)\n",
" | | |-- domain: string (nullable = true)\n",
" | | |-- name: string (nullable = true)\n",
" | |-- principal: struct (nullable = true)\n",
" | | |-- domain: string (nullable = true)\n",
" | | |-- id: string (nullable = true)\n",
" | | |-- name: string (nullable = true)\n",
" | | |-- sid: string (nullable = true)\n",
" | |-- sessionid: string (nullable = true)\n",
" | |-- sid: string (nullable = true)\n",
" | |-- target: struct (nullable = true)\n",
" | | |-- disabledprivilegelist: string (nullable = true)\n",
" | | |-- domain: string (nullable = true)\n",
" | | |-- enabledprivilegelist: string (nullable = true)\n",
" | | |-- logonid: string (nullable = true)\n",
" | | |-- name: string (nullable = true)\n",
" | | |-- sid: string (nullable = true)\n",
" |-- version: integer (nullable = true)\n",
"\n"
]
@ -486,7 +387,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 18,
"metadata": {},
"outputs": [
{
@ -496,28 +397,10 @@
"+--------------------+\n",
"| task|\n",
"+--------------------+\n",
"|Token Right Adjus...|\n",
"|Filtering Platfor...|\n",
"|Filtering Platfor...|\n",
"|Filtering Platfor...|\n",
"|Filtering Platfor...|\n",
"|Filtering Platfor...|\n",
"|Token Right Adjus...|\n",
"|Filtering Platfor...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Filtering Platfor...|\n",
"|Filtering Platfor...|\n",
"|Filtering Platfor...|\n",
"|Token Right Adjus...|\n",
"|Token Right Adjus...|\n",
"|Security State Ch...|\n",
"| Special Logon|\n",
"| Logon|\n",
"+--------------------+\n",
"only showing top 20 rows\n",
"\n"
]
}


@ -1,7 +1,6 @@
# HELK script: HELK Dockerfile
# HELK script description: Dockerize the HELK build
# HELK script: HELK ELK Dockerfile
# HELK build version: 0.9 (ALPHA)
# HELK ELK version: 6.2.0
# HELK ELK version: 6.2.2
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
@ -10,8 +9,8 @@
# https://github.com/spujadas/elk-docker/blob/master/Dockerfile
FROM phusion/baseimage
MAINTAINER Roberto Rodriguez @Cyb3rWard0g
LABEL description="Dockerfile base for the HELK."
LABEL maintainer="Roberto Rodriguez @Cyb3rWard0g"
LABEL description="Dockerfile base for the HELK ELK."
ENV DEBIAN_FRONTEND noninteractive
@ -27,8 +26,10 @@ RUN echo "[HELK-DOCKER-INSTALLATION-INFO] Updating Ubuntu base image.." \
nano \
python \
python-pip \
python-tk \
unzip
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
RUN apt-get -qy clean \
autoremove
@ -38,32 +39,21 @@ RUN pip install --upgrade pip
# *********** Installing HELK python packages ***************
RUN pip install \
OTXv2 \
pandas==0.22.0 \
jupyter
RUN pip install scipy==1.0.0 \
scikit-learn==0.19.1 \
nltk==3.2.5 \
matplotlib==2.1.2 \
seaborn==0.8.1 \
datasketch==1.2.5 \
tensorflow==1.5.0 \
keras==2.1.3 \
pyflux==0.4.15 \
imbalanced-learn==0.3.2 \
lime==0.1.1.29
pandas==0.22.0
# *********** Creating the right directories ***************
RUN bash -c 'mkdir -pv /opt/helk/{scripts,training,otx,es-hadoop,spark,output_templates,dashboards,kafka,elasticsearch,logstash,kibana,cerebro,ksql}'
#RUN bash -c 'mkdir -pv /opt/helk/{scripts,training,otx,es-hadoop,spark,output_templates,dashboards,kafka,elasticsearch,logstash,kibana,cerebro,ksql}'
RUN bash -c 'mkdir -pv /opt/helk/{scripts,otx,output_templates,dashboards,elasticsearch,logstash,kibana,cerebro,ksql}'
# *********** Adding HELK scripts and files to Container ***************
ADD scripts/helk_otx.py /opt/helk/scripts/
ADD scripts/helk_kibana_setup.sh /opt/helk/scripts/
ADD scripts/helk_docker_entrypoint.sh /opt/helk/scripts/
ADD training/ /opt/helk/training/
ADD scripts/elk-kibana-setup.sh /opt/helk/scripts/
ADD scripts/elk-entrypoint.sh /opt/helk/scripts/
RUN chmod +x /opt/helk/scripts/elk-kibana-setup.sh
RUN chmod +x /opt/helk/scripts/elk-entrypoint.sh
# *********** ELK Version ***************
ENV ELK_VERSION=6.2.0
ENV ELK_VERSION=6.2.2
# *********** Installing Elasticsearch ***************
ENV ES_HELK_HOME=/opt/helk/elasticsearch
@ -141,31 +131,6 @@ ADD enrichments/otx/ /opt/helk/otx/
RUN cronjob="0 8 * * 1 python /opt/helk/scripts/helk_otx.py" \
&& echo "$cronjob" | crontab
# *********** Install ES-Hadoop ***************
RUN wget https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-6.2.0.zip -P /opt/helk/es-hadoop/ \
&& unzip /opt/helk/es-hadoop/*.zip -d /opt/helk/es-hadoop/ \
&& rm /opt/helk/es-hadoop/*.zip
# *********** Install Spark ***************
ENV SPARK_LOGS_PATH=/var/log/spark
RUN wget -qO- http://mirrors.gigenet.com/apache/spark/spark-2.2.1/spark-2.2.1-bin-hadoop2.7.tgz | sudo tar xvz -C /opt/helk/spark/ \
&& mkdir -v $SPARK_LOGS_PATH
ADD spark/.bashrc ~/.bashrc
ADD spark/log4j.properties /opt/helk/spark/spark-2.2.1-bin-hadoop2.7/conf/
ADD spark/spark-defaults.conf /opt/helk/spark/spark-2.2.1-bin-hadoop2.7/conf/
ADD spark/spark-init /etc/init.d/spark
# *********** Install Kafka ***************
ENV KAFKA_LOGS_PATH=/var/log/kafka
RUN wget -qO- http://apache.mirrors.lucidnetworks.net/kafka/1.0.0/kafka_2.11-1.0.0.tgz | sudo tar xvz -C /opt/helk/kafka/ \
&& mkdir -v $KAFKA_LOGS_PATH \
&& mv /opt/helk/kafka/kafka_2.11-1.0.0/config/server.properties /opt/helk/kafka/kafka_2.11-1.0.0/config/backup_server.properties
ADD kafka/*.properties /opt/helk/kafka/kafka_2.11-1.0.0/config/
ADD kafka/kafka-init /etc/init.d/kafka
# *********** Download KSQL (Experiment) ***************
# RUN wget -qO- https://github.com/confluentinc/ksql/archive/v0.4.tar.gz | sudo tar xvz -C /opt/helk/ksql/
# *********** Install Cerebro ***************
ENV CEREBRO_HOME=/opt/helk/cerebro
ENV CEREBRO_LOGS_PATH=/var/log/cerebro
@ -173,16 +138,7 @@ RUN wget -qO- https://github.com/lmenezes/cerebro/releases/download/v0.7.2/cereb
&& mkdir -v $CEREBRO_LOGS_PATH
ADD cerebro/cerebro-init /etc/init.d/cerebro
# Adding SPARK location
ENV SPARK_HOME=/opt/helk/spark/spark-2.2.1-bin-hadoop2.7
ENV PATH=$SPARK_HOME/bin:$PATH
# Adding Jupyter Notebook Integration
ENV PYSPARK_DRIVER_PYTHON=/usr/local/bin/jupyter
ENV PYSPARK_DRIVER_PYTHON_OPTS="notebook --NotebookApp.open_browser=False --NotebookApp.ip='*' --NotebookApp.port=8880 --allow-root"
ENV PYSPARK_PYTHON=/usr/bin/python
# *********** RUN HELK ***************
EXPOSE 80 5044 4040 8880 2181 9092 9093 9094 9000 8082
EXPOSE 80 5044 9000 8082
WORKDIR "/opt/helk/scripts/"
ENTRYPOINT ["./helk_docker_entrypoint.sh"]
ENTRYPOINT ["./elk-entrypoint.sh"]


@ -1,6 +1,7 @@
#!/bin/bash
#
# /etc/init.d/elasticsearch -- startup script for Elasticsearch
# Maintained by Roberto Rodriguez @Cyb3rWard0g
#
### BEGIN INIT INFO
# Provides: elasticsearch


@ -14,7 +14,7 @@
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: helk-elk
#
# ------------------------------------ Node ------------------------------------
#
@ -53,6 +53,7 @@ bootstrap.memory_lock: true
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: localhost
network.host: ["localhost", "172.18.0.2"]
#
# Set a custom port for HTTP:
#
@ -86,3 +87,4 @@ bootstrap.memory_lock: true
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
discovery.type: single-node
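Taken together, the changes in this hunk amount to a minimal single-node elasticsearch.yml. A sketch of the combined result (the 172.18.0.2 address is simply whatever IP the docker-compose network assigns to the container and may differ per deployment):

```yaml
cluster.name: helk-elk
bootstrap.memory_lock: true
# Bind on loopback plus the container's docker-network IP so the other
# compose services (Logstash, Kibana) can reach Elasticsearch.
network.host: ["localhost", "172.18.0.2"]
# Skip master election entirely; this stack runs exactly one node.
discovery.type: single-node
```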


@ -1,5 +1,5 @@
#!/bin/sh
# Init script for Kibana
# /etc/init.d/kibana-- startup script for Kibana
# Maintained by Roberto Rodriguez @Cyb3rWard0g
# Reference:
# https://github.com/elastic/logstash/blob/master/distribution/rpm/src/main/packaging/init.d/logstash


@ -99,4 +99,4 @@ server.host: "localhost"
#ops.interval: 5000
# Experimental Visualizations:
# "vega": {"enableExternalUrls": true}
"vega": {"enableExternalUrls": true}


@ -1,4 +1,4 @@
# HELK beats input conf file
# HELK Kafka input conf file
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
@ -6,7 +6,7 @@
input {
kafka
{
bootstrap_servers => "localhost:9092,localhost:9093,localhost:9094"
bootstrap_servers => "172.18.0.3:9092,172.18.0.3:9093,172.18.0.3:9094"
topics => ["winlogbeat"]
codec => "json"
auto_offset_reset => "earliest"


@ -1,13 +1,14 @@
# HELK powershell-direct input conf file
# HELK Beats input conf file
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
input {
tcp {
beats {
port => 5044
add_field => { "[@metadata][source]" => "beats"}
codec => "json"
type => "powershell-direct"
ssl => false
}
}


@ -5,6 +5,19 @@
filter {
if [log_name] == "Microsoft-Windows-Sysmon/Operational"{
if [event_data][Image] =~ /^(\w*$)|^(\w*\..*$)/ {
mutate {
copy => {"[event_data][Image]" => "[process][name]"}
}
}
else {
grok {
match => {
"[event_data][Image]" => ".*\\%{GREEDYDATA:[process][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
}
mutate {
rename => {
"[user][domain]" => "[subject][user][domain]"
@ -23,7 +36,11 @@ filter {
target => [hash]
}
grok {
match => {"[event_data][User]" => "%{GREEDYDATA:[user][domain]}\\%{GREEDYDATA:[user][name]}"}
match => {
"[event_data][User]" => "%{GREEDYDATA:[user][domain]}\\%{GREEDYDATA:[user][name]}"
"[event_data][ParentImage]" => ".*\\%{GREEDYDATA:[process][parent][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
mutate {
add_field => { "action" => "processcreate" }
@ -147,6 +164,13 @@ filter {
}
}
if [event_id] == 10 {
grok {
match => {
"[event_data][SourceImage]" => ".*\\%{GREEDYDATA:[process][name]}"
"[event_data][TargetImage]" => ".*\\%{GREEDYDATA:[process][target][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
mutate {
add_field => { "action" => "processaccess" }
rename => {
@ -206,7 +230,7 @@ filter {
date {
timezone => "UTC"
match => [ "[event_data][UtcTime]", "YYYY-MM-dd HH:mm:ss.SSS" ]
target => "[@meta][log][timestamp]"
target => "[@meta][sysmon][timestamp]"
remove_field => [ "[event_data][UtcTime]" ]
tag_on_failure => [ "_sysmon_datefailure", "_dateparsefailure" ]
}
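The grok patterns added in this file all share the shape `.*\\%{GREEDYDATA:[process][name]}`, which keeps whatever follows the last backslash of a Windows image path. A rough Python equivalent of that extraction (illustrative only, not code from the pipeline):

```python
import re

# Grok's ".*\\%{GREEDYDATA:[process][name]}": the greedy ".*\\" consumes
# up to the last backslash, so the capture is the executable's base name.
PATTERN = re.compile(r".*\\(?P<process_name>.*)")

def process_name(image_path):
    m = PATTERN.match(image_path)
    # No backslash at all (e.g. a bare "cmd.exe") means no match, which is
    # why the Sysmon filter handles that case separately with mutate/copy.
    return m.group("process_name") if m else None
```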


@ -5,6 +5,12 @@
filter {
if [log_name] == "Security"{
grok {
match => {
"[event_data][ProcessName]" => ".*\\%{GREEDYDATA:[process][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
if [event_id] == 4611 {
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-4611.md
mutate {
@ -73,7 +79,7 @@ filter {
}
}
if [event_id] == 4627 {
# hhttps://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-4627.md
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-4627.md
mutate {
rename => {
"[event_data][SubjectUserSid]" => "[subject][user][sid]"
@ -273,6 +279,13 @@ filter {
}
if [event_id] == 4688 {
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-4688.md
grok {
match => {
"[event_data][NewProcessName]" => ".*\\%{GREEDYDATA:[process][name]}"
"[event_data][ParentProcessName]" => ".*\\%{GREEDYDATA:[process][parent][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
mutate {
rename => {
"[event_data][NewProcessId]" => "[process][id]"
@ -517,6 +530,12 @@ filter {
if [event_id] == 4798 or [event_id] == 4799 {
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-4798.md
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-4799.md
grok {
match => {
"[event_data][CallerProcessName]" => ".*\\%{GREEDYDATA:[process][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
mutate {
rename => {
"[event_data][CallerProcessId]" => "[process][id]"
@ -620,6 +639,12 @@ filter {
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-5156.md
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-5157.md
# https://github.com/MicrosoftDocs/windows-itpro-docs/blob/master/windows/security/threat-protection/auditing/event-5158.md
grok {
match => {
"[event_data][Application]" => ".*\\%{GREEDYDATA:[process][name]}"
}
tag_on_failure => [ "_grokparsefailure", "_parsefailure" ]
}
mutate {
rename => {
"[event_data][Application]" => "[process][path]"


@ -0,0 +1,13 @@
# HELK Beats output conf file
# HELK build version: 0.9 (BETA)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
output {
if [@metadata][source] == "beats"{
elasticsearch {
hosts => ["127.0.0.1:9200"]
index => "logs-endpoint-beats-%{+YYYY.MM.dd}"
}
}
}


@ -1,7 +1,7 @@
#!/bin/bash
#!/bin/sh
# HELK script: helk_docker_entryppoint.sh
# HELK script description: Restart ELK services and runs Spark
# HELK script: elk-entrypoint.sh
# HELK script description: Restarts and runs ELK services
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
@ -13,8 +13,6 @@ _term() {
service logstash stop
service kibana stop
service cerebro stop
service spark stop
service kafka stop
exit 0
}
trap _term SIGTERM
@ -23,12 +21,7 @@ trap _term SIGTERM
rm -f /var/run/elasticsearch/elasticsearch.pid \
/var/run/logstash.pid \
/var/run/kibana.pid \
/var/run/spark.pid \
/var/run/cerebro.pid \
/var/run/kafka_zookeeper.pid \
/var/run/kafka.pid \
/var/run/kafka_1.pid \
/var/run/kafka_2.pid
/var/run/cerebro.pid
# *********** Setting ES Heap Size***************
# https://serverfault.com/questions/881383/automatically-set-java-heap-size-for-elasticsearch-on-linux
@ -55,23 +48,11 @@ service kibana start
service nginx restart
service logstash start
service cerebro start
service spark start
service cron start
# *********** Creating Kibana Dashboards, visualizations and index-patterns ***************
echo "[HELK-DOCKER-INSTALLATION-INFO] Running helk_kibana_setup.sh script..."
./helk_kibana_setup.sh
./elk-kibana-setup.sh
# *********** Start Kafka **************
echo "[HELK-DOCKER-INSTALLATION-INFO] Setting current host IP to brokers server.properties files.."
sed -i "s/advertised\.listeners\=PLAINTEXT:\/\/HELKIP\:9092/advertised\.listeners\=PLAINTEXT\:\/\/${ADVERTISED_LISTENER}\:9092/g" /opt/helk/kafka/kafka_2.11-1.0.0/config/server.properties
sed -i "s/advertised\.listeners\=PLAINTEXT:\/\/HELKIP\:9093/advertised\.listeners\=PLAINTEXT\:\/\/${ADVERTISED_LISTENER}\:9093/g" /opt/helk/kafka/kafka_2.11-1.0.0/config/server-1.properties
sed -i "s/advertised\.listeners\=PLAINTEXT:\/\/HELKIP\:9094/advertised\.listeners\=PLAINTEXT\:\/\/${ADVERTISED_LISTENER}\:9094/g" /opt/helk/kafka/kafka_2.11-1.0.0/config/server-2.properties
echo "[HELK-DOCKER-INSTALLATION-INFO] Starting Kafka.."
service kafka start
sleep 20
echo "[HELK-DOCKER-INSTALLATION-INFO] Creating Kafka Winlogbeat Topic.."
/opt/helk/kafka/kafka_2.11-1.0.0/bin/kafka-topics.sh --create --zookeeper $ADVERTISED_LISTENER:2181 --replication-factor 3 --partitions 1 --topic winlogbeat
echo "[HELK-DOCKER-INSTALLATION-INFO] Pushing Spark Logs to console.."
tail -f /var/log/spark/spark_pyspark.log
echo "[HELK-DOCKER-INSTALLATION-INFO] Pushing logstash Logs to console.."
tail -f /var/log/logstash/*-plain.log


@ -1,6 +1,6 @@
#!/bin/bash
# HELK script: helk_kibana_setup.sh
# HELK script: elk-kibana-setup.sh
# HELK script description: Creates Kibana index patterns, dashboards and visualizations automatically.
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)

helk-kafka/Dockerfile Normal file

@ -0,0 +1,43 @@
# HELK script: HELK Kafka Dockerfile
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
FROM phusion/baseimage
LABEL maintainer="Roberto Rodriguez @Cyb3rWard0g"
LABEL description="Dockerfile base for the HELK Kafka."
ENV DEBIAN_FRONTEND noninteractive
# *********** Installing Prerequisites ***************
# -qq : No output except for errors
RUN echo "[HELK-DOCKER-INSTALLATION-INFO] Updating Ubuntu base image.." \
&& apt-get update -qq \
&& echo "[HELK-DOCKER-INSTALLATION-INFO] Extracting templates from packages.." \
&& apt-get install -qqy \
openjdk-8-jre-headless \
wget \
sudo \
nano
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && \
locale-gen
RUN apt-get -qy clean \
autoremove \
&& rm -rf /var/lib/apt/lists/*
# *********** Creating the right directories ***************
RUN bash -c 'mkdir -pv /opt/helk/{scripts,kafka}'
# *********** Install Kafka ***************
ENV KAFKA_LOGS_PATH=/var/log/kafka
RUN wget -qO- http://apache.mirrors.lucidnetworks.net/kafka/1.0.0/kafka_2.11-1.0.0.tgz | sudo tar xvz -C /opt/helk/kafka/ \
&& mkdir -v $KAFKA_LOGS_PATH \
&& mv /opt/helk/kafka/kafka_2.11-1.0.0/config/server.properties /opt/helk/kafka/kafka_2.11-1.0.0/config/backup_server.properties
ADD *.properties /opt/helk/kafka/kafka_2.11-1.0.0/config/
ADD kafka-init /etc/init.d/kafka
ADD scripts/kafka-entrypoint.sh /opt/helk/scripts/
RUN chmod +x /opt/helk/scripts/kafka-entrypoint.sh
EXPOSE 2181 9092 9093 9094
WORKDIR "/opt/helk/scripts/"
ENTRYPOINT ["./kafka-entrypoint.sh"]


@ -1,5 +1,5 @@
#!/bin/bash
# Init script for logstash
# /etc/init.d/kafka -- startup script for Kafka
# Maintained by Roberto Rodriguez @Cyb3rWard0g
# Reference:
# https://github.com/elastic/logstash/blob/master/distribution/rpm/src/main/packaging/init.d/logstash
@ -34,7 +34,7 @@ fi
KAFKA_HOME=/opt/helk/kafka/kafka_2.11-1.0.0
KAFKA_USER=root
KAFKA_USER=root
KAFKA_GROUP=root
KAFKA_NICE=18
SERVICE_NAME="kafka"
SERVICE_DESCRIPTION="kafka"


@ -0,0 +1,35 @@
#!/bin/sh
# HELK script: kafka-entrypoint.sh
# HELK script description: Restarts and runs Kafka services
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
# Start graceful termination of HELK services that might be running before running the entrypoint script.
_term() {
echo "Terminating HELK-Kafka Service"
service kafka stop
exit 0
}
trap _term SIGTERM
# Removing PID files just in case the graceful termination fails
rm -f /var/run/kafka_zookeeper.pid \
/var/run/kafka.pid \
/var/run/kafka_1.pid \
/var/run/kafka_2.pid
# *********** Start Kafka **************
echo "[HELK-DOCKER-INSTALLATION-INFO] Setting current host IP to brokers server.properties files.."
sed -i "s/advertised\.listeners\=PLAINTEXT:\/\/HELKIP\:9092/advertised\.listeners\=PLAINTEXT\:\/\/${ADVERTISED_LISTENER}\:9092/g" /opt/helk/kafka/kafka_2.11-1.0.0/config/server.properties
sed -i "s/advertised\.listeners\=PLAINTEXT:\/\/HELKIP\:9093/advertised\.listeners\=PLAINTEXT\:\/\/${ADVERTISED_LISTENER}\:9093/g" /opt/helk/kafka/kafka_2.11-1.0.0/config/server-1.properties
sed -i "s/advertised\.listeners\=PLAINTEXT:\/\/HELKIP\:9094/advertised\.listeners\=PLAINTEXT\:\/\/${ADVERTISED_LISTENER}\:9094/g" /opt/helk/kafka/kafka_2.11-1.0.0/config/server-2.properties
echo "[HELK-DOCKER-INSTALLATION-INFO] Starting Kafka.."
service kafka start
sleep 30
echo "[HELK-DOCKER-INSTALLATION-INFO] Creating Kafka Winlogbeat Topic.."
/opt/helk/kafka/kafka_2.11-1.0.0/bin/kafka-topics.sh --create --zookeeper $ADVERTISED_LISTENER:2181 --replication-factor 3 --partitions 1 --topic winlogbeat
echo "[HELK-DOCKER-INSTALLATION-INFO] Pushing Kafka Logs to console.."
tail -f /var/log/kafka/helk-*.log
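The three sed commands above rewrite each broker's advertised.listeners line, replacing the HELKIP placeholder with the host IP exported through helk.env. The substitution they perform can be mirrored in Python (an illustrative sketch, not code from the repo):

```python
import re

def set_advertised_listener(config_text, host_ip, port):
    """Mirror of the entrypoint's sed: swap the HELKIP placeholder in a
    broker's advertised.listeners line for the real host IP."""
    pattern = r"advertised\.listeners=PLAINTEXT://HELKIP:{}".format(port)
    replacement = "advertised.listeners=PLAINTEXT://{}:{}".format(host_ip, port)
    return re.sub(pattern, replacement, config_text)
```

Each broker gets its own port (9092/9093/9094), which is why the entrypoint runs one substitution per server-*.properties file.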

helk.env Normal file

@ -3,7 +3,7 @@
# HELK script: helk_install.sh
# HELK script description: Start
# HELK build version: 0.9 (Alpha)
# HELK ELK version: 6.2.0
# HELK ELK version: 6.2.2
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
@ -23,180 +23,144 @@ systemKernel="$(uname -s)"
# *********** Getting Jupyter Token ***************
get_token(){
echo "[HELK-DOCKER-INSTALLATION-INFO] Waiting for HELK services and Jupyter Server to start.."
echo "[HELK-INSTALLATION-INFO] Waiting for HELK services and Jupyter Server to start.."
until curl -s localhost:8880 -o /dev/null; do
sleep 1
done
jupyter_token="$(docker exec -ti helk jupyter notebook list | grep -oP '(?<=token=).*(?= ::)' | awk '{$1=$1};1')" >> $LOGFILE 2>&1
docker_access="HELK DOCKER BASH ACCESS: sudo docker exec -ti helk bash"
jupyter_token="$(docker exec -ti helk-analytics jupyter notebook list | grep -oP '(?<=token=).*(?= ::)' | awk '{$1=$1};1')" >> $LOGFILE 2>&1
}
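The grep -oP lookbehind used above pulls the token out of a line like `http://localhost:8880/?token=<token> :: /opt/helk`, and the awk pass trims surrounding whitespace. A hedged Python equivalent (the sample line in the test is invented for illustration):

```python
import re

def extract_token(listing_line):
    """Mirror of: grep -oP '(?<=token=).*(?= ::)' | awk '{$1=$1};1'
    applied to one line of `jupyter notebook list` output."""
    m = re.search(r"(?<=token=).*(?= ::)", listing_line)
    return m.group(0).strip() if m else None
```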
# *********** Pulling latest HELK image from DockerHub ***************
one(){
echo "[HELK-DOCKER-INSTALLATION-INFO] Pulling the latest HELK image from Dockerhub.."
docker pull cyb3rward0g/helk >> $LOGFILE 2>&1
echo "[HELK-DOCKER-INSTALLATION-INFO] Running the HELK container in the background.."
docker run -d -p 80:80 -p 5044:5044 -p 8880:8880 -p 4040:4040 -p 2181:2181 -p 9092:9092 -p 9093:9093 -p 9094:9094 -p 9000:9000 -p 8082:8082 -e "bootstrap.memory_lock=true" -e ADVERTISED_LISTENER="${host_ip}" --ulimit memlock=-1:-1 --name helk cyb3rward0g/helk >> $LOGFILE 2>&1
# *********** Getting Jupyter Token ***************
get_token
}
# *********** Building HELK image from local Dockerfile ***************
two(){
echo "[HELK-DOCKER-INSTALLATION-INFO] Building the HELK container from local Dockerfile.."
docker build -t my_helk . >> $LOGFILE 2>&1
ERROR=$?
# *********** Building and Running HELK Images ***************
build_run(){
echo "[HELK-INSTALLATION-INFO] Installing HELK via docker-compose"
echo "ADVERTISED_LISTENER=$host_ip" >> helk.env
docker-compose up -d >> $LOGFILE 2>&1
if [ $ERROR -ne 0 ]; then
echoerror "Could not build HELK image from local Dockerfile (Error Code: $ERROR)."
echoerror "Could not build HELK via docker-compose (Error Code: $ERROR)."
exit 1
fi
echo "[HELK-DOCKER-INSTALLATION-INFO] Running the HELK container in the background.."
docker run -d -p 80:80 -p 5044:5044 -p 8880:8880 -p 4040:4040 -p 2181:2181 -p 9092:9092 -p 9093:9093 -p 9094:9094 -p 9000:9000 -p 8082:8082 -e "bootstrap.memory_lock=true" -e ADVERTISED_LISTENER="${host_ip}" --ulimit memlock=-1:-1 --name helk my_helk >> $LOGFILE 2>&1
# *********** Getting Jupyter Token ***************
get_token
}
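The new build_run path replaces the three interactive install options with a single docker-compose up -d, after appending the host IP to helk.env. The compose file itself is not shown in this diff; a hypothetical sketch of the three-service layout described in the release notes (service and volume names assumed, not taken from the repo):

```yaml
# Hypothetical sketch only; the repo's docker-compose.yml is authoritative.
version: '3'
services:
  helk-elk:
    restart: always                     # services come back after a reboot
    volumes:
      - esdata:/var/lib/elasticsearch   # keep logs after containers stop
  helk-kafka:
    restart: always
    env_file:
      - helk.env                        # passes ADVERTISED_LISTENER (host IP)
  helk-analytics:
    restart: always
volumes:
  esdata:
```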
# *********** Building the HELK from local bash script ***************
three(){
echo "[HELK-BASH-INSTALLATION-INFO] Installing the HELK from local bash script"
cd scripts/
./helk_debian_tar_install.sh
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not build HELK image from bash script (Error Code: $ERROR)."
exit 1
fi
# *********** Getting Jupyter Token ***************
echo "[HELK-BASH-INSTALLATION-INFO] Waiting for Jupyter Server to start.."
until curl -s localhost:8880 -o /dev/null; do
sleep 1
done
jupyter_token="$( cat /var/log/spark/spark_pyspark.log | grep -oP '(?<=token=).*(?=)' | sort -u)"
}
# *********** Showing HELK Docker menu options ***************
show_menus() {
show_banner() {
echo " "
echo "**********************************************"
echo "** HELK - M E N U **"
echo "** HELK - THE HUNTING ELK **"
echo "** **"
echo "** Author: Roberto Rodriguez (@Cyb3rWard0g) **"
echo "** HELK build version: 0.9 (Alpha) **"
echo "** HELK ELK version: 6.2.0 **"
echo "** HELK ELK version: 6.2.2 **"
echo "** License: BSD 3-Clause **"
echo "**********************************************"
echo " "
echo "1. Pull the latest HELK image from DockerHub"
echo "2. Build the HELK image from local Dockerfile"
echo "3. Install the HELK from local bash script"
echo "4. Exit"
echo " "
}
read_options(){
local choice
read -p "[HELK-INSTALLATION-INFO] Enter choice [ 1 - 4] " choice
prepare_helk(){
get_host_ip
if [ $choice = "1" ] || [ $choice = "2" ]; then
if [ "$systemKernel" == "Linux" ]; then
# Reference: https://get.docker.com/
echo "[HELK-DOCKER-INSTALLATION-INFO] HELK identified Linux as the system kernel"
echo "[HELK-DOCKER-INSTALLATION-INFO] Checking distribution list and version"
# *********** Check distribution list ***************
lsb_dist="$(. /etc/os-release && echo "$ID")"
lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
if [ "$systemKernel" == "Linux" ]; then
# Reference: https://get.docker.com/
echo "[HELK-INSTALLATION-INFO] HELK identified Linux as the system kernel"
echo "[HELK-INSTALLATION-INFO] Checking distribution list and version"
# *********** Check distribution list ***************
lsb_dist="$(. /etc/os-release && echo "$ID")"
lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
# *********** Check distribution version ***************
case "$lsb_dist" in
ubuntu)
if [ -x "$(command -v lsb_release)" ]; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
fi
;;
debian|raspbian)
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
9)
dist_version="stretch"
;;
8)
dist_version="jessie"
;;
7)
dist_version="wheezy"
;;
esac
;;
centos)
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
rhel|ol|sles)
ee_notice "$lsb_dist"
exit 1
;;
*)
if [ -x "$(command -v lsb_release)" ]; then
dist_version="$(lsb_release --release | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
esac
echo "[HELK-DOCKER-INSTALLATION-INFO] You're using $lsb_dist version $dist_version"
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not verify distribution or version of the OS (Error Code: $ERROR)."
fi
# *********** Check if docker is installed ***************
if [ -x "$(command -v docker)" ]; then
echo "[HELK-DOCKER-INSTALLATION-INFO] Docker already installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Dockerizing HELK.."
else
echo "[HELK-DOCKER-INSTALLATION-INFO] Docker is not installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Checking if curl is installed first"
if [ -x "$(command -v curl)" ]; then
echo "[HELK-DOCKER-INSTALLATION-INFO] curl is already installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Ready to install Docker.."
else
echo "[HELK-DOCKER-INSTALLATION-INFO] curl is not installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Installing curl before installing docker.."
apt-get install -y curl >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not install curl (Error Code: $ERROR)."
exit 1
fi
# *********** Check distribution version ***************
case "$lsb_dist" in
ubuntu)
if [ -x "$(command -v lsb_release)" ]; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
# ****** Installing via convenience script ***********
echo "[HELK-DOCKER-INSTALLATION-INFO] Installing docker via convenience script.."
curl -fsSL get.docker.com -o scripts/get-docker.sh >> $LOGFILE 2>&1
chmod +x scripts/get-docker.sh >> $LOGFILE 2>&1
scripts/get-docker.sh >> $LOGFILE 2>&1
if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
fi
;;
debian|raspbian)
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
9)
dist_version="stretch"
;;
8)
dist_version="jessie"
;;
7)
dist_version="wheezy"
;;
esac
;;
centos)
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
rhel|ol|sles)
ee_notice "$lsb_dist"
exit 1
;;
*)
if [ -x "$(command -v lsb_release)" ]; then
dist_version="$(lsb_release --release | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
esac
echo "[HELK-INSTALLATION-INFO] You're using $lsb_dist version $dist_version"
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not verify distribution or version of the OS (Error Code: $ERROR)."
fi
# *********** Check if docker is installed ***************
if [ -x "$(command -v docker)" ]; then
echo "[HELK-INSTALLATION-INFO] Docker already installed"
echo "[HELK-INSTALLATION-INFO] Dockerizing HELK.."
else
echo "[HELK-INSTALLATION-INFO] Docker is not installed"
echo "[HELK-INSTALLATION-INFO] Checking if curl is installed first"
if [ -x "$(command -v curl)" ]; then
echo "[HELK-INSTALLATION-INFO] curl is already installed"
echo "[HELK-INSTALLATION-INFO] Ready to install Docker.."
else
echo "[HELK-INSTALLATION-INFO] curl is not installed"
echo "[HELK-INSTALLATION-INFO] Installing curl before installing docker.."
apt-get install -y curl >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not install docker via convenience script (Error Code: $ERROR)."
echoerror "Could not install curl (Error Code: $ERROR)."
exit 1
fi
fi
else
# *********** Check if docker is installed ***************
if [ -x "$(command -v docker)" ]; then
echo "[HELK-DOCKER-INSTALLATION-INFO] Docker already installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Dockerizing HELK.."
else
echo "[HELK-DOCKER-INSTALLATION-INFO] Install docker for $systemKernel"
# ****** Installing via convenience script ***********
echo "[HELK-INSTALLATION-INFO] Installing docker via convenience script.."
curl -fsSL get.docker.com -o scripts/get-docker.sh >> $LOGFILE 2>&1
chmod +x scripts/get-docker.sh >> $LOGFILE 2>&1
scripts/get-docker.sh >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not install docker via convenience script (Error Code: $ERROR)."
exit 1
fi
# ****** Installing docker-compose ***********
echo "[HELK-INSTALLATION-INFO] Installing docker-compose .."
curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose >> $LOGFILE 2>&1
chmod +x /usr/local/bin/docker-compose >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not install docker-compose (Error Code: $ERROR)."
exit 1
fi
fi
else
# *********** Check if docker is installed ***************
if [ -x "$(command -v docker)" ]; then
echo "[HELK-INSTALLATION-INFO] Docker already installed"
echo "[HELK-INSTALLATION-INFO] Dockerizing HELK.."
else
echo "[HELK-INSTALLATION-INFO] Install docker for $systemKernel"
exit 1
fi
fi
echo "[HELK-INSTALLATION-INFO] Checking local vm.max_map_count variable and setting it to 262144"
@ -208,13 +172,6 @@ read_options(){
echoerror "Could not set vm.max_map_count to 262144 (Error Code: $ERROR)."
fi
fi
case $choice in
1) one ;;
2) two ;;
3) three ;;
4) exit 0;;
*) echo -e "[HELK-INSTALLATION-INFO] Wrong choice..." && exit 1
esac
}
get_host_ip(){
@ -244,8 +201,10 @@ get_host_ip(){
}
# *********** Running selected option ***************
show_menus
read_options
show_banner
prepare_helk
build_run
get_token
echo " "
echo " "
@ -261,8 +220,7 @@ echo "HELK KIBANA & ELASTICSEARCH USER: helk"
echo "HELK KIBANA & ELASTICSEARCH PASSWORD: hunting"
echo "HELK JUPYTER CURRENT TOKEN: ${jupyter_token}"
echo "HELK SPARK UI: http://${host_ip}:4040"
echo "HELK JUPYTER NOTEBOOK URI: http://${host_ip}:8880"
echo "${docker_access}"
echo "HELK JUPYTER LAB URL: http://${host_ip}:8880/lab"
echo " "
echo "IT IS HUNTING SEASON!!!!!"
echo " "


@ -1,8 +0,0 @@
# HELK powershell-direct filter conf file
# HELK build version: 0.9 (Alpha)
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
filter {
if [type] == "powershell-direct"{
}
}


@ -1,17 +0,0 @@
# HELK powershell_direct output conf file
# HELK build version: 0.9 (BETA)
# Author: Lee Christensen (@tifkin_)
# License: BSD 3-Clause
output {
if [type] == "powershell-direct"{
elasticsearch {
hosts => ["127.0.0.1:9200"]
index => "logs-endpoint-powershell-direct-%{+YYYY.MM.dd}"
template => "/opt/helk/output_templates/powershell-direct-template.json"
template_name => "logs-endpoint-powershell-direct"
template_overwrite => true
#document_id => "%{[@metadata][log_hash]}"
}
}
}


@ -0,0 +1,270 @@
#!/bin/bash
# HELK script: helk_install.sh
# HELK script description: Start
# HELK build version: 0.9 (Alpha)
# HELK ELK version: 6.2.0
# Author: Roberto Rodriguez (@Cyb3rWard0g)
# License: BSD 3-Clause
# *********** Check if user is root ***************
if [[ $EUID -ne 0 ]]; then
echo "[HELK-INSTALLATION-INFO] YOU MUST BE ROOT TO RUN THIS SCRIPT!!!"
exit 1
fi
LOGFILE="/var/log/helk-install.log"
echoerror() {
printf "${RC} * ERROR${EC}: $@\n" 1>&2;
}
# *********** Check System Kernel Name ***************
systemKernel="$(uname -s)"
# *********** Getting Jupyter Token ***************
get_token(){
echo "[HELK-DOCKER-INSTALLATION-INFO] Waiting for HELK services and Jupyter Server to start.."
until curl -s localhost:8880 -o /dev/null; do
sleep 1
done
jupyter_token="$(docker exec -i helk jupyter notebook list 2>> $LOGFILE | grep -oP '(?<=token=).*(?= ::)' | awk '{$1=$1};1')"
docker_access="HELK DOCKER BASH ACCESS: sudo docker exec -ti helk bash"
}
# *********** Pulling latest HELK image from DockerHub ***************
one(){
echo "[HELK-DOCKER-INSTALLATION-INFO] Pulling the latest HELK image from Dockerhub.."
docker pull cyb3rward0g/helk >> $LOGFILE 2>&1
echo "[HELK-DOCKER-INSTALLATION-INFO] Running the HELK container in the background.."
docker run -d -p 80:80 -p 5044:5044 -p 8880:8880 -p 4040:4040 -p 2181:2181 -p 9092:9092 -p 9093:9093 -p 9094:9094 -p 9000:9000 -p 8082:8082 -e "bootstrap.memory_lock=true" -e ADVERTISED_LISTENER="${host_ip}" --ulimit memlock=-1:-1 --name helk cyb3rward0g/helk >> $LOGFILE 2>&1
# *********** Getting Jupyter Token ***************
get_token
}
# *********** Building HELK image from local Dockerfile ***************
two(){
echo "[HELK-DOCKER-INSTALLATION-INFO] Building the HELK container from local Dockerfile.."
docker build -t my_helk . >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not build HELK image from local Dockerfile (Error Code: $ERROR)."
exit 1
fi
echo "[HELK-DOCKER-INSTALLATION-INFO] Running the HELK container in the background.."
docker run -d -p 80:80 -p 5044:5044 -p 8880:8880 -p 4040:4040 -p 2181:2181 -p 9092:9092 -p 9093:9093 -p 9094:9094 -p 9000:9000 -p 8082:8082 -e "bootstrap.memory_lock=true" -e ADVERTISED_LISTENER="${host_ip}" --ulimit memlock=-1:-1 --name helk my_helk >> $LOGFILE 2>&1
# *********** Getting Jupyter Token ***************
get_token
}
# *********** Building the HELK from local bash script ***************
three(){
echo "[HELK-BASH-INSTALLATION-INFO] Installing the HELK from local bash script"
cd scripts/
./helk_debian_tar_install.sh
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not build HELK image from bash script (Error Code: $ERROR)."
exit 1
fi
# *********** Getting Jupyter Token ***************
echo "[HELK-BASH-INSTALLATION-INFO] Waiting for Jupyter Server to start.."
until curl -s localhost:8880 -o /dev/null; do
sleep 1
done
jupyter_token="$(grep -oP '(?<=token=).*' /var/log/spark/spark_pyspark.log | sort -u)"
}
# *********** Showing HELK Docker menu options ***************
show_menus() {
echo " "
echo "**********************************************"
echo "** HELK - M E N U **"
echo "** **"
echo "** Author: Roberto Rodriguez (@Cyb3rWard0g) **"
echo "** HELK build version: 0.9 (Alpha) **"
echo "** HELK ELK version: 6.2.0 **"
echo "** License: BSD 3-Clause **"
echo "**********************************************"
echo " "
echo "1. Pull the latest HELK image from DockerHub"
echo "2. Build the HELK image from local Dockerfile"
echo "3. Install the HELK from local bash script"
echo "4. Exit"
echo " "
}
read_options(){
local choice
read -p "[HELK-INSTALLATION-INFO] Enter choice [1-4]: " choice
get_host_ip
if [ "$choice" = "1" ] || [ "$choice" = "2" ]; then
if [ "$systemKernel" == "Linux" ]; then
# Reference: https://get.docker.com/
echo "[HELK-DOCKER-INSTALLATION-INFO] HELK identified Linux as the system kernel"
echo "[HELK-DOCKER-INSTALLATION-INFO] Checking distribution list and version"
# *********** Check distribution list ***************
lsb_dist="$(. /etc/os-release && echo "$ID")"
lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
# *********** Check distribution version ***************
case "$lsb_dist" in
ubuntu)
if [ -x "$(command -v lsb_release)" ]; then
dist_version="$(lsb_release --codename | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then
dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")"
fi
;;
debian|raspbian)
dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')"
case "$dist_version" in
9)
dist_version="stretch"
;;
8)
dist_version="jessie"
;;
7)
dist_version="wheezy"
;;
esac
;;
centos)
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
rhel|ol|sles)
ee_notice "$lsb_dist"
exit 1
;;
*)
if [ -x "$(command -v lsb_release)" ]; then
dist_version="$(lsb_release --release | cut -f2)"
fi
if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then
dist_version="$(. /etc/os-release && echo "$VERSION_ID")"
fi
;;
esac
if [ -z "$lsb_dist" ] || [ -z "$dist_version" ]; then
    echoerror "Could not verify distribution or version of the OS."
fi
echo "[HELK-DOCKER-INSTALLATION-INFO] You're using $lsb_dist version $dist_version"
# *********** Check if docker is installed ***************
if [ -x "$(command -v docker)" ]; then
echo "[HELK-DOCKER-INSTALLATION-INFO] Docker already installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Dockerizing HELK.."
else
echo "[HELK-DOCKER-INSTALLATION-INFO] Docker is not installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Checking if curl is installed first"
if [ -x "$(command -v curl)" ]; then
echo "[HELK-DOCKER-INSTALLATION-INFO] curl is already installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Ready to install Docker.."
else
echo "[HELK-DOCKER-INSTALLATION-INFO] curl is not installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Installing curl before installing docker.."
apt-get install -y curl >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not install curl (Error Code: $ERROR)."
exit 1
fi
fi
# ****** Installing via convenience script ***********
echo "[HELK-DOCKER-INSTALLATION-INFO] Installing docker via convenience script.."
curl -fsSL get.docker.com -o scripts/get-docker.sh >> $LOGFILE 2>&1
chmod +x scripts/get-docker.sh >> $LOGFILE 2>&1
scripts/get-docker.sh >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not install docker via convenience script (Error Code: $ERROR)."
exit 1
fi
fi
else
# *********** Check if docker is installed ***************
if [ -x "$(command -v docker)" ]; then
echo "[HELK-DOCKER-INSTALLATION-INFO] Docker already installed"
echo "[HELK-DOCKER-INSTALLATION-INFO] Dockerizing HELK.."
else
echo "[HELK-DOCKER-INSTALLATION-INFO] Install docker for $systemKernel"
exit 1
fi
fi
fi
echo "[HELK-INSTALLATION-INFO] Checking local vm.max_map_count variable and setting it to 262144"
MAX_MAP_COUNT=262144
if [ -n "$MAX_MAP_COUNT" ] && [ -f /proc/sys/vm/max_map_count ]; then
sysctl -q -w vm.max_map_count=$MAX_MAP_COUNT >> $LOGFILE 2>&1
ERROR=$?
if [ $ERROR -ne 0 ]; then
echoerror "Could not set vm.max_map_count to 262144 (Error Code: $ERROR)."
fi
fi
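The `sysctl -w` call above takes effect immediately but does not survive a reboot. A minimal sketch of making the Elasticsearch-required setting persistent, assuming a distro that reads drop-in files from `/etc/sysctl.d` (the file name `90-helk.conf` is illustrative, not part of the script):

```shell
# Persist vm.max_map_count for Elasticsearch across reboots (run as root).
# File name is illustrative; any *.conf under /etc/sysctl.d works.
echo "vm.max_map_count=262144" > /etc/sysctl.d/90-helk.conf
sysctl --system    # reload all sysctl configuration files
```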
case $choice in
1) one ;;
2) two ;;
3) three ;;
4) exit 0;;
*) echo -e "[HELK-INSTALLATION-INFO] Wrong choice..." && exit 1
esac
}
get_host_ip(){
# *********** Getting Host IP ***************
# https://github.com/Invoke-IR/ACE/blob/master/ACE-Docker/start.sh
echo "[HELK-INSTALLATION-INFO] Obtaining current host IP.."
case "${systemKernel}" in
Linux*) host_ip=$(ip route get 1 | awk '{print $NF;exit}');;
Darwin*) host_ip=$(ifconfig en0 | grep inet | grep -v inet6 | cut -d ' ' -f2);;
*) host_ip="UNKNOWN:${systemKernel}"
esac
# *********** Accepting Defaults or Allowing user to set HELK IP ***************
local ip_choice
local read_input
read -t 30 -p "[HELK-INSTALLATION-INFO] Set HELK IP. Default value is your current IP: " -e -i "${host_ip}" ip_choice
read_input=$?
ip_choice="${ip_choice:-$host_ip}"
if [ "$ip_choice" != "$host_ip" ]; then
host_ip=$ip_choice
fi
# read exits with 142 (128 + SIGALRM) when the 30-second timeout fires
if [ "$read_input" -eq 142 ]; then
echo -e "\n[HELK-INSTALLATION-INFO] HELK IP set to ${host_ip}"
else
echo "[HELK-INSTALLATION-INFO] HELK IP set to ${host_ip}"
fi
}
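`get_host_ip` can fall back to the literal string `UNKNOWN:...`, and that value would flow straight into Kafka's `ADVERTISED_LISTENER` via the `docker run` commands above. A hedged sketch of a guard that could validate the result first (the helper name `is_ipv4` is an assumption, not part of the script):

```shell
# Hypothetical guard (not in the original script): accept the detected
# address only if it matches a dotted-quad IPv4 pattern.
is_ipv4() {
    echo "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

host_ip="192.168.64.10"    # example value; the script derives this from `ip route`
if is_ipv4 "$host_ip"; then
    echo "[HELK-INSTALLATION-INFO] HELK IP set to ${host_ip}"
else
    echo "[HELK-INSTALLATION-INFO] ${host_ip} does not look like an IPv4 address" 1>&2
fi
```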
# *********** Running selected option ***************
show_menus
read_options
echo " "
echo " "
echo "***********************************************************************************"
echo "** [HELK-INSTALLATION-INFO] YOUR HELK IS READY **"
echo "** [HELK-INSTALLATION-INFO] USE THE FOLLOWING SETTINGS TO INTERACT WITH THE HELK **"
echo "***********************************************************************************"
echo " "
echo "HELK KIBANA URL: http://${host_ip}"
echo "HELK ELASTICSEARCH EXTERNAL URL: http://${host_ip}:8082"
echo "HELK CEREBRO URL: http://${host_ip}:9000"
echo "HELK KIBANA & ELASTICSEARCH USER: helk"
echo "HELK KIBANA & ELASTICSEARCH PASSWORD: hunting"
echo "HELK JUPYTER CURRENT TOKEN: ${jupyter_token}"
echo "HELK SPARK UI: http://${host_ip}:4040"
echo "HELK JUPYTER NOTEBOOK URI: http://${host_ip}:8880"
echo "${docker_access}"
echo " "
echo "IT IS HUNTING SEASON!!!!!"
echo " "
echo " "
echo " "


@ -105,7 +105,7 @@ ERROR=$?
echo "[HELK-BASH-INSTALLATION-INFO] Creating needed folders for the HELK.."
mkdir -pv /opt/helk/{scripts,training,otx,es-hadoop,spark,output_templates,dashboards,kafka,elasticsearch,logstash,kibana,cerebro,ksql} >> $LOGFILE 2>&1
echo "[HELK-BASH-INSTALLATION-INFO] Copying HELK files over.."
cp -v helk_kibana_setup.sh /opt/helk/scripts/ >> $LOGFILE 2>&1
cp -v helk-elk-kibana-setup.sh /opt/helk/scripts/ >> $LOGFILE 2>&1
cp -v helk_otx.py /opt/helk/scripts/ >> $LOGFILE 2>&1
cp -vr ../training/* /opt/helk/training/ >> $LOGFILE 2>&1
ERROR=$?