Remove duplicate doc files, update root README and re-organize docs folder

main
William Beuil 2021-02-26 11:41:00 +01:00
parent 2c1ae57cef
commit fc64d24ad4
GPG Key ID: BED2072C5C2BF537
22 changed files with 87 additions and 956 deletions


@@ -91,7 +91,7 @@ As you make your changes, you can re-run the above command to ensure that the te
go test ./pkg/iac/...
```
For more details on testing, check the [contributing guide](../doc/contributing/tests.md).
For more details on testing, check the [contributing guide](../docs/testing.md).
### Acceptance Tests: Testing interactions with external services
@@ -104,7 +104,7 @@ We recommend focusing only on the specific package you are working on when enabl
Because the acceptance tests depend on services outside of the driftctl codebase, and because the acceptance tests are usually used only when making changes to the systems they cover, it is common and expected that drift in those external systems will cause test failures.
Because of this, prior to working on a system covered by acceptance tests it's important to run the existing tests for that system in an *unchanged* work tree first and respond to any test failures that preexist, to avoid misinterpreting such failures as bugs in your new changes.
More details on acceptance tests are in the [contributing guide](../doc/contributing/README.md)
More details on acceptance tests are in the [contributing guide](../docs/README.md)
## Generated Code

README.md (166 changed lines)

@@ -1,5 +1,5 @@
<p align="center">
<img width="201" src="assets/new_icon.svg" alt="Driftctl">
<img width="200" src="https://docs.driftctl.com/img/driftctl_dark.svg" alt="driftctl">
</p>
<p align="center">
@@ -23,11 +23,11 @@
<p align="center">
Measures infrastructure as code coverage, and tracks infrastructure drift.<br>
<strong>IaC:</strong> Terraform, <strong>Cloud platform:</strong> AWS (Azure and GCP on the roadmap for 2021).<br>
<strong>IaC:</strong> Terraform, <strong>Cloud providers:</strong> AWS, GitHub (Azure and GCP on the roadmap for 2021).<br>
:warning: <strong>This tool is still in beta state and will evolve in the future with potential breaking changes</strong> :warning:
</p>
## Why ?
## Why driftctl ?
Infrastructure as code is awesome, but there are too many moving parts: codebase, state file, actual cloud state. Things tend to drift.
@@ -40,169 +40,23 @@ driftctl tracks how well your IaC codebase covers your cloud configuration. drif
## Features
- **Scan** cloud provider and map resources with IaC code
- Analyze diff, and warn about drift and unwanted unmanaged resources
- Analyze diffs, and warn about drift and unwanted unmanaged resources
- Allow users to **ignore** resources
- Multiple output formats
## Documentation & support
---
- [Get started](https://driftctl.com/product/quick-tutorial/)
- [User guide](doc/README.md)
- [Discord](https://discord.gg/NMCBxtD7Nd)
**[Get Started](https://driftctl.com/product/quick-tutorial/)**
## Getting started
**[Documentation](https://docs.driftctl.com)**
### Installation
**[Discord](https://discord.gg/NMCBxtD7Nd)**
driftctl is available on Linux, macOS and Windows.
Binaries are available on the [release page](https://github.com/cloudskiff/driftctl/releases).
#### Homebrew for macOS
```bash
brew install driftctl
```
#### MacPorts for macOS
```bash
sudo port install driftctl
```
#### Docker
```bash
docker run -t --rm \
-v ~/.aws:/home/.aws:ro \
-v $(pwd):/app:ro \
-v ~/.driftctl:/home/.driftctl \
-e AWS_PROFILE=non-default-profile \
cloudskiff/driftctl scan
```
`-v ~/.aws:/home/.aws:ro` (optionally) mounts your `~/.aws` containing AWS credentials and profile
`-v $(pwd):/app:ro` (optionally) mounts your working dir containing the terraform state
`-v ~/.driftctl:/home/.driftctl` (optionally) prevents driftctl from downloading the provider on each run
`-e AWS_PROFILE=cloudskiff` (optionally) exports the non-default AWS profile name to use
`cloudskiff/driftctl:<VERSION_TAG>` runs a specific tagged driftctl release
#### Manual
- **Linux**
This is an example using `curl`. If you don't have `curl`, install it, or use `wget`.
```bash
# x64
curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl
# x86
curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_386 -o driftctl
```
Make the binary executable:
```bash
chmod +x driftctl
```
Optionally install driftctl to a central location in your `PATH`:
```bash
# use any path that suits you, this is just a standard example. Install sudo if needed.
sudo mv driftctl /usr/local/bin/
```
- **macOS**
```bash
# x64
curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_darwin_amd64 -o driftctl
```
Make the binary executable:
```bash
chmod +x driftctl
```
Optionally install driftctl to a central location in your `PATH`:
```bash
# use any path that suits you, this is just a standard example. Install sudo if needed.
sudo mv driftctl /usr/local/bin/
```
- **Windows**
```bash
# x64
curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_windows_amd64.exe -o driftctl.exe
# x86
curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_windows_386.exe -o driftctl.exe
```
#### Verify digital signatures
Cloudskiff releases are signed using a PGP key (ed25519) with ID `ACC776A79C824EBD` and fingerprint `2776 6600 5A7F 01D4 84F6 376D ACC7 76A7 9C82 4EBD`.
Our key can be retrieved from common keyservers.
```shell
# Download binary, checksums and signature
$ curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_linux_amd64 -o driftctl_linux_amd64
$ curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_SHA256SUMS -o driftctl_SHA256SUMS
$ curl -L https://github.com/cloudskiff/driftctl/releases/latest/download/driftctl_SHA256SUMS.gpg -o driftctl_SHA256SUMS.gpg
# Import key
$ gpg --keyserver hkps.pool.sks-keyservers.net --recv-keys 0xACC776A79C824EBD
gpg: key ACC776A79C824EBD: public key "Cloudskiff <security@cloudskiff.com>" imported
gpg: Total number processed: 1
gpg: imported: 1
# Verify signature (optionally trust the key from gnupg to avoid any warning)
$ gpg --verify driftctl_SHA256SUMS.gpg
gpg: Signature made jeu. 04 févr. 2021 14:58:06 CET
gpg: using EDDSA key 277666005A7F01D484F6376DACC776A79C824EBD
gpg: issuer "security@cloudskiff.com"
gpg: Good signature from "Cloudskiff <security@cloudskiff.com>" [ultimate]
# Verify checksum
$ sha256sum --ignore-missing -c driftctl_SHA256SUMS
driftctl_linux_amd64: OK
```
### Run
Be sure to have [configured](doc/cmd/scan/supported_resources/aws.md#authentication) your AWS credentials.
You will need to assign [proper permissions](doc/cmd/scan/supported_resources/aws.md#least-privileged-policy) to allow driftctl to scan your account.
```bash
# With a local state
$ driftctl scan
# Same as
$ driftctl scan --from tfstate://terraform.tfstate
# To specify AWS credentials
$ AWS_ACCESS_KEY_ID=XXX AWS_SECRET_ACCESS_KEY=XXX driftctl scan
# or using a profile
$ AWS_PROFILE=profile_name driftctl scan
# With state stored on a s3 backend
$ driftctl scan --from tfstate+s3://my-bucket/path/to/state.tfstate
# With multiples states
$ driftctl scan --from tfstate://terraform_S3.tfstate --from tfstate://terraform_VPC.tfstate
```
---
## Contribute
To learn more about compiling driftctl and contributing, please refer to the [contribution guidelines](.github/CONTRIBUTING.md) and [contributing guide](doc/contributing/README.md) for technical details.
To learn more about compiling driftctl and contributing, please refer to the [contribution guidelines](.github/CONTRIBUTING.md) and the [contributing guide](docs/README.md) for technical details.
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification and is brought to you by these [awesome contributors](CONTRIBUTORS.md).

(deleted: binary image file, 3.8 KiB)


@@ -1 +0,0 @@
(deleted: one-line SVG logo file, 2.0 KiB)

(deleted: image file, 5.0 KiB; diff suppressed because one or more lines are too long)


@@ -1,31 +0,0 @@
# Known Issues and Limitations
## AWS Regions & Credentials Limits
- The user needs to use the same AWS region and credentials for both the scanned infrastructure and the S3 bucket where the Terraform state is stored (for example, a Terraform state stored on S3 in us-east-1 for an infrastructure to be scanned in us-west-1 won't work). The single `AWS_PROFILE` in use is the underlying reason. See the related [GitHub Discussion](https://github.com/cloudskiff/driftctl/discussions/130).
- Driftctl currently doesn't support multiple aliased providers in a single Terraform state (like a single account but multiple regions). This will be implemented soon.
## Terraform & Providers Support
- Terraform version >= 0.12 is supported
- Terraform AWS provider version >= 3.x is supported
## Terraform Resources
### AWS
- aws_security_group and aws_security_group_rule:
For security groups that have in-line egress or ingress rules, driftctl will output an alert message at the end of the scan to warn you that those rules are falsely reported as unmanaged. This is because, based only on the Terraform state, we can't distinguish rules created in the console from rules created in-line in either egress or ingress blocks.
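If these false positives are noisy in your reports, one last-resort option (a sketch, assuming you are willing to ignore all security group rules, including genuinely unmanaged ones) is a `.driftignore` entry:

```ignore
# Caution: hides ALL security group rule drift, not just the in-line false positives
aws_security_group_rule.*
```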
### Github
- github_branch_protection_v3:
- This resource is not supported, and probably never will be, as it overlaps with github_branch_protection.
`github_branch_protection` is more suitable for performance reasons.
We cannot support these two resources at once, as we have no way to discriminate between them when enumerating resources from the remote side. They represent the same notion but come from two different APIs (REST vs GraphQL).
The driftctl team recommends using the newer `github_branch_protection`, or at least ignoring all your `github_branch_protection_v3` resources in your driftignore.
- github_branch_protection:
- Branch protection resources are not returned as unmanaged if the branch protection pattern does not match at least one branch.
- We cannot show the related repository name in driftctl output as the terraform provider does not retrieve this information.
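The driftignore fallback mentioned for `github_branch_protection_v3` can be sketched as a single `.driftignore` entry (hypothetical, matching every resource of that type):

```ignore
# Ignore every github_branch_protection_v3 resource
github_branch_protection_v3.*
```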


@@ -1,45 +0,0 @@
# User guide
### Global flags
#### Version check
By default, driftctl checks for a new version remotely. To disable this behavior, either use the flag `--no-version-check` or define the environment variable `DCTL_NO_VERSION_CHECK`.
#### Error reporting
When a crash occurs in driftctl, we do not send any crash report by default.
For debugging purposes, you can add `--error-reporting` when running driftctl; crash data will then be sent to us via [Sentry](https://sentry.io).
Details of reported data can be found [here](./cmd/flags/error-reporting.md)
#### Log level
By default, the driftctl logger only displays warning and error messages. You can set the `LOG_LEVEL` environment variable to change the default level.
Valid values are: `trace`, `debug`, `info`, `warn`, `error`, `fatal`, `panic`.
**Note:** In trace level, terraform provider logs will be shown.
Example
```shell
$ LOG_LEVEL=debug driftctl scan
DEBU[0000] New provider library created
DEBU[0000] Found existing provider path=/home/driftctl/.driftctl/plugins/linux_amd64/terraform-provider-aws_v3.19.0_x5
DEBU[0000] Starting gRPC client alias=us-east-1
DEBU[0001] New gRPC client started alias=us-east-1
...
```
### Usage
- Commands
- Scan
- [Output format](cmd/scan/output.md)
- [Filtering resources](cmd/scan/filter.md)
- [Supported remotes](cmd/scan/supported_resources/README.md)
- [Iac sources](cmd/scan/iac_source.md)
- [Completion](cmd/completion/script.md)
## Issues
- [Known Issues & Limitations](LIMITATIONS.md)


@@ -1,63 +0,0 @@
# Driftctl completion script
Driftctl can output a completion script (also known as *tab completion*) for you to use in your shell. Currently the `bash`, `zsh`, `fish` and `powershell` shells are supported.
### Before you start
In order to generate the completion script required to make completion work, you have to install the driftctl CLI first.
### Generate the completion file
To generate the completion script you can use:
```shell
$ driftctl completion [bash|zsh|fish|powershell]
```
By default, this command prints the completion script to standard output. To make completion work, you will need to redirect it to your shell's completion folder.
### Bash
```shell
# Linux:
$ driftctl completion bash | sudo tee /etc/bash_completion.d/driftctl
# MacOS:
$ driftctl completion bash > /usr/local/etc/bash_completion.d/driftctl
```
Remember to open a new shell to test the functionality.
### Zsh
If shell completion is not already enabled in your environment, you will need to enable it. You can execute the following once:
```shell
$ echo "autoload -U compinit; compinit" >> ~/.zshrc
```
At this point you can generate and place the completion script in your completion folder listed in your `fpath` if it already exists. Otherwise, you can create a directory, add it to your `fpath` and copy the file in it:
```shell
$ driftctl completion zsh > fpath/completion_folder/_driftctl
```
#### Oh-My-Zsh
```shell
$ mkdir -p ~/.oh-my-zsh/completions
$ driftctl completion zsh > ~/.oh-my-zsh/completions/_driftctl
```
You will need to start a new shell for this setup to take effect.
### Fish
```shell
$ driftctl completion fish > ~/.config/fish/completions/driftctl.fish
```
Remember to create the directory if it's not already there: `mkdir -p ~/.config/fish/completions/`.
Remember to open a new shell to test the functionality.
### Powershell
```shell
$ driftctl completion powershell > driftctl.ps1
```
You will need to source this file from your powershell profile for this to work as expected.


@@ -1,31 +0,0 @@
# Error reporting
Below is a list of data we retrieve when error reporting is enabled.
* **date**: Event date
* **os name**: Operating system (string, e.g. "linux | mac | windows")
* **architecture**: CPU architecture (string, e.g. "amd64 | i386")
* **num_cpu**: Number of CPU cores (int, e.g. 8)
* **release**: driftctl version (string, e.g. "v0.2.2")
* **server_name**: Your computer's hostname (string, e.g. "yourhostname")
* **runtime version**: Golang version (string, e.g. "go1.15.2")
* **runtime infos**: Variables go_maxprocs, go_numcgocalls, go_numroutines
* **packages**: Golang packages in use and their versions
* **stacktrace**: The error stack
## Example
Below is a full example of a nil pointer crash report
![Sentry](./img/sentry.png)
The raw stack for this example is:
```
runtime.errorString: runtime error: invalid memory address or nil pointer dereference
File "/go/src/app/pkg/parallel_runner.go", line 93, in (*ParallelRunner).Run.func1.1
File "/go/src/app/pkg/remote/aws/s3_bucket_supplier.go", line 71, in readBucketRegion
File "/go/src/app/pkg/remote/aws/s3_bucket_inventory_supplier.go", line 42, in (*S3BucketInventorySupplier).Resources
File "/go/src/app/pkg/scanner.go", line 28, in (*Scanner).Resources.func1
File "/go/src/app/pkg/parallel_runner.go", line 97, in (*ParallelRunner).Run.func1
```

(deleted: binary image, the Sentry screenshot referenced above, 245 KiB)


@@ -1,81 +0,0 @@
# Filtering resources
Driftctl offers two ways to filter resources:
- Driftignore
- Filter rules
**Driftignore** is a simple way to ignore resources: you list resources in a `.driftignore` file, much like a `.gitignore`.
**Filter rules** allow you to build complex expressions to include and exclude a set of resources in your workflow.
Powered by the JMESPath expression language, they let you build complex include and exclude expressions.
If you only need to exclude a set of resources, use `.driftignore`; if you need something more advanced, check filter rules.
## Driftignore
Create the `.driftignore` file where you launch driftctl (usually the root of your IaC repo).
Each line must be of the form:
- `resource_type.resource_id`, where `resource_id` can be a wildcard to exclude all resources of a given type.
- `resource_type.resource_id.path.to.FieldName`, where `resource_id` can be a wildcard to ignore drift on a given field for a given type; the path can also contain wildcards.
**N.B.** Fields are not case-sensitive.
If your resource id or a field path contains dots or backslashes, you can escape them with backslashes:
```ignore
resource_type.resource\.id\.containing\.dots.path.to.dotted\.FieldName
resource_type.resource_id_containing\\backslash.path.to.backslash\\FieldName
```
### Example
```ignore
# Will ignore the S3 bucket called my-bucket
aws_s3_bucket.my-bucket
# Will ignore every aws_instance resource
aws_instance.*
# Will ignore Environment for all lambda functions
aws_lambda_function.*.Environment
# Will ignore lastModified for my-lambda-name lambda function
aws_lambda_function.my-lambda-name.LastModified
```
## Filter rules
Filter rules can be passed to the `scan` command with the `--filter` flag.
You can also use the `DCTL_FILTER` environment variable.
The filter rule syntax is [JMESPath](https://jmespath.org/specification.html).
Filters are applied to a normalized struct which contains the following fields:
- **Type**: Type of the resource, e.g. : `aws_s3_bucket`
- **Id**: Id of the resource, e.g. : `my-bucket-name`
- **Attr**: Contains every resource attributes (check `pkg/resource/aws/aws_s3_bucket.go` for a full list of supported attributes for a bucket for example).
### Example
```shell script
# Will include only S3 bucket in the search
driftctl scan --filter "Type=='aws_s3_bucket'"
# OR (beware: escape your shell's special chars between double quotes)
driftctl scan --filter $'Type==\'aws_s3_bucket\''
# Exclude only the s3 bucket named 'my-bucket-name'
driftctl scan --filter $'Type==\'aws_s3_bucket\' && Id!=\'my-bucket-name\''
# Ignore buckets that have a 'terraform' tag equal to 'false'
driftctl scan --filter $'!(Type==\'aws_s3_bucket\' && Attr.Tags.terraform==\'false\')'
# Ignore buckets that don't have a 'terraform' tag
driftctl scan --filter $'!(Type==\'aws_s3_bucket\' && Attr.Tags != null && !contains(keys(Attr.Tags), \'terraform\'))'
# Ignore buckets with an ID prefix of 'terraform-'
driftctl scan --filter $'!(Type==\'aws_s3_bucket\' && starts_with(Id, \'terraform-\'))'
# Ignore buckets with an ID suffix of '-test'
driftctl scan --filter $'!(Type==\'aws_s3_bucket\' && ends_with(Id, \'-test\'))'
# Ignore archived GitHub repositories
driftctl scan --to github+tf --filter '!(Attr.Archived)'
```


@@ -1,53 +0,0 @@
# IaC source
Currently, driftctl only supports reading IaC from a Terraform state.
We are investigating support for Terraform code as well, since a state does not represent an intention.
Multiple states can be read by passing multiple `--from` flags.
Example:
```shell
# I want to read a local state and a state stored in an S3 bucket:
driftctl scan \
--from tfstate+s3://statebucketdriftctl/terraform.tfstate \
--from tfstate://terraform_toto.tfstate
# You can also use every file under a given prefix for S3
driftctl scan --from tfstate+s3://statebucketdriftctl/states
# ... or in a given local folder
# driftctl will recursively use all files under this folder.
#
# N.B. Symlinks under the root folder will be ignored.
# If the folder itself is a symlink it will be followed.
driftctl scan --from tfstate://my-states/directory
```
## Supported IaC sources
* Terraform state
* Local: `--from tfstate://terraform.tfstate`
* S3: `--from tfstate+s3://my-bucket/path/to/state.tfstate`
### S3
driftctl needs read-only access, so you can use the policy below to ensure minimal access to your state file:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::mybucket"
},
{
"Effect": "Allow",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/path/to/my/key"
}
]
}
```


@@ -1,98 +0,0 @@
# Output format
Driftctl supports multiple output formats and by default uses the standard output (console).
## Console
Environment: `DCTL_OUTPUT`
### Usage
```
$ driftctl scan
$ driftctl scan --output console://
$ DCTL_OUTPUT=console:// driftctl scan
```
### Structure
```
Found deleted resources:
aws_s3_bucket:
- driftctl-bucket-test-2
Found unmanaged resources:
aws_s3_bucket:
- driftctl-bucket-test-3
Found drifted resources:
- driftctl-bucket-test-1 (aws_s3_bucket):
~ Versioning.0.Enabled: false => true
Found 3 resource(s)
- 33% coverage
- 1 covered by IaC
- 1 not covered by IaC
- 1 deleted on cloud provider
- 1/1 drifted from IaC
```
## JSON
### Usage
```
$ driftctl scan --output json:///tmp/result.json # Will output results to /tmp/result.json
$ driftctl scan --output json://result.json # Will output results to ./result.json
$ DCTL_OUTPUT=json://result.json driftctl scan
```
### Structure
```json5
{
"summary": {
"total_resources": 3,
"total_drifted": 1,
"total_unmanaged": 1,
"total_deleted": 1,
"total_managed": 1
},
"managed": [ // list of resources found in IaC and in sync with remote
{
"id": "driftctl-bucket-test-1",
"type": "aws_s3_bucket"
}
],
"unmanaged": [ // list of resources found in remote but not in IaC
{
"id": "driftctl-bucket-test-3",
"type": "aws_s3_bucket"
}
],
"deleted": [ // list of resources found in IaC but not on remote
{
"id": "driftctl-bucket-test-2",
"type": "aws_s3_bucket"
}
],
"differences": [ // A list of changes on managed resources
{
"res": {
"id": "driftctl-bucket-test-1",
"type": "aws_s3_bucket"
},
"changelog": [
{
"type": "update", // Kind of change, could be one of update, create, delete
"path": [ // Path of the change, sorted from root to leaf
"Versioning",
"0",
"Enabled"
],
"from": false, // Mixed type
"to": true // Mixed type
}
]
}
],
"coverage": 33
}
```
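The JSON report lends itself to post-processing with standard tools. A minimal sketch (assuming `jq` is installed and the report was written to `result.json` as in the usage example above):

```shell
# Extract the coverage percentage from a driftctl JSON report
# (result.json is assumed to exist, e.g. from: driftctl scan --output json://result.json)
coverage=$(jq -r '.coverage' result.json)
echo "IaC coverage: ${coverage}%"
# List the ids of unmanaged resources
jq -r '.unmanaged[].id' result.json
```

This kind of one-liner is handy for failing a CI job when coverage drops below a threshold.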


@@ -1,4 +0,0 @@
# Supported remotes
- [AWS](aws.md)
- [Github](github.md)


@@ -1,303 +0,0 @@
# AWS Supported resources
## Authentication
To use driftctl, we need credentials to make authenticated requests to AWS. Just like the AWS CLI, we use [credentials and configuration](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html) settings declared as user environment variables, or in local AWS configuration files.
Driftctl supports [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html). By default, the CLI uses the settings found in the profile named `default`. You can override an individual setting by declaring the supported environment variables such as `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_PROFILE` ...
If you are using an [IAM role](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html) as an authorization tool, which is considered a good practice, please be aware that you can still use driftctl by defining a profile for the role in your `~/.aws/config` file.
```bash
[profile driftctlrole]
role_arn = arn:aws:iam::123456789012:role/<NAMEOFTHEROLE>
source_profile = user # profile to assume the role
region = eu-west-3
```
You can now use driftctl by overriding the profile setting.
```bash
$ AWS_PROFILE=driftctlrole driftctl scan
```
## CloudFormation template
Deploy this CloudFormation template to create our limited permission role that you can use as per our above authentication guide.
[![Launch Stack](https://cdn.rawgit.com/buildkite/cloudformation-launch-stack-button-svg/master/launch-stack.svg)](https://console.aws.amazon.com/cloudformation/home?#/stacks/quickcreate?stackName=driftctl-stack&templateURL=https://driftctl-cfn-templates.s3.eu-west-3.amazonaws.com/driftctl-role.yml)
### Update the CloudFormation template
There is no automatic way for us to update the CloudFormation template, because you launched this template in your own AWS account. That's why you must be the one to update the template to stay on the most recent driftctl role.
Find below two ways to update the CloudFormation template:
1. With the AWS console
- In the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation), from the list of stacks, select the driftctl stack
- In the stack details pane, choose **Update**
- Select **Replace current template** and specify our **Amazon S3 URL** `https://driftctl-cfn-templates.s3.eu-west-3.amazonaws.com/driftctl-role.yml`, click **Next**
- On the **Specify stack details** and the **Configure stack options** pages, click **Next**
- In the **Change set preview** section, check that AWS CloudFormation will indeed make changes
- Since our template contains one IAM resource, select **I acknowledge that this template may create IAM resources**
- Finally, click **Update stack**
2. With the AWS CLI
```console
$ aws cloudformation update-stack --stack-name DRIFTCTL_STACK_NAME --template-url https://driftctl-cfn-templates.s3.eu-west-3.amazonaws.com/driftctl-role.yml --capabilities CAPABILITY_NAMED_IAM
```
## Least privileged policy
Driftctl needs access to your cloud provider account so that it can list resources on your behalf.
As the AWS documentation recommends, the policy below grants only the permissions required to perform driftctl's tasks.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "*",
"Action": [
"cloudfront:GetDistribution",
"cloudfront:ListDistributions",
"cloudfront:ListTagsForResource",
"ec2:DescribeAddresses",
"ec2:DescribeImages",
"ec2:DescribeInstanceAttribute",
"ec2:DescribeInstances",
"ec2:DescribeInstanceCreditSpecifications",
"ec2:DescribeInternetGateways",
"ec2:DescribeKeyPairs",
"ec2:DescribeNetworkAcls",
"ec2:DescribeRouteTables",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSnapshots",
"ec2:DescribeTags",
"ec2:DescribeVolumes",
"ec2:DescribeVpcs",
"ec2:DescribeVpcAttribute",
"ec2:DescribeVpcClassicLink",
"ec2:DescribeVpcClassicLinkDnsSupport",
"ec2:DescribeSubnets",
"ec2:DescribeNatGateways",
"ecr:DescribeRepositories",
"ecr:ListTagsForResource",
"iam:GetPolicy",
"iam:GetPolicyVersion",
"iam:GetRole",
"iam:GetRolePolicy",
"iam:GetUser",
"iam:GetUserPolicy",
"iam:ListAccessKeys",
"iam:ListAttachedRolePolicies",
"iam:ListAttachedUserPolicies",
"iam:ListPolicies",
"iam:ListRolePolicies",
"iam:ListRoles",
"iam:ListUserPolicies",
"iam:ListUsers",
"kms:DescribeKey",
"kms:GetKeyPolicy",
"kms:GetKeyRotationStatus",
"kms:ListAliases",
"kms:ListKeys",
"kms:ListResourceTags",
"lambda:GetEventSourceMapping",
"lambda:GetFunction",
"lambda:GetFunctionCodeSigningConfig",
"lambda:ListEventSourceMappings",
"lambda:ListFunctions",
"lambda:ListVersionsByFunction",
"rds:DescribeDBInstances",
"rds:DescribeDBSubnetGroups",
"rds:ListTagsForResource",
"route53:GetHostedZone",
"route53:ListHostedZones",
"route53:ListResourceRecordSets",
"route53:ListTagsForResource",
"route53:ListHealthChecks",
"route53:GetHealthCheck",
"s3:GetAccelerateConfiguration",
"s3:GetAnalyticsConfiguration",
"s3:GetBucketAcl",
"s3:GetBucketCORS",
"s3:GetBucketLocation",
"s3:GetBucketLogging",
"s3:GetBucketNotification",
"s3:GetBucketObjectLockConfiguration",
"s3:GetBucketPolicy",
"s3:GetBucketRequestPayment",
"s3:GetBucketTagging",
"s3:GetBucketVersioning",
"s3:GetBucketWebsite",
"s3:GetEncryptionConfiguration",
"s3:GetInventoryConfiguration",
"s3:GetLifecycleConfiguration",
"s3:GetMetricsConfiguration",
"s3:GetReplicationConfiguration",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"sqs:GetQueueAttributes",
"sqs:ListQueueTags",
"sqs:ListQueues",
"sns:ListTopics",
"sns:GetTopicAttributes",
"sns:ListTagsForResource",
"sns:ListSubscriptions",
"sns:ListSubscriptionsByTopic",
"sns:GetSubscriptionAttributes",
"dynamodb:ListTables",
"dynamodb:DescribeTable",
"dynamodb:DescribeGlobalTable",
"dynamodb:ListTagsOfResource",
"dynamodb:DescribeTimeToLive",
"dynamodb:DescribeTableReplicaAutoScaling",
"dynamodb:DescribeContinuousBackups"
]
}
]
}
```
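One way to put this document to work (a sketch; the policy name, user name, and account id are illustrative) is to save it locally as `policy.json` and create a managed policy with the AWS CLI:

```shell
# Sanity-check that policy.json (the document above, saved locally) is valid JSON
python3 -m json.tool policy.json > /dev/null && echo "policy.json: OK"

# Create the least-privileged managed policy, then attach it to a
# hypothetical dedicated driftctl user
aws iam create-policy \
  --policy-name driftctl-least-privilege \
  --policy-document file://policy.json
aws iam attach-user-policy \
  --user-name driftctl-bot \
  --policy-arn "arn:aws:iam::123456789012:policy/driftctl-least-privilege"
```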
## S3
- [x] aws_s3_bucket
- [x] aws_s3_bucket_analytics_configuration
- [x] aws_s3_bucket_inventory
- [x] aws_s3_bucket_metric
- [x] aws_s3_bucket_notification
- [x] aws_s3_bucket_policy
- [ ] aws_s3_access_point
- [ ] aws_s3_account_public_access_block
- [ ] aws_s3_bucket_object
- [ ] aws_s3_bucket_public_access_block
## EC2
- [x] aws_instance
- [x] aws_key_pair
- [x] aws_ami
- [x] aws_ebs_snapshot
- [x] aws_ebs_volume
- [x] aws_eip
- [x] aws_eip_association
## Lambda
- [x] aws_lambda_function
- [ ] aws_lambda_alias
- [x] aws_lambda_event_source_mapping
- [ ] aws_lambda_function_event_invoke_config
- [ ] aws_lambda_layer_version
- [ ] aws_lambda_permission
- [ ] aws_lambda_provisioned_concurrency_config
## RDS
- [x] aws_db_instance
- [x] aws_db_subnet_group
- [ ] aws_rds_cluster
- [ ] aws_rds_cluster_endpoint
- [ ] aws_rds_cluster_instance
- [ ] aws_db_cluster_snapshot
- [ ] aws_db_event_subscription
- [ ] aws_db_instance_role_association
- [ ] aws_db_option_group
- [ ] aws_db_parameter_group
- [ ] aws_db_proxy
- [ ] aws_db_proxy_default_target_group
- [ ] aws_db_snapshot
- [ ] aws_rds_cluster_endpoint
- [ ] aws_rds_cluster_parameter_group
- [ ] aws_rds_global_cluster
- [ ] aws_db_security_group
## Route53
- [x] aws_route53_record
- [x] aws_route53_zone
- [ ] aws_route53_delegation_set
- [x] aws_route53_health_check
- [ ] aws_route53_query_log
- [ ] aws_route53_vpc_association_authorization
- [ ] aws_route53_zone_association
## IAM
- [x] aws_iam_access_key
- [ ] aws_iam_instance_profile
- [x] aws_iam_policy
- [x] aws_iam_policy_attachment
- [ ] aws_iam_group
- [ ] aws_iam_group_membership
- [ ] aws_iam_group_policy
- [ ] aws_iam_group_policy_attachment
- [x] aws_iam_role
- [x] aws_iam_role_policy
- [x] aws_iam_role_policy_attachment
- [x] aws_iam_user
- [ ] aws_iam_user_group_membership
- [x] aws_iam_user_policy
- [x] aws_iam_user_policy_attachment
- [ ] aws_iam_user_ssh_key
- [ ] aws_iam_account_alias
- [ ] aws_iam_account_password_policy
- [ ] aws_iam_openid_connect_provider
- [ ] aws_iam_saml_provider
- [ ] aws_iam_server_certificate
- [ ] aws_iam_service_linked_role
- [ ] aws_iam_user_login_profile
## VPC
- [x] aws_default_subnet
- [x] aws_subnet
- [x] aws_default_vpc
- [x] aws_vpc
- [x] aws_default_security_group
- [x] aws_security_group
- [x] aws_security_group_rule
- [x] aws_route_table
- [x] aws_default_route_table
- [x] aws_route
- [x] aws_route_table_association
- [x] aws_nat_gateway
- [x] aws_internet_gateway
## SQS
- [x] aws_sqs_queue
- [x] aws_sqs_queue_policy
## SNS
- [x] aws_sns_topic
- [x] aws_sns_topic_policy
- [x] aws_sns_topic_subscription
- [ ] aws_sns_platform_application
- [ ] aws_sns_sms_preferences
## DynamoDB
- [x] aws_dynamodb_table
- [ ] aws_dynamodb_global_table
- [ ] aws_dynamodb_table_item
## Cloudfront
- [x] aws_cloudfront_distribution
## ECR
- [x] aws_ecr_repository
## KMS
- [x] aws_kms_key
- [x] aws_kms_alias
- [ ] aws_kms_external_key

View File

@ -1,28 +0,0 @@
# Github
## Authentication
To use driftctl, we need credentials to make authenticated requests to GitHub. Just like the terraform provider, we retrieve the configuration from [environment variables](https://registry.terraform.io/providers/integrations/github/latest/docs#argument-reference).
```bash
$ GITHUB_TOKEN=14758f1afd44c09b7992073ccf00b43d GITHUB_ORGANIZATION=my-org driftctl scan --to github+tf
```
## Least privileged policy
Below you can find the minimal scope required for driftctl to be able to scan every supported GitHub resource.
```shell
repo # Required to enumerate public and private repos
read:org # Used to list your organization teams
```
**⚠️ Beware that if you don't set correct permissions for your token, you won't see any errors and all resources will appear as deleted from remote**
## Supported resources
- [x] github_repository
- [x] github_team
- [x] github_membership
- [x] github_team_membership
- [x] github_branch_protection

View File

@ -1,16 +0,0 @@
# Developer guide
- [How to add new resource](./adding_a_new_resource.md)
- [Tests](./tests.md)
## Core concepts
Driftctl uses Terraform providers besides cloud providers SDK to retrieve data.
Resource listing is done using cloud providers SDK, then resource details retrieval is done by calling the terraform provider with gRPC.
## Terminology
- `Scanner` Scanner is used to scan multiple cloud providers and return a set of resources. It calls every declared `Supplier`.
- `Remote` A remote is a representation of a cloud provider
- `Resource` A resource is an abstract representation of a cloud provider resource (e.g. S3 bucket, EC2 instance, etc ...)
- `ResourceSupplier` There should be only one ResourceSupplier per resource. A ResourceSupplier is used to list resources of a given type on a given remote and return a resource list.

19
docs/README.md Normal file
View File

@ -0,0 +1,19 @@
# Developer guide
This directory contains some documentation about the driftctl codebase, aimed at readers who are interested in making code contributions.
- [Add new resources](new-resource.md)
- [Testing](testing.md)
## Core concepts
driftctl uses Terraform providers alongside cloud providers' SDKs to retrieve data.
Resource listing is done using the cloud provider's SDK. Resource details are then retrieved by calling the terraform provider over gRPC.
## Terminology
- `Scanner` is used to scan multiple cloud providers and return a set of resources; it calls every declared `Supplier`
- `Remote` is a representation of a cloud provider
- `Resource` is an abstract representation of a cloud provider resource (e.g. S3 bucket, EC2 instance, etc ...)
- `ResourceSupplier` is used to list resources of a given type on a given remote and return a resource list; there should be only one `ResourceSupplier` per resource

View File

(binary image file — 31 KiB before and after, unchanged)

View File

@ -1,10 +1,12 @@
# Adding a new resource type to driftctl
# Add new resources
![Diagram](media/resource.png)
## 1 Defining the resource
## Defining the resource
First step is to implement a new resource will be to define a go struct representing all fields that needs to be monitored for this kind of resource.
You can find example in already implemented resource like aws.S3Bucket
The first step is to implement a new resource. To do that, you need to define a Go struct representing all the fields that need to be monitored for this kind of resource.
You can find several examples in already implemented resources like aws.S3Bucket:
```go
type AwsS3Bucket struct {
@ -45,6 +47,7 @@ func (s S3Bucket) NormalizeForProvider() (resource.Resource, error) {
err := normalizePolicy(&s)
return &s, err
}
func normalizePolicy(s *S3Bucket) error {
if s.Policy.Policy != nil {
jsonString, err := structure.NormalizeJsonString(*s.Policy.Policy)
@ -59,19 +62,21 @@ func normalizePolicy(s *S3Bucket) error {
You can implement different normalizations for the state representation and for the supplier one.
## 2 Supplier and Deserializer
## Supplier and Deserializer
Then you will have to implement two interfaces:
- `resource.supplier` is used to read resources list. It will call the cloud provider sdk to get the list of resources, and
the terraform provider to get the details for each of these resources.
- `resource.supplier` is used to read the resource list. It will call the cloud provider SDK to get the list of resources, and
the terraform provider to get the details for each of these resources
- `remote.CTYDeserializer` is used to transform terraform cty output into your resource
### Supplier
This is used to read resources list. It will call the cloud provider sdk to get the list of resources, and the
This is used to read the resource list. It will call the cloud provider SDK to get the list of resources, and the
terraform provider to get the details for each of these resources.
You can use an already implemented resource as an example.
The supplier constructor can take these arguments:
- an instance of `ParallelRunner` that you will use to parallelize your calls to the supplier:
```go
@ -98,7 +103,7 @@ if err != nil {
appendValueIntoMap(results, aws.AwsS3BucketResourceType, s3Bucket)
```
- an instance of the cloud provider sdk that you will use to retrieve resources list.
- an instance of the cloud provider SDK that you will use to retrieve the resource list
### Deserializer
@ -107,14 +112,15 @@ The interface contains a `Deserialize(values []cty.Value) ([]resource.Resource,
You should then deserialize the obtained cty values into your resource and return the list.
Example: [aws_s3_bucket_deserializer.go](https://github.com/cloudskiff/driftctl/blob/master/pkg/resource/aws/deserializer/s3_bucket_deserializer.go)
Example: [aws_s3_bucket_deserializer.go](https://github.com/cloudskiff/driftctl/blob/main/pkg/resource/aws/deserializer/s3_bucket_deserializer.go)
## 3 Adding your resource
## Adding your resource
There are two files you are going to edit to make driftctl aware of your new resource.
For the state reader you will need to add your `CTYDeserializer` implementation into `iac/deserializers.go`
For the state reader you will need to add your `CTYDeserializer` implementation into `iac/deserializers.go`.
Just add an instance in the list:
```go
func Deserializers() []remote.CTYDeserializer {
return []remote.CTYDeserializer{
@ -124,7 +130,8 @@ func Deserializers() []remote.CTYDeserializer {
}
```
Then in the cloud provider's init file (e.g. in `remote/aws/init.go`) add your new implementation for `resource.Supplier`:
Then in the cloud provider's init file (e.g. in `remote/aws/init.go`), add your new implementation for `resource.Supplier`:
```go
func Init() error {
provider, err := NewTerraFormProvider()
@ -139,4 +146,4 @@ func Init() error {
```
Don't forget to add unit tests after adding a new resource.
You can also add acceptance test if you think it makes sense.
You can also add acceptance tests if you think it makes sense.

View File

@ -1,15 +1,16 @@
# Testing
driftctl uses both **unit test** and **acceptance test**.
Acceptance test are not required, but at least a good unit test coverage is required for a PR to be merged.
driftctl uses both **unit tests** and **acceptance tests**.
Acceptance tests are not required, but good unit test coverage is required for a PR to be merged.
This section describes how we manage our test suite in driftctl.
driftctl uses gotestsum to wrap `go test`, you can install required tools to run test with `make install-tools`
driftctl uses gotestsum to wrap `go test`. You can install the tools required to run tests with `make install-tools`.
To run unit test simply run
```shell script
make install-tools
make test
To run unit tests, simply run:
```shell
$ make install-tools
$ make test
```
Before the test suite starts, we run `golangci-lint`.
@ -17,19 +18,20 @@ If there are any linter issues, you have to fix them first.
For the driftctl team, code coverage is very important as it helps show which part of your code is not covered.
We kindly ask you to check your coverage to ensure every important part of your code is tested.
We do not expect 100% coverage for every line of code, but at least every critical part of your code should be covered.
We do not expect 100% coverage for each line of code, but at least every critical part of your code should be covered.
For example, we don't care about covering `NewStruct()` constructors if there is no significant logic inside.
Remember, a covered code does not mean that every condition is tested and asserted, so be careful to test the right things.
Remember, covered code does not mean that all conditions are tested and asserted, so be careful to test the right things.
A bug can still happen in a covered part of your code.
We use the golden file pattern to assert on results. Golden files can be updated with the `-update` flag.
For example, I've made some modifications to s3 bucket policy, I could update golden files with the following command:
For example, if I've made modifications to the s3 bucket policy, I can update the golden files with the following command:
```shell script
go test ./pkg/remote/aws/ --update s3_bucket_policy_no_policy
```shell
$ go test ./pkg/remote/aws/ --update s3_bucket_policy_no_policy
```
⚠️ Beware that updating golden files may call external services.
In the example above, as we are using mocked AWS responses in JSON golden files, you would have to configure proper resources on the AWS side before running an update.
For convenience, we try as much as possible to put the terraform files used to generate golden files in test folders.
@ -37,15 +39,16 @@ For convenience, we try to put, as much as possible, terraform files used to gen
## Unit testing
Unit testing should not use any external dependency, so we mock every call to the cloud provider's SDK (see below for more details on mocking)
Unit testing should not use any external dependency, so we mock all calls to the cloud provider's SDK (see below for more details on mocking).
### Mocks
In driftctl unit test suite, every call to the cloud provider's SDK should be mocked.
In the driftctl unit test suite, each call to the cloud provider's SDK should be mocked.
We use mocks generated by mockery in our tests.
See below the steps to create a mock for a new AWS service (e.g. EC2).
1. Create a mock interface in `test/aws/ec2.go`
```go
package aws
@ -57,6 +60,7 @@ type FakeEC2 interface {
ec2iface.EC2API
}
```
2. Use mockery to generate a full mocked struct: `mockery --name FakeEC2 --dir ./test/aws`
3. Mock a response in your test (list IAM users for example)
@ -108,6 +112,7 @@ client.On("ListUsersPages",
🙏 We are still looking for a better way to handle this, contributions are welcome.
References:
- https://github.com/stretchr/testify/issues/504
- https://github.com/stretchr/testify/issues/1017
@ -118,40 +123,41 @@ The goal here is to apply some terraform code, and then run a series of **Check*
A **Check** consists of running driftctl and checking the results using the JSON output.
driftctl uses an assertion struct to help you check output results. See below for more details.
Each acceptance test should be prefixed by `TestAcc_` and should be run using env var `DRIFTCTL_ACC=true`.
Each acceptance test should be prefixed by `TestAcc_` and should be run using the environment variable `DRIFTCTL_ACC=true`.
```shell script
DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```shell
$ DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```
### Credentials
Acceptance tests need credentials to perform real-world actions on cloud providers:
- Read/write access is required to perform terraform actions
- Read only access is required for driftctl execution
- Read only access is required to execute driftctl
Recommended way to run acc tests is to use two distinct credentials:
one for terraform related actions, and one for driftctl scan.
Recommended way to run acceptance tests is to use two distinct credentials:
In our acceptance tests, we may need read/write permissions during specific contexts
(e.g. terraform init, apply, destroy)or lifecycle (PreExec and PostExec).
If needed, you can override environment variables in those contexts by adding `ACC_` prefix on env variables.
- One for terraform related actions
- One for driftctl scan
In our acceptance tests, we may need read/write permissions during specific contexts (e.g. terraform init, apply, destroy) or lifecycle (PreExec and PostExec).
If needed, you can override environment variables in those contexts by adding `ACC_` prefix on environment variables.
#### AWS
You can use `ACC_AWS_PROFILE` to override the AWS named profile used for terraform operations.
```shell script
ACC_AWS_PROFILE=read-write-profile AWS_PROFILE=read-only-profile DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```shell
$ ACC_AWS_PROFILE=read-write-profile AWS_PROFILE=read-only-profile DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```
In the example below, the `driftctl` AWS profile must have read/write permissions and will be used
for both terraform operations and driftctl run.
In the example below, the `driftctl` AWS profile must have read/write permissions and will be used for both terraform operations and driftctl run.
This is **not** the recommended way to run tests as it may hide permissions issues.
```shell script
AWS_PROFILE=driftctl DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```shell
$ AWS_PROFILE=driftctl DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```
### Workflow