# Testing

## Table of Contents

- [Golden files](#golden-files)
- [Unit testing](#unit-testing)
  - [Mocks](#mocks)
    - [Mocking repositories](#mocking-repositories)
- [Acceptance testing](#acceptance-testing)
  - [Credentials](#credentials)
    - [AWS](#aws)
  - [Workflow](#workflow)
  - [Example](#example)

driftctl uses **unit tests**, **functional tests** and **acceptance tests**.

- A **unit test** tests only a very specific part of the code
  - Pros:
    - Very quick to develop, run and maintain
  - Cons:
    - Does not ensure that we do not break integration with other parts of the code
- A **functional test** covers a larger part of the code than a unit test, but mocks external dependencies
  - Pros:
    - Ensures that multiple components work well together
    - Still quick to develop and run
  - Cons:
    - Mocking every external dependency can be complicated
    - Can be complicated to maintain since it is not scoped to a specific part of the code
- An **acceptance test** (or **integration test**) is the closest to end-user behavior
  - Pros:
    - Very close to real product usage
    - Can cover regressions very efficiently
  - Cons:
    - Can be long to develop
    - Requires real world resources
    - Long execution time
    - Requires a lot of maintenance
    - Unstable due to third party services (something wrong or inconsistent on the cloud provider's side will make the test fail)

**Acceptance tests are not required**, but good unit test coverage is required for a PR to be merged. This section explains how we manage our test suite in driftctl.

driftctl uses gotestsum to wrap `go test`. You can install the required tools with `make install-tools`. To run unit tests, simply run:

```shell
$ make install-tools
$ make test
```

For the driftctl team, code coverage is very important as it helps show which parts of your code are not covered. We kindly ask you to check your coverage to ensure every important part of your code is tested.
We do not expect 100% coverage for each line of code, but at least every critical part of your code should be covered. For example, we don't care about covering `NewStruct()` constructors if there is no big logic inside. Remember, covered code does not mean that all conditions are tested and asserted, so be careful to test the right things. A bug can still happen in a covered part of your code.

## Golden files

We use the golden file pattern to assert on results. Golden files can be updated with the `-update` flag. For example, if I've made modifications to the S3 bucket policy, I can update the golden files with the following command:

```shell
$ go test ./pkg/remote/aws/ --update s3_bucket_policy_no_policy
```

⚠️ Beware that updating golden files may call external services. In the example above, as we are using mocked AWS responses in JSON golden files, you may have to configure proper resources on the AWS side before running an update. For convenience, we try to put, as much as possible, the terraform files used to generate golden files in the test folders.

**A quick way to get started is to copy/paste an existing test and adapt it to your needs.**

## Unit testing

Unit tests should not use any external dependency, so we mock all calls to the cloud provider's SDK (see below for more details on mocking).

### Mocks

In the driftctl unit test suite, each call to the cloud provider's SDK should be mocked. We use mocks generated by mockery in our tests. See below each step to create a mock for a new AWS service (e.g. EC2).

1. Create a mock interface in `test/aws/ec2.go`:

```go
package aws

import (
	"github.com/aws/aws-sdk-go/service/ec2/ec2iface"
)

type FakeEC2 interface {
	ec2iface.EC2API
}
```

2. Use mockery to generate a full mocked struct:

```
$ mockery --name FakeEC2 --dir ./test/aws
```

3. Mock a response in your test (list IAM users, for example):

```go
client := mocks.FakeIAM{}
client.On("ListUsersPages",
	&iam.ListUsersInput{},
	mock.MatchedBy(func(callback func(res *iam.ListUsersOutput, lastPage bool) bool) bool {
		callback(&iam.ListUsersOutput{Users: []*iam.User{
			{
				UserName: aws.String("test-driftctl"),
			},
			{
				UserName: aws.String("test-driftctl2"),
			},
		}}, true)
		return true
	}),
).Once().Return(nil)
```

⚠️ If you have several mocks on the same method, the `mock` library will evaluate the code in your `MatchedBy` multiple times, even if the first parameter does not match. This means your callback will always be called, which is unwanted behaviour most of the time! A workaround is to manage flags; this is an ugly solution, but here is an example using boolean flags:

```go
client := awstest.MockFakeIAM{}
shouldSkipfirst := false
shouldSkipSecond := false

client.On("ListAttachedRolePoliciesPages",
	&iam.ListAttachedRolePoliciesInput{
		RoleName: aws.String("test-role"),
	},
	mock.MatchedBy(func(callback func(res *iam.ListAttachedRolePoliciesOutput, lastPage bool) bool) bool {
		// This will be evaluated every time, that's why we set this bool to true after the call
		if shouldSkipfirst {
			return false
		}
		callback(&iam.ListAttachedRolePoliciesOutput{AttachedPolicies: []*iam.AttachedPolicy{
			{
				PolicyArn:  aws.String("arn:aws:iam::526954929923:policy/test-policy"),
				PolicyName: aws.String("policy"),
			},
		}}, true)
		shouldSkipfirst = true
		return true
	}),
).Return(nil).Once()

client.On("ListAttachedRolePoliciesPages",
	&iam.ListAttachedRolePoliciesInput{
		RoleName: aws.String("test-role2"),
	},
	mock.MatchedBy(func(callback func(res *iam.ListAttachedRolePoliciesOutput, lastPage bool) bool) bool {
		if shouldSkipSecond {
			return false
		}
		callback(&iam.ListAttachedRolePoliciesOutput{AttachedPolicies: []*iam.AttachedPolicy{
			{
				PolicyArn:  aws.String("arn:aws:iam::526954929923:policy/test-policy"),
				PolicyName: aws.String("policy"),
			},
		}}, true)
		shouldSkipSecond = true
		return true
	}),
).Return(nil).Once()
```

#### Mocking repositories

Repositories are an abstraction layer for data retrieval. They're used by enumerators to retrieve data from a cloud provider through its SDK. For example, each AWS service has a repository attached. We only implement the repositories and methods we need.

Mocking repositories is almost the same process as mocking the cloud provider's SDK. Since there's an interface for each repository, generating a mock for it is quick and easy. Note the difference between the exported interface and the unexported struct here: mockery needs the interface, since a concrete struct cannot be mocked in Go.

```go
type ECRRepository interface {
	ListAllRepositories() ([]*ecr.Repository, error)
}

type ecrRepository struct {
	client ecriface.ECRAPI
	cache  cache.Cache
}
```

Here's an example that will create a mock for the ECR repository:

```
$ mockery --name=ECRRepository --dir pkg/remote/aws/repository/
```

`ECRRepository` is the name of the interface present in the `pkg/remote/aws/repository/` directory.

----

🙏 We are still looking for a better way to handle this, contributions are welcome.

References:

- https://github.com/stretchr/testify/issues/504
- https://github.com/stretchr/testify/issues/1017

## Acceptance testing

driftctl provides a kind of acceptance test framework (`test/acceptance`) to help you run those tests. The goal here is to apply some terraform code and then run a series of **Check**s. A **Check** consists of running driftctl and checking the results using the JSON output. driftctl uses an assertion struct to help you check output results. See below for more details.

Each acceptance test should be prefixed by `TestAcc_` and should be run with the environment variable `DRIFTCTL_ACC=true`:
```shell
$ DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```

### Credentials

Acceptance tests need credentials to perform real-world actions on cloud providers:

- Read/write access is required to perform terraform actions
- Read-only access is required to execute driftctl

The recommended way to run acceptance tests is to use two distinct sets of credentials:

- One for terraform related actions
- One for the driftctl scan

In our acceptance tests, we may need read/write permissions during specific contexts (e.g. terraform init, apply, destroy) or lifecycle steps (`PreExec` and `PostExec`). If needed, you can override environment variables in those contexts by adding the `ACC_` prefix to the environment variable.

#### AWS

You can use `ACC_AWS_PROFILE` to override the AWS named profile used for terraform operations.

```shell
$ ACC_AWS_PROFILE=read-write-profile AWS_PROFILE=read-only-profile DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```

In the example below, the `driftctl` AWS profile must have read/write permissions and will be used for both terraform operations and the driftctl run. This is **not** the recommended way to run tests as it may hide permission issues.

```shell
$ AWS_PROFILE=driftctl DRIFTCTL_ACC=true go test -run=TestAcc_ ./pkg/resource/aws/aws_instance_test.go
```

### Workflow

- **`OnStart`**: you may run some code before everything else
- **terraform apply**
- For each declared **Check**:
  - **`PreExec`**
  - **driftctl scan**
  - **check results**
  - **`PostExec`**
- **`OnEnd`**
- **terraform destroy**

⚠️ **driftctl tests handle the removal of terraform resources, but it is up to you to remove any unmanaged resources added in the `PreExec` step!**

### Example

The following test runs terraform to create an EC2 instance. Then, we add a new tag (`Env`: `Production`) to the instance. Finally, we check for the drift.
```go
func TestAcc_AwsInstance_WithBlockDevices(t *testing.T) {
	var mutatedInstanceId string
	acceptance.Run(t, acceptance.AccTestCase{
		// This path should contain terraform files
		Path: "./testdata/acc/aws_instance",
		// Pass args to the driftctl execution
		// DO NOT PASS the --output flag as it is handled automatically by the test runner
		// You may use a .driftignore file in your test directory or use filters to limit driftctl's scope
		// Try to be as minimal as possible, as the test will be easier to maintain over time
		Args: []string{"scan"}, // TODO add filter to limit scan scope to aws_instances
		Checks: []acceptance.AccCheck{
			{
				// The first check does not have any PreExec or PostExec
				Check: func(result *acceptance.ScanResult, stdout string, err error) {
					if err != nil {
						t.Fatal(err)
					}
					// Assert that no drift is detected
					result.AssertDriftCountTotal(0)
					// We could assert on the analysis object directly.
					// Below we check that the infra is strictly in sync. Beware that this check will fail
					// if you run your acceptance test against a messy cloud provider state
					// (existing dangling resources, for example) without using a filter or driftignore.
					//
					// Note that the result struct is composed of the analysis result AND the assertion library,
					// so you could use result.Equal() directly, for example
					result.True(result.Analysis.IsSync())
				},
			},
			{
				// In this PreExec, we retrieve the created instance ID and add a new tag
				// using the AWS SDK.
				// We store the instance ID in a var to assert on it after the driftctl run
				PreExec: func() {
					client := ec2.New(awsutils.Session())
					response, err := client.DescribeInstances(&ec2.DescribeInstancesInput{
						Filters: []*ec2.Filter{
							{
								Name: aws.String("instance-state-name"),
								Values: []*string{
									aws.String("running"),
								},
							},
							{
								Name: aws.String("tag:Name"),
								Values: []*string{
									aws.String("test_instance_1"),
								},
							},
						},
					})
					if err != nil {
						t.Fatal(err)
					}
					if len(response.Reservations[0].Instances) != 1 {
						t.Fatal("Error, unexpected number of instances found, manual check required")
					}
					mutatedInstanceId = *response.Reservations[0].Instances[0].InstanceId
					_, _ = client.CreateTags(&ec2.CreateTagsInput{
						Resources: []*string{&mutatedInstanceId},
						Tags: []*ec2.Tag{
							{
								Key:   aws.String("Env"),
								Value: aws.String("Production"),
							},
						},
					})
				},
				// Check that driftctl detected a drift on the manually modified instance
				Check: func(result *acceptance.ScanResult, stdout string, err error) {
					if err != nil {
						t.Fatal(err)
					}
					result.AssertResourceHasDrift(
						mutatedInstanceId,
						awsresources.AwsInstanceResourceType,
						analyser.Change{
							Change: diff.Change{
								Type: diff.CREATE,
								Path: []string{"Tags", "Env"},
								From: nil,
								To:   "Production",
							},
						},
					)
				},
			},
		},
	})
}
```
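As a sketch (reusing the two profile names from the Credentials section above), this example could be run on its own by narrowing the `-run` pattern to the test name:

```shell
$ ACC_AWS_PROFILE=read-write-profile AWS_PROFILE=read-only-profile DRIFTCTL_ACC=true \
    go test -run=TestAcc_AwsInstance_WithBlockDevices ./pkg/resource/aws/aws_instance_test.go
```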