massive doc updates

* examples/kubernetes: newly added
* docs/rootless.md: cleaned up for better readability
* examples/README.md: split out from the main README.md
* examples/build-using-dockerfile/README.md: split out from the main README.md
* README.md: add TOC using https://github.com/thlorenz/doctoc
* README.md: add mTLS configuration (relates to #1074)
* README.md: add more adoptions
* README.md: add inline cache (fix #976)

Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
v0.7
Akihiro Suda 2019-10-12 02:20:52 +09:00
parent 170ab6fad1
commit 1bde5d99d5
21 changed files with 745 additions and 296 deletions


@ -1,2 +1,3 @@
bin
.certs
.tmp


@ -61,6 +61,11 @@ make && sudo make install
You can also use `make binaries-all` to prepare `buildkitd.containerd_only` and `buildkitd.oci_only`.
To build containerized `moby/buildkit:local` and `moby/buildkit:local-rootless` images:
```bash
make images
```
### Run the unit- and integration-tests
Running tests:

.gitignore

@ -1,3 +1,4 @@
bin
release-out
.certs
.tmp

README.md

@ -1,6 +1,6 @@
[![asciicinema example](https://asciinema.org/a/gPEIEo1NzmDTUu2bEPsUboqmU.png)](https://asciinema.org/a/gPEIEo1NzmDTUu2bEPsUboqmU)
# BuildKit
[![GoDoc](https://godoc.org/github.com/moby/buildkit?status.svg)](https://godoc.org/github.com/moby/buildkit/client/llb)
[![Build Status](https://travis-ci.org/moby/buildkit.svg?branch=master)](https://travis-ci.org/moby/buildkit)
@ -25,49 +25,107 @@ Read the proposal from https://github.com/moby/moby/issues/32925
Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056cc5317
Join `#buildkit` channel on [Docker Community Slack](http://dockr.ly/slack)
:information_source: If you are visiting this repo for the usage of experimental Dockerfile features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`, please refer to [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md).
:information_source: [BuildKit has been integrated into `docker build` since Docker 18.06.](https://docs.docker.com/develop/develop-images/build_enhancements/)

You don't need to read this document unless you want to use the full-featured standalone version of BuildKit.
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
- [Used by](#used-by)
- [Quick start](#quick-start)
- [Starting the `buildkitd` daemon:](#starting-the-buildkitd-daemon)
- [Exploring LLB](#exploring-llb)
- [Exploring Dockerfiles](#exploring-dockerfiles)
- [Building a Dockerfile with `buildctl`](#building-a-dockerfile-with-buildctl)
- [Building a Dockerfile using external frontend:](#building-a-dockerfile-using-external-frontend)
- [Building a Dockerfile with experimental features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`](#building-a-dockerfile-with-experimental-features-like-run---mounttypebindcachetmpfssecretssh)
- [Output](#output)
- [Registry](#registry)
- [Local directory](#local-directory)
- [Docker tarball](#docker-tarball)
- [OCI tarball](#oci-tarball)
- [containerd image store](#containerd-image-store)
- [Cache](#cache)
- [Garbage collection](#garbage-collection)
- [Export cache](#export-cache)
- [Inline (push image and cache together)](#inline-push-image-and-cache-together)
- [Registry (push image and cache separately)](#registry-push-image-and-cache-separately)
- [Local directory](#local-directory-1)
- [`--export-cache` options](#--export-cache-options)
- [`--import-cache` options](#--import-cache-options)
- [Consistent hashing](#consistent-hashing)
- [Expose BuildKit as a TCP service](#expose-buildkit-as-a-tcp-service)
- [Load balancing](#load-balancing)
- [Containerizing BuildKit](#containerizing-buildkit)
- [Kubernetes](#kubernetes)
- [Daemonless](#daemonless)
- [Opentracing support](#opentracing-support)
- [Running BuildKit without root privileges](#running-buildkit-without-root-privileges)
- [Building multi-platform images](#building-multi-platform-images)
- [Contributing](#contributing)
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
## Used by
BuildKit is used by the following projects:
- [Moby & Docker](https://github.com/moby/moby/pull/37151) (`DOCKER_BUILDKIT=1 docker build`)
- [img](https://github.com/genuinetools/img)
- [OpenFaaS Cloud](https://github.com/openfaas/openfaas-cloud)
- [container build interface](https://github.com/containerbuilding/cbi)
- [Tekton Pipelines](https://github.com/tektoncd/catalog) (formerly [Knative Build Templates](https://github.com/knative/build-templates))
- [the Sanic build tool](https://github.com/distributed-containers-inc/sanic)
- [vab](https://github.com/stellarproject/vab)
- [Rio](https://github.com/rancher/rio)
- [PouchContainer](https://github.com/alibaba/pouch)
- [Docker buildx](https://github.com/docker/buildx)
## Quick start
:information_source: For Kubernetes deployments, see [`examples/kubernetes`](./examples/kubernetes).
BuildKit is composed of the `buildkitd` daemon and the `buildctl` client.
While the `buildctl` client is available for Linux, macOS, and Windows, the `buildkitd` daemon is only available for Linux currently.
The `buildkitd` daemon requires the following components to be installed:
- [runc](https://github.com/opencontainers/runc)
- [containerd](https://github.com/containerd/containerd) (if you want to use containerd worker)
The latest binaries of BuildKit are available [here](https://github.com/moby/buildkit/releases) for Linux, macOS, and Windows.

[Homebrew package](https://formulae.brew.sh/formula/buildkit) (unofficial) is available for macOS.
```console
$ brew install buildkit
```
To build BuildKit from source, see [`.github/CONTRIBUTING.md`](./.github/CONTRIBUTING.md).
### Starting the `buildkitd` daemon:

You need to run `buildkitd` as the root user on the host.
```bash
$ sudo buildkitd
```
To run `buildkitd` as a non-root user, see [`docs/rootless.md`](docs/rootless.md).
The buildkitd daemon supports two worker backends: OCI (runc) and containerd.
By default, the OCI (runc) worker is used. You can set `--oci-worker=false --containerd-worker=true` to use the containerd worker.
We are open to adding more backends.
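For example, a minimal sketch of starting the daemon with the containerd worker (assuming containerd is already running on the host):
```bash
# sketch: use the containerd worker instead of the default OCI (runc) worker
sudo buildkitd \
  --oci-worker=false \
  --containerd-worker=true
```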
The buildkitd daemon listens for gRPC API requests on `/run/buildkit/buildkitd.sock` by default, but you can also use TCP sockets.
See [Expose BuildKit as a TCP service](#expose-buildkit-as-a-tcp-service).
### Exploring LLB
BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.
@ -76,49 +134,23 @@ BuildKit builds are based on a binary intermediate format called LLB that is use
- Efficiently cacheable
- Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)
See [`solver/pb/ops.proto`](./solver/pb/ops.proto) for the format definition, and see [`./examples/README.md`](./examples/README.md) for example LLB applications.
Currently, the following high-level languages have been implemented for LLB:
- Dockerfile (See [Exploring Dockerfiles](#exploring-dockerfiles))
- [Buildpacks](https://github.com/tonistiigi/buildkit-pack)
- [Mockerfile](https://matt-rickard.com/building-a-new-dockerfile-frontend/)
- [Gockerfile](https://github.com/po3rin/gockerfile)
- (open a PR to add your own language)
### Exploring Dockerfiles

Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (`gateway.v0`) that allows using any image as a frontend.

During development, Dockerfile frontend (`dockerfile.v0`) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.

#### Building a Dockerfile with `buildctl`
```bash
buildctl build \
@ -136,22 +168,7 @@ buildctl build \
`--local` exposes local source files from client to the builder. `context` and `dockerfile` are the names Dockerfile frontend looks for build context and Dockerfile location.
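The full command is elided by the diff above; a typical invocation (a sketch using the frontend and `--local` names described here) looks like:
```bash
# sketch: build the Dockerfile in the current directory with the dockerfile.v0 frontend
buildctl build \
  --frontend=dockerfile.v0 \
  --local context=. \
  --local dockerfile=.
```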
#### Building a Dockerfile using external frontend:
External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in `./frontend/dockerfile/cmd/dockerfile-frontend` but will move out of this repository in the future ([#163](https://github.com/moby/buildkit/issues/163)). For automatic builds from the master branch of this repository, the `docker/dockerfile-upstream:master` or `docker/dockerfile-upstream:master-experimental` image can be used.
@ -168,7 +185,7 @@ buildctl build \
--opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
```
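The gateway invocation above is truncated by the diff; a sketch of the full command (assuming the `source` option selects the frontend image) is:
```bash
# sketch: build via the gateway frontend using the external Dockerfile frontend image
buildctl build \
  --frontend gateway.v0 \
  --opt source=docker/dockerfile \
  --local context=. \
  --local dockerfile=.
```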
#### Building a Dockerfile with experimental features like `RUN --mount=type=(bind|cache|tmpfs|secret|ssh)`
See [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experimental.md).
@ -176,24 +193,26 @@ See [`frontend/dockerfile/docs/experimental.md`](frontend/dockerfile/docs/experi
By default, the build result and intermediate cache will only remain internally in BuildKit. An output needs to be specified to retrieve the result.
#### Registry

```bash
buildctl build ... --output type=image,name=docker.io/username/image,push=true
```

To export and import the cache along with the image, you need to specify `--export-cache type=inline` and `--import-cache type=registry,ref=...`.
See [Export cache](#export-cache).

```bash
buildctl build ...\
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=inline \
  --import-cache type=registry,ref=docker.io/username/image
```

If credentials are required, `buildctl` will attempt to read Docker configuration file `$DOCKER_CONFIG/config.json`.
`$DOCKER_CONFIG` defaults to `~/.docker`.
#### Local directory
The local client will copy the files directly to the client. This is useful if BuildKit is being used for building something other than container images.
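A sketch of exporting to a local directory (parameter names follow the `local` exporter options used elsewhere in this README):
```bash
# sketch: write the build result to a directory on the client instead of creating an image
buildctl build ... --output type=local,dest=path/to/output-dir
```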
@ -222,70 +241,150 @@ buildctl build ... --output type=tar,dest=out.tar
buildctl build ... --output type=tar > out.tar
```
#### Docker tarball
```bash
# exported tarball is also compatible with OCI spec
buildctl build ... --output type=docker,name=myimage | docker load
```
#### OCI tarball
```bash
buildctl build ... --output type=oci,dest=path/to/output.tar
buildctl build ... --output type=oci > output.tar
```
#### containerd image store
The containerd worker needs to be used.

```bash
buildctl build ... --output type=image,name=docker.io/username/image
ctr --namespace=buildkit images ls
```

To change the containerd namespace, you need to change `worker.containerd.namespace` in [`/etc/buildkit/buildkitd.toml`](./docs/buildkitd.toml.md).
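For illustration, a minimal sketch of that setting (the namespace value is just an example):
```bash
# sketch: configure the containerd worker namespace in buildkitd.toml
sudo mkdir -p /etc/buildkit
cat <<'EOF' | sudo tee /etc/buildkit/buildkitd.toml
[worker.containerd]
  namespace = "k8s.io"  # e.g. share the image store with a Kubernetes CRI namespace
EOF
```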
## Cache
To show local build cache (`/var/lib/buildkit`):
```bash
buildctl du -v
```

To prune local build cache:
```bash
buildctl prune
```
### Garbage collection

See [`./docs/buildkitd.toml.md`](./docs/buildkitd.toml.md).
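As a rough sketch, automatic garbage collection can be configured in `buildkitd.toml`; the `gc`/`gckeepstorage` keys below are assumptions based on that document, so check it for the authoritative options:
```bash
# sketch: enable automatic garbage collection for the OCI worker (keys assumed from buildkitd.toml.md)
cat <<'EOF' | sudo tee -a /etc/buildkit/buildkitd.toml
[worker.oci]
  gc = true
  gckeepstorage = 9000
EOF
```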
### Export cache
BuildKit supports the following cache exporters:
* `inline`: embed the cache into the image, and push them to the registry together
* `registry`: push the image and the cache separately
* `local`: export to a local directory
In most cases, you want to use the `inline` cache exporter.
However, note that the `inline` cache exporter only supports `min` cache mode.
To enable `max` cache mode, push the image and the cache separately by using `registry` cache exporter.
#### Inline (push image and cache together)
```bash
buildctl build ... \
--output type=image,name=docker.io/username/image,push=true \
--export-cache type=inline \
--import-cache type=registry,ref=docker.io/username/image
```
Note that the inline cache is not imported unless `--import-cache type=registry,ref=...` is provided.
:information_source: Docker-integrated BuildKit (`DOCKER_BUILDKIT=1 docker build`) and `docker buildx` require
`--build-arg BUILDKIT_INLINE_CACHE=1` to be specified to enable the `inline` cache exporter.
However, the standalone `buildctl` does NOT require `--opt build-arg:BUILDKIT_INLINE_CACHE=1` and the build-arg is simply ignored.
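For example, a sketch with Docker-integrated BuildKit (the image name is a placeholder):
```bash
# sketch: build with the inline cache enabled and reuse a previously pushed image as cache
DOCKER_BUILDKIT=1 docker build \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  --cache-from docker.io/username/image \
  -t docker.io/username/image .
docker push docker.io/username/image
```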
#### Registry (push image and cache separately)
```bash
buildctl build ... \
--output type=image,name=localhost:5000/myrepo:image,push=true \
--export-cache type=registry,ref=localhost:5000/myrepo:buildcache \
--import-cache type=registry,ref=localhost:5000/myrepo:buildcache
```
#### Local directory
```bash
buildctl build ... --export-cache type=local,dest=path/to/output-dir
buildctl build ... --import-cache type=local,src=path/to/input-dir,digest=sha256:deadbeef
```
The directory layout conforms to OCI Image Spec v1.0.
Currently, you need to specify the `digest` of the manifest list to import for the `local` cache importer.
This is planned to default to the digest of the "latest" tag in `index.json` in the future.
#### `--export-cache` options
- `type`: `inline`, `registry`, or `local`
- `mode=min` (default): only export layers for the resulting image
- `mode=max`: export all the layers of all intermediate steps. Not supported for `inline` cache exporter.
- `ref=docker.io/user/image:tag`: reference for `registry` cache exporter
- `dest=path/to/output-dir`: directory for `local` cache exporter
#### `--import-cache` options
- `type`: `registry` or `local`. Use `registry` to import `inline` cache.
- `ref=docker.io/user/image:tag`: reference for `registry` cache importer
- `src=path/to/input-dir`: directory for `local` cache importer
- `digest=sha256:deadbeef`: digest of the manifest list to import for `local` cache importer.
### Consistent hashing
If you have multiple BuildKit daemon instances but you don't want to use a registry for sharing the cache across the cluster,
consider client-side load balancing using consistent hashing.
See [`./examples/kubernetes/consistenthash`](./examples/kubernetes/consistenthash).
## Expose BuildKit as a TCP service
The `buildkitd` daemon can listen for the gRPC API on a TCP socket.

It is highly recommended to create TLS certificates for both the daemon and the client (mTLS).
Enabling TCP without mTLS is dangerous because the executor containers (aka Dockerfile `RUN` containers) can call the BuildKit API as well.
```bash
buildkitd \
--addr tcp://0.0.0.0:1234 \
--tlscacert /path/to/ca.pem \
--tlscert /path/to/cert.pem \
--tlskey /path/to/key.pem
```
```bash
buildctl \
--addr tcp://example.com:1234 \
--tlscacert /path/to/ca.pem \
--tlscert /path/to/clientcert.pem \
--tlskey /path/to/clientkey.pem \
build ...
```
### Load balancing
`buildctl build` can be called against randomly load-balanced `buildkitd` daemons.

See also [Consistent hashing](#consistent-hashing) for client-side load balancing.
## Containerizing BuildKit
BuildKit can also be used by running the `buildkitd` daemon inside a Docker container and accessing it remotely.
We provide the container images as [`moby/buildkit`](https://hub.docker.com/r/moby/buildkit/tags/):
- `moby/buildkit:latest`: built from the latest regular [release](https://github.com/moby/buildkit/releases)
- `moby/buildkit:rootless`: same as `latest` but runs as an unprivileged user, see [`docs/rootless.md`](docs/rootless.md)
@ -295,11 +394,17 @@ We provide `buildkitd` container images as [`moby/buildkit`](https://hub.docker.
To run the daemon in a container:

```bash
docker run -d --name buildkitd --privileged moby/buildkit:latest
export BUILDKIT_HOST=docker-container://buildkitd
buildctl build --help
```
### Kubernetes
For Kubernetes deployments, see [`examples/kubernetes`](./examples/kubernetes).
### Daemonless
To run the client and an ephemeral daemon in a single container ("daemonless mode"):
```bash
@ -335,21 +440,7 @@ docker run \
--local dockerfile=/tmp/work
```
The images can also be built locally using `./hack/dockerfiles/test.Dockerfile` (or `./hack/dockerfiles/test.buildkit.Dockerfile` if you already have BuildKit). Run `make images` to build the images as `moby/buildkit:local` and `moby/buildkit:local-rootless`.
#### Connection helpers
If you are running `moby/buildkit:master` or `moby/buildkit:master-rootless` as a Docker/Kubernetes container, you can use a special `BUILDKIT_HOST` URL for connecting to the BuildKit daemon in the container:
```bash
export BUILDKIT_HOST=docker-container://<container>
```
```bash
export BUILDKIT_HOST=kube-pod://<pod>
```
## Opentracing support
BuildKit supports opentracing for buildkitd gRPC API and buildctl commands. To capture the trace to [Jaeger](https://github.com/jaegertracing/jaeger), set `JAEGER_TRACE` environment variable to the collection address.
@ -360,14 +451,15 @@ export JAEGER_TRACE=0.0.0.0:6831
# any buildctl command should be traced to http://127.0.0.1:16686/
```
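The block above is truncated by the diff; an end-to-end sketch (using the public `jaegertracing/all-in-one` image) looks like:
```bash
# sketch: run a local Jaeger agent/UI, then point buildkitd and buildctl at it
docker run -d -p 6831:6831/udp -p 16686:16686 jaegertracing/all-in-one:latest
export JAEGER_TRACE=0.0.0.0:6831
# restart buildkitd and run buildctl with JAEGER_TRACE set;
# traces then appear at http://127.0.0.1:16686/
```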
## Running BuildKit without root privileges

Please refer to [`docs/rootless.md`](docs/rootless.md).
## Building multi-platform images
See [`docker buildx` documentation](https://github.com/docker/buildx#building-multi-platform-images)
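As a quick sketch (the image name and platform list are placeholders; the builder needs binfmt/QEMU emulation set up):
```bash
# sketch: cross-platform build and push with docker buildx
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t docker.io/username/image \
  --push .
```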
## Contributing
Want to contribute to BuildKit? Awesome! You can find information about contributing to this project in the [CONTRIBUTING.md](/.github/CONTRIBUTING.md)


@ -1,81 +1,77 @@
# Rootless mode (Experimental)
Rootless mode allows running BuildKit daemon as a non-root user.

## Distribution-specific hint

Using Ubuntu kernel is recommended.
### Ubuntu
* No preparation is needed.
* `overlayfs` snapshotter is enabled by default ([Ubuntu-specific kernel patch](https://kernel.ubuntu.com/git/ubuntu/ubuntu-bionic.git/commit/fs/overlayfs?id=3b7da90f28fe1ed4b79ef2d994c81efbc58f1144)).
### Debian GNU/Linux
* Add `kernel.unprivileged_userns_clone=1` to `/etc/sysctl.conf` (or `/etc/sysctl.d`) and run `sudo sysctl -p`
* To use `overlayfs` snapshotter (recommended), run `sudo modprobe overlay permit_mounts_in_userns=1` ([Debian-specific kernel patch, introduced in Debian 10](https://salsa.debian.org/kernel-team/linux/blob/283390e7feb21b47779b48e0c8eb0cc409d2c815/debian/patches/debian/overlayfs-permit-mounts-in-userns.patch)). Put the configuration in `/etc/modprobe.d` for persistence.
### Arch Linux
* Add `kernel.unprivileged_userns_clone=1` to `/etc/sysctl.conf` (or `/etc/sysctl.d`) and run `sudo sysctl -p`
### Fedora 31 and later
* Run `sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"` and reboot.
### Fedora 30
* No preparation is needed
### RHEL/CentOS 8
* No preparation is needed
### RHEL/CentOS 7
* Add `user.max_user_namespaces=28633` to `/etc/sysctl.conf` (or `/etc/sysctl.d`) and run `sudo sysctl -p`
* Old releases (<= 7.6) require [extra configuration steps](https://github.com/moby/moby/pull/40076).
### Container-Optimized OS from Google
* :warning: Currently unsupported. See [#879](https://github.com/moby/buildkit/issues/879).
## Known limitations
* No support for `overlayfs` snapshotter, except on Ubuntu and Debian kernels. We are planning to support `fuse-overlayfs` snapshotter instead for other kernels.
* Network mode is always set to `network.host`.
* No support for `containerd` worker
## Running BuildKit in Rootless mode
[RootlessKit](https://github.com/rootless-containers/rootlesskit/) needs to be installed.
```console
$ rootlesskit buildkitd
```
```console
$ buildctl --addr unix:///run/user/$UID/buildkit/buildkitd.sock build ...
```
## Containerized deployment
### Kubernetes
See [`../examples/kubernetes`](../examples/kubernetes).
### Docker
```console
$ docker run --name buildkitd -d --security-opt seccomp=unconfined --security-opt apparmor=unconfined moby/buildkit:rootless --oci-worker-no-process-sandbox
$ buildctl --addr docker-container://buildkitd build ...
```
#### About `--oci-worker-no-process-sandbox`

By adding `--oci-worker-no-process-sandbox` to the `buildkitd` arguments, BuildKit can be executed in a container without adding `--privileged` to `docker run` arguments.
However, you still need to pass `--security-opt seccomp=unconfined --security-opt apparmor=unconfined` to `docker run`.

Note that `--oci-worker-no-process-sandbox` allows build executor containers to `kill` (and potentially `ptrace` depending on the seccomp configuration) an arbitrary process in the BuildKit daemon container.

To allow running rootless `buildkitd` without `--oci-worker-no-process-sandbox`, run `docker run` with `--security-opt systempaths=unconfined`. (For Kubernetes, set `securityContext.procMount` to `Unmasked`.)

The `--security-opt systempaths=unconfined` flag disables the masks for the `/proc` mount in the container and potentially allows reading and writing dangerous kernel files, but it is safe when you are running `buildkitd` as non-root.
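A sketch of such an invocation (the exact combination of security options is an assumption; adjust it to your environment):
```console
$ docker run --name buildkitd -d \
    --security-opt seccomp=unconfined \
    --security-opt apparmor=unconfined \
    --security-opt systempaths=unconfined \
    moby/buildkit:rootless
$ buildctl --addr docker-container://buildkitd build ...
```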
### Change UID/GID

The `moby/buildkit:rootless` image has the following UID/GID configuration:
@ -101,85 +97,8 @@ user:100000:65536
To change the UID/GID configuration, you need to modify and build the BuildKit image manually.
```
$ vi hack/dockerfiles/test.buildkit.Dockerfile
$ make images
$ docker run ... moby/buildkit:local-rootless ...
```

examples/README.md

@ -0,0 +1,39 @@
# BuildKit Examples
## Kubernetes manifests
- [`./kubernetes`](./kubernetes): Kubernetes manifests (`Pod`, `Deployment`, `StatefulSet`, `Job`)
## CLI examples
- [`./buildctl-daemonless`](./buildctl-daemonless): buildctl without daemon
- [`./build-using-dockerfile`](./build-using-dockerfile): an example BuildKit client with `docker build`-style CLI
## LLB examples
For understanding the basics of LLB, the `buildkit*` directory contains scripts that define how to build different configurations of BuildKit itself and its dependencies using the `client` package. Running one of these scripts generates a protobuf definition of a build graph. Note that the script itself does not execute any steps of the build.
You can use `buildctl debug dump-llb` to see what data is in this definition. Add `--dot` to generate dot layout.
```bash
go run examples/buildkit0/buildkit.go \
| buildctl debug dump-llb \
| jq .
```
To start building, use the `buildctl build` command. The example script accepts a `--with-containerd` flag to choose whether containerd binaries and support should be included in the end result as well.
```bash
go run examples/buildkit0/buildkit.go \
| buildctl build
```
`buildctl build` will show an interactive progress bar by default while the build job is running. If the path to a trace file is specified, the generated trace file will contain all information about the timing of the individual steps and logs.
Different versions of the example scripts show different ways of describing the build definition for this project to demonstrate the capabilities of the library. New versions have been added as new features have become available.
- `./buildkit0` - uses only exec operations, defines a full stage per component.
- `./buildkit1` - cloning git repositories has been separated for extra concurrency.
- `./buildkit2` - uses git sources directly instead of running `git clone`, allowing better performance and much safer caching.
- `./buildkit3` - allows using local source files for separate components eg. `./buildkit3 --runc=local | buildctl build --local runc-src=some/local/path`
- `./dockerfile2llb` - can be used to convert a Dockerfile to LLB for debugging purposes
- `./nested-llb` - shows how to use nested invocation to generate LLB
- `./gobuild` - shows how to use nested invocation to generate LLB for Go package internal dependencies


@ -0,0 +1,15 @@
# `build-using-dockerfile` example
:information_source: [BuildKit has been integrated into `docker build` since Docker 18.06.](https://docs.docker.com/develop/develop-images/build_enhancements/)
The `build-using-dockerfile` CLI is just provided as an example for writing a BuildKit client application.
For people familiar with the `docker build` command, `build-using-dockerfile` is provided as an example for building Dockerfiles with BuildKit using a syntax similar to `docker build`.
```bash
go get .
build-using-dockerfile -t myimage /path/to/dir
# build-using-dockerfile will automatically load the resulting image to Docker
docker inspect myimage
```


@ -0,0 +1 @@
kubernetes/consistenthash


@ -0,0 +1,79 @@
# Kubernetes manifests for BuildKit
This directory contains Kubernetes manifests for `Pod`, `Deployment` (with `Service`), `StatefulSet`, and `Job`.
* `Pod`: good for quick-start
* `Deployment` + `Service`: good for random load balancing with registry-side cache
* `StatefulSet`: good for client-side load balancing, without registry-side cache
* `Job`: good if you don't want to have daemon pods
Using Rootless mode (`*.rootless.yaml`) is recommended because the Rootless mode image is executed as a non-root user (UID 1000) and doesn't need `securityContext.privileged`.
:warning: Rootless mode may not work on some host kernels. See [`../../docs/rootless.md`](../../docs/rootless.md).
See also ["Building Images Efficiently And Securely On Kubernetes With BuildKit" (KubeCon EU 2019)](https://kccnceu19.sched.com/event/MPX5).
## `Pod`
```console
$ kubectl apply -f pod.rootless.yaml
$ buildctl \
--addr kube-pod://buildkitd \
build --frontend dockerfile.v0 --local context=/path/to/dir --local dockerfile=/path/to/dir
```
If rootless mode doesn't work, try `pod.privileged.yaml`.
:warning: The `kube-pod://` connection helper requires a Kubernetes role that can access `pods/exec` resources. If `pods/exec` is not accessible, use `Service` instead (see below).
## `Deployment` + `Service`
Setting up mTLS is highly recommended.
`./create-certs.sh SAN [SAN...]` can be used for creating certificates.
```console
$ ./create-certs.sh 127.0.0.1
```
The daemon certificates are created as a `Secret` manifest named `buildkit-daemon-certs`.
```console
$ kubectl apply -f .certs/buildkit-daemon-certs.yaml
```
Apply the `Deployment` and `Service` manifest:
```console
$ kubectl apply -f deployment+service.rootless.yaml
$ kubectl scale --replicas=10 deployment/buildkitd
```
Run `buildctl` with TLS client certificates:
```console
$ kubectl port-forward service/buildkitd 1234
$ buildctl \
--addr tcp://127.0.0.1:1234 \
--tlscacert .certs/client/ca.pem \
--tlscert .certs/client/cert.pem \
--tlskey .certs/client/key.pem \
build --frontend dockerfile.v0 --local context=/path/to/dir --local dockerfile=/path/to/dir
```
## `StatefulSet`
`StatefulSet` is useful for consistent hash mode.
```console
$ kubectl apply -f statefulset.rootless.yaml
$ kubectl scale --replicas=10 statefulset/buildkitd
$ buildctl \
--addr kube-pod://buildkitd-4 \
build --frontend dockerfile.v0 --local context=/path/to/dir --local dockerfile=/path/to/dir
```
See [`./consistenthash`](./consistenthash) for how to use consistent hashing.
## `Job`
```console
$ kubectl apply -f job.rootless.yaml
```
To push the image to the registry, you also need to mount `~/.docker/config.json`
and set `$DOCKER_CONFIG` to `/path/to/.docker` directory.
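A sketch of creating such a secret (the secret name is an assumption; you still need to mount it into the job's pod and point `$DOCKER_CONFIG` at the mount path):
```console
$ kubectl create secret generic docker-config \
    --from-file=config.json=$HOME/.docker/config.json
```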


@ -5,7 +5,7 @@ Demo for efficiently using BuildKit daemon-local cache with multi-node cluster
## Deploy
```console
$ kubectl apply -f ../statefulset.rootless.yaml
$ kubectl scale --replicas=10 statefulset/buildkitd
```
@ -19,7 +19,7 @@ For example, the key can be defined as `<REPO NAME>:<CONTEXT PATH>`, e.g.
Then determine the pod that corresponds to the key:
```console
$ go build -o consistenthash .
$ pod=$(./show-running-pods.sh | consistenthash $key)
```


@ -0,0 +1,42 @@
#!/bin/bash
set -o errexit
set -o nounset
set -o pipefail
set -o errtrace
PRODUCT=buildkit
DIR=./.certs
if [[ "$#" -lt 1 ]]; then
echo "Usage: $0 SAN [SAN...]"
echo
echo "Example: $0 buildkitd.default.svc 127.0.0.1"
echo
echo "The following iles will be created under ${DIR}"
echo "- daemon/{ca.pem,cert.pem,key.pem}"
echo "- client/{ca.pem,cert.pem,key.pem}"
echo "- ${PRODUCT}-daemon-certs.yaml"
echo "- ${PRODUCT}-client-certs.yaml"
echo "- SAN"
exit 1
fi
if ! command -v mkcert >/dev/null; then
echo "Missing mkcert (https://github.com/FiloSottile/mkcert)"
exit 1
fi
SAN=$@
SAN_CLIENT=client
mkdir -p $DIR ${DIR}/daemon ${DIR}/client
(
cd $DIR
echo $SAN | tr " " "\n" >SAN
CAROOT=$(pwd) mkcert -cert-file daemon/cert.pem -key-file daemon/key.pem ${SAN} >/dev/null 2>&1
CAROOT=$(pwd) mkcert -client -cert-file client/cert.pem -key-file client/key.pem ${SAN_CLIENT} >/dev/null 2>&1
cp -f rootCA.pem daemon/ca.pem
cp -f rootCA.pem client/ca.pem
rm -f rootCA.pem rootCA-key.pem
kubectl create secret generic ${PRODUCT}-daemon-certs --dry-run -o yaml --from-file=./daemon >${PRODUCT}-daemon-certs.yaml
kubectl create secret generic ${PRODUCT}-client-certs --dry-run -o yaml --from-file=./client >${PRODUCT}-client-certs.yaml
)


@ -0,0 +1,56 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: buildkitd
name: buildkitd
spec:
replicas: 1
selector:
matchLabels:
app: buildkitd
template:
metadata:
labels:
app: buildkitd
spec:
containers:
- name: buildkitd
image: moby/buildkit:master
args:
- --addr
- unix:///run/buildkit/buildkitd.sock
- --addr
- tcp://0.0.0.0:1234
- --tlscacert
- /certs/ca.pem
- --tlscert
- /certs/cert.pem
- --tlskey
- /certs/key.pem
securityContext:
privileged: true
ports:
- containerPort: 1234
volumeMounts:
- name: certs
readOnly: true
mountPath: /certs
volumes:
# buildkit-daemon-certs must contain ca.pem, cert.pem, and key.pem
- name: certs
secret:
secretName: buildkit-daemon-certs
---
apiVersion: v1
kind: Service
metadata:
labels:
app: buildkitd
name: buildkitd
spec:
ports:
- port: 1234
protocol: TCP
selector:
app: buildkitd


@ -0,0 +1,63 @@
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: buildkitd
name: buildkitd
spec:
replicas: 1
selector:
matchLabels:
app: buildkitd
template:
metadata:
labels:
app: buildkitd
annotations:
container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
# see buildkit/docs/rootless.md for caveats of rootless mode
spec:
containers:
- name: buildkitd
image: moby/buildkit:master-rootless
args:
- --addr
- unix:///run/user/1000/buildkit/buildkitd.sock
- --addr
- tcp://0.0.0.0:1234
- --tlscacert
- /certs/ca.pem
- --tlscert
- /certs/cert.pem
- --tlskey
- /certs/key.pem
- --oci-worker-no-process-sandbox
securityContext:
# To change UID/GID, you need to rebuild the image
runAsUser: 1000
runAsGroup: 1000
ports:
- containerPort: 1234
volumeMounts:
- name: certs
readOnly: true
mountPath: /certs
volumes:
# buildkit-daemon-certs must contain ca.pem, cert.pem, and key.pem
- name: certs
secret:
secretName: buildkit-daemon-certs
---
apiVersion: v1
kind: Service
metadata:
labels:
app: buildkitd
name: buildkitd
spec:
ports:
- port: 1234
protocol: TCP
selector:
app: buildkitd


@ -0,0 +1,44 @@
apiVersion: batch/v1
kind: Job
metadata:
name: buildkit
spec:
template:
spec:
restartPolicy: Never
initContainers:
- name: prepare
image: alpine:3.10
command:
- sh
- -c
- "echo FROM hello-world > /workspace/Dockerfile"
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: buildkit
image: moby/buildkit:master
command:
- buildctl-daemonless.sh
args:
- build
- --frontend
- dockerfile.v0
- --local
- context=/workspace
- --local
- dockerfile=/workspace
# To push the image to a registry, add
# `--output type=image,name=docker.io/username/image,push=true`
securityContext:
privileged: true
volumeMounts:
- name: workspace
readOnly: true
mountPath: /workspace
# To push the image, you also need to create `~/.docker/config.json` secret
# and set $DOCKER_CONFIG to `/path/to/.docker` directory.
volumes:
- name: workspace
emptyDir: {}


@ -0,0 +1,57 @@
apiVersion: batch/v1
kind: Job
metadata:
name: buildkit
spec:
template:
metadata:
annotations:
container.apparmor.security.beta.kubernetes.io/buildkit: unconfined
container.seccomp.security.alpha.kubernetes.io/buildkit: unconfined
# see buildkit/docs/rootless.md for caveats of rootless mode
spec:
restartPolicy: Never
initContainers:
- name: prepare
image: alpine:3.10
command:
- sh
- -c
- "echo FROM hello-world > /workspace/Dockerfile"
securityContext:
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: workspace
mountPath: /workspace
containers:
- name: buildkit
image: moby/buildkit:master-rootless
env:
- name: BUILDKITD_FLAGS
value: --oci-worker-no-process-sandbox
command:
- buildctl-daemonless.sh
args:
- build
- --frontend
- dockerfile.v0
- --local
- context=/workspace
- --local
- dockerfile=/workspace
# To push the image to a registry, add
# `--output type=image,name=docker.io/username/image,push=true`
securityContext:
# To change UID/GID, you need to rebuild the image
runAsUser: 1000
runAsGroup: 1000
volumeMounts:
- name: workspace
readOnly: true
mountPath: /workspace
# To push the image, you also need to create `~/.docker/config.json` secret
# and set $DOCKER_CONFIG to `/path/to/.docker` directory.
volumes:
- name: workspace
emptyDir: {}


@ -0,0 +1,10 @@
apiVersion: v1
kind: Pod
metadata:
name: buildkitd
spec:
containers:
- name: buildkitd
image: moby/buildkit:master
securityContext:
privileged: true


@ -0,0 +1,18 @@
apiVersion: v1
kind: Pod
metadata:
name: buildkitd
annotations:
container.apparmor.security.beta.kubernetes.io/buildkitd: unconfined
container.seccomp.security.alpha.kubernetes.io/buildkitd: unconfined
# see buildkit/docs/rootless.md for caveats of rootless mode
spec:
containers:
- name: buildkitd
image: moby/buildkit:master-rootless
args:
- --oci-worker-no-process-sandbox
securityContext:
# To change UID/GID, you need to rebuild the image
runAsUser: 1000
runAsGroup: 1000


@ -0,0 +1,22 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: buildkitd
name: buildkitd
spec:
serviceName: buildkitd
replicas: 1
selector:
matchLabels:
app: buildkitd
template:
metadata:
labels:
app: buildkitd
spec:
containers:
- name: buildkitd
image: moby/buildkit:master
securityContext:
privileged: true


@ -23,23 +23,8 @@ spec:
- name: buildkitd
image: moby/buildkit:master-rootless
args:
- --oci-worker-no-process-sandbox
securityContext:
# To change UID/GID, you need to rebuild the image
runAsUser: 1000
runAsGroup: 1000