BuildKit


BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.

Key features:

  • Automatic garbage collection
  • Extendable frontend formats
  • Concurrent dependency resolution
  • Efficient instruction caching
  • Build cache import/export
  • Nested build job invocations
  • Distributable workers
  • Multiple output formats
  • Pluggable architecture
  • Execution without root privileges

Read the proposal from https://github.com/moby/moby/issues/32925

Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056cc5317

Join the #buildkit channel on Docker Community Slack.

If you are visiting this repo for the usage of experimental Dockerfile features like RUN --mount=type=(bind|cache|tmpfs|secret|ssh), please refer to frontend/dockerfile/docs/experimental.md.

BuildKit has been integrated into docker build since Docker 18.06. You don't need to read this document unless you want to use the full-featured standalone version of BuildKit.
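
For example, the integrated builder can be enabled per invocation by setting the DOCKER_BUILDKIT environment variable (the image name is illustrative):

DOCKER_BUILDKIT=1 docker build -t username/image .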

Used by

BuildKit is used by the following projects:

Quick start

For Kubernetes deployments, see examples/kubernetes.

BuildKit is composed of the buildkitd daemon and the buildctl client. While the buildctl client is available for Linux, macOS, and Windows, the buildkitd daemon is only available for Linux currently.

The buildkitd daemon requires the following components to be installed:

The latest binaries of BuildKit are available here for Linux, macOS, and Windows.

Homebrew package (unofficial) is available for macOS.

$ brew install buildkit

To build BuildKit from source, see .github/CONTRIBUTING.md.

Starting the buildkitd daemon:

You need to run buildkitd as the root user on the host.

$ sudo buildkitd

To run buildkitd as a non-root user, see docs/rootless.md.

The buildkitd daemon supports two worker backends: OCI (runc) and containerd.

By default, the OCI (runc) worker is used. You can set --oci-worker=false --containerd-worker=true to use the containerd worker.
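
For example, a minimal invocation that switches the daemon to the containerd worker:

$ sudo buildkitd --oci-worker=false --containerd-worker=true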

We are open to adding more backends.

The buildkitd daemon listens for gRPC API requests on /run/buildkit/buildkitd.sock by default, but you can also use TCP sockets. See Expose BuildKit as a TCP service.
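
buildctl connects to this socket by default; the address can also be passed explicitly with --addr, e.g. to query cache usage on the default socket:

buildctl --addr unix:///run/buildkit/buildkitd.sock du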

Notice to Fedora 31 users:

  • As runc still does not work on cgroup v2 environments like Fedora 31, you need to substitute runc with crun. Run rm -f $(which buildkit-runc) && ln -s $(which crun) /usr/local/bin/buildkit-runc.
  • If you want to use runc, you need to configure the system to use cgroup v1. Run sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0" and reboot.

Exploring LLB

BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running as part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.

  • Marshaled as Protobuf messages
  • Concurrently executable
  • Efficiently cacheable
  • Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)

See solver/pb/ops.proto for the format definition, and see ./examples/README.md for example LLB applications.
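
The Go client library under client/llb can be used to construct LLB directly. As a quick way to explore it, the example programs in ./examples can be marshaled and piped into buildctl (a sketch assuming a checked-out copy of this repository and jq being installed):

go run ./examples/buildkit0 | buildctl debug dump-llb | jq .   # inspect the marshaled LLB graph
go run ./examples/buildkit0 | buildctl build                   # solve it on the running daemon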

Currently, the following high-level languages have been implemented for LLB:

Exploring Dockerfiles

Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (gateway.v0) that allows using any image as a frontend.

During development, the Dockerfile frontend (dockerfile.v0) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.

Building a Dockerfile with buildctl

buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=.
# or
buildctl build \
    --frontend=dockerfile.v0 \
    --local context=. \
    --local dockerfile=. \
    --opt target=foo \
    --opt build-arg:foo=bar

--local exposes local source files from the client to the builder. context and dockerfile are the names the Dockerfile frontend looks up for the build context and the Dockerfile location.

Building a Dockerfile using external frontend:

External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in ./frontend/dockerfile/cmd/dockerfile-frontend but will move out of this repository in the future (#163). For an automatic build from the master branch of this repository, the docker/dockerfile-upstream:master or docker/dockerfile-upstream:master-experimental image can be used.

buildctl build \
    --frontend gateway.v0 \
    --opt source=docker/dockerfile \
    --local context=. \
    --local dockerfile=.
buildctl build \
    --frontend gateway.v0 \
    --opt source=docker/dockerfile \
    --opt context=git://github.com/moby/moby \
    --opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org

Building a Dockerfile with experimental features like RUN --mount=type=(bind|cache|tmpfs|secret|ssh)

See frontend/dockerfile/docs/experimental.md.

Output

By default, the build result and intermediate cache will only remain internally in BuildKit. An output needs to be specified to retrieve the result.

Image/Registry

buildctl build ... --output type=image,name=docker.io/username/image,push=true

To export and import the cache along with the image, you need to specify --export-cache type=inline and --import-cache type=registry,ref=.... See Export cache.

buildctl build ...\
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=inline \
  --import-cache type=registry,ref=docker.io/username/image

Keys supported by image output:

  • name=[value]: image name
  • push=true: push after creating the image
  • push-by-digest=true: push unnamed image
  • registry.insecure=true: push to insecure HTTP registry
  • oci-mediatypes=true: use OCI mediatypes in configuration JSON instead of Docker's
  • unpack=true: unpack image after creation (for use with containerd)
  • dangling-name-prefix=[value]: name image with prefix@<digest>, used for anonymous images
  • name-canonical=true: add additional canonical name name@<digest>
  • compression=[uncompressed,gzip]: choose compression type for layers; gzip is the default
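
For example, several of these keys can be combined in a single --output value:

buildctl build ... --output type=image,name=docker.io/username/image,push=true,oci-mediatypes=true,compression=uncompressed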

If credentials are required, buildctl will attempt to read the Docker configuration file $DOCKER_CONFIG/config.json. $DOCKER_CONFIG defaults to ~/.docker.
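
For example (a sketch assuming you have already run docker login for the registry and keep the configuration in a non-default directory):

DOCKER_CONFIG=/path/to/docker-config buildctl build ... --output type=image,name=docker.io/username/image,push=true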

Local directory

The local exporter copies the output files directly to the client. This is useful if BuildKit is being used for building something other than container images.

buildctl build ... --output type=local,dest=path/to/output-dir

To export specific files, use multi-stage builds with a scratch stage and copy the needed files into that stage with COPY --from.

...
FROM scratch as testresult

COPY --from=builder /usr/src/app/testresult.xml .
...
buildctl build ... --opt target=testresult --output type=local,dest=path/to/output-dir

The tar exporter is similar to the local exporter, but transfers the files through a tarball.

buildctl build ... --output type=tar,dest=out.tar
buildctl build ... --output type=tar > out.tar

Docker tarball

# exported tarball is also compatible with OCI spec
buildctl build ... --output type=docker,name=myimage | docker load

OCI tarball

buildctl build ... --output type=oci,dest=path/to/output.tar
buildctl build ... --output type=oci > output.tar

containerd image store

The containerd worker needs to be used:

buildctl build ... --output type=image,name=docker.io/username/image
ctr --namespace=buildkit images ls

To change the containerd namespace, you need to change worker.containerd.namespace in /etc/buildkit/buildkitd.toml.
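
A minimal sketch of the relevant section (the namespace value is illustrative; see ./docs/buildkitd.toml.md for the full format):

[worker.containerd]
  namespace = "my-namespace"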

Cache

To show local build cache (/var/lib/buildkit):

buildctl du -v

To prune local build cache:

buildctl prune

Garbage collection

See ./docs/buildkitd.toml.md.
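
As a rough sketch, garbage collection for the OCI worker is configured in the same buildkitd.toml (key names assumed from that document; values illustrative):

[worker.oci]
  gc = true
  gckeepstorage = 9000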

Export cache

BuildKit supports the following cache exporters:

  • inline: embed the cache into the image, and push them to the registry together
  • registry: push the image and the cache separately
  • local: export to a local directory

In most cases you want to use the inline cache exporter. However, note that the inline cache exporter only supports min cache mode. To enable max cache mode, push the image and the cache separately by using the registry cache exporter.

Inline (push image and cache together)

buildctl build ... \
  --output type=image,name=docker.io/username/image,push=true \
  --export-cache type=inline \
  --import-cache type=registry,ref=docker.io/username/image

Note that the inline cache is not imported unless --import-cache type=registry,ref=... is provided.

Docker-integrated BuildKit (DOCKER_BUILDKIT=1 docker build) and docker buildx require --build-arg BUILDKIT_INLINE_CACHE=1 to be specified to enable the inline cache exporter. However, the standalone buildctl does NOT require --opt build-arg:BUILDKIT_INLINE_CACHE=1; the build-arg is simply ignored.
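
For example, with the Docker-integrated builder (image name illustrative):

DOCKER_BUILDKIT=1 docker build --build-arg BUILDKIT_INLINE_CACHE=1 -t docker.io/username/image .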

Registry (push image and cache separately)

buildctl build ... \
  --output type=image,name=localhost:5000/myrepo:image,push=true \
  --export-cache type=registry,ref=localhost:5000/myrepo:buildcache \
  --import-cache type=registry,ref=localhost:5000/myrepo:buildcache

Local directory

buildctl build ... --export-cache type=local,dest=path/to/output-dir
buildctl build ... --import-cache type=local,src=path/to/input-dir

The directory layout conforms to OCI Image Spec v1.0.

--export-cache options

  • type: inline, registry, or local
  • mode=min (default): only export layers for the resulting image
  • mode=max: export all the layers of all intermediate steps. Not supported for inline cache exporter.
  • ref=docker.io/user/image:tag: reference for registry cache exporter
  • dest=path/to/output-dir: directory for local cache exporter

--import-cache options

  • type: registry or local. Use registry to import inline cache.
  • ref=docker.io/user/image:tag: reference for registry cache importer
  • src=path/to/input-dir: directory for local cache importer
  • digest=sha256:deadbeef: digest of the manifest list to import for local cache importer. Defaults to the digest of "latest" tag in index.json
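
For example, importing a specific manifest list from a local cache directory (the digest value is illustrative):

buildctl build ... --import-cache type=local,src=path/to/input-dir,digest=sha256:deadbeef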

Consistent hashing

If you have multiple BuildKit daemon instances but you don't want to use registry for sharing cache across the cluster, consider client-side load balancing using consistent hashing.

See ./examples/kubernetes/consistenthash.

Expose BuildKit as a TCP service

The buildkitd daemon can listen for gRPC API requests on a TCP socket.

It is highly recommended to create TLS certificates for both the daemon and the client (mTLS). Enabling TCP without mTLS is dangerous because the executor containers (aka Dockerfile RUN containers) can call the BuildKit API as well.

buildkitd \
  --addr tcp://0.0.0.0:1234 \
  --tlscacert /path/to/ca.pem \
  --tlscert /path/to/cert.pem \
  --tlskey /path/to/key.pem
buildctl \
  --addr tcp://example.com:1234 \
  --tlscacert /path/to/ca.pem \
  --tlscert /path/to/clientcert.pem \
  --tlskey /path/to/clientkey.pem \
  build ...

Load balancing

buildctl build can be called against randomly load-balanced buildkitd daemons.

See also Consistent hashing for client-side load balancing.

Containerizing BuildKit

BuildKit can also be used by running the buildkitd daemon inside a Docker container and accessing it remotely.

We provide the container images as moby/buildkit:

  • moby/buildkit:latest: built from the latest regular release
  • moby/buildkit:rootless: same as latest but runs as an unprivileged user, see docs/rootless.md
  • moby/buildkit:master: built from the master branch
  • moby/buildkit:master-rootless: same as master but runs as an unprivileged user, see docs/rootless.md

To run the daemon in a container:

docker run -d --name buildkitd --privileged moby/buildkit:latest
export BUILDKIT_HOST=docker-container://buildkitd
buildctl build --help

Kubernetes

For Kubernetes deployments, see examples/kubernetes.

Daemonless

To run the client and an ephemeral daemon in a single container ("daemonless mode"):

docker run \
    -it \
    --rm \
    --privileged \
    -v /path/to/dir:/tmp/work \
    --entrypoint buildctl-daemonless.sh \
    moby/buildkit:master \
        build \
        --frontend dockerfile.v0 \
        --local context=/tmp/work \
        --local dockerfile=/tmp/work

or

docker run \
    -it \
    --rm \
    --security-opt seccomp=unconfined \
    --security-opt apparmor=unconfined \
    -e BUILDKITD_FLAGS=--oci-worker-no-process-sandbox \
    -v /path/to/dir:/tmp/work \
    --entrypoint buildctl-daemonless.sh \
    moby/buildkit:master-rootless \
        build \
        --frontend dockerfile.v0 \
        --local context=/tmp/work \
        --local dockerfile=/tmp/work

Opentracing support

BuildKit supports OpenTracing for the buildkitd gRPC API and buildctl commands. To capture the trace to Jaeger, set the JAEGER_TRACE environment variable to the collection address.

docker run -d -p6831:6831/udp -p16686:16686 jaegertracing/all-in-one:latest
export JAEGER_TRACE=0.0.0.0:6831
# restart buildkitd and buildctl so they know JAEGER_TRACE
# any buildctl command should be traced to http://127.0.0.1:16686/

Running BuildKit without root privileges

Please refer to docs/rootless.md.

Building multi-platform images

See docker buildx documentation

Contributing

Want to contribute to BuildKit? Awesome! You can find information about contributing to this project in CONTRIBUTING.md.