BuildKit
BuildKit is a toolkit for converting source code to build artifacts in an efficient, expressive and repeatable manner.
Key features:
- Automatic garbage collection
- Extendable frontend formats
- Concurrent dependency resolution
- Efficient instruction caching
- Build cache import/export
- Nested build job invocations
- Distributable workers
- Multiple output formats
- Pluggable architecture
- Execution without root privileges
Read the proposal from https://github.com/moby/moby/issues/32925
Introductory blog post https://blog.mobyproject.org/introducing-buildkit-17e056cc5317
Join the #buildkit channel on Docker Community Slack.
ℹ️ If you are visiting this repo for the usage of BuildKit-only Dockerfile features like RUN --mount=type=(bind|cache|tmpfs|secret|ssh), please refer to frontend/dockerfile/docs/syntax.md.
ℹ️ BuildKit has been integrated into docker build since Docker 18.06.
You don't need to read this document unless you want to use the full-featured standalone version of BuildKit.
- Used by
- Quick start
- Cache
- Metadata
- Systemd socket activation
- Expose BuildKit as a TCP service
- Containerizing BuildKit
- Opentracing support
- Running BuildKit without root privileges
- Building multi-platform images
- Contributing
Used by
BuildKit is used by the following projects:
- Moby & Docker (
DOCKER_BUILDKIT=1 docker build
) - img
- OpenFaaS Cloud
- container build interface
- Tekton Pipelines (formerly Knative Build Templates)
- the Sanic build tool
- vab
- Rio
- kim
- PouchContainer
- Docker buildx
- Okteto Cloud
- Earthly earthfiles
- Gitpod
- Dagger
Quick start
ℹ️ For Kubernetes deployments, see examples/kubernetes.
BuildKit is composed of the buildkitd daemon and the buildctl client.
While the buildctl client is available for Linux, macOS, and Windows, the buildkitd daemon is currently only available for Linux.
The buildkitd daemon requires the following components to be installed:
- runc or crun
- containerd (if you want to use the containerd worker)
The latest binaries of BuildKit are available here for Linux, macOS, and Windows.
An unofficial Homebrew package is available for macOS.
$ brew install buildkit
To build BuildKit from source, see .github/CONTRIBUTING.md.
Starting the buildkitd daemon:
You need to run buildkitd as the root user on the host.
$ sudo buildkitd
To run buildkitd as a non-root user, see docs/rootless.md.
The buildkitd daemon supports two worker backends: OCI (runc) and containerd.
By default, the OCI (runc) worker is used. You can set --oci-worker=false --containerd-worker=true to use the containerd worker.
We are open to adding more backends.
To start the buildkitd daemon using systemd socket activation, you can install the buildkit systemd unit files. See Systemd socket activation.
The buildkitd daemon listens for gRPC API requests on /run/buildkit/buildkitd.sock by default, but you can also use TCP sockets.
See Expose BuildKit as a TCP service.
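For example, a minimal sketch of starting the daemon with the containerd worker and then pointing buildctl at the default socket explicitly (the socket path is the default mentioned above):
# run the daemon with the containerd worker instead of the default OCI (runc) worker
sudo buildkitd --oci-worker=false --containerd-worker=true
# in another terminal, verify the client can reach it and inspect the cache
buildctl --addr unix:///run/buildkit/buildkitd.sock du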
Exploring LLB
BuildKit builds are based on a binary intermediate format called LLB that is used for defining the dependency graph for processes running part of your build. tl;dr: LLB is to Dockerfile what LLVM IR is to C.
- Marshaled as Protobuf messages
- Concurrently executable
- Efficiently cacheable
- Vendor-neutral (i.e. non-Dockerfile languages can be easily implemented)
See solver/pb/ops.proto for the format definition, and see ./examples/README.md for example LLB applications.
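As a rough sketch of how LLB can be inspected from the command line (this assumes the Go example clients under ./examples emit a marshaled LLB definition on stdout, that buildctl offers the debug dump-llb subcommand, and that jq is installed):
# render the LLB definition produced by an example client as JSON
go run ./examples/buildkit0 | buildctl debug dump-llb | jq .
# the same definition can be fed straight into a build
go run ./examples/buildkit0 | buildctl build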
Currently, the following high-level languages have been implemented for LLB:
- Dockerfile (See Exploring Dockerfiles)
- Buildpacks
- Mockerfile
- Gockerfile
- bldr (Pkgfile)
- HLB
- Earthfile (Earthly)
- Cargo Wharf (Rust)
- Nix
- (open a PR to add your own language)
Exploring Dockerfiles
Frontends are components that run inside BuildKit and convert any build definition to LLB. There is a special frontend called gateway (gateway.v0) that allows using any image as a frontend.
During development, the Dockerfile frontend (dockerfile.v0) is also part of the BuildKit repo. In the future, this will be moved out, and Dockerfiles can be built using an external image.
Building a Dockerfile with buildctl
buildctl build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=.
# or
buildctl build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--opt target=foo \
--opt build-arg:foo=bar
--local exposes local source files from the client to the builder. context and dockerfile are the names under which the Dockerfile frontend looks for the build context and the Dockerfile location.
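For instance, a sketch of pointing the frontend at a differently named Dockerfile inside the dockerfile source (Dockerfile.ci is an illustrative filename; filename is assumed to be the frontend option for this):
buildctl build \
--frontend=dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--opt filename=Dockerfile.ci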
Building a Dockerfile using an external frontend:
External versions of the Dockerfile frontend are pushed to https://hub.docker.com/r/docker/dockerfile-upstream and https://hub.docker.com/r/docker/dockerfile and can be used with the gateway frontend. The source for the external frontend is currently located in ./frontend/dockerfile/cmd/dockerfile-frontend but will move out of this repository in the future (#163). For automatic builds from the master branch of this repository, the docker/dockerfile-upstream:master or docker/dockerfile-upstream:master-labs image can be used.
buildctl build \
--frontend gateway.v0 \
--opt source=docker/dockerfile \
--local context=. \
--local dockerfile=.
buildctl build \
--frontend gateway.v0 \
--opt source=docker/dockerfile \
--opt context=https://github.com/moby/moby.git \
--opt build-arg:APT_MIRROR=cdn-fastly.deb.debian.org
Building a Dockerfile with experimental features like RUN --mount=type=(bind|cache|tmpfs|secret|ssh)
See frontend/dockerfile/docs/experimental.md.
Output
By default, the build result and intermediate cache will only remain internally in BuildKit. An output needs to be specified to retrieve the result.
Image/Registry
buildctl build ... --output type=image,name=docker.io/username/image,push=true
To embed the cache in the image and push both to the registry together, specify --export-cache type=inline and --import-cache type=registry,ref=... (type registry is required to import the cache). To export the cache to a local directory, specify --export-cache type=local.
Details in Export cache.
buildctl build ...\
--output type=image,name=docker.io/username/image,push=true \
--export-cache type=inline \
--import-cache type=registry,ref=docker.io/username/image
Keys supported by image output:
- name=[value]: image name
- push=true: push after creating the image
- push-by-digest=true: push unnamed image
- registry.insecure=true: push to insecure HTTP registry
- oci-mediatypes=true: use OCI mediatypes in configuration JSON instead of Docker's
- unpack=true: unpack image after creation (for use with containerd)
- dangling-name-prefix=[value]: name image with prefix@<digest>, used for anonymous images
- name-canonical=true: add additional canonical name name@<digest>
- compression=[uncompressed,gzip,estargz,zstd]: choose compression type for layers newly created and cached, gzip is the default value. estargz should be used with oci-mediatypes=true.
- compression-level=[value]: compression level for gzip, estargz (0-9) and zstd (0-22)
- force-compression=true: forcefully apply the compression option to all layers (including already existing layers)
- buildinfo=[all,imageconfig,metadata,none]: choose build dependency version to export (default all)
If credentials are required, buildctl will attempt to read the Docker configuration file $DOCKER_CONFIG/config.json.
$DOCKER_CONFIG defaults to ~/.docker.
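As a sketch of how the registry credentials come together for a push (the config directory path is illustrative; docker login is one way to populate config.json):
# log in once so that $DOCKER_CONFIG/config.json holds credentials for the registry
DOCKER_CONFIG=/path/to/docker-config docker login docker.io
# buildctl reads the same configuration file when pushing
DOCKER_CONFIG=/path/to/docker-config buildctl build ... --output type=image,name=docker.io/username/image,push=true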
Local directory
The local client will copy the files directly to the client. This is useful if BuildKit is being used for building something other than container images.
buildctl build ... --output type=local,dest=path/to/output-dir
To export specific files, use multi-stage builds with a scratch stage and copy the needed files into that stage with COPY --from.
...
FROM scratch as testresult
COPY --from=builder /usr/src/app/testresult.xml .
...
buildctl build ... --opt target=testresult --output type=local,dest=path/to/output-dir
The tar exporter is similar to the local exporter, but transfers the files through a tarball.
buildctl build ... --output type=tar,dest=out.tar
buildctl build ... --output type=tar > out.tar
Docker tarball
# exported tarball is also compatible with OCI spec
buildctl build ... --output type=docker,name=myimage | docker load
OCI tarball
buildctl build ... --output type=oci,dest=path/to/output.tar
buildctl build ... --output type=oci > output.tar
containerd image store
The containerd worker needs to be used.
buildctl build ... --output type=image,name=docker.io/username/image
ctr --namespace=buildkit images ls
To change the containerd namespace, you need to change worker.containerd.namespace in /etc/buildkit/buildkitd.toml.
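A minimal sketch of that setting, assuming the key path worker.containerd.namespace maps to a [worker.containerd] table in the TOML file (the namespace value is illustrative; merge it into an existing config rather than overwriting one):
cat <<'EOF' | sudo tee /etc/buildkit/buildkitd.toml
[worker.containerd]
  namespace = "buildkit-custom"
EOF
# after restarting buildkitd, built images appear in the matching namespace
ctr --namespace=buildkit-custom images ls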
Cache
To show local build cache (/var/lib/buildkit):
buildctl du -v
To prune local build cache:
buildctl prune
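A hedged sketch of keeping the cache bounded instead of clearing it entirely (assumes your buildctl version supports the --keep-storage flag, which takes a limit in MB):
# prune cache records until the store is below roughly 10 GB
buildctl prune --keep-storage 10240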
Garbage collection
Export cache
BuildKit supports the following cache exporters:
- inline: embed the cache into the image, and push them to the registry together
- registry: push the image and the cache separately
- local: export to a local directory
- gha: export to GitHub Actions cache
In most cases you want to use the inline cache exporter.
However, note that the inline cache exporter only supports min cache mode.
To enable max cache mode, push the image and the cache separately by using the registry cache exporter.
The inline and registry exporters both store the cache in the registry. For importing the cache, type=registry is sufficient for both, as specifying the cache format is not necessary.
Inline (push image and cache together)
buildctl build ... \
--output type=image,name=docker.io/username/image,push=true \
--export-cache type=inline \
--import-cache type=registry,ref=docker.io/username/image
Note that the inline cache is not imported unless --import-cache type=registry,ref=... is provided.
Inline cache embeds cache metadata into the image config. The layers in the image will be left untouched compared to the image with no cache information.
ℹ️ Docker-integrated BuildKit (DOCKER_BUILDKIT=1 docker build) and docker buildx require --build-arg BUILDKIT_INLINE_CACHE=1 to be specified to enable the inline cache exporter.
However, the standalone buildctl does NOT require --opt build-arg:BUILDKIT_INLINE_CACHE=1 and the build-arg is simply ignored.
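For reference, a sketch of the Docker-integrated flow described in the note above (the image name is illustrative):
DOCKER_BUILDKIT=1 docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from docker.io/username/image \
-t docker.io/username/image .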
Registry (push image and cache separately)
buildctl build ... \
--output type=image,name=localhost:5000/myrepo:image,push=true \
--export-cache type=registry,ref=localhost:5000/myrepo:buildcache \
--import-cache type=registry,ref=localhost:5000/myrepo:buildcache
--export-cache options:
- type=registry
- mode=min (default): only export layers for the resulting image
- mode=max: export all the layers of all intermediate steps
- ref=docker.io/user/image:tag: reference
- oci-mediatypes=true|false: whether to use OCI mediatypes in exported manifests. Defaults to true since BuildKit v0.8.
--import-cache options:
- type=registry
- ref=docker.io/user/image:tag: reference
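For example, a sketch that combines mode=max with a dedicated cache reference, using the options listed above:
buildctl build ... \
--output type=image,name=localhost:5000/myrepo:image,push=true \
--export-cache type=registry,ref=localhost:5000/myrepo:buildcache,mode=max \
--import-cache type=registry,ref=localhost:5000/myrepo:buildcache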
Local directory
buildctl build ... --export-cache type=local,dest=path/to/output-dir
buildctl build ... --import-cache type=local,src=path/to/input-dir
The directory layout conforms to OCI Image Spec v1.0.
--export-cache options:
- type=local
- mode=min (default): only export layers for the resulting image
- mode=max: export all the layers of all intermediate steps
- dest=path/to/output-dir: destination directory for cache exporter
- oci-mediatypes=true|false: whether to use OCI mediatypes in exported manifests. Defaults to true since BuildKit v0.8.
--import-cache options:
- type=local
- src=path/to/input-dir: source directory for cache importer
- digest=sha256:deadbeef: digest of the manifest list to import
- tag=customtag: custom tag of the image. Defaults to "latest"; the tag digest in index.json is for the digest, not for the tag
GitHub Actions cache (experimental)
buildctl build ... \
--output type=image,name=docker.io/username/image,push=true \
--export-cache type=gha \
--import-cache type=gha
GitHub Actions cache saves both cache metadata and layers to GitHub's Cache service. This cache currently has a size limit of 10GB that is shared across different caches in the repo. If you exceed this limit, GitHub will save your cache but will begin evicting caches until the total size is less than 10 GB. Recycling caches too often can result in slower runtimes overall.
Similarly to using actions/cache, caches are scoped by branch, with the default and target branches being available to every branch.
The following attributes are required to authenticate against the GitHub Actions Cache service API:
- url: Cache server URL (default $ACTIONS_CACHE_URL)
- token: Access token (default $ACTIONS_RUNTIME_TOKEN)
ℹ️ This type of cache can be used with Docker Build Push Action where url and token will be automatically set. To use this backend in an inline run step, you have to include crazy-max/ghaction-github-runtime in your workflow to expose the runtime.
--export-cache options:
- type=gha
- mode=min (default): only export layers for the resulting image
- mode=max: export all the layers of all intermediate steps
- scope=buildkit: which scope the cache object belongs to (default buildkit)
--import-cache options:
- type=gha
- scope=buildkit: which scope the cache object belongs to (default buildkit)
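As a sketch, the same attributes can also be passed explicitly, for example from an inline run step (the scope name is illustrative; url and token come from the runner environment as described above):
buildctl build ... \
--output type=image,name=docker.io/username/image,push=true \
--export-cache type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN,scope=buildkit-main,mode=max \
--import-cache type=gha,url=$ACTIONS_CACHE_URL,token=$ACTIONS_RUNTIME_TOKEN,scope=buildkit-main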
Consistent hashing
If you have multiple BuildKit daemon instances but you don't want to use a registry for sharing the cache across the cluster, consider client-side load balancing using consistent hashing.
See ./examples/kubernetes/consistenthash.
Metadata
To output build metadata such as the image digest, pass the --metadata-file flag.
The metadata will be written as a JSON object to the specified file.
The directory of the specified file must already exist and be writable.
buildctl build ... --metadata-file metadata.json
{"containerimage.digest": "sha256:ea0cfb27fd41ea0405d3095880c1efa45710f5bcdddb7d7d5a7317ad4825ae14",...}
Systemd socket activation
On Systemd-based systems, you can communicate with the daemon via Systemd socket activation; use buildkitd --addr fd://.
You can find examples of using Systemd socket activation with BuildKit and Systemd in ./examples/systemd.
Expose BuildKit as a TCP service
The buildkitd daemon can listen for gRPC API requests on a TCP socket.
It is highly recommended to create TLS certificates for both the daemon and the client (mTLS).
Enabling TCP without mTLS is dangerous because the executor containers (aka Dockerfile RUN containers) can call the BuildKit API as well.
buildkitd \
--addr tcp://0.0.0.0:1234 \
--tlscacert /path/to/ca.pem \
--tlscert /path/to/cert.pem \
--tlskey /path/to/key.pem
buildctl \
--addr tcp://example.com:1234 \
--tlscacert /path/to/ca.pem \
--tlscert /path/to/clientcert.pem \
--tlskey /path/to/clientkey.pem \
build ...
Load balancing
buildctl build can be called against a randomly load-balanced buildkitd daemon.
See also Consistent hashing for client-side load balancing.
Containerizing BuildKit
BuildKit can also be used by running the buildkitd daemon inside a Docker container and accessing it remotely.
We provide the container images as moby/buildkit:
- moby/buildkit:latest: built from the latest regular release
- moby/buildkit:rootless: same as latest but runs as an unprivileged user, see docs/rootless.md
- moby/buildkit:master: built from the master branch
- moby/buildkit:master-rootless: same as master but runs as an unprivileged user, see docs/rootless.md
To run daemon in a container:
docker run -d --name buildkitd --privileged moby/buildkit:latest
export BUILDKIT_HOST=docker-container://buildkitd
buildctl build --help
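With BUILDKIT_HOST set as above, builds run exactly as they would against a local daemon; for example, a sketch that keeps the result on the client side (the output filename is illustrative):
buildctl build \
--frontend dockerfile.v0 \
--local context=. \
--local dockerfile=. \
--output type=oci,dest=image.tar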
Podman
To connect to a BuildKit daemon running in a Podman container, use podman-container:// instead of docker-container://.
podman run -d --name buildkitd --privileged moby/buildkit:latest
buildctl --addr=podman-container://buildkitd build --frontend dockerfile.v0 --local context=. --local dockerfile=. --output type=oci | podman load foo
sudo is not required.
Kubernetes
For Kubernetes deployments, see examples/kubernetes.
Daemonless
To run the client and an ephemeral daemon in a single container ("daemonless mode"):
docker run \
-it \
--rm \
--privileged \
-v /path/to/dir:/tmp/work \
--entrypoint buildctl-daemonless.sh \
moby/buildkit:master \
build \
--frontend dockerfile.v0 \
--local context=/tmp/work \
--local dockerfile=/tmp/work
or
docker run \
-it \
--rm \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
-e BUILDKITD_FLAGS=--oci-worker-no-process-sandbox \
-v /path/to/dir:/tmp/work \
--entrypoint buildctl-daemonless.sh \
moby/buildkit:master-rootless \
build \
--frontend \
dockerfile.v0 \
--local context=/tmp/work \
--local dockerfile=/tmp/work
Opentracing support
BuildKit supports OpenTracing for the buildkitd gRPC API and buildctl commands. To capture the trace to Jaeger, set the JAEGER_TRACE environment variable to the collection address.
docker run -d -p6831:6831/udp -p16686:16686 jaegertracing/all-in-one:latest
export JAEGER_TRACE=0.0.0.0:6831
# restart buildkitd and buildctl so they know JAEGER_TRACE
# any buildctl command should be traced to http://127.0.0.1:16686/
Running BuildKit without root privileges
Please refer to docs/rootless.md.
Building multi-platform images
Please refer to docs/multi-platform.md.
Contributing
Want to contribute to BuildKit? Awesome! You can find information about contributing to this project in the CONTRIBUTING.md.