# Is there any way for me to share a layer in a multi-arch build? #49864

*Asked by oatovar in Q&A; answered by thaJeztah.*

## Context

For context, I'm building a couple of multi-arch images with a db.tar.gz artifact that I add into each of the images. The db.tar.gz artifact is around 60MiB compressed, but over 700MiB once decompressed, so I'm trying to share the decompressed files everywhere I can. There's nothing platform-specific about these files either, so it should be fine to keep only one copy of this layer in the build cache. The setup looks something like this:

```dockerfile
FROM alpine

ADD ./db.tar.gz /home/gitlab/.state
```

Unfortunately, when I build the image, I end up with two separate copies of the extracted tar archive, totaling roughly 1.5GB per image, for this one ADD instruction. Here's the output of `docker buildx du`:

```text
ID                                          RECLAIMABLE    SIZE        LAST ACCESSED
spn6p45bt5dqc7nli19o0ypm4                   true           730.1MB*    5 seconds ago
1ilj1w2u763t3ud4d7xis4lji                   true           730.1MB*    5 seconds ago
8wudq9jlp03gr9pu0fnhfr7zx*                  true           65.48MB     5 seconds ago
ct3ew7tlb0f0jb36tzw30mw89                   true           12.15MB*    5 seconds ago
bt0rz9syzlxhvgeox3rswuvvc                   true           3.993MB*    5 seconds ago
gpbmiwerdds0jdfihfyp35e5m*                  true           8.192kB     5 seconds ago
xjiw9k64x9afdf4m9qb6u983z*                  true           4.096kB     5 seconds ago
Shared:         1.476GB
Private:        65.49MB
Reclaimable:    1.542GB
Total:          1.542GB
```

Interestingly enough, when I switch to `FROM scratch`, this duplication no longer occurs.

```dockerfile
FROM scratch

ADD ./db.tar.gz /home/user/.local/db/state
```

Output of `docker buildx du`:

```text
ID                                          RECLAIMABLE    SIZE        LAST ACCESSED
pie3a3z1pan546kfhjc7escrr                   true           730.1MB*    12 seconds ago
p1wq4dxhco6g0kqio8ot9j9bh*                  true           65.48MB     12 seconds ago
h8jdp9vp4xuz3hbty44th0fnw*                  true           8.192kB     12 seconds ago
ye9ozzskeim4c39qevy44vddu*                  true           4.096kB     12 seconds ago
Shared:         730.1MB
Private:        65.49MB
Reclaimable:    795.5MB
Total:          795.5MB
```

I'd really like to share this layer across all the images if possible, even if it's only shared in the build cache. I had to upgrade the runner size for this job to succeed, so if I can achieve this somehow, I'll be able to save some resources.

","upvoteCount":1,"answerCount":2,"acceptedAnswer":{"@type":"Answer","text":"

I think this would work if you use the `--link` option (see `ADD --link` and `COPY --link` in the Dockerfile reference).

With that option, the `ADD` is performed in an implicit `FROM scratch` layer, which allows the layer to be shared even if the "parent" layers differ. This option should only be used if the location where the content is added does not depend on any parent state (i.e., it is an empty directory, and that directory is not expected to contain content based on the parent image).
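Applied to the Dockerfile from the question, that would be a one-line change (a sketch on my part, assuming `/home/gitlab/.state` is empty in the base image):

```dockerfile
FROM alpine

# --link extracts the archive onto an implicit empty (scratch) parent
# instead of onto the alpine filesystem, so the resulting layer blob is
# byte-identical for every platform and can be shared across the
# multi-arch build (and in the build cache).
ADD --link ./db.tar.gz /home/gitlab/.state
```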

Below is a quick try with this, but it's probably best to try it for yourself to confirm I'm correct.

👇 Worth noting that I'm using Docker with the containerd image store enabled, so that I can build multi-platform images with the default builder (no custom builder required).
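For reference, the containerd image store is enabled via the `features` section of the daemon configuration (typically `/etc/docker/daemon.json`), followed by a daemon restart; a minimal sketch:

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```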

Pulling an image to export as a tar (as "content"):

```console
docker pull docker:latest
latest: Pulling from library/docker
Digest: sha256:f49e1c71b5d9f8ebe53715f78996ce42b8be4b1ec03875d187dfe3c03de1dc00
Status: Image is up to date for docker:latest
docker.io/library/docker:latest
```

Saving it to a file:

```console
docker image save -o ./content.tar docker:latest
ls -lah content.tar
-rw-------    1 root     root      126.2M Apr 24 23:30 content.tar
```

Build a multi-arch image with 3 architectures, all adding the same file, using the `--link` option:

```console
docker build --platform linux/amd64,linux/arm64,linux/s390x -f- .<<'EOF'
# syntax=docker/dockerfile:1

FROM alpine
ADD --link ./content.tar /home/user/.local/db/state
EOF
```
```text
[+] Building 7.6s (19/19) FINISHED                                                                    docker:default
 => [internal] load build definition from Dockerfile                                                            0.0s
 => => transferring dockerfile: 131B                                                                            0.0s
 => resolve image config for docker-image://docker.io/docker/dockerfile:1                                       0.4s
 => CACHED docker-image://docker.io/docker/dockerfile:1@sha256:4c68376a702446fc3c79af22de146a148bc3367e73c25a5803d453b6b3f722fb  0.0s
 => => resolve docker.io/docker/dockerfile:1@sha256:4c68376a702446fc3c79af22de146a148bc3367e73c25a5803d453b6b3f722fb  0.0s
 => [linux/arm64 internal] load metadata for docker.io/library/alpine:latest                                    2.4s
 => [linux/amd64 internal] load metadata for docker.io/library/alpine:latest                                    2.5s
 => [linux/s390x internal] load metadata for docker.io/library/alpine:latest                                    2.5s
 => [internal] load .dockerignore                                                                               0.0s
 => => transferring context: 2B                                                                                 0.0s
 => [internal] load build context                                                                               0.3s
 => => transferring context: 132.32MB                                                                           0.3s
 => [linux/s390x 1/2] FROM docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c  0.6s
 => => resolve docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c  0.0s
 => => sha256:c1a599607158512214777614f916f8193d29fd34b656d47dfc26314af01e2af4 3.47MB / 3.47MB                  0.5s
 => [linux/arm64 1/2] FROM docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c  0.0s
 => => resolve docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c  0.0s
 => [linux/amd64 1/2] FROM docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c  0.9s
 => => resolve docker.io/library/alpine:latest@sha256:a8560b36e8b8210634f77d9f7f9efd7ffa463e380b75e2e74aff4511df3ef88c  0.0s
 => => sha256:f18232174bc91741fdf3da96d85011092101a032a93a388b79e99e69c2d5c870 3.64MB / 3.64MB                  0.9s
 => [linux/s390x 2/2] ADD --link ./content.tar /home/user/.local/db/state                                       0.1s
 => [linux/arm64 2/2] ADD --link ./content.tar /home/user/.local/db/state                                       0.0s
 => [linux/amd64 2/2] ADD --link ./content.tar /home/user/.local/db/state                                       0.0s
 => exporting to image                                                                                          4.1s
 => => exporting layers                                                                                         2.8s
 => => exporting manifest sha256:1e3d128e5164ba7501e70ca1173e066346240f3bdc312142a8631a689100fa9b               0.0s
 => => exporting config sha256:d68dadf9c442ddd8bb47bf6d3d61594a5541e87b2a04167f6e912266ba878d26                 0.0s
 => => exporting attestation manifest sha256:4391c33e547075b2103e40e9bf8384a17cd45d0d321d0196b29ed199914a66fe   0.0s
 => => exporting manifest sha256:11df05f8103507ec96a0d1974790278635494dc92c337ac96032fb2feff00351               0.0s
 => => exporting config sha256:1f27853335f3c02d0f68ae48fe22ea61425663bb9e6a47d5da4d89902b6925e8                 0.0s
 => => exporting attestation manifest sha256:b8d00bb9e19d72b3a82aa09ff7575d7bb65ba3721151bd7e9e5b0609aaeeafbe   0.0s
 => => exporting manifest sha256:44e790ee7e180edb5204b4f84b81d65930166fa7f748ea29ff47579f939a9caa               0.0s
 => => exporting config sha256:47813f37e50c569ba1abd16005a80d991f727b4f2b15d0ff45a34f6bf29a837a                 0.0s
 => => exporting attestation manifest sha256:80af267def7cff2c3e5b19a0b6003b1ea82a5d787bc8dc979d1a884c59f0af6a   0.0s
 => => exporting manifest list sha256:26b0b36c895978d0cfa611d790a168d55a7e74b7f62110da7268a4510bd27a4d          0.0s
 => => naming to moby-dangling@sha256:26b0b36c895978d0cfa611d790a168d55a7e74b7f62110da7268a4510bd27a4d          0.0s
 => => unpacking to moby-dangling@sha256:26b0b36c895978d0cfa611d790a168d55a7e74b7f62110da7268a4510bd27a4d       0.4s
```

Image built:

```console
docker image ls --tree
IMAGE                   ID             DISK USAGE   CONTENT SIZE   EXTRA
<untagged>              26b0b36c8959        548MB          406MB
├─ linux/amd64          1e3d128e5164        135MB          135MB
├─ linux/arm64          11df05f81035        277MB          136MB
└─ linux/s390x          44e790ee7e18        135MB          135MB
```

```console
docker buildx du
ID                                          RECLAIMABLE    SIZE        LAST ACCESSED
ilwlv3up0ko6hf6rbte9ui9wl                   true           264.1MB*    45 seconds ago
vkj7p0cdulb0od9mhnna4pt6i*                  true           132.3MB     45 seconds ago
ynwtwzuwfepczwuoxx3i2qo28                   true           37.54MB     45 seconds ago
w8r4bfgupifuhu68ii2sxirfz                   true           3.993MB*    45 seconds ago
6cqonmk41d6tl21yt91jpj5go                   true           3.642MB*    45 seconds ago
2o1taudcfcgfzq7al4ccg9h6y                   true           3.468MB*    45 seconds ago
xw0sd9sy7a8urox5ghw9tmhiy*                  true           8.192kB     45 seconds ago
vp8a4ygnqkqzdg5cydonktbzo*                  true           4.096kB     45 seconds ago
z8ixtkjvpnne6aqnt2ih4fovd                   true           0B*         45 seconds ago
s4szzze2pnb8py277vweleb4z                   true           0B*         45 seconds ago
6rt3w0wr1hjvjbq5x7x9ixvsv                   true           0B*         45 seconds ago
Shared:         275.2MB
Private:        169.9MB
Reclaimable:    445.1MB
Total:          445.1MB
```
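Note that the extracted `content.tar` now shows up as a single shared cache record (the 264.1MB entry) instead of one copy per platform. If you want to double-check which record is which, `docker buildx du` also accepts a `--verbose` flag that prints a description for each cache record (a quick sketch; the exact output will vary):

```console
# Prints each cache record with its description, e.g. the Dockerfile
# instruction that produced it, making it easy to confirm that the large
# record is the single shared "ADD --link" layer.
docker buildx du --verbose
```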
","upvoteCount":2,"url":"https://github.com/moby/moby/discussions/49864#discussioncomment-12940716"}}}
