Message-ID: <87r0r6ywri.fsf@redhat.com>
Date: Wed, 24 May 2023 10:11:13 +0200
From: Giuseppe Scrivano <gscrivan@...hat.com>
To: Gao Xiang <hsiangkao@...ux.alibaba.com>
Cc: Alexander Larsson <alexl@...hat.com>,
Mike Snitzer <snitzer@...nel.org>,
Du Rui <durui@...ux.alibaba.com>, dm-devel@...hat.com,
linux-kernel@...r.kernel.org, Alasdair Kergon <agk@...hat.com>
Subject: Re: dm overlaybd: targets mapping OverlayBD image
Gao Xiang <hsiangkao@...ux.alibaba.com> writes:
> On 2023/5/24 23:43, Alexander Larsson wrote:
>> On Tue, May 23, 2023 at 7:29 PM Mike Snitzer <snitzer@...nel.org> wrote:
>>>
>>> On Fri, May 19 2023 at 6:27P -0400,
>>> Du Rui <durui@...ux.alibaba.com> wrote:
>>>
>>>> OverlayBD is a novel layered block-level image format, designed for
>>>> containers and secure containers and applicable to virtual machines,
>>>> published at USENIX ATC '20:
>>>> https://www.usenix.org/system/files/atc20-li-huiba.pdf
>>>>
>>>> OverlayBD already has a userspace implementation as a containerd
>>>> non-core sub-project, the accelerated container image service:
>>>> https://github.com/containerd/accelerated-container-image
>>>>
>>>> It can be much more efficient to do the decompression and mapping work
>>>> in the kernel within the device-mapper framework, in many circumstances
>>>> such as secure container runtimes, mobile devices, etc.
>>>>
>>>> This patch contains a module, dm-overlaybd, which provides two kinds
>>>> of targets, dm-zfile and dm-lsmt, to expose a group of block devices
>>>> containing an OverlayBD image as a single overlaid read-only block
>>>> device.
>>>>
>>>> Signed-off-by: Du Rui <durui@...ux.alibaba.com>
>>>
>>> <snip, original patch here: [1] >
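(For readers who haven't seen the patch: a device-mapper module
registers a struct target_type for each target it provides and wires up
at least the ctr/dtr/map hooks. The skeleton below is only an
illustrative sketch of that boilerplate with hypothetical lsmt_*
stand-ins; it is not the snipped patch itself.)

/* Illustrative device-mapper target boilerplate; a sketch, not the
 * actual dm-lsmt code from the snipped patch. */
#include <linux/module.h>
#include <linux/device-mapper.h>

static int lsmt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
{
	/* Parse the table line: open the underlying layer devices
	 * with dm_get_device() and stash them in ti->private. */
	return 0;
}

static void lsmt_dtr(struct dm_target *ti)
{
	/* Release whatever lsmt_ctr() acquired. */
}

static int lsmt_map(struct dm_target *ti, struct bio *bio)
{
	/* Decide which underlying device and offset serve this bio,
	 * redirect it (bio_set_dev() etc.), and let dm resubmit it. */
	return DM_MAPIO_REMAPPED;
}

static struct target_type lsmt_target = {
	.name    = "lsmt",
	.version = {1, 0, 0},
	.module  = THIS_MODULE,
	.ctr     = lsmt_ctr,
	.dtr     = lsmt_dtr,
	.map     = lsmt_map,
};

static int __init dm_lsmt_init(void)
{
	return dm_register_target(&lsmt_target);
}

static void __exit dm_lsmt_exit(void)
{
	dm_unregister_target(&lsmt_target);
}

module_init(dm_lsmt_init);
module_exit(dm_lsmt_exit);
MODULE_LICENSE("GPL");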
>> A long long time ago I wrote a docker container image backend based on
>> dm-snapshot that is vaguely similar to this one. It is still
>> available, but nobody really uses it. It has several weaknesses. First
>> of all, the container image is an actual filesystem, so you need to
>> pre-allocate a fixed maximum size for images at construction time.
>> Secondly, all the LVM volume changes and mounts during runtime caused
>> weird behaviour (especially at scale) that was painful to manage (just
>> search the docker issue tracker for the devmapper backend). In the end
>> everyone moved to a filesystem-based implementation (overlayfs-based).
>
> Yeah, and I think the reproducibility issue is another problem, which
> means it's quite hard to pick an arbitrary filesystem and get the best
> result without some changes. I do find these guys working on e2fsprogs
> again and again.
>
> I've already told them internally again and again, but.. they only focus
> on some minor points, such as how to do I/O and CPU prefetch to get
> (somewhat) better performance and beat EROFS. I don't know; I don't even
> have enough time to look into whether this new kernel stuff is fine,
> because of a very simple observation:
>
> stacked storage generally incurs a double runtime/memory
> footprint:
> filesystem + block drivers
>
>>
>>> I appreciate that this work is being done with an eye toward
>>> containerd "community" and standardization but based on my limited
>>> research it appears that this format of OCI image storage/use is only
>>> used by Alibaba? (but I could be wrong...)
>>>
>>> But you'd do well to explain why the userspace solution isn't
>>> acceptable. Are there security issues that moving the implementation
>>> into the kernel addresses?
>>>
>>> I also have doubts that this solution is _actually_ more performant
>>> than a proper filesystem based solution that allows page cache sharing
>>> of container image data across multiple containers.
>> This solution doesn't even allow page cache sharing between shared
>> layers (like current containers do), much less between independent
>> layers.
>>
>>> There is an active discussion about, and active development effort
>>> for, using overlayfs + erofs for container images. I'm reluctant to
>>> merge this DM based container image approach without wider consensus
>>> from other container stakeholders.
>>>
>>> But short of reaching wider consensus on the need for these DM
>>> targets: there is nothing preventing you from carrying these changes
>>> in your alibaba kernel.
>> EROFS already has some block-level support for container images (with
>> nydus), and composefs works with the current in-kernel EROFS +
>> overlayfs. And this new approach doesn't help with what is IMHO our
>> current weak spot: unprivileged container images.
>> Also, while OCI artifacts can be used to store any kind of image
>> format (or any other kind of file), I think for an actual standardized
>> new image format it would be better to work with the OCI org to come
>> up with an OCI v2 standard image format.
> Agreed. I hope you guys could actually sit down and evaluate a proper
> solution for the next OCI v2; currently I know of:
>
> - Composefs
> - (e)stargz https://github.com/containerd/stargz-snapshotter
> - Nydus https://github.com/containerd/nydus-snapshotter
> - OverlayBD https://github.com/containerd/accelerated-container-image
> - SOCI https://github.com/awslabs/soci-snapshotter
> - Tarfs
> - (maybe even more..)
>
> Honestly, I do think OSTree/Composefs is the best approach for now for
> deduplication and page cache sharing (due to the kernel's limitations
> on page cache sharing and the overlayfs copy-up limitation). I'm too
> tired of container image stuff, honestly. Too much unnecessary
> manpower waste.
For a file-based storage model, I am not sure a new format would really
buy us much, or that it could be significantly different.
Without proper support from the kernel, a new format would still need
to create the layout overlayfs expects, so it wouldn't be much different
from what we have now.
The current OCI format, with some tweaks like (e)stargz or zstd:chunked,
already makes its content addressable, so a client can retrieve only the
subset of the files that are needed. At the same time, we maintain the
simplicity of a tarball and don't break existing clients.
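To make "retrieve only the subset" concrete, here is an illustrative
sketch (mine, not the real eStargz or zstd:chunked format): once a
client has parsed the layer's table of contents, each wanted file is
just a byte span to read out of the blob. The toc_entry struct and the
sample entry are hypothetical; over the network, the pread() becomes an
HTTP Range request for the same span.

/* Sketch only: pulling a single file out of a seekable layer blob via
 * its table of contents. The TOC layout here is hypothetical. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

struct toc_entry {
	const char *name;   /* path inside the image */
	off_t offset;       /* where this file's span starts in the blob */
	size_t size;        /* length of the span */
};

/* Read one file's span out of the local blob; over the network this
 * same (offset, size) pair becomes an HTTP Range request. */
static void *fetch_one(int blob_fd, const struct toc_entry *e)
{
	void *data = malloc(e->size);
	if (data && pread(blob_fd, data, e->size, e->offset) != (ssize_t)e->size) {
		free(data);
		data = NULL;
	}
	return data; /* still compressed; each span decompresses independently */
}

int main(int argc, char **argv)
{
	if (argc != 2)
		return 1;
	int fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;
	/* Hypothetical TOC entry; a real client parses it from the layer. */
	struct toc_entry sample = { "etc/passwd", 4096, 812 };
	void *data = fetch_one(fd, &sample);
	printf("%s: %s\n", sample.name, data ? "fetched" : "missing");
	free(data);
	close(fd);
	return 0;
}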
IMO, the most interesting problem is how to store these images locally
and how the kernel can help with that.
The idea behind composefs is to replace the existing storage model used
for overlay, where each layer has its own directory, with a single
directory where all the files are stored by their checksum. The
expected layout is then recreated at runtime.
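As a rough illustration of that model (my sketch, not the actual
composefs/ostree tooling): ingesting a file means hashing its content
and hard-linking it into a single checksum-addressed objects directory,
so identical files across images collapse into one inode; the expected
directory tree is rebuilt separately, pointing at those objects. The
two-level fan-out below is just one common convention.

/* Sketch of a checksum-addressed object store; illustrative only,
 * not the actual composefs/ostree code. Build with -lcrypto. */
#include <errno.h>
#include <openssl/sha.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hash the file's content and hard-link it into
 * <store>/<first 2 hex digits>/<remaining digits>. */
static int store_object(const char *path, const char *store)
{
	FILE *f = fopen(path, "rb");
	if (!f)
		return -1;

	SHA256_CTX ctx;
	SHA256_Init(&ctx);
	unsigned char buf[65536], digest[SHA256_DIGEST_LENGTH];
	size_t n;
	while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
		SHA256_Update(&ctx, buf, n);
	SHA256_Final(digest, &ctx);
	fclose(f);

	char hex[2 * SHA256_DIGEST_LENGTH + 1];
	for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
		sprintf(hex + 2 * i, "%02x", digest[i]);

	char obj[4096];
	snprintf(obj, sizeof(obj), "%s/%.2s", store, hex);
	mkdir(store, 0755);          /* ignore EEXIST for brevity */
	mkdir(obj, 0755);
	snprintf(obj, sizeof(obj), "%s/%.2s/%s", store, hex, hex + 2);

	/* Identical content hashes to the same name, so duplicate
	 * files across layers share a single inode. */
	if (link(path, obj) != 0 && errno != EEXIST)
		return -1;
	return 0;
}

int main(int argc, char **argv)
{
	return (argc == 3 && store_object(argv[1], argv[2]) == 0) ? 0 : 1;
}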