Message-ID: <CAH5Ym4j9Sgzng9SUB8ONcX1nLCcdRn7A9G1YbpZXOi3ctQT5BQ@mail.gmail.com>
Date: Mon, 5 Jan 2026 22:53:24 -0800
From: Sam Edwards <cfsworks@...il.com>
To: Viacheslav Dubeyko <Slava.Dubeyko@....com>
Cc: Xiubo Li <xiubli@...hat.com>, "idryomov@...il.com" <idryomov@...il.com>, 
	Milind Changire <mchangir@...hat.com>, "stable@...r.kernel.org" <stable@...r.kernel.org>, 
	"ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>, "brauner@...nel.org" <brauner@...nel.org>, 
	"jlayton@...nel.org" <jlayton@...nel.org>, 
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 5/5] ceph: Fix write storm on fscrypted files

On Mon, Jan 5, 2026 at 2:34 PM Viacheslav Dubeyko <Slava.Dubeyko@....com> wrote:
>
> On Tue, 2025-12-30 at 18:43 -0800, Sam Edwards wrote:
> > CephFS stores file data across multiple RADOS objects. An object is the
> > atomic unit of storage, so the writeback code must clean only folios
> > that belong to the same object with each OSD request.
> >
> > CephFS also supports RAID0-style striping of file contents: if enabled,
> > each object stores multiple unbroken "stripe units" covering different
> > portions of the file; if disabled, a "stripe unit" is simply the whole
> > object. The stripe unit is (usually) reported as the inode's block size.
> >
> > Though the writeback logic could, in principle, lock all dirty folios
> > belonging to the same object, its current design is to lock only a
> > single stripe unit at a time. Ever since this code was first written,
> > it has determined this size by checking the inode's block size.
> > However, the relatively new fscrypt support needed to reduce the block
> > size for encrypted inodes to the crypto block size (see 'fixes' commit),
> > which causes an unnecessarily high number of write operations (~1024x as
> > many, with 4MiB objects) and grossly degraded performance.

Hi Slava,

> Do you have any benchmarking results that prove your point?

I haven't done any "real" benchmarking for this change. On my setup
(closer to a home server than a typical Ceph deployment), sequential
write throughput increased from ~1.7 to ~66 MB/s with this patch
applied. I don't consider this single datapoint representative, so
rather than presenting it as a general benchmark in the commit
message, I chose the qualitative wording "grossly degraded
performance." Actual impact will vary depending on workload, disk
type, OSD count, etc.

Those curious about the bug's performance impact in their environment
can reproduce it without enabling fscrypt by mounting with -o wsize=4096.
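
For example, with the kernel client (the monitor address, client name,
and mountpoint below are placeholders):

    mount -t ceph 192.168.1.1:6789:/ /mnt/cephfs -o name=admin,wsize=4096

This caps every OSD write at 4 KiB, the same limit that fscrypt
currently imposes on encrypted inodes.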

However, the core rationale for my claim rests on first principles, not
on measurements: batching writes into fewer, larger operations amortizes
per-operation overhead across more bytes. This change therefore removes
an artificial per-op bottleneck on sequential write performance. The
exact impact varies, but the patch does improve (fscrypt-enabled) write
throughput in nearly every case.
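
To make the ~1024x figure concrete (assuming the default 4 MiB object
size, no striping, and CephFS's 4 KiB fscrypt block size):

    4 MiB object / 4 KiB fscrypt block = 1024 write ops per object
    4 MiB object / 4 MiB stripe unit   =    1 write op per object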

Warm regards,
Sam


>
> Thanks,
> Slava.
>
> >
> > Fix this (and clarify intent) by using i_layout.stripe_unit directly in
> > ceph_define_write_size() so that encrypted inodes are written back with
> > the same number of operations as if they were unencrypted.
> >
> > Fixes: 94af0470924c ("ceph: add some fscrypt guardrails")
> > Cc: stable@...r.kernel.org
> > Signed-off-by: Sam Edwards <CFSworks@...il.com>
> > ---
> >  fs/ceph/addr.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
> > index b3569d44d510..cb1da8e27c2b 100644
> > --- a/fs/ceph/addr.c
> > +++ b/fs/ceph/addr.c
> > @@ -1000,7 +1000,8 @@ unsigned int ceph_define_write_size(struct address_space *mapping)
> >  {
> >       struct inode *inode = mapping->host;
> >       struct ceph_fs_client *fsc = ceph_inode_to_fs_client(inode);
> > -     unsigned int wsize = i_blocksize(inode);
> > +     struct ceph_inode_info *ci = ceph_inode(inode);
> > +     unsigned int wsize = ci->i_layout.stripe_unit;
> >
> >       if (fsc->mount_options->wsize < wsize)
> >               wsize = fsc->mount_options->wsize;
