Date:   Tue, 04 Jan 2022 07:00:31 -0500
From:   Jeff Layton <jlayton@...nel.org>
To:     Bastian Blank <bastian.blank@...dativ.de>,
        Ilya Dryomov <idryomov@...il.com>
Cc:     ceph-devel@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: PROBLEM: SLAB use-after-free with ceph(fs)

On Tue, 2022-01-04 at 10:49 +0100, Bastian Blank wrote:
> Hi
> 
> A customer reported panics inside memory management.  Before all
> occurrences there are reports about a SLAB mismatch in the log.  The
> "crash" tool shows freelist corruption in the memory dump.  This points
> to a use-after-free somewhere inside the ceph module.
> 
> The crashes happen during high-load situations, while copying data
> between two cephfs filesystems.
> 
> > [152791.777454] ceph:  dropping dirty+flushing - state for 00000000c039d4cc 1099526092092
> > [152791.777457] ------------[ cut here ]------------
> > [152791.777458] cache_from_obj: Wrong slab cache. jbd2_journal_handle but object is from kmalloc-256
> > [152791.777473] WARNING: CPU: 76 PID: 2676615 at mm/slab.h:521 kmem_cache_free+0x260/0x2b0
> […]
> > [152791.777530] CPU: 76 PID: 2676615 Comm: kworker/76:2 Kdump: loaded Not tainted 5.4.0-81-generic #91-Ubuntu
> > [152791.777531] Hardware name: HPE ProLiant DL385 Gen10/ProLiant DL385 Gen10, BIOS A40 10/28/2021
> > [152791.777540] Workqueue: ceph-msgr ceph_con_workfn [libceph]
> > [152791.777542] RIP: 0010:kmem_cache_free+0x260/0x2b0
> […]
> > [152791.777550] Call Trace:
> > [152791.777562]  ceph_free_cap_flush+0x1d/0x20 [ceph]
> > [152791.777568]  remove_session_caps_cb+0xcf/0x4b0 [ceph]
> > [152791.777573]  ceph_iterate_session_caps+0xc8/0x2a0 [ceph]
> > [152791.777578]  ? wake_up_session_cb+0xe0/0xe0 [ceph]
> > [152791.777583]  remove_session_caps+0x55/0x190 [ceph]
> > [152791.777587]  ? cleanup_session_requests+0x104/0x130 [ceph]
> > [152791.777592]  handle_session+0x4c7/0x5e0 [ceph]
> > [152791.777597]  dispatch+0x279/0x610 [ceph]
> > [152791.777602]  try_read+0x566/0x8c0 [libceph]
> 
> They reported the same behaviour on all tested kernels from 5.4 up to
> 5.15.5 or so.  Sadly, I have no test results from newer builds
> available.
> 
> Any ideas how I can debug this further?
> 
> Regards,
> Bastian
> 

At first blush, this looks like the same problem as:

    https://tracker.ceph.com/issues/52283

...but that should have been fixed in v5.14.

Do you have a more complete stack trace, preferably from your v5.15-ish
kernel? Log messages leading up to the WARNING may also be helpful. It
may be best to open a bug at https://tracker.ceph.com.

The log message before the [ cut here ] line indicates that the client
was trying to drop caps in response to a session message from the MDS or
maybe a map change. Was the mount force-umounted or the client
blacklisted or something?
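
If you're not sure whether the client got blacklisted, a couple of quick
checks may help. This is just a rough sketch, assuming you have admin
access to the cluster; the command is spelled "blacklist" on older Ceph
releases and "blocklist" on newer ones:

    # list client addresses the cluster has currently blacklisted
    ceph osd blacklist ls
    # scan the client-side kernel log for session/blacklist messages
    dmesg | grep -iE 'blacklist|blocklist|ceph'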

You may also want to try v5.16-rc8 if you're able to build your own
kernels. There were some patches that went in to improve how the client
handles inodes that become inaccessible.
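
If rebuilding with extra debugging is an option, catching the stray free
at the point it happens usually narrows things down much faster than the
crash dump. A rough sketch, assuming your kernel uses SLUB (the Ubuntu
generic kernels do) and that ceph_cap_flush is one of the caches
involved:

    # build-time: KASAN reports the use-after-free at the offending access
    CONFIG_KASAN=y
    # or, cheaper: boot with SLUB debugging enabled for specific caches
    slub_debug=FZPU,kmalloc-256,ceph_cap_flush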
-- 
Jeff Layton <jlayton@...nel.org>
