Message-ID: <20251217164602.563ddae3@xps15mal>
Date: Wed, 17 Dec 2025 16:46:02 +1000
From: Mal Haak <malcolm@...k.id.au>
To: "David Wang" <00107082@....com>
Cc: "Viacheslav Dubeyko" <Slava.Dubeyko@....com>,
 "ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>, "Xiubo Li"
 <xiubli@...hat.com>, "idryomov@...il.com" <idryomov@...il.com>,
 "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
 "surenb@...gle.com" <surenb@...gle.com>
Subject: Re: Possible memory leak in 6.17.7

On Wed, 17 Dec 2025 13:59:47 +0800 (CST)
"David Wang" <00107082@....com> wrote:

> At 2025-12-16 09:26:47, "Mal Haak" <malcolm@...k.id.au> wrote:
> >On Mon, 15 Dec 2025 19:42:56 +0000
> >Viacheslav Dubeyko <Slava.Dubeyko@....com> wrote:
> >  
> >> Hi Mal,
> >>   
> ><SNIP>   
> >> 
> >> Thanks a lot for reporting the issue. Finally, I can see the
> >> discussion on the mailing list. :) Are you working on a patch with
> >> the fix? Should we wait for the fix, or do I need to start the
> >> issue reproduction and investigation? I am simply trying to avoid
> >> patch collisions and, also, I have multiple other issues to fix in
> >> the CephFS kernel client. :)
> >> 
> >> Thanks,
> >> Slava.  
> >
> >Hello,
> >
> >Unfortunately, creating a patch is just outside my comfort zone;
> >I've lived too long in Lustre land.
> >
> >I have been trying to narrow down a consistent reproducer that's as
> >fast as my production workload (it crashes a 32GB VM in 2hrs), and I
> >haven't got one quite as fast yet. I think the dd workload is too
> >well behaved.
> >
> >I can confirm the issue appeared in the major patch set that was
> >applied as part of the 6.15 kernel, i.e. during the more complete
> >pages-to-folios switch, and that nothing has changed in the bug
> >behaviour since then. I did have a look at all the diffs from 6.14
> >to 6.18 on addr.c and didn't see any changes post 6.15 that looked
> >like they would impact the bug behaviour.
> 
> Hi,
> Just a suggestion, in case you run out of ideas for further
> investigation: I think you can bisect *manually*, targeting changes
> to fs/ceph between 6.14 and 6.15.
> 
> 
> $ git log  --pretty='format:%h %an' v6.14..v6.15 fs/ceph
> 349b7d77f5a1 Linus Torvalds
> b261d2222063 Eric Biggers
> f452a2204614 David Howells
> e63046adefc0 Linus Torvalds
> 59b59a943177 Matthew Wilcox (Oracle)  <-----------3
> efbdd92ed9f6 Matthew Wilcox (Oracle)
> d1b452673af4 Matthew Wilcox (Oracle)
> ad49fe2b3d54 Matthew Wilcox (Oracle)
> a55cf4fd8fae Matthew Wilcox (Oracle)
> 15fdaf2fd60d Matthew Wilcox (Oracle)
> 62171c16da60 Matthew Wilcox (Oracle)
> baff9740bc8f Matthew Wilcox (Oracle)
> f9707a8b5b9d Matthew Wilcox (Oracle)
> 88a59bda3f37 Matthew Wilcox (Oracle)
> 19a288110435 Matthew Wilcox (Oracle)
> fd7449d937e7 Viacheslav Dubeyko  <---------2
> 1551ec61dc55 Viacheslav Dubeyko
> ce80b76dd327 Viacheslav Dubeyko
> f08068df4aa4 Viacheslav Dubeyko
> 3f92c7b57687 NeilBrown               <-----------1
> 88d5baf69082 NeilBrown
> 
> There were 3 major patch sets (grouped by author), so the suspect
> could be narrowed down further.
> 
> 
> (Bisecting, even over a short range of patches, is quite an
> unpleasant experience though...)
> 
> FYI
> David
> 
> >
<SNIP>

Yeah, I don't think a small patch is the cause of the issue.

It looks like there was a patch set that migrated cephfs off handling
individual pages and onto folios, to enable wider use of netfs features
such as local caching and encryption. I'm not sure that set can be
broken up and still result in a working cephfs module, which limits the
utility of a git bisect. I'm pretty sure the issue is in addr.c
somewhere, and most of the changes in there are one patch. That said,
after I get this crash dump I'll probably do it anyway.
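
If/when I do, it'll just be the standard path-limited bisect over the
range David listed, roughly along these lines (a sketch; the good/bad
calls obviously depend on whether the leak shows up after a few hours
of the workload on each candidate kernel):

$ git bisect start v6.15 v6.14 -- fs/ceph
  ... build, boot, run the reproducer for a few hours ...
$ git bisect good        # or 'git bisect bad' if memory starts leaking
  ... repeat until it converges on a commit ...
$ git bisect reset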

What I really need to do is get a crash dump to look at what state the
folios and their tracking are in, assuming I can grok what I'm looking
at. That's the bit I'm most apprehensive about. I'm hoping I can find a
list of folios used by the reclaim machinery that is missing a bunch of
folios, or else a bunch with inflated refcounts or something.
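
Rough first pass in the crash utility, assuming kdump hands me a
usable vmcore and I have matching debuginfo (just the obvious pokes;
the addresses below are placeholders):

  kmem -i               - overall memory accounting, where it has gone
  kmem -s               - slab caches, is anything ceph/netfs bloated
  kmem <suspect-addr>   - map a suspect address back to its page/slab
  struct folio <addr>   - refcount/flags/mapping of a suspect folio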

Something is going awry, but it's not fast. I thought I had a quick
reproducer; I was wrong. I had sized the dd workload incorrectly and
triggered panic_on_oom because of that, not because of the bug.
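
For the record, the resized workload is nothing clever, just buffered
writes and read-backs over the cephfs mount in a loop, something along
these lines (the mount point, file count and sizes here are
illustrative rather than the exact values I'm using, and need tuning
so that dirty data alone can't OOM the box):

  while true; do
      for i in $(seq 1 8); do
          dd if=/dev/zero of=/mnt/cephfs/leaktest.$i bs=4M count=128 conv=fsync
          dd if=/mnt/cephfs/leaktest.$i of=/dev/null bs=4M
      done
      rm -f /mnt/cephfs/leaktest.*
  done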

I'm re-running the reproducer now on a VM with 2GB of RAM. It's been
running for around 3 hrs and I think it has leaked at most 100-150MB
of RAM. (It was averaging 190-200MB of non-cache usage; it's now
averaging 290-340MB.)
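
For anyone who wants to watch for the same drift, periodic
/proc/meminfo snapshots are enough to see it; something like this
running alongside the workload (fields and interval are arbitrary):

  while sleep 60; do
      echo "== $(date)"
      grep -E '^(MemTotal|MemAvailable|Cached|Slab|SUnreclaim)' /proc/meminfo
  done >> ceph-leak.log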

It does accelerate: the more folios are in the weird state, the more
end up in the weird state. That feels like a fragmentation side
effect, but I'm just speculating.

Anyway, one of the wonderful ceph developers is looking into it. I
just hope I can do enough to help them locate the issue. They were
having trouble reproducing it last I heard, though they may have been
expecting a slightly faster reproducer.

I have, however, recreated it on a physical host, not just a VM, so I
feel I can rule out virtualization as a cause.

Anyway, thanks for your continued assistance!

Mal
