Message-ID: <70ad6f1e.dba.19b99951948.Coremail.00107082@163.com>
Date: Thu, 8 Jan 2026 01:50:44 +0800 (CST)
From: "David Wang" <00107082@....com>
To: "Kent Overstreet" <kent.overstreet@...ux.dev>, malcolm@...k.id.au
Cc: "Suren Baghdasaryan" <surenb@...gle.com>, akpm@...ux-foundation.org,
	hannes@...xchg.org, pasha.tatashin@...een.com,
	souravpanda@...gle.com, vbabka@...e.cz, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH RFC] alloc_tag: add option to pick the first codetag
 along callchain



At 2026-01-08 00:13:25, "Kent Overstreet" <kent.overstreet@...ux.dev> wrote:
>On Wed, Jan 07, 2026 at 02:16:24PM +0800, David Wang wrote:
>> 
>> At 2026-01-07 12:07:34, "Kent Overstreet" <kent.overstreet@...ux.dev> wrote:
>> >I'm curious why you need to change __filemap_get_folio()? In filesystem
>> >land we just lump that under "pagecache", but I guess you're doing more
>> >interesting things with it in driver land?
>> 
>> Oh, in [1] there is a report about a possible memory leak in cephfs (the issue is still open, tracked in [2]):
>> a large chunk of memory could not be released even after dropping caches.
>> Memory allocation profiling shows that memory belongs to __filemap_get_folio,
>> something like:
>> >> ># sort -g /proc/allocinfo|tail|numfmt --to=iec
>> >         12M     2987 mm/execmem.c:41 func:execmem_vmalloc 
>> >         12M        3 kernel/dma/pool.c:96 func:atomic_pool_expand 
>> >         13M      751 mm/slub.c:3061 func:alloc_slab_page 
>> >         16M        8 mm/khugepaged.c:1069 func:alloc_charge_folio 
>> >         18M     4355 mm/memory.c:1190 func:folio_prealloc 
>> >         24M     6119 mm/memory.c:1192 func:folio_prealloc 
>> >         58M    14784 mm/page_ext.c:271 func:alloc_page_ext 
>> >         61M    15448 mm/readahead.c:189 func:ractl_alloc_folio 
>> >         79M     6726 mm/slub.c:3059 func:alloc_slab_page 
>> >         11G  2674488 mm/filemap.c:2012 func:__filemap_get_folio
>> 
>> After adding a codetag to __filemap_get_folio, it shows:
>> 
>> ># sort -g /proc/allocinfo|tail|numfmt --to=iec
>> >         10M     2541 drivers/block/zram/zram_drv.c:1597 [zram] func:zram_meta_alloc 
>> >         12M     3001 mm/execmem.c:41 func:execmem_vmalloc 
>> >         12M     3605 kernel/fork.c:311 func:alloc_thread_stack_node 
>> >         16M      992 mm/slub.c:3061 func:alloc_slab_page 
>> >         20M    35544 lib/xarray.c:378 func:xas_alloc 
>> >         31M     7704 mm/memory.c:1192 func:folio_prealloc 
>> >         69M    17562 mm/memory.c:1190 func:folio_prealloc 
>> >        104M     8212 mm/slub.c:3059 func:alloc_slab_page 
>> >        124M    30075 mm/readahead.c:189 func:ractl_alloc_folio 
>> >        2.6G   661392 fs/netfs/buffered_read.c:635 [netfs] func:netfs_write_begin 
>> >
>> 
>> Helpful or not, I am not sure; so far no bug has been spotted in the cephfs write path.
>> But at least it provides more information and narrows down the scope of suspicion.
>> 
>> 
>> https://lore.kernel.org/lkml/2a9ba88e.3aa6.19b0b73dd4e.Coremail.00107082@163.com/  [1]
>> https://tracker.ceph.com/issues/74156   [2]
>
>Well, my first thought when looking at that is that memory allocation
>profiling is unlikely to be any more help there. Once you're dealing
>with the page cache, if you're looking at a genuine leak it would pretty
>much have to be a folio refcount leak, and the code that leaked the ref
>could be anything that touched that folio - you're looking at a pretty
>wide scope.
>
>Unfortunately, we're not great at visibility and introspection in mm/,
>and refcount bugs tend to be hard in general.
>
>Better mm introspection would be helpful to say definitively that you're
>looking at a refcount leak, but then once that's determined it's still
>going to be pretty painful to track down.
>
>The approach I took in bcachefs for refcount bugs was to write a small
>library that in debug mode splits a refcount into sub-refcounts, and
>then enumerate every single codepath that takes refs and gives them
>distinct sub-refs - this means in debug mode we can instantly pinpoint
>the function that's buggy (and even better, with the new CLASS() and
>guard() stuff these sorts of bugs have been going away).
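
If I follow, the debug-mode split looks roughly like the sketch below --
all names here are invented for illustration, not the actual bcachefs
library:

#include <linux/atomic.h>
#include <linux/bug.h>

/*
 * Debug-mode split refcount: one counter per codepath that can take a
 * ref, so a nonzero counter at teardown directly names the leaking
 * path. Release builds keep a single atomic_t.
 */
enum foo_ref_src {
	FOO_REF_readahead,
	FOO_REF_writeback,
	FOO_REF_ioctl,
	FOO_REF_NR,
};

struct foo_ref {
#ifdef CONFIG_FOO_DEBUG
	atomic_t	refs[FOO_REF_NR];
#else
	atomic_t	ref;
#endif
};

static inline void foo_ref_get(struct foo_ref *r, enum foo_ref_src src)
{
#ifdef CONFIG_FOO_DEBUG
	atomic_inc(&r->refs[src]);
#else
	atomic_inc(&r->ref);
#endif
}

static inline void foo_ref_put(struct foo_ref *r, enum foo_ref_src src)
{
#ifdef CONFIG_FOO_DEBUG
	/* Underflow here pinpoints the sub-ref, i.e. the buggy caller. */
	WARN_ON(atomic_dec_return(&r->refs[src]) < 0);
#else
	atomic_dec(&r->ref);
#endif
}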
>
>But grafting that onto folio refcounts would be a hell of a chore.
>
>OTOH, converting code to CLASS() and guards is much more
>straightforward - just a matter of writing little helpers if you need
>them and then a bunch of mechanical conversions, and it's well worth it.
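
For anyone following along, the guard() machinery lives in
include/linux/cleanup.h; a conversion typically looks like this
(foo_update() and its mutex are made up for the example):

#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(foo_lock);

/* Before: every early return has to remember the unlock. */
static int foo_update_old(int v)
{
	mutex_lock(&foo_lock);
	if (v < 0) {
		mutex_unlock(&foo_lock);
		return -EINVAL;
	}
	mutex_unlock(&foo_lock);
	return 0;
}

/* After: guard() unlocks automatically at scope exit, so no return
 * path can leak the lock. */
static int foo_update(int v)
{
	guard(mutex)(&foo_lock);
	return v < 0 ? -EINVAL : 0;
}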
>
>But, I'm reading through the Ceph code, and it has /less/ code involving
>folio refcounts than I would expect.
>
>Has anyone checked if the bug reproduces without zswap? I've definitely
>seen a lot of bug reports involving that code.

Thanks for the information, and for your time~!
Adding malcolm@...k.id.au to the thread.

Actually I don't even have access to a cephfs setup to confirm the bug;
I was just interested in the "memory leak" issue, and tried to "sell" memory allocation profiling there. :)
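
For reference, the mechanism here is just the usual alloc_hooks()
pattern from include/linux/alloc_tag.h -- keep the real body under a
_noprof name and make the old name a macro, so each call site gets its
own codetag in /proc/allocinfo. A rough sketch (the exact hunks in the
patch may differ):

/* The function keeps its real body under a _noprof name... */
struct folio *__filemap_get_folio_noprof(struct address_space *mapping,
					 pgoff_t index, fgf_t fgf_flags,
					 gfp_t gfp);

/* ...and the old name becomes a macro, so the codetag created by
 * alloc_hooks() is attributed to the caller's file:line rather than
 * to mm/filemap.c itself. */
#define __filemap_get_folio(...) \
	alloc_hooks(__filemap_get_folio_noprof(__VA_ARGS__))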



Thanks
David
