Date: Mon, 10 Oct 2022 10:57:37 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Zhaoyang Huang <huangzhaoyang@...il.com>
Cc: "zhaoyang.huang" <zhaoyang.huang@...soc.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Catalin Marinas <catalin.marinas@....com>,
Matthew Wilcox <willy@...radead.org>,
Marco Elver <elver@...gle.com>,
Imran Khan <imran.f.khan@...cle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, ke.wang@...soc.com,
steve.kang@...soc.com, Suren Baghdasaryan <surenb@...gle.com>,
Kent Overstreet <kent.overstreet@...il.com>
Subject: Re: [PATCH] mm: add stackdepot information on page->private for
tracking
On 10/9/22 04:25, Zhaoyang Huang wrote:
> On Fri, Oct 7, 2022 at 6:08 PM Vlastimil Babka <vbabka@...e.cz> wrote:
>>
>> On 10/6/22 05:19, zhaoyang.huang wrote:
>> > From: Zhaoyang Huang <zhaoyang.huang@...soc.com>
>> >
>> > The private field is vacant for most non-LRU pages, unless a user has
>> > explicitly operated on page->private via set_page_private. I would like
>> > to introduce stackdepot information on page->private as a simplified
>> > tracking mechanism, which could help find kernel drivers' memory leaks.
>> >
>> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@...soc.com>
>>
>> This duplicates the existing page_owner functionality in a way that
>> unconditionally adds overhead to all kernels that have CONFIG_STACKDEPOT
>> enabled build-time (and also misses the need to initialize stackdepot properly).
> Sure. This patch could be seen as a lightweight complement to page_owner,
> which depends on procfs in a live system for showing its results. This
> patch could mainly be helpful with RAM dumps, as it is hard to find
> page_ext for page_owner there. I would also like to make this optional
> via a defconfig item.
I'm still not convinced we need this, between existing page_owner and the
proposed code tagging framework.
https://lore.kernel.org/all/20220830214919.53220-1-surenb@google.com/
For finding page_ext in crash dumps, it's possible with a scriptable
debugger like drgn or crash-python.
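For example, mirroring the kernel's lookup_page_ext() and get_page_owner()
in a drgn session might look roughly like the following. This is pseudocode
only: `pfn_to_section` is a helper you would write by hand to reimplement
__pfn_to_section(), and every field name and offset must be verified against
the dumped kernel's configuration (SPARSEMEM assumed).

```python
# Pseudocode for a drgn session against a vmcore (drgn -c vmcore vmlinux);
# not runnable as-is. It mirrors the kernel's lookup_page_ext()/get_entry()
# and get_page_owner() logic.
pfn = page_to_pfn(page)            # drgn ships pfn/page conversion helpers
section = pfn_to_section(pfn)      # reimplement __pfn_to_section() by hand
assert section.page_ext            # page_ext may be absent for the section

# get_entry(): page_ext entries are page_ext_size bytes apart, indexed by pfn
ext = section.page_ext.value_() + prog["page_ext_size"].value_() * pfn
# page_owner data lives at a runtime-chosen offset inside struct page_ext
owner = ext + prog["page_owner_ops"].offset.value_()
```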
>>
>> Also, I wouldn't be surprised if some existing page->private users were
>> actually confused by the field suddenly being non-zero without their own
>> action.
> IMO, the existing page->private users will overwrite this field directly
> without being disturbed by the handle.
Well, the bot wasn't happy so far:
https://lore.kernel.org/all/202210072204.cfea59d3-oliver.sang@intel.com/
>>
>> > ---
>> > mm/page_alloc.c | 28 +++++++++++++++++++++++++++-
>> > 1 file changed, 27 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> > index e5486d4..b79a503 100644
>> > --- a/mm/page_alloc.c
>> > +++ b/mm/page_alloc.c
>> > @@ -75,6 +75,7 @@
>> > #include <linux/khugepaged.h>
>> > #include <linux/buffer_head.h>
>> > #include <linux/delayacct.h>
>> > +#include <linux/stackdepot.h>
>> > #include <asm/sections.h>
>> > #include <asm/tlbflush.h>
>> > #include <asm/div64.h>
>> > @@ -2464,6 +2465,25 @@ static inline bool should_skip_init(gfp_t flags)
>> > return (flags & __GFP_SKIP_ZERO);
>> > }
>> >
>> > +#ifdef CONFIG_STACKDEPOT
>> > +static noinline depot_stack_handle_t set_track_prepare(void)
>> > +{
>> > + depot_stack_handle_t trace_handle;
>> > + unsigned long entries[16];
>> > + unsigned int nr_entries;
>> > +
>> > + nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3);
>> > + trace_handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT);
>> > +
>> > + return trace_handle;
>> > +}
>> > +#else
>> > +static inline depot_stack_handle_t set_track_prepare(void)
>> > +{
>> > + return 0;
>> > +}
>> > +#endif
>> > +
>> > inline void post_alloc_hook(struct page *page, unsigned int order,
>> > gfp_t gfp_flags)
>> > {
>> > @@ -2471,8 +2491,14 @@ inline void post_alloc_hook(struct page *page, unsigned int order,
>> > !should_skip_init(gfp_flags);
>> > bool init_tags = init && (gfp_flags & __GFP_ZEROTAGS);
>> > int i;
>> > + depot_stack_handle_t stack_handle = set_track_prepare();
>> >
>> > - set_page_private(page, 0);
>> > + /*
>> > + * Don't worry, user will cover private directly without checking
>> > + * this field and has ability to trace the page. This also will not
>> > + * affect expected state when freeing
>> > + */
>> > + set_page_private(page, stack_handle);
>> > set_page_refcounted(page);
>> >
>> > arch_alloc_page(page, order);
>>
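For readers following along, the mechanism under review - intern each
allocation stack once in a depot and keep only a small integer handle in
page->private - can be modeled in a few lines of plain Python. This is
purely illustrative; none of the names below are kernel API.

```python
# Toy model of stackdepot-style interning: identical stack traces are stored
# once, and each "page" records only a small integer handle (standing in for
# depot_stack_handle_t in page->private).
from collections import Counter

class StackDepot:
    def __init__(self):
        self._handle_by_trace = {}  # trace tuple -> handle
        self._traces = []           # handle -> trace tuple

    def save(self, trace):
        """Intern `trace` and return its handle (cf. stack_depot_save())."""
        trace = tuple(trace)
        handle = self._handle_by_trace.get(trace)
        if handle is None:
            handle = len(self._traces)
            self._handle_by_trace[trace] = handle
            self._traces.append(trace)
        return handle

    def fetch(self, handle):
        """Return the interned trace (cf. stack_depot_fetch())."""
        return self._traces[handle]

depot = StackDepot()

# Simulate allocations: each "page" keeps only the depot handle.
page_private = [depot.save(t) for t in (
    ("driver_probe", "alloc_buffers"),
    ("irq_handler", "alloc_skb"),
    ("driver_probe", "alloc_buffers"),
    ("driver_probe", "alloc_buffers"),
)]

# Leak hunting: group still-allocated pages by their allocation stack.
handle, count = Counter(page_private).most_common(1)[0]
print(count, "pages from", depot.fetch(handle))  # → 3 pages from ('driver_probe', 'alloc_buffers')
```

The same grouping-by-handle step is what a post-mortem tool would do over a
RAM dump: walk the pages, collect the handles, and rank stacks by how many
live pages they account for.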