Message-Id: <20210401170519.00824fbdf8ab60b720609422@linux-foundation.org>
Date: Thu, 1 Apr 2021 17:05:19 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Sergei Trofimovich <slyfox@...too.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: [PATCH] mm: page_owner: detect page_owner recursion via
task_struct
On Thu, 1 Apr 2021 23:30:10 +0100 Sergei Trofimovich <slyfox@...too.org> wrote:
> Before the change page_owner recursion was detected by fetching a
> backtrace and inspecting it for the current instruction pointer.
> That approach has a few problems:
> - it is slightly slow, as it requires an extra backtrace and a linear
>   stack scan of the result
> - the check happens too late to catch the case where fetching the
>   backtrace itself requires a memory allocation (as ia64's unwinder
>   does).
>
> To simplify recursion tracking let's use page_owner recursion depth
> as a counter in 'struct task_struct'.
Seems like a better approach.
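If I'm reading it right, the counter turns the recursive call into a
cheap early return.  A minimal sketch of the guard, assuming the elided
mm/page_owner.c hunk does something along these lines (untested, my
reconstruction, not the patch's actual code):

	noinline void __set_page_owner(struct page *page, unsigned int order,
				       gfp_t gfp_mask)
	{
		depot_stack_handle_t handle;

		if (unlikely(current->page_owner_depth >=
			     PAGE_OWNER_MAX_RECURSION_DEPTH))
			return;		/* reentered via the allocator: bail out */

		current->page_owner_depth++;
		handle = save_stack(gfp_mask);	/* may allocate, hence may reenter */
		current->page_owner_depth--;

		/* ... store handle in the page's page_owner metadata ... */
	}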
> The change makes page_owner=on work on ia64 by avoiding infinite
> recursion in:
> kmalloc()
> -> __set_page_owner()
> -> save_stack()
> -> unwind() [ia64-specific]
> -> build_script()
> -> kmalloc()
> -> __set_page_owner() [we short-circuit here]
> -> save_stack()
> -> unwind() [recursion]
>
> ...
>
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -1371,6 +1371,15 @@ struct task_struct {
> struct llist_head kretprobe_instances;
> #endif
>
> +#ifdef CONFIG_PAGE_OWNER
> + /*
> + * Used by page_owner=on to detect recursion in page tracking.
> + * Is it fine to have non-atomic ops here if we ever access
> + * this variable via current->page_owner_depth?
Yes, it is fine: the field is only ever accessed via current, so no
other task can touch it and atomics aren't needed. This part of the
comment can be removed.
> + */
> + unsigned int page_owner_depth;
> +#endif
Adding to the task_struct has a cost. But I don't expect that
PAGE_OWNER is commonly used in production builds (correct?).
> --- a/init/init_task.c
> +++ b/init/init_task.c
> @@ -213,6 +213,9 @@ struct task_struct init_task
> #ifdef CONFIG_SECCOMP
> .seccomp = { .filter_count = ATOMIC_INIT(0) },
> #endif
> +#ifdef CONFIG_PAGE_OWNER
> + .page_owner_depth = 0,
> +#endif
> };
> EXPORT_SYMBOL(init_task);
It will be initialized to zero by the compiler. We can omit this hunk
entirely.
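(Standalone illustration, not kernel code: the C standard guarantees
that members omitted from a designated initializer are implicitly
zero-initialized, the same as static-storage objects:

	struct t {
		int filter_count;
		unsigned int page_owner_depth;
	};
	static struct t init = { .filter_count = 1 };
	/* init.page_owner_depth is guaranteed to be 0 */

so the explicit ".page_owner_depth = 0" buys us nothing.)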
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -20,6 +20,16 @@
> */
> #define PAGE_OWNER_STACK_DEPTH (16)
>
> +/*
> + * How many reentries into page_owner we allow.
> + *
> + * Sometimes metadata allocation tracking requires more memory to be allocated:
> + * - when a new stack trace is saved to the stack depot
> + * - when the backtrace itself is calculated (ia64)
> + * Instead of falling into infinite recursion give it a chance to recover.
> + */
> +#define PAGE_OWNER_MAX_RECURSION_DEPTH (1)
So this is presently a boolean. Is there any expectation that
PAGE_OWNER_MAX_RECURSION_DEPTH will ever be greater than 1? If not, we
could use a single bit in the task_struct: add it to the
"Unserialized, strictly 'current'" bitfields. We could make it a
two-bit field if we ever want to permit a larger
PAGE_OWNER_MAX_RECURSION_DEPTH.
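Something like the following, perhaps (untested sketch; the field name
is my invention):

	/*
	 * In the "Unserialized, strictly 'current'" bitfield group of
	 * struct task_struct:
	 */
	unsigned			in_page_owner:1;

with the guard in __set_page_owner() becoming

	if (unlikely(current->in_page_owner))
		return;
	current->in_page_owner = 1;
	/* ... save and record the stack ... */
	current->in_page_owner = 0;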