Message-ID: <ea7e62f1-d8a3-0ece-c373-931b85de7b5d@suse.cz>
Date: Wed, 7 Apr 2021 14:32:03 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Sergei Trofimovich <slyfox@...too.org>,
Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org
Cc: linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Bristot de Oliveira <bristot@...hat.com>
Subject: Re: [PATCH v2] mm: page_owner: detect page_owner recursion via
task_struct
On 4/2/21 1:53 PM, Sergei Trofimovich wrote:
> Before this change, page_owner recursion was detected by fetching
> a backtrace and inspecting it for the current instruction pointer.
> That approach has a few problems:
> - it is slightly slow, as it requires an extra backtrace and a linear
> stack scan of the result
> - the check happens too late if fetching the backtrace itself
> requires a memory allocation (as ia64's unwinder does).
>
> To simplify recursion tracking, let's use a page_owner recursion flag
> in 'struct task_struct'.
>
> The change makes page_owner=on work on ia64 by avoiding infinite
> recursion in:
> kmalloc()
> -> __set_page_owner()
> -> save_stack()
> -> unwind() [ia64-specific]
> -> build_script()
> -> kmalloc()
> -> __set_page_owner() [we short-circuit here]
> -> save_stack()
> -> unwind() [recursion]
>
> CC: Ingo Molnar <mingo@...hat.com>
> CC: Peter Zijlstra <peterz@...radead.org>
> CC: Juri Lelli <juri.lelli@...hat.com>
> CC: Vincent Guittot <vincent.guittot@...aro.org>
> CC: Dietmar Eggemann <dietmar.eggemann@....com>
> CC: Steven Rostedt <rostedt@...dmis.org>
> CC: Ben Segall <bsegall@...gle.com>
> CC: Mel Gorman <mgorman@...e.de>
> CC: Daniel Bristot de Oliveira <bristot@...hat.com>
> CC: Andrew Morton <akpm@...ux-foundation.org>
> CC: linux-mm@...ck.org
> Signed-off-by: Sergei Trofimovich <slyfox@...too.org>
Much better indeed, thanks.
Acked-by: Vlastimil Babka <vbabka@...e.cz>
> ---
> Changes since v1:
> - use bit from task_struct instead of a new field
> - track only one recursion depth level so far
>
> include/linux/sched.h | 4 ++++
> mm/page_owner.c | 32 ++++++++++----------------------
> 2 files changed, 14 insertions(+), 22 deletions(-)
>
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index ef00bb22164c..00986450677c 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -841,6 +841,10 @@ struct task_struct {
> /* Stalled due to lack of memory */
> unsigned in_memstall:1;
> #endif
> +#ifdef CONFIG_PAGE_OWNER
> + /* Used by page_owner=on to detect recursion in page tracking. */
> + unsigned in_page_owner:1;
> +#endif
>
> unsigned long atomic_flags; /* Flags requiring atomic access. */
>
> diff --git a/mm/page_owner.c b/mm/page_owner.c
> index 7147fd34a948..64b2e4c6afb7 100644
> --- a/mm/page_owner.c
> +++ b/mm/page_owner.c
> @@ -97,42 +97,30 @@ static inline struct page_owner *get_page_owner(struct page_ext *page_ext)
> return (void *)page_ext + page_owner_ops.offset;
> }
>
> -static inline bool check_recursive_alloc(unsigned long *entries,
> - unsigned int nr_entries,
> - unsigned long ip)
> -{
> - unsigned int i;
> -
> - for (i = 0; i < nr_entries; i++) {
> - if (entries[i] == ip)
> - return true;
> - }
> - return false;
> -}
> -
> static noinline depot_stack_handle_t save_stack(gfp_t flags)
> {
> unsigned long entries[PAGE_OWNER_STACK_DEPTH];
> depot_stack_handle_t handle;
> unsigned int nr_entries;
>
> - nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
> -
> /*
> - * We need to check recursion here because our request to
> - * stackdepot could trigger memory allocation to save new
> - * entry. New memory allocation would reach here and call
> - * stack_depot_save_entries() again if we don't catch it. There is
> - * still not enough memory in stackdepot so it would try to
> - * allocate memory again and loop forever.
> + * Avoid recursion.
> + *
> + * Sometimes page metadata allocation tracking requires more
> + * memory to be allocated:
> + * - when new stack trace is saved to stack depot
> + * - when backtrace itself is calculated (ia64)
> */
> - if (check_recursive_alloc(entries, nr_entries, _RET_IP_))
> + if (current->in_page_owner)
> return dummy_handle;
> + current->in_page_owner = 1;
>
> + nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
> handle = stack_depot_save(entries, nr_entries, flags);
> if (!handle)
> handle = failure_handle;
>
> + current->in_page_owner = 0;
> return handle;
> }
>
>
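For readers following along, the guard in the patched save_stack() can be
modeled outside the kernel. This is a minimal userspace C sketch, not kernel
code: a plain variable stands in for the per-task current->in_page_owner bit,
and a stub unwind() stands in for ia64's allocating unwinder; the names
unwind_depth and the return values are made up for illustration.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for current->in_page_owner (a one-bit flag in task_struct). */
bool in_page_owner;
int unwind_depth; /* counts how often we got past the guard */

int save_stack(void);

/*
 * Models ia64's unwinder: its build_script() step allocates memory,
 * which re-enters the page-owner hook via kmalloc() -> __set_page_owner().
 */
void unwind(void)
{
	save_stack();
}

int save_stack(void)
{
	if (in_page_owner)
		return 0; /* like returning dummy_handle: recursion cut short */
	in_page_owner = true;

	unwind_depth++;
	unwind(); /* would recurse forever without the flag */

	in_page_owner = false;
	return 1; /* like a real stack depot handle */
}
```

Calling save_stack() once enters the unwinder exactly once: the nested
re-entry hits the flag and returns the dummy handle immediately, which is
the short-circuit marked in the call chain of the commit message.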