Message-ID: <279f10c2-3eaa-c641-094f-3070db67d84f@suse.cz>
Date: Thu, 19 Jan 2017 08:29:45 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Mel Gorman <mgorman@...e.de>,
linux-mm@...ck.org, LKML <linux-kernel@...r.kernel.org>
Subject: Re: [patch -mm] mm, page_alloc: warn_alloc nodemask is NULL when
cpusets are disabled
On 01/18/2017 10:51 PM, David Rientjes wrote:
> The patch "mm, page_alloc: warn_alloc print nodemask" implicitly sets the
> allocation nodemask to cpuset_current_mems_allowed when there is no
> effective mempolicy. cpuset_current_mems_allowed is only effective when
> cpusets are enabled, which is also printed by warn_alloc(), so setting
> the nodemask to cpuset_current_mems_allowed is redundant and prevents
> debugging issues where ac->nodemask is not set properly in the page
> allocator.
>
> This provides better debugging output since
> cpuset_print_current_mems_allowed() is already provided.
>
> Signed-off-by: David Rientjes <rientjes@...gle.com>
Yes, with my current cpuset vs mempolicy debugging experience, this is
more useful (except that both nodemask and mems_allowed can change under
us, so what we print here is not necessarily the same as what
get_page_from_freelist() has seen, but that's another thing...).
But I would suggest you change the oom killer's dump_header() the same
way as warn_alloc().
Thanks,
Vlastimil
> ---
> mm/page_alloc.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3037,7 +3037,6 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
> va_list args;
> static DEFINE_RATELIMIT_STATE(nopage_rs, DEFAULT_RATELIMIT_INTERVAL,
> DEFAULT_RATELIMIT_BURST);
> - nodemask_t *nm = (nodemask) ? nodemask : &cpuset_current_mems_allowed;
>
> if ((gfp_mask & __GFP_NOWARN) || !__ratelimit(&nopage_rs) ||
> debug_guardpage_minorder() > 0)
> @@ -3051,11 +3050,16 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
> pr_cont("%pV", &vaf);
> va_end(args);
>
> - pr_cont(", mode:%#x(%pGg), nodemask=%*pbl\n", gfp_mask, &gfp_mask, nodemask_pr_args(nm));
> + pr_cont(", mode:%#x(%pGg), nodemask=", gfp_mask, &gfp_mask);
> + if (nodemask)
> + pr_cont("%*pbl\n", nodemask_pr_args(nodemask));
> + else
> + pr_cont("(null)\n");
> +
> cpuset_print_current_mems_allowed();
>
> dump_stack();
> - warn_alloc_show_mem(gfp_mask, nm);
> + warn_alloc_show_mem(gfp_mask, nodemask);
> }
>
> static inline struct page *
>