Message-ID: <alpine.DEB.2.21.1804181152240.227784@chino.kir.corp.google.com>
Date: Wed, 18 Apr 2018 11:58:00 -0700 (PDT)
From: David Rientjes <rientjes@...gle.com>
To: Michal Hocko <mhocko@...nel.org>
cc: Minchan Kim <minchan@...nel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>
Subject: Re: [PATCH] mm:memcg: add __GFP_NOWARN in
__memcg_schedule_kmem_cache_create
On Wed, 18 Apr 2018, Michal Hocko wrote:
> > Okay, no problem. However, I don't feel we need ratelimit at this moment.
> > We can do when we got real report. Let's add just one line warning.
> > However, I have no talent to write a poem to express with one line.
> > Could you help me?
>
> What about
> pr_info("Failed to create memcg slab cache. Report if you see floods of these\n");
>
Um, there's nothing actionable here for the user. Even if the message
directed them to a specific email address, what would you ask the user for
in response if they show a kernel log with 100 of these? Probably ask
them to use sysrq at the time it happens to get meminfo.  But any
user-initiated sysrq is going to reveal a very different state of memory
compared to when the kmalloc() actually failed.
If this really needs a warning, I think it only needs to be done once and
reveal the state of memory similar to how slub emits oom warnings. But as
the changelog indicates, the system is oom and we couldn't reclaim.  We
can expect this to happen a lot on systems with memory pressure.  What is
the warning revealing that would be actionable?
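For illustration, the warn-once behavior described above (what
pr_warn_once() gives you in the kernel) can be sketched in userspace C.
This is only a sketch of the pattern, not kernel code; the name
warn_once_on_failure is hypothetical:

```c
#include <stdio.h>
#include <stdatomic.h>

/* Userspace sketch of the kernel's warn-once pattern (pr_warn_once):
 * the message fires on the first failure only, so the log is not
 * flooded when cache creation keeps failing under memory pressure.
 * warn_once_on_failure is a hypothetical name for this sketch. */
static atomic_flag warned = ATOMIC_FLAG_INIT;

/* Returns 1 if the warning was emitted, 0 if it was suppressed. */
static int warn_once_on_failure(void)
{
	/* atomic_flag_test_and_set() returns false only for the first caller */
	if (!atomic_flag_test_and_set(&warned)) {
		fprintf(stderr, "memcg: kmem cache creation failed\n");
		return 1;
	}
	return 0;
}
```

Calling this on every failure path prints the message exactly once, no
matter how many failures follow, which is the behavior being argued for
here instead of one line per failed kmalloc().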