Message-ID: <20190410153449.GA14915@chrisdown.name>
Date: Wed, 10 Apr 2019 16:34:49 +0100
From: Chris Down <chris@...isdown.name>
To: Michal Hocko <mhocko@...nel.org>
Cc: Johannes Weiner <hannes@...xchg.org>, Tejun Heo <tj@...nel.org>,
Roman Gushchin <guro@...com>, linux-kernel@...r.kernel.org,
cgroups@...r.kernel.org, linux-mm@...ck.org, kernel-team@...com,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH REBASED] mm: Throttle allocators when failing reclaim
over memory.high

Hey Michal,

Just to come back to your last e-mail about how this interacts with OOM.

Michal Hocko writes:
> I am not really opposed to the throttling in the absence of reclaimable
> memory. We do that for the regular allocation paths already
> (should_reclaim_retry). A swapless system with anon memory is very likely to
> oom too quickly and this sounds like a real problem. But I do not think that
> we should throttle the allocation to freeze it completely. We should
> eventually OOM. And that was essentially what my question was about: how much
> can/should we throttle to give a high limit events consumer enough time to
> intervene? I am sorry I still have not had time to study the patch more
> closely, but this should be explained in the changelog. Are we talking about
> seconds/minutes, or simply freezing each allocator to death?

Per-allocation, the maximum is 2 seconds (MEMCG_MAX_HIGH_DELAY_JIFFIES), so we
don't freeze things to death -- they can recover if they are amenable to it.
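
To make that concrete, here's a rough sketch of the clamping in kernel-style C.
Everything other than MEMCG_MAX_HIGH_DELAY_JIFFIES and the two second cap is
illustrative -- the helper name and the overage-to-delay curve below are not
the literal patch:

#include <linux/jiffies.h>	/* HZ */
#include <linux/kernel.h>	/* min() */

/* Hard cap: no single allocation is stalled for more than two seconds. */
#define MEMCG_MAX_HIGH_DELAY_JIFFIES	(2UL * HZ)

/*
 * Sketch only: translate how far usage is over memory.high into a sleep
 * taken before returning to userspace.  The linear scaling here is just
 * for illustration; what matters is the clamp at the end.
 */
static unsigned long high_delay_jiffies(unsigned long usage, unsigned long high)
{
	unsigned long penalty_jiffies;

	if (!high || usage <= high)
		return 0;

	penalty_jiffies = (usage - high) * HZ / high;

	return min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES);
}

So an allocator that keeps exceeding memory.high keeps paying that delay on
each return to userspace, but never sits in a single unbounded sleep.
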
The idea here is that primarily you handle it in userspace, just like
memory.oom_control in v1 (as mentioned in the commit message); as a last
resort, the kernel will still OOM if our userspace daemon has kicked the
bucket or is otherwise ineffective.
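
For the sake of illustration, a bare-bones version of that userspace side
could look something like the below: a loop that watches the "high" counter in
memory.events and reacts when it grows. The cgroup path and the one second
interval are made up for this example; a real daemon (ours included) does
considerably more than this:

#include <stdio.h>
#include <unistd.h>

/* Hypothetical cgroup; adjust to wherever your workload actually lives. */
#define EVENTS_PATH "/sys/fs/cgroup/workload/memory.events"

/* Return the current "high" event count, or 0 if the file can't be read. */
static unsigned long read_high_events(const char *path)
{
	char line[256];
	unsigned long val = 0;
	FILE *f = fopen(path, "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (sscanf(line, "high %lu", &val) == 1)
			break;
	fclose(f);
	return val;
}

int main(void)
{
	unsigned long prev = read_high_events(EVENTS_PATH);

	for (;;) {
		unsigned long cur = read_high_events(EVENTS_PATH);

		if (cur > prev) {
			/* e.g. shed load, raise memory.high, or kill the cgroup */
			fprintf(stderr, "memory.high exceeded (%lu events), intervening\n", cur);
		}
		prev = cur;
		sleep(1);
	}
}

In practice you'd watch for modification events on the file rather than
sleeping, but the shape is the same: userspace notices the high events and
decides what to do well before the kernel OOM killer has to.
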
If you're setting memory.high and memory.max together, then setting memory.high
always has to come with a.) tolerance of heavy throttling by your application,
and b.) userspace intervention when high memory pressure results. This patch
doesn't really change those semantics.