Message-ID: <20200821193716.GU3982@worktop.programming.kicks-ass.net>
Date: Fri, 21 Aug 2020 21:37:16 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Johannes Weiner <hannes@...xchg.org>
Cc: Michal Hocko <mhocko@...e.com>, Waiman Long <longman@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Jonathan Corbet <corbet@....net>,
Alexey Dobriyan <adobriyan@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-fsdevel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC PATCH 0/8] memcg: Enable fine-grained per process memory
control
On Tue, Aug 18, 2020 at 09:49:00AM -0400, Johannes Weiner wrote:
> On Tue, Aug 18, 2020 at 12:18:44PM +0200, peterz@...radead.org wrote:
> > What you need is a feedback loop against the rate of freeing pages, and
> > when you near the saturation point, the allocation rate should exactly
> > match the freeing rate.
>
> IO throttling solves a slightly different problem.
>
> IO occurs in parallel to the workload's execution stream, and you're
> trying to take the workload from dirtying at CPU speed to
> rate-matching the independent IO stream.
>
> With memory allocations, though, freeing happens from inside the
> execution stream of the workload. If you throttle allocations, you're
For a single task, but even then you're making the argument that we need
to allocate memory to free memory, and we all know where that gets us.
But we're actually talking about a cgroup here, which is a collection of
tasks all doing things in parallel.
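
To make that concrete, here is a minimal userspace sketch of the
control loop I mean; struct rate_ctl and both helpers are invented
for illustration, nothing like them exists in memcg today. The idea
is simply: keep an EWMA of the group's freeing rate, and once usage
crosses a high watermark, delay each allocation so it cannot outrun
that rate.

	#include <stdint.h>

	/* All names below are hypothetical; this is not memcg code. */
	struct rate_ctl {
		uint64_t free_rate;	/* pages/sec, EWMA of recent frees */
		uint64_t usage;		/* pages currently charged */
		uint64_t high;		/* high watermark, in pages */
	};

	/* Free path: fold the instantaneous freeing rate into the EWMA. */
	static void note_free(struct rate_ctl *rc, uint64_t pages_per_sec)
	{
		int64_t delta = (int64_t)pages_per_sec - (int64_t)rc->free_rate;

		rc->free_rate += delta / 8;	/* weight 1/8 smooths bursts */
	}

	/* Alloc path: how long to sleep, in usec, before charging @pages. */
	static uint64_t alloc_delay_us(const struct rate_ctl *rc, uint64_t pages)
	{
		if (rc->usage < rc->high)
			return 0;		/* below saturation, no throttle */
		if (!rc->free_rate)
			return 1000000;		/* freeing stalled; OOM territory */

		/* Pace the allocation rate to match the freeing rate. */
		return pages * 1000000 / rc->free_rate;
	}

Because the loop is per-cgroup rather than per-task, one task sleeping
here doesn't stop its siblings from freeing, which is what lets the
rate matching converge.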
> most likely throttling the freeing rate as well. And you'll slow down
> reclaim scanning by the same amount as the page references, so it's
> not making reclaim more successful either. The alloc/use/free
> (im)balance is an inherent property of the workload, regardless of the
> speed you're executing it at.
Arguably, seeing the rate drop to near 0 is a very good point at which
to consider running cgroup-OOM.
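
In terms of the (again, entirely invented) sketch above, that decision
falls out of the same state:

	/* Hypothetical: freeing has collapsed while pinned at the limit. */
	static int should_oom(const struct rate_ctl *rc, uint64_t min_rate)
	{
		return rc->usage >= rc->high && rc->free_rate < min_rate;
	}

Here min_rate would be some small threshold below which throttling can
no longer help and killing is the only way to make forward progress.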