Message-ID: <20200818101844.GO2674@hirez.programming.kicks-ass.net>
Date: Tue, 18 Aug 2020 12:18:44 +0200
From: peterz@...radead.org
To: Michal Hocko <mhocko@...e.com>
Cc: Waiman Long <longman@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov.dev@...il.com>,
Jonathan Corbet <corbet@....net>,
Alexey Dobriyan <adobriyan@...il.com>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
linux-kernel@...r.kernel.org, linux-doc@...r.kernel.org,
linux-fsdevel@...r.kernel.org, cgroups@...r.kernel.org,
linux-mm@...ck.org
Subject: Re: [RFC PATCH 0/8] memcg: Enable fine-grained per process memory
control
On Tue, Aug 18, 2020 at 12:05:16PM +0200, Michal Hocko wrote:
> > But then how can it run-away like Waiman suggested?
>
> As Chris mentioned in another reply. This functionality is quite new.
>
> > /me goes look... and finds MEMCG_MAX_HIGH_DELAY_JIFFIES.
>
> We can certainly tune different backoff delays, but I suspect this is
> not the problem here.
Tuning? That thing needs throwing out, it's fundamentally buggered. Why
didn't anybody look at how the I/O dirtying thing works first?
What you need is a feedback loop against the rate of freeing pages, and
when you near the saturation point, the allocation rate should exactly
match the freeing rate.
But this thing has nothing whatsoever like that.
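Not kernel code, just a minimal user-space sketch of that rate-matching
idea, with made-up names (sample_free_rate(), throttle_alloc(), the
100ms sampling period are all illustrative, not existing symbols): once
usage nears the limit, the allocator sleeps long enough that its
allocation rate converges on the observed freeing rate, instead of
using a fixed backoff delay.

	/*
	 * Illustrative sketch only: rate-matched throttling of an
	 * allocator against the observed freeing (reclaim) rate.
	 */
	#include <stdio.h>
	#include <unistd.h>

	static unsigned long usage, limit = 1000;	/* pages */
	static unsigned long freed_last_period;		/* pages freed per 100ms period */

	/* Pretend reclaim reported how many pages it freed this period. */
	static unsigned long sample_free_rate(void)
	{
		return freed_last_period;		/* pages / period */
	}

	/*
	 * Delay an allocation of @nr_pages so that, near saturation, the
	 * allocation rate does not exceed the freeing rate.
	 */
	static void throttle_alloc(unsigned long nr_pages)
	{
		unsigned long free_rate = sample_free_rate();

		if (usage + nr_pages < limit * 9 / 10)	/* far from saturation */
			return;

		if (free_rate == 0)
			free_rate = 1;			/* avoid division by zero */

		/*
		 * Sleep long enough that, averaged over time,
		 * alloc rate == free rate (one period == 100ms here).
		 */
		usleep(nr_pages * 100000 / free_rate);
	}

	int main(void)
	{
		usage = 950;
		freed_last_period = 10;

		throttle_alloc(5);			/* delayed ~50ms */
		printf("allocation allowed after throttling\n");
		return 0;
	}

The point is that the delay is derived from how fast memory is actually
being freed, so the loop self-adjusts, rather than relying on a
hard-coded maximum like MEMCG_MAX_HIGH_DELAY_JIFFIES.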