Message-ID: <ZO48h7c9qwQxEPPA@slm.duckdns.org>
Date: Tue, 29 Aug 2023 08:44:23 -1000
From: Tejun Heo <tj@...nel.org>
To: Michal Hocko <mhocko@...e.com>
Cc: Yosry Ahmed <yosryahmed@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Shakeel Butt <shakeelb@...gle.com>,
Muchun Song <muchun.song@...ux.dev>,
Ivan Babrou <ivan@...udflare.com>, linux-mm@...ck.org,
cgroups@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/3] mm: memcg: use non-unified stats flushing for
userspace reads
Hello,
On Fri, Aug 25, 2023 at 09:05:46AM +0200, Michal Hocko wrote:
> > > I think that's how it was always meant to be when it was designed. The
> > > global rstat lock has always existed and was always available to
> > > userspace readers. The memory controller took a different path at some
> > > point with unified flushing, but that was mainly because of high
> > > concurrency from in-kernel flushers, not because userspace readers
> > > caused a problem. Outside of memcg, the core cgroup code has always
> > > exercised this global lock when reading cpu.stat since rstat's
> > > introduction. I assume there haven't been any problems since it's still
> > > there.
>
> I suspect nobody has considered malfunctioning or adversarial
> workloads so far.
>
> > > I was hoping Tejun would confirm/deny this.
>
> Yes, that would be interesting to hear.
So, the assumptions in the original design were:
* Writers are high freq but readers are lower freq and can block.
* The global lock is a mutex.
* Back-to-back reads won't have much to do because each one only has to flush
  what's been accumulated since the previous flush, which took place just
  before.
It's likely that the userspace side is going to be just fine if we restore the
global lock to a mutex and let them be. Most of the problems are caused by
trying to allow flushing from non-sleepable and in-kernel contexts. Would it
make sense to distinguish callers which can and can't wait, and make the
latter group always use the cached values? e.g. even in the kernel, waiting
during an oom kill doesn't really matter, so that path can just block to
obtain up-to-date numbers.
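To make that concrete, here's a minimal userspace sketch of that split, with
pthreads standing in for the kernel-side mutex. Every name in it (stats_read(),
do_flush(), cached_stats and so on) is hypothetical and only illustrates the
control flow, not the actual rstat code:

#include <pthread.h>
#include <stdbool.h>
#include <string.h>

struct stats {
        unsigned long nr_events;        /* stand-in for the real counters */
};

static pthread_mutex_t flush_mutex = PTHREAD_MUTEX_INITIALIZER;
static struct stats cached_stats;       /* last fully flushed snapshot */
static struct stats pending_stats;      /* accumulated since that flush */

/* Fold pending deltas into the snapshot. Caller holds flush_mutex. */
static void do_flush(void)
{
        cached_stats.nr_events += pending_stats.nr_events;
        memset(&pending_stats, 0, sizeof(pending_stats));
}

/*
 * may_sleep splits the callers: userspace reads and paths like oom kill
 * can block on the mutex for up-to-date numbers, while non-sleepable
 * contexts settle for the possibly stale snapshot.
 */
static void stats_read(struct stats *out, bool may_sleep)
{
        if (may_sleep) {
                pthread_mutex_lock(&flush_mutex);
                do_flush();
                *out = cached_stats;
                pthread_mutex_unlock(&flush_mutex);
        } else {
                /*
                 * Never blocks; a real implementation would want
                 * e.g. a seqcount here to avoid torn reads.
                 */
                *out = cached_stats;
        }
}

int main(void)
{
        struct stats snap;

        pending_stats.nr_events = 42;   /* pretend some writers ran */
        stats_read(&snap, true);        /* sleepable: flush, then read */
        stats_read(&snap, false);       /* restricted: cached value only */
        return 0;
}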
Thanks.
--
tejun